AWS Glue: write only the newest partitions to Parquet
I have a Glue database with two tables, each containing the same data but partitioned differently. I'm trying to write a nightly job that reads from one table and writes the new data out with the updated partitioning. I can do that with the following code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql.functions import lit

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)

# Read the whole source table from the Data Catalog
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "Database",
    table_name = "Table",
    transformation_ctx = "datasource0"
)

# Convert to a Spark DataFrame and write it back out, repartitioned by the new keys
datasource0 = datasource0.toDF()
datasource0.write.partitionBy("Key1", "Key2").parquet(OutputFilePath)
But this reads and writes the entire DataFrame. I only want to write the new partitions, so I found the following snippet on the AWS site:
glue_context.write_dynamic_frame.from_options(
    frame = projectedEvents,
    connection_type = "s3",
    connection_options = {"path": "$outpath", "partitionKeys": ["type"]},
    format = "parquet")
But this also just rewrites the whole DataFrame. How can I write only the newest partitions?
Maybe have a look at job bookmarks; they work like a checkpointing mechanism to avoid reprocessing data that has already been processed: https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
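As a rough sketch of what that looks like (assuming bookmarks are enabled on the job with --job-bookmark-option job-bookmark-enable, and reusing the placeholder names Database, Table and OutputFilePath from the question): the read needs a transformation_ctx and the job must call commit() so Glue can record what it has already processed.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args['JOB_NAME'], args)   # bookmark state is tracked per job name

# transformation_ctx is what the bookmark uses to remember which files were read
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "Database",
    table_name = "Table",
    transformation_ctx = "datasource0"
)

# Only data not seen by a previous run reaches this write
glueContext.write_dynamic_frame.from_options(
    frame = datasource0,
    connection_type = "s3",
    connection_options = {"path": OutputFilePath, "partitionKeys": ["Key1", "Key2"]},
    format = "parquet")

job.commit()   # persists the bookmark so the next run skips processed data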
This can be done with the push_down_predicate parameter. The data was already partitioned by year/month/day/hour, so I just subtract one day and use push_down_predicate as follows:
import datetime

# Yesterday's date, formatted as YYYY-MM-DD
timestamp = (datetime.datetime.now() - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
s1 = timestamp.split('-')

# Partition columns are crawled as strings, so quote the values in the predicate
pdp = "partition_0 = '" + s1[0] + "' and partition_1 = '" + s1[1] + "' and partition_2 = '" + s1[2] + "'"

# Only the partitions matching the predicate are loaded from the catalog
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "mailfiles_standardized",
    table_name = "firehoseoutput",
    push_down_predicate = pdp
)

glueContext.write_dynamic_frame.from_options(
    frame = datasource0,
    connection_type = "s3",
    connection_options = {
        "path": Bucket,
        "partitionKeys": ["Key1", "Key2"]
    },
    format = "parquet")
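Because the DynamicFrame now only holds the previous day's data, the write only produces files under the matching Key1=... /Key2=... paths in Bucket; partitions written on earlier runs are left untouched.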