Beam streaming pipeline does not write files to bucket
I have a Python streaming pipeline on GCP Dataflow that reads thousands of messages from PubSub, like so:
with beam.Pipeline(options=pipeline_options) as p:
    lines = p | "read" >> ReadFromPubSub(topic=str(job_options.inputTopic))
    lines = lines | "decode" >> beam.Map(decode_message)
    lines = lines | "Parse" >> beam.Map(parse_json)
    lines = lines | beam.WindowInto(beam.window.FixedWindows(1*60))
    lines = lines | "Add device id key" >> beam.Map(lambda elem: (elem.get('id'), elem))
    lines = lines | "Group by key" >> beam.GroupByKey()
    lines = lines | "Abandon key" >> beam.Map(flatten)
    lines | "WriteToAvro" >> beam.io.WriteToAvro(job_options.outputLocation, schema=schema,
                                                 file_name_suffix='.avro', mime_type='application/x-avro')
The pipeline runs fine, except that it never produces any output. Any ideas?
There seem to be a few issues with your code. First, there is some malformed data with respect to null/None (which you have already fixed) and ints/floats (pointed out in the comments); a small sketch of one way to guard against that is at the end of this answer. Finally, the WriteToAvro transform cannot write unbounded PCollections. There is a work-around in which you define a new sink and use it with the WriteToFiles transform, which is able to write unbounded PCollections.
Note that as of the time of this post (2020-06-18), this method does not work with Apache Beam Python SDK <= 2.23. This is because the Python pickler cannot deserialize a pickled Avro schema (see BEAM-6522), which forces the solution to use FastAvro instead. You can use Avro if you manually upgrade dill to >= 0.3.1.1 and Avro to >= 1.9.0, but be careful, as this is currently untested.
With those caveats in mind, here is the work-around:
from apache_beam.io.fileio import FileSink
from apache_beam.io.fileio import WriteToFiles
import fastavro

class AvroFileSink(FileSink):
    def __init__(self, schema, codec='deflate'):
        self._schema = schema
        self._codec = codec

    def open(self, fh):
        # This is called on every new bundle.
        self.writer = fastavro.write.Writer(fh, self._schema, self._codec)

    def write(self, record):
        # This is called on every element.
        self.writer.write(record)

    def flush(self):
        self.writer.flush()
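If you want to convince yourself of the open/write/flush sequence the sink relies on before wiring it into Beam, you can exercise the same fastavro calls against a local file. This is only a sketch; the tiny schema and the /tmp path are made up for the example:

import fastavro

# A throwaway schema just for this check; replace it with your real one.
schema = fastavro.schema.parse_schema({
    'name': 'row',
    'namespace': 'test',
    'type': 'record',
    'fields': [{'name': 'a', 'type': 'int'}],
})

# Mirror what the sink does for a bundle: open a writer, write records, flush.
with open('/tmp/sanity_check.avro', 'wb') as fh:
    writer = fastavro.write.Writer(fh, schema, 'deflate')
    writer.write({'a': 1})
    writer.write({'a': 2})
    writer.flush()

# Read the records back to confirm the file is valid Avro.
with open('/tmp/sanity_check.avro', 'rb') as fh:
    print(list(fastavro.reader(fh)))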
The new sink is used like this:
import apache_beam as beam

# Replace the following with your schema.
schema = fastavro.schema.parse_schema({
    'name': 'row',
    'namespace': 'test',
    'type': 'record',
    'fields': [
        {'name': 'a', 'type': 'int'},
    ],
})

# Create the sink. This will be used by the WriteToFiles transform to write
# individual elements to the Avro file.
sink = AvroFileSink(schema=schema)

with beam.Pipeline(...) as p:
    lines = p | beam.io.ReadFromPubSub(...)
    lines = ...
    # This is where your new sink gets used. The WriteToFiles transform takes
    # the sink and uses it to write to a directory defined by the path
    # argument.
    lines | WriteToFiles(path=job_options.outputLocation, sink=sink)