Unable to read data from BigQuery when a timestamp column contains year < 1900
This error occurs in a pipeline defined with the latest Apache Beam SDK for Python, 2.2.0, when running a simple pipeline that reads from and writes to a BigQuery table.
The read fails because some rows contain timestamps earlier than 1900. How can I patch the dataflow_worker package?
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
(4d31192aa4aec063): Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 582, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 167, in execute
op.start()
File "dataflow_worker/native_operations.py", line 38, in dataflow_worker.native_operations.NativeReadOperation.start
def start(self):
File "dataflow_worker/native_operations.py", line 39, in dataflow_worker.native_operations.NativeReadOperation.start
with self.scoped_start_state:
File "dataflow_worker/native_operations.py", line 44, in dataflow_worker.native_operations.NativeReadOperation.start
with self.spec.source.reader() as reader:
File "dataflow_worker/native_operations.py", line 48, in dataflow_worker.native_operations.NativeReadOperation.start
for value in reader:
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativefileio.py", line 198, in __iter__
for record in self.read_next_block():
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativeavroio.py", line 95, in read_next_block
yield self.decode_record(record)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativebigqueryavroio.py", line 110, in decode_record
record, self.source.table_schema)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativebigqueryavroio.py", line 104, in _fix_field_values
record[field.name], field)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/nativebigqueryavroio.py", line 83, in _fix_field_value
return dt.strftime('%Y-%m-%d %H:%M:%S.%f UTC')
ValueError: year=200 is before 1900; the datetime strftime() methods require year >= 1900
Unfortunately, you can't patch this to handle your timestamps, because it is part of the internal implementation of Google's runner for Apache Beam, Dataflow. You'll have to wait until Google fixes the issue (if it is recognized as a bug). Report it as soon as possible, since this is more a limitation of the Python version in use than a bug.
The problem comes from strftime, as you can see in the error. The documentation explicitly mentions that it does not work with any year before 1900.
In your case, though, one workaround is to cast the timestamp to a string (you can do that in BigQuery, as specified in the documentation). Then, in your Beam pipeline, you can convert it back into a timestamp, or whatever suits you best.
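As a rough sketch of that workaround (the table and column names below are made up, and the exact string layout depends on how you cast or format the column in BigQuery), you could format the timestamp as a plain string in the query so the Dataflow worker never calls strftime on it, then re-parse it inside the pipeline. Note that strptime, unlike strftime on Python 2, accepts years before 1900:

```python
from datetime import datetime

# Hypothetical BigQuery Standard SQL query: render the timestamp as a string
# so the worker reads a STRING column instead of a TIMESTAMP column.
# Table and column names here are illustrative only.
QUERY = """
SELECT FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', ts) AS ts_str
FROM `my-project.my_dataset.my_table`
"""

def parse_bq_timestamp(ts_str):
    """Re-parse the string inside the pipeline (e.g. in a Map or DoFn).

    strptime has no year >= 1900 restriction, so rows such as year 200 parse fine.
    """
    return datetime.strptime(ts_str, '%Y-%m-%d %H:%M:%S')

print(parse_bq_timestamp('0200-01-01 00:00:00'))
```

The parsing step would typically run right after the read transform, before any logic that expects datetime objects.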
There is also an example of how to convert a datetime object to a string, matching the format in your error, in this answer. In the same question there is another answer that explains when this error occurs and how to work around it (in Python). That solution avoids strftime entirely in favor of some alternative formatting methods.
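One such strftime-free alternative, sketched here under the assumption that you format the value in your own code rather than inside dataflow_worker, is to build the string with plain integer interpolation, which has no year restriction:

```python
from datetime import datetime

def format_ts(dt):
    # Reproduce the worker's layout ('%Y-%m-%d %H:%M:%S.%f UTC') with plain
    # integer formatting, which works for any year, unlike strftime on Python 2.
    return '%04d-%02d-%02d %02d:%02d:%02d.%06d UTC' % (
        dt.year, dt.month, dt.day,
        dt.hour, dt.minute, dt.second, dt.microsecond)

print(format_ts(datetime(200, 1, 1, 12, 30, 45, 123456)))
# prints 0200-01-01 12:30:45.123456 UTC
```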