Datastore queries in Dataflow DoFn slow down pipeline when run in the cloud
I am trying to enhance the data in my pipeline by querying Datastore from within a DoFn step. A field of an object of class CustomClass is used to query a Datastore table, and the returned values are used to enhance the object.
The code looks like this:
public class EnhanceWithDataStore extends DoFn<CustomClass, CustomClass> {

    private static Datastore datastore = DatastoreOptions.defaultInstance().service();
    private static KeyFactory articleKeyFactory = datastore.newKeyFactory().kind("article");

    @Override
    public void processElement(ProcessContext c) throws Exception {
        CustomClass event = c.element();
        Entity article = datastore.get(articleKeyFactory.newKey(event.getArticleId()));
        String articleName = "";
        try {
            articleName = article.getString("articleName");
        } catch (Exception e) {}
        CustomClass enhanced = new CustomClass(event);
        enhanced.setArticleName(articleName);
        c.output(enhanced);
    }
}
This is fast when it runs locally, but when it runs in the cloud this step slows the pipeline down significantly. What is causing this? Is there a workaround or a better way to do this?
A picture of the pipeline can be found here (the last step is the enhancing step):
pipeline architecture
What you are doing here is joining your input PCollection<CustomClass> with the enhancements in Datastore.
For each partition of the PCollection, the calls to Datastore are going to be single-threaded, hence incur a lot of latency. I would expect this to be slow in the DirectPipelineRunner and InProcessPipelineRunner as well. With autoscaling and dynamic work rebalancing, you should see parallelism when running on the Dataflow service, unless something about the structure of your pipeline causes us to optimize it poorly, so you can try increasing --maxNumWorkers. But you still won't benefit from bulk operations.
It is probably better to express this join within your pipeline, using DatastoreIO.readFrom(...) followed by a CoGroupByKey transform. In this way, Dataflow will do a bulk parallel read of all the enhancements and use the efficient GroupByKey machinery to line them up with the events.
// Here are the two collections you want to join
PCollection<CustomClass> events = ...;
PCollection<Entity> articles = DatastoreIO.readFrom(...);

// Key them both by the common id
PCollection<KV<Long, CustomClass>> keyedEvents =
    events.apply(WithKeys.of(event -> event.getArticleId()));
PCollection<KV<Long, Entity>> keyedArticles =
    articles.apply(WithKeys.of(article -> article.getKey().getId()));

// Set up the join by giving tags to each collection
TupleTag<CustomClass> eventTag = new TupleTag<CustomClass>() {};
TupleTag<Entity> articleTag = new TupleTag<Entity>() {};
KeyedPCollectionTuple<Long> coGbkInput =
    KeyedPCollectionTuple
        .of(eventTag, keyedEvents)
        .and(articleTag, keyedArticles);

// Do the join; a ParDo is used here (rather than MapElements) because each
// key may carry multiple events, so one input can yield multiple outputs
PCollection<CustomClass> enhancedEvents = coGbkInput
    .apply(CoGroupByKey.create())
    .apply(ParDo.of(new DoFn<KV<Long, CoGbkResult>, CustomClass>() {
        @Override
        public void processElement(ProcessContext c) {
            CoGbkResult joinResult = c.element().getValue();
            String articleName;
            try {
                articleName = joinResult.getOnly(articleTag).getString("articleName");
            } catch (Exception e) {
                articleName = "";
            }
            for (CustomClass event : joinResult.getAll(eventTag)) {
                CustomClass enhanced = new CustomClass(event);
                enhanced.setArticleName(articleName);
                c.output(enhanced);
            }
        }
    }));
Another possibility, if there are few enough articles to store the lookup in memory, is to use DatastoreIO.readFrom(...) and then read them all as a map side input via View.asMap() and look them up in a local table.
// Here are the two collections you want to join
PCollection<CustomClass> events = ...;
PCollection<Entity> articles = DatastoreIO.readFrom(...);

// Key the articles and create a map view
PCollectionView<Map<Long, Entity>> articleView = articles
    .apply(WithKeys.of(article -> article.getKey().getId()))
    .apply(View.asMap());

// Do a lookup join by side input to a ParDo
PCollection<CustomClass> enhanced = events
    .apply(ParDo.withSideInputs(articleView).of(new DoFn<CustomClass, CustomClass>() {
        @Override
        public void processElement(ProcessContext c) {
            CustomClass event = c.element();
            Map<Long, Entity> articleLookup = c.sideInput(articleView);
            String articleName;
            try {
                articleName =
                    articleLookup.get(event.getArticleId()).getString("articleName");
            } catch (Exception e) {
                articleName = "";
            }
            CustomClass enhanced = new CustomClass(event);
            enhanced.setArticleName(articleName);
            c.output(enhanced);
        }
    }));
Depending on your data, either of these may be the better choice.
After some checking, I managed to pinpoint the problem: the project is located in the EU (and therefore the Datastore is located in the EU zone, the same as the App Engine zone), while the Dataflow job itself (and thus the workers) is hosted in the US by default (when the zone option is not overridden).
The difference in performance is a factor of 25-30: ~40 elements/s versus ~1200 elements/s with 15 workers.
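A minimal sketch of the fix, keeping the workers in the same region as the Datastore. This assumes the Dataflow SDK 1.x DataflowPipelineOptions API; the zone name is illustrative and should match your project's App Engine region:

```java
// Pin the Dataflow workers to the same region as the Datastore / App Engine app.
DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
    .withValidation()
    .as(DataflowPipelineOptions.class);
options.setZone("europe-west1-c"); // illustrative EU zone; can also be passed as --zone
Pipeline p = Pipeline.create(options);
```

Equivalently, pass --zone (or, in later SDKs, --region) on the command line when launching the job.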