Partition By Multiple Nested Fields in Kafka Connect HDFS Sink
We are running the Kafka HDFS sink connector (version 5.2.1) and need the data in HDFS to be partitioned by multiple nested fields. The data in the topic is stored as Avro and contains nested elements. However, Connect fails to recognize the nested fields and throws an error saying the field cannot be found. Below is the connector configuration we are using. Does the HDFS sink connector not support partitioning by nested fields? I can partition using non-nested fields.
{
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"topics.dir": "/projects/test/kafka/logdata/coss",
"avro.codec": "snappy",
"flush.size": "200",
"connect.hdfs.principal": "test@DOMAIN.COM",
"rotate.interval.ms": "500000",
"logs.dir": "/projects/test/kafka/tmp/wal/coss4",
"hdfs.namenode.principal": "hdfs/_HOST@HADOOP.DOMAIN",
"hadoop.conf.dir": "/etc/hdfs",
"topics": "test1",
"connect.hdfs.keytab": "/etc/hdfs-qa/test.keytab",
"hdfs.url": "hdfs://nameservice1:8020",
"hdfs.authentication.kerberos": "true",
"name": "hdfs_connector_v1",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://myschema:8081",
"partition.field.name": "meta.ID,meta.source,meta.HH",
"partitioner.class": "io.confluent.connect.storage.partitioner.FieldPartitioner"
}
I added nested field support for the TimestampPartitioner, but there is still an outstanding PR for the FieldPartitioner:
https://github.com/confluentinc/kafka-connect-storage-common/pull/67
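Until nested-field support lands in the FieldPartitioner, a custom partitioner has to resolve dotted paths such as `meta.ID` itself. The sketch below (hypothetical class name `NestedFieldResolver`) illustrates only the path-walking and partition-string logic against plain maps; a real implementation would extend `io.confluent.connect.storage.partitioner.FieldPartitioner` and call `Struct.get()` for each path segment instead.

```java
import java.util.List;
import java.util.Map;

public class NestedFieldResolver {

    // Walk a dotted path like "meta.ID" through nested maps.
    // A Connect partitioner would walk Struct.get(part) the same way.
    @SuppressWarnings("unchecked")
    public static Object resolve(Map<String, Object> record, String dottedPath) {
        Object current = record;
        for (String part : dottedPath.split("\\.")) {
            if (!(current instanceof Map)) {
                return null; // path does not exist in this record
            }
            current = ((Map<String, Object>) current).get(part);
        }
        return current;
    }

    // Build a Hive-style partition path ("ID=.../source=.../HH=...")
    // from a list of dotted field names, mirroring what
    // FieldPartitioner.encodePartition produces for flat fields.
    public static String encodePartition(Map<String, Object> record, List<String> fields) {
        StringBuilder sb = new StringBuilder();
        for (String field : fields) {
            if (sb.length() > 0) {
                sb.append('/');
            }
            String leaf = field.substring(field.lastIndexOf('.') + 1);
            sb.append(leaf).append('=').append(resolve(record, field));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> record =
                Map.of("meta", Map.of("ID", "42", "source", "app1", "HH", "09"));
        System.out.println(
                encodePartition(record, List.of("meta.ID", "meta.source", "meta.HH")));
        // ID=42/source=app1/HH=09
    }
}
```

With this approach the connector config would keep `"partition.field.name": "meta.ID,meta.source,meta.HH"` but point `partitioner.class` at the custom class instead of the stock FieldPartitioner.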