Failed to connect to and describe Kafka cluster. Apache Kafka Connect

I have set up an MSK cluster in AWS and created an EC2 instance in the same VPC.

I tried kafka-console-consumer.sh and kafka-console-producer.sh and they work fine: I can see the messages sent by the producer in the consumer.

1) I downloaded the S3 connector (https://docs.confluent.io/current/connect/kafka-connect-s3/index.html)

2) Extracted the archive to /home/ec2-user/plugins/

3) Created connect-standalone.properties with the following content:

bootstrap.servers=<my brokers>
plugin.path=/home/ec2-user/kafka-plugins
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets

4) Created s3-sink.properties with the following content:

name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=<My Topic>
s3.region=us-east-1
s3.bucket.name=vk-ingestion-dev
s3.part.size=5242880
flush.size=1
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
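For reference, the standalone worker is launched with the worker properties file first and the connector properties file second; a typical invocation (assuming the Kafka distribution's bin directory is on the PATH and both files are in the current directory) looks like:

```shell
# Worker config first, then one or more connector configs.
connect-standalone.sh connect-standalone.properties s3-sink.properties
```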

When I run connect-standalone.sh with the above two properties files, it waits for a while and then throws the following error:

[AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:237)
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2019-10-22 19:28:36,789] INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:237)
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
[2019-10-22 19:28:36,796] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone:124)
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:45)
    at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:81)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:58)
    ... 2 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.

Is there anything security-related I need to look into?

It worked after adding the following SSL configuration:

security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks
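For completeness, the truststore at /tmp/kafka.client.truststore.jks is typically created by copying the JVM's default cacerts file, as described in the AWS MSK client setup guide. The exact JVM path below is an assumption and varies by Java version and distribution:

```shell
# Copy the JDK's default truststore for use as the Kafka client
# truststore. Path shown is an example for OpenJDK 8 on Amazon
# Linux -- adjust to match the installed JVM.
cp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0*/jre/lib/security/cacerts \
   /tmp/kafka.client.truststore.jks
```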

After adding the above parameters, the connector started without errors, but no data was uploaded to S3.

Adding the config parameters again with producer. and consumer. prefixes made it work.

Example:

producer.security.protocol=SSL
producer.ssl.truststore.location=/tmp/kafka.client.truststore.jks

consumer.security.protocol=SSL
consumer.ssl.truststore.location=/tmp/kafka.client.truststore.jks
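Putting it together, a worker properties file for this setup would carry the SSL settings at all three scopes. A plausible explanation (assumed, not confirmed by the logs) is that the unprefixed settings are used by the worker's own clients, such as the AdminClient that performs the cluster lookup, while the prefixed copies are applied to the producer and consumer that Connect creates for the connector tasks:

```properties
# Worker-level settings -- used by the Connect worker's own clients
# (e.g. the AdminClient behind "lookupKafkaClusterId").
security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks

# Prefixed copies -- applied to the producer/consumer that the
# worker creates for the connector tasks themselves.
producer.security.protocol=SSL
producer.ssl.truststore.location=/tmp/kafka.client.truststore.jks
consumer.security.protocol=SSL
consumer.ssl.truststore.location=/tmp/kafka.client.truststore.jks
```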