Kafka SASL/SCRAM failed authentication

I am trying to add security to my Kafka cluster, following the documentation:

I added the user with:

kafka-configs.sh --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
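
(To double-check that the credential was stored, the same tool can describe the user, assuming the same ZooKeeper address as above:)

kafka-configs.sh --zookeeper zookeeper1:2181 --describe --entity-type users --entity-name admin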

I modified server.properties:

broker.id=1
listeners=SASL_PLAINTEXT://kafka1:9092
advertised.listeners=SASL_PLAINTEXT://kafka1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
default.replication.factor=3
min.insync.replicas=2
log.dirs=/var/lib/kafka
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

I created the JAAS file:

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret"
};

I created the file kafka_opts.sh in /etc/profile.d:

export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf

But when I start Kafka it throws the following error:

[2020-05-04 10:54:08,782] INFO [Controller id=1, targetBrokerId=1] Failed authentication with kafka1/kafka1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)

I am not actually using kafka1, kafka2, kafka3, zookeeper1, zookeeper2 and zookeeper3; each of those stands for the IP of the corresponding server. Can anyone help me solve this problem?

My main problem was this configuration:

zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka

This setting in server.properties determines how ZooKeeper organizes the Kafka metadata (everything goes under the /kafka chroot), and that in turn affects how the kafka-configs.sh command has to be run, so I will explain the steps I had to follow.

1. First, modify ZooKeeper

I downloaded ZooKeeper from the official page: https://zookeeper.apache.org/releases.html

I modified the zoo.cfg file and added the security configuration:

tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

I created the JAAS file for ZooKeeper:

Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="admin_secret";
};

I created the file java.env in /conf/ and added the following:

SERVER_JVMFLAGS="-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf"

With this file you tell ZooKeeper to use the JAAS configuration, which is what lets Kafka authenticate against ZooKeeper. To verify that ZooKeeper is actually picking up the file, you just need to run:

zkServer.sh print-cmd

It will respond with:

/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
"java"  -Dzookeeper.log.dir="/opt/apache-zookeeper-3.6.0-bin/bin/../logs" ........-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf....... "/opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg" > "/opt/apache-zookeeper-3.6.0-bin/bin/../logs/zookeeper.out" 2>&1 < /dev/null

2. Modify Kafka

I downloaded Kafka from the official page: https://www.apache.org/dyn/closer.cgi?path=/kafka/2.5.0/kafka_2.12-2.5.0.tgz

I modified/added the following configuration in the server.properties file:

listeners=SASL_PLAINTEXT://kafka1:9092
advertised.listeners=SASL_PLAINTEXT://kafka1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:admin
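
Note that with allow.everyone.if.no.acl.found=false, every principal except the super user admin is denied until it is granted ACLs. As a rough sketch (the principal app1, the topic test-topic and the group test-group are just placeholders, and the /kafka chroot explained below is required), a client could be allowed to produce and consume like this:

kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper1:2181/kafka --add --allow-principal User:app1 --operation Read --operation Write --topic test-topic
kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper1:2181/kafka --add --allow-principal User:app1 --operation Read --group test-group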

I created the JAAS file for Kafka:

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin_secret";
};
Client {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="admin"
   password="admin_secret";
};

There is one important thing to understand: the credentials in the Client section must match the ones defined in the ZooKeeper JAAS file, while the KafkaServer section is used for inter-broker communication.
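
For external producers and consumers the same credentials go into client properties instead of a JAAS file. A minimal sketch, assuming a file named client.properties and a topic called test-topic (both placeholders):

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin_secret";

kafka-console-producer.sh --broker-list kafka1:9092 --topic test-topic --producer.config client.properties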

I also need to tell Kafka to use the JAAS file; this can be done by setting the KAFKA_OPTS variable:

export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf

3. Create the admin user for the Kafka brokers

Run the following command:

kafka-configs.sh --zookeeper zookeeper1:2181/kafka --alter --add-config 'SCRAM-SHA-256=[password=admin_secret]' --entity-type users --entity-name admin
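
You can confirm the credential ended up where the brokers expect it by describing the user at the same chrooted path (again, the /kafka suffix matters):

kafka-configs.sh --zookeeper zookeeper1:2181/kafka --describe --entity-type users --entity-name admin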

As I mentioned before, my mistake was that I was not adding the /kafka part to the ZooKeeper address (note that everything that talks to ZooKeeper needs the /kafka part appended after the IP). Now, if you start ZooKeeper and Kafka, everything works fine.
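
To see the difference directly inside ZooKeeper, the zookeeper-shell.sh tool that ships with Kafka can list both locations (a sketch, using the hostnames from above):

zookeeper-shell.sh zookeeper1:2181 ls /config/users
zookeeper-shell.sh zookeeper1:2181 ls /kafka/config/users

The first path is where my original command stored the admin credentials (the ZooKeeper root), while the second, chrooted path is the one the brokers actually read because of zookeeper.connect.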