Spring Cloud Stream for Kafka with consumer/producer API exactly once semantics with transaction-id-prefix is not working as expected
I am seeing different behavior across a total of 3 different services:
- The first service listens on a Solace queue and produces the messages to Kafka topic-1 (transactions enabled).
- The second service listens on the Kafka topic-1 above and writes to another Kafka topic-2 (no manual commits; transactions enabled for producing to the other topic; auto-commit of offsets is false and isolation.level is set to read_committed).
- The third service listens on Kafka topic-2 and writes the messages back to a Solace queue (no manual commits; auto-commit of offsets is false and isolation.level is set to read_committed).

The problem now is that after I enable transactions and the isolation level in the second service, it cannot read any messages; if I disable transactions in the second service, it is able to read all the messages.
- Can we enable both transactions and the isolation level in a single service?
- How does this work when my service is only a producer or only a consumer (how do those services guarantee EoS)?
EDITED:
Below is what my yml looks like:
kafka:
  binder:
    transaction:
      transaction-id-prefix:
    brokers:
    configuration:
      # all my consumer properties (ssl, sasl)
UPDATED (yml with spring cloud):
spring:
  cloud.stream:
    bindings:
      input:
        destination: test_input
        content-type: application/json
        group: test_group
      output:
        destination: test_output
        content-type: application/json
    kafka.binder:
      configuration:
        isolation.level: read_committed
        security.protocol: SASL_SSL
        sasl.mechanism: GSSAPI
        sasl.kerberos.service.name: kafka
        ssl.truststore.location: jks
        ssl.truststore.password:
        ssl.endpoint.identification.algorithm: null
      brokers: broker1:9092,broker2:9092,broker3:9092
      auto-create-topics: false
      transaction:
        transaction-id-prefix: trans-2
        producer:
          configuration:
            retries: 2000
            acks: all
            security.protocol: SASL_SSL
            sasl.mechanism: GSSAPI
            sasl.kerberos.service.name: kafka
            ssl.truststore.location: jks
            ssl.truststore.password:
            ssl.endpoint.identification.algorithm: null
UPDATED (yml with spring kafka):
spring:
  kafka:
    bootstrap-servers: broker1:9092,broker2:9092,broker3:9092
    consumer:
      properties:
        isolation.level: read_committed
        ssl.truststore.location: truststore.jks
        ssl.truststore.password:
        security.protocol: SASL_SSL
        sasl.mechanism: GSSAPI
        sasl.kerberos.service.name: kafka
    producer:
      transaction-id-prefix: trans-2
      retries: 2000
      acks: all
      properties:
        ssl.truststore.location: truststore.jks
        ssl.truststore.password:
        security.protocol: SASL_SSL
        sasl.mechanism: GSSAPI
        sasl.kerberos.service.name: kafka
    admin:
      properties:
        ssl.truststore.location: truststore.jks
        ssl.truststore.password:
        security.protocol: SASL_SSL
        sasl.mechanism: GSSAPI
        sasl.kerberos.service.name: kafka
UPDATED with dynamic destinations:
Caused by: java.lang.IllegalStateException: Cannot perform operation after producer has been closed
at org.apache.kafka.clients.producer.KafkaProducer.throwIfProducerClosed(KafkaProducer.java:810) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:819) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:803) ~[kafka-clients-2.0.0.jar:na]
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:423) ~[spring-kafka-2.2.0.RELEASE.jar:2.2.0.RELEASE]
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:351) ~[spring-kafka-2.2.0.RELEASE.jar:2.2.0.RELEASE]
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:209) ~[spring-kafka-2.2.0.RELEASE.jar:2.2.0.RELEASE]
at org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler.handleRequestMessage(KafkaProducerMessageHandler.java:382) ~[spring-integration-kafka-3.1.0.RELEASE.jar:3.1.0.RELEASE]
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:123) [spring-integration-core-5.1.0.RELEASE.jar:5.1.0.RELEASE]
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:169) [spring-integration-core-5.1.0.RELEASE.jar:5.1.0.RELEASE]
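For context, the failing sends here go through dynamic destination resolution; one common way to do that in this generation of Spring Cloud Stream is via BinderAwareChannelResolver, roughly as in the sketch below (the DynamicSender class and destination names are hypothetical, not from the original post):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

@Component
public class DynamicSender {

    // Resolves destinations by name at runtime, creating the binding on first use
    @Autowired
    private BinderAwareChannelResolver resolver;

    public void send(String destination, String payload) {
        // This send is the kind of operation that fails with
        // "Cannot perform operation after producer has been closed" in the trace above
        resolver.resolveDestination(destination)
                .send(MessageBuilder.withPayload(payload).build());
    }

}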
Tried both approaches for the dynamic destination resolver issue:
dynamic destination resolver
It works fine for me; these are all in the same app, but that shouldn't make a difference...
import java.time.Duration;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.handler.LoggingHandler.Level;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.handler.annotation.SendTo;

@SpringBootApplication
@EnableBinding(Channels.class)
public class So55419549Application {

    public static void main(String[] args) {
        SpringApplication.run(So55419549Application.class, args);
    }

    @Bean
    public IntegrationFlow service1(MessageChannel out1) {
        return IntegrationFlows.from(() -> "foo", e -> e
                    .poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
                .log(Level.INFO, m -> "s1 " + m.getPayload())
                .channel(out1)
                .get();
    }

    @StreamListener("in2")
    @SendTo("out2")
    public String service2(String in) {
        System.out.println("s2 " + in);
        return in.toUpperCase();
    }

    @StreamListener("in3")
    public void service3(String in) {
        System.out.println("s3 " + in);
    }

}
interface Channels {

    @Output
    MessageChannel out1();

    @Input
    MessageChannel in2();

    @Output
    MessageChannel out2();

    @Input
    MessageChannel in3();

}
and
spring:
  cloud:
    stream:
      bindings:
        out1:
          destination: topic1
        in2:
          group: s2
          destination: topic1
        out2:
          destination: topic2
        in3:
          group: s3
          destination: topic2
      kafka:
        binder:
          transaction:
            transaction-id-prefix: tx
        bindings:
          in2:
            consumer:
              configuration:
                isolation:
                  level: read_committed
          in3:
            consumer:
              configuration:
                isolation:
                  level: read_committed
  kafka:
    producer:
      # needed again here so boot declares a TM for us
      transaction-id-prefix: tx
      retries: 10
      acks: all
logging:
  level:
    org.springframework.kafka.transaction: debug
and
2019-03-29 12:57:08.345  INFO 75700 --- [ask-scheduler-1] o.s.integration.handler.LoggingHandler   : s1 foo
2019-03-29 12:57:08.353 DEBUG 75700 --- [container-0-C-1] o.s.k.t.KafkaTransactionManager : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2019-03-29 12:57:08.353 DEBUG 75700 --- [container-0-C-1] o.s.k.t.KafkaTransactionManager : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@6790c874, txId=txs2.topic1.0]]
s2 foo
2019-03-29 12:57:08.357 DEBUG 75700 --- [container-0-C-1] o.s.k.t.KafkaTransactionManager : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2019-03-29 12:57:08.358 DEBUG 75700 --- [container-0-C-1] o.s.k.t.KafkaTransactionManager : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@820ef3d, txId=txs3.topic2.0]]
s3 FOO
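As an aside, the spring.kafka.producer.transaction-id-prefix entry (the one with the "needed again here so boot declares a TM for us" comment) makes Spring Boot auto-configure a KafkaTransactionManager. A rough hand-written equivalent, as a sketch rather than Boot's exact wiring, would be:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.transaction.KafkaTransactionManager;

@Configuration
public class TxConfig {

    // Boot declares an equivalent bean when spring.kafka.producer.transaction-id-prefix
    // is set; the producer factory must itself be transactional for this to work
    @Bean
    public KafkaTransactionManager<Object, Object> kafkaTransactionManager(
            ProducerFactory<Object, Object> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }

}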
EDIT

The binder does not enable transaction synchronization on the transaction manager. As a workaround, add

TransactionSynchronizationManager.setActualTransactionActive(true);

to your @StreamListener.

I opened a bug against the binder.
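Applied to the service2 listener from the example above, the workaround would look something like this (a minimal sketch; TransactionSynchronizationManager comes from org.springframework.transaction.support):

@StreamListener("in2")
@SendTo("out2")
public String service2(String in) {
    // Workaround: mark the thread-bound transaction as active so that
    // transaction synchronization is applied to this listener's work
    TransactionSynchronizationManager.setActualTransactionActive(true);
    System.out.println("s2 " + in);
    return in.toUpperCase();
}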