Filebeat does not resolve a templated hosts parameter in the Kafka output connector
I'm using the Filebeat -> Kafka output connector, and I want to build the hosts and topic parameters from information carried in the message Filebeat is processing at the time.
To my surprise, the exact same expression resolves for the topic field but not for the hosts field. Any suggestions on how to achieve my goal?
My configuration looks like this:
kafka.yaml: |
  processors:
    - add_kubernetes_metadata:
        namespace: {{ .Release.Namespace }}
    # Drop all log lines that don't contain kubernetes.labels.entry field
    - drop_event:
        when:
          not:
            regexp:
              kubernetes.labels.entry: ".*"
  filebeat.config_dir: /conf/
  output.kafka:
    hosts: '%{[kubernetes][labels][entry]}'
    topic: '%{[kubernetes][labels][entry]}'
    required_acks: 1
    version: 0.11.0.0
    client_id: filebeat
    bulk_max_size: 100
    max_message_bytes: 20480
This is the error message I get from Filebeat:
2018/05/09 01:54:29.805431 log.go:36: INFO Failed to connect to broker [[%{[kubernetes][labels][entry]} dial tcp: address %{[kubernetes][labels][entry]}: missing port in address]]: %!s(MISSING)
I did try adding a port in the configuration above, but the error message still shows that the field is not being resolved:
2018/05/09 02:13:41.392742 log.go:36: INFO client/metadata fetching metadata for all topics from broker [[%{[kubernetes][labels][entry]}:9092]]
2018/05/09 02:13:41.392854 log.go:36: INFO Failed to connect to broker [[%{[kubernetes][labels][entry]}:9092 dial tcp: address %{[kubernetes][labels][entry]}:9092: unexpected '[' in address]]: %!s(MISSING)
I found the answer on the Elastic forums:
You cannot control hosts or files (in the case of the file output) via variables. Doing so would require Beats to manage state and connections to each different host. You can only use variables to control the destination topic, but not the broker.
So what I'm trying to do is simply not possible at the moment.
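Given that answer, the workable compromise is to keep the broker list static and use the event field only for the destination topic, which Filebeat does resolve per event. A sketch of that pattern, where the broker address `kafka-broker:9092` is a placeholder for your actual static broker:

```yaml
output.kafka:
  # hosts must be a static list; format-string variables are not resolved here
  hosts: ['kafka-broker:9092']
  # topic may reference event fields and is resolved per event
  topic: '%{[kubernetes][labels][entry]}'
  required_acks: 1
```

Routing to different broker clusters per event would instead require running separate Filebeat instances (or outputs), one per cluster.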