How can I make the Logstash TCP input separate the messages that it receives on a TCP port?

I want to send data to Logstash using the TCP protocol, and I am using Node-RED to send it.

In the Logstash folder, I created a file called nodered.conf with the following content:

input {
    tcp {
        port => "3999"
    }
}

output {
    stdout { codec => rubydebug }
}

For now, I just want to print whatever Logstash receives to the screen; that is why I use stdout { codec => rubydebug } in the output.

So, from the Logstash folder, I started Logstash with the following command:

bin/logstash -f nodered.conf --config.reload.automatic

The problem is that all the messages I send from Node-RED to Logstash get aggregated into a single message. For example, if I inject 5 messages with Node-RED into TCP port 3999, then after redeploying Node-RED I get the following in my Logstash terminal:

user@computer:/home/Dados/ELK/logstash-5.4.0$ bin/logstash -f nodered.conf --config.reload.automatic
Sending Logstash's logs to /home/Dados/ELK/logstash-5.4.0/logs which is now configured via log4j2.properties
[2017-05-29T15:14:52,388][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-29T15:14:52,417][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:3999"}
[2017-05-29T15:14:52,430][INFO ][logstash.pipeline        ] Pipeline main started
[2017-05-29T15:14:52,513][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
{
    "@timestamp" => 2017-05-29T18:19:33.277Z,
          "port" => 54316,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "hellohellohellohellohello"
}

What I would really like to see, without having to redeploy, is something like this:

user@computer:/home/Dados/ELK/logstash-5.4.0$ bin/logstash -f nodered.conf --config.reload.automatic
Sending Logstash's logs to /home/Dados/ELK/logstash-5.4.0/logs which is now configured via log4j2.properties
[2017-05-29T15:27:24,168][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-29T15:27:24,191][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:3999"}
[2017-05-29T15:27:24,200][INFO ][logstash.pipeline        ] Pipeline main started
[2017-05-29T15:27:24,260][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
{
    "@timestamp" => 2017-05-29T18:27:48.394Z,
          "port" => 54518,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "hello"
}
{
    "@timestamp" => 2017-05-29T18:27:51.657Z,
          "port" => 54546,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "hello"
}
{
    "@timestamp" => 2017-05-29T18:27:58.691Z,
          "port" => 54600,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "hello"
}
{
    "@timestamp" => 2017-05-29T18:28:06.330Z,
          "port" => 54656,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "hello"
}
{
    "@timestamp" => 2017-05-29T18:28:14.347Z,
          "port" => 54682,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "hello"
}

In short, I don't know how to make Logstash interpret each message as its own event instead of concatenating everything it receives. I have tried different codecs in my nodered.conf file, but without success. Does anyone know how to make Logstash treat each message it receives on the TCP port as a separate message?

Since Node-RED just sends a stream of bytes through its TCP node, Logstash has nothing to tell it where a record ends.

As mentioned in the comments, you can use a function node to append a newline character (\n) to the end of the string, which signals to Logstash that the record is complete:

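// Node-RED function node: terminate each payload with a newline so Logstash can split records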
msg.payload += "\n";
return msg;
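
On the Logstash side no extra configuration should be needed, since as far as I know the tcp input defaults to the line codec, which emits one event per newline-terminated line. If you want to make that explicit, the input from the question could be written like this (a minimal sketch using the same port):

input {
    tcp {
        port  => "3999"
        codec => line   # default for the tcp input; splits events on newlines
    }
}

With the newline appended in the function node, each message sent from Node-RED should then show up as its own rubydebug event, as in the desired output above.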