Why doesn't logstash produce logs?
I read the following article to get acquainted with Logstash, and I set up an ELK environment:
https://tpodolak.com/blog/tag/kibana/
input {
  file {
    path => ["C:/logs/*.log"]
    start_position => beginning
    ignore_older => 0
  }
}
filter {
  grok {
    match => { "message" => "TimeStamp=%{TIMESTAMP_ISO8601:logdate} CorrelationId=%{UUID:correlationId} Level=%{LOGLEVEL:logLevel} Message=%{GREEDYDATA:logMessage}" }
  }
  # set the event timestamp from the log
  # https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
  date {
    match => [ "logdate", "yyyy-MM-dd HH:mm:ss.SSSS" ]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout {}
}
I added the input path C:/logs/*.log in logstash.conf. I have a test.log file there that is not empty; it contains:
TimeStamp=2016-07-20 21:22:46.0079 CorrelationId=dc665fe7-9734-456a-92ba-3e1b522f5fd4 Level=INFO Message=About
TimeStamp=2016-07-20 21:22:46.0079 CorrelationId=dc665fe7-9734-456a-92ba-3e1b522f5fd4 Level=INFO Message=About
TimeStamp=2016-11-01 00:13:01.1669 CorrelationId=77530786-8e6b-45c2-bbc1-31837d911c14 Level=INFO Message=Request completed with status code: 200
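(As an aside, a quick way to verify that the grok pattern actually matches such lines is a throwaway pipeline that reads from stdin and prints the parsed event; this is only a sketch, using the standard stdin input and rubydebug codec:)
input { stdin {} }
filter {
  grok {
    match => { "message" => "TimeStamp=%{TIMESTAMP_ISO8601:logdate} CorrelationId=%{UUID:correlationId} Level=%{LOGLEVEL:logLevel} Message=%{GREEDYDATA:logMessage}" }
  }
}
output { stdout { codec => rubydebug } } # prints each event with all extracted fields
Pasting one of the lines above into the running pipeline shows either a _grokparsefailure tag (pattern is wrong) or the extracted logdate, correlationId, logLevel and logMessage fields (grok is fine).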
According to the article above, I should be able to see my logs in Elasticsearch.
(sample result from https://tpodolak.com/blog/tag/kibana/)
But when I open http://localhost:9200/_cat/indices?v in my browser, I don't see any Logstash logs in Elasticsearch. Where does Elasticsearch store the logs that Logstash sends? logstash.conf looks fine to me, yet there are no results. I want to ship all logs under C:/logs/*.log to Elasticsearch via Logstash, so what is wrong with my logstash.conf?
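(For reference, the elasticsearch output writes by default into daily indices named logstash-YYYY.MM.dd, so a quick check from the command line, assuming a default local Elasticsearch on port 9200, is:)
curl "http://localhost:9200/_cat/indices?v"                   # lists all indices; look for names starting with logstash-
curl "http://localhost:9200/logstash-*/_search?pretty&size=1" # shows one indexed event, if any exist
If no logstash-* index appears at all, the events never left Logstash, which points at the input or filter stage rather than at Elasticsearch.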
My Logstash logs (C:\monitoring\logstash\logs.log):
[2017-03-13T10:47:17,849][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[2017-03-13T11:46:35,123][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[2017-03-13T11:48:20,023][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[2017-03-13T11:55:10,808][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-03-13T11:55:10,871][INFO ][logstash.pipeline ] Pipeline main started
[2017-03-13T11:55:11,316][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-03-13T12:00:52,188][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[2017-03-13T12:02:48,309][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[2017-03-13T12:06:33,270][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"Expected one of #, => at line 1, column 52 (byte 52) after output { elasticsearch { hosts "}
[2017-03-13T12:08:51,636][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"Expected one of #, => at line 1, column 22 (byte 22) after input { file { path "}
[2017-03-13T12:09:48,114][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"Expected one of #, => at line 1, column 22 (byte 22) after input { file { path "}
[2017-03-13T12:11:40,200][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"Expected one of #, => at line 1, column 22 (byte 22) after input { file { path "}
[2017-03-13T12:19:17,622][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
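Note the repeated "Using config.test_and_exit mode ... Exiting Logstash" lines: they suggest Logstash was mostly started with the -t (--config.test_and_exit) flag, which only validates the configuration and exits without reading any input; only the 11:55 run actually started a pipeline. The ERROR lines come from earlier, syntactically invalid versions of the config. To actually process logs, Logstash has to be started without -t (a sketch using the standard CLI; the exact paths are assumptions based on the install location above):
bin\logstash.bat -f C:\monitoring\logstash\logstash.conf -t   # validate the config only, then exit
bin\logstash.bat -f C:\monitoring\logstash\logstash.conf      # run the pipeline and ship the logs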
First of all, you have a couple of configuration problems:
- The hosts option of the elasticsearch output should be an array (e.g. hosts => ["myHost:myPort"]), see the doc
- On Windows, file paths that use wildcards must use forward slashes instead of backslashes (see this issue)
- Your date filter looks for the field "logdate", whereas it should look at the field "TimeStamp" (given your log file)
- For convenience, I also set sincedb_path, because Logstash will not try to parse again a file it has already parsed (by default it records how far it has read in a .sincedb file under $HOME/.sincedb; when you test with the same log file over and over, you need to delete that file between runs); see the sketch right after this list
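A minimal sketch of that workaround: on Windows, pointing sincedb_path at the NUL device disables position tracking entirely, so files are re-read from the start on every restart, which is convenient for testing only:
file {
  path => "C:/logs/*.log"
  start_position => beginning
  sincedb_path => "NUL" # Windows null device: the read position is never persisted
}
With a regular value such as "NIL", as in the configuration below, a NIL.sincedb file appears in the current directory instead and can simply be deleted between test runs.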
That's why, after some research (quite a lot actually, as I'm not a Windows user), I was able to come up with this working configuration:
input {
  file {
    path => "C:/some/log/dir/*"
    start_position => beginning
    ignore_older => 0
    sincedb_path => "NIL" # easier to remove from the current directory; the file will be NIL.sincedb
  }
}
filter {
  grok {
    # capture the timestamp into the TimeStamp field, which the date filter below expects
    match => { "message" => "TimeStamp=%{TIMESTAMP_ISO8601:TimeStamp} CorrelationId=%{UUID:correlationId} Level=%{LOGLEVEL:logLevel} Message=%{GREEDYDATA:logMessage}" }
  }
  # set the event timestamp from the log
  # https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
  date {
    match => [ "TimeStamp", "yyyy-MM-dd HH:mm:ss.SSSS" ] # four fractional digits, matching e.g. 21:22:46.0079
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {}
}
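For easier verification, the bare stdout {} can be replaced with the standard rubydebug codec, which prints every event with all its fields:
stdout { codec => rubydebug } # shows each event as it is indexed, so grok and date parsing can be checked at a glance
And remember to delete NIL.sincedb from the working directory between test runs, otherwise the file input will skip files it has already read.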