Logstash not reading from file input
I'm running this implementation of the ELK stack, which is very straightforward and easy to configure.
I can push TCP input through the stack with netcat like this:
nc localhost 5000 < /Users/me/path/to/logs/appOne.log
nc localhost 5000 < /Users/me/path/to/logs/appOneStackTrace.log
nc localhost 5000 < /Users/me/path/to/logs/appTwo.log
nc localhost 5000 < /Users/me/path/to/logs/appTwoStackTrace.log
However, I can't get Logstash to read the file paths I've specified in my config:
input {
  tcp {
    port => 5000
  }
  file {
    path => [
      "/Users/me/path/to/logs/appOne.log",
      "/Users/me/path/to/logs/appOneStackTrace.log",
      "/Users/me/path/to/logs/appTwo.log",
      "/Users/me/path/to/logs/appTwoStackTrace.log"
    ]
    type => "log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
Here is the stack's startup output concerning the Logstash inputs:
logstash_1 | [2019-01-28T17:44:33,206][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2019-01-28T17:44:34,037][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_a1605b28f1bc77daf785a8805c32f578", :path=>["/Users/me/path/to/logs/appOne.log", "/Users/me/path/to/logs/appOneStackTrace.log", "/Users/me/path/to/logs/appTwo.log", "/Users/me/path/to/logs/appTwoStackTrace.log"]}
There is no indication that the pipeline has any problems at startup.
I've also checked whether the log files have been updated since the TCP input was shown working, and they have. The last Logstash-specific logs from the ELK stack are from startup or the TCP input.
Here is my entire Logstash startup log, in case it helps:
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2019-01-29T13:32:19,391][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2019-01-29T13:32:19,415][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.5.4"}
logstash_1 | [2019-01-29T13:32:23,989][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
logstash_1 | [2019-01-29T13:32:24,648][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2019-01-29T13:32:24,908][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash_1 | [2019-01-29T13:32:25,046][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
logstash_1 | [2019-01-29T13:32:25,051][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
logstash_1 | [2019-01-29T13:32:25,108][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2019-01-29T13:32:25,229][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash_1 | [2019-01-29T13:32:25,276][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
logstash_1 | [2019-01-29T13:32:25,327][INFO ][logstash.inputs.tcp ] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
logstash_1 | [2019-01-29T13:32:25,924][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_143c07d174c46eeab78b902edb3b1289", :path=>["/Users/me/path/to/logs/appOne.log", "/Users/me/path/to/logs/appOneStackTrace.log", "/Users/me/path/to/logs/appTwo.log", "/Users/me/path/to/logs/appTwoStackTrace.log"]}
logstash_1 | [2019-01-29T13:32:25,976][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4d1515ce run>"}
logstash_1 | [2019-01-29T13:32:26,088][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1 | [2019-01-29T13:32:26,106][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2019-01-29T13:32:26,432][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I found the problem - I needed to map the log files from the host into the container (Docker newbie here). The local paths I'd specified in the Logstash config worked for TCP, but they aren't available inside the container without volume mappings.
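A quick way to confirm a path mismatch like this (assuming the Compose service is named logstash, as in a standard docker-elk setup) is to list the configured path from inside the container; before adding the volume mappings it simply doesn't exist there:
docker-compose exec logstash ls /Users/me/path/to/logs
# ls: cannot access '/Users/me/path/to/logs': No such file or directory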
First, I created the container-internal log directories in Logstash's Dockerfile:
RUN mkdir /usr/share/appOneLogs
RUN mkdir /usr/share/appTwoLogs
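For context, these lines just go at the end of the existing Logstash Dockerfile; a minimal sketch, assuming the stock docker-elk image layout (the base image line is an assumption, not taken from my setup):
ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash-oss:${ELK_VERSION}

# mount points for the host log directories
RUN mkdir /usr/share/appOneLogs
RUN mkdir /usr/share/appTwoLogs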
Then I volume-mapped the host's log directories in docker-elk/docker-compose.yml, where Logstash is configured:
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    - /Users/me/path/to/appOne/logs:/usr/share/appOneLogs # this bit
    - /Users/me/path/to/appTwo/logs:/usr/share/appTwoLogs # and this bit
  ports:
    - "5000:5000"
  ...
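After this change, rebuilding and restarting the Logstash service should make the host's log files visible inside the container at the mapped locations (commands assume you run them from the docker-elk directory):
docker-compose up -d --build logstash
docker-compose exec logstash ls /usr/share/appOneLogs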
Finally, I replaced the paths in logstash/pipelines/logstash.config with the directories created in the Dockerfile:
file {
  path => [
    "/usr/share/appOneLogs",
    "/usr/share/appTwoLogs",
  ]
}
Also note that I removed start_position => "beginning" from the file input definition, as it overrides the default behavior of treating files like live streams and thus starting at the end.
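If you do want Logstash to pick up the existing contents of the files rather than only new lines, a common pattern (shown here only as a sketch, not part of the fix above) is to keep start_position => "beginning" and point sincedb_path at /dev/null so read positions are never persisted:
file {
  path => [
    "/usr/share/appOneLogs",
    "/usr/share/appTwoLogs",
  ]
  start_position => "beginning"
  sincedb_path => "/dev/null"  # don't remember read positions; re-read files on restart
}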