Logstash: extract and move nested fields into a new parent field
If my logs print the latitude and longitude of a given point, how can I capture this information so that it can be handled as geospatial data in Elasticsearch?
Below is an example of the Elasticsearch document that corresponds to one log line:
{
"_index": "memo-logstash-2018.05",
"_type": "doc",
"_id": "DDCARGMBfvaBflicTW4-",
"_version": 1,
"_score": null,
"_source": {
"type": "elktest",
"message": "LON: 12.5, LAT: 42",
"@timestamp": "2018-05-09T10:44:09.046Z",
"host": "f6f9fd66cd6c",
"path": "/usr/share/logstash/logs/docker-elk-master.log",
"@version": "1"
},
"fields": {
"@timestamp": [
"2018-05-09T10:44:09.046Z"
]
},
"highlight": {
"type": [
"@kibana-highlighted-field@elktest@/kibana-highlighted-field@"
]
},
"sort": [
1525862649046
]
}
You can first split the LON and LAT values into fields of their own with a grok filter, like so:
grok {
match => {"message" => "LON: %{NUMBER:LON}, LAT: %{NUMBER:LAT}"}
}
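For the sample message above ("LON: 12.5, LAT: 42"), this pattern should produce two new top-level fields on the event, roughly:

"LON" => "12.5"
"LAT" => "42"

Note that %{NUMBER:...} captures the values as strings; if you need numeric values you can add a type suffix (e.g. %{NUMBER:LON:float}) or convert them later with mutate.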
Once they are separate fields, you can use a mutate filter to nest them under a common parent field, like this:
filter {
mutate {
rename => { "LON" => "[location][LON]" }
rename => { "LAT" => "[location][LAT]" }
}
}
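After the rename, the event should contain a nested object along the lines of:

"location" => {
  "LON" => "12.5",
  "LAT" => "42"
}

If the end goal is to query these values as geospatial data, one extra step is usually needed; the following goes beyond the rename above, so treat it as a sketch. Elasticsearch's geo_point type expects the sub-fields to be named lat and lon (lowercase) and the location field to be mapped as geo_point in the index. A possible variant of the filter, using the lowercase names and converting the values to floats:

filter {
  grok {
    match => { "message" => "LON: %{NUMBER:LON}, LAT: %{NUMBER:LAT}" }
  }
  mutate {
    # lowercase sub-field names so the object matches geo_point's expected shape
    rename  => {
      "LON" => "[location][lon]"
      "LAT" => "[location][lat]"
    }
    # geo_point values should be numeric, not strings
    convert => {
      "[location][lon]" => "float"
      "[location][lat]" => "float"
    }
  }
}

and a matching index template (the template name and index pattern below are hypothetical, and the exact mapping syntax depends on your Elasticsearch version; this is the 6.x style that matches the sample document, which uses the doc type):

PUT _template/memo-logstash
{
  "index_patterns": ["memo-logstash-*"],
  "mappings": {
    "doc": {
      "properties": {
        "location": { "type": "geo_point" }
      }
    }
  }
}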
Let me know if this helps.