In Logstash, how do I limit the depth of JSON properties in my logs that are turned into index fields in Elasticsearch?
I'm new to the Elastic Stack. I'm using Logstash 6.4.0 to load JSON log data from Filebeat 6.4.0 into Elasticsearch 6.4.0. Once I started using Kibana 6.4.0, I found that I was turning too many JSON properties into fields.
I know this because when I navigate to Kibana Discover and enter my index pattern of logstash-*, I get an error message stating:
Discover: Trying to retrieve too many docvalue_fields. Must be less than or equal to: [100] but was [106]. This limit can be set by changing the [index.max_docvalue_fields_search] index level setting.
If I navigate to Management > Kibana > Index Patterns, I see that I have 940 fields. It appears that every child property of my root JSON object (and many of those child properties have JSON objects as values, and so on) is being automatically parsed and used to create fields in my Elasticsearch logstash-* index.
So here is my question: how do I limit this automatic field creation? Is it possible to do so by property depth? Is it possible some other way?
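To see why a single nested document balloons into so many fields, here is a small illustrative sketch (not from the original post) that lists the dotted leaf paths that dynamic mapping would turn into index fields; the sample document below is a trimmed, hypothetical excerpt of the log entry shown later:

```python
import json

def leaf_fields(obj, prefix=""):
    """Recursively collect the dotted path of every leaf value in a
    nested JSON-like structure -- roughly the set of fields that
    Elasticsearch dynamic mapping would create."""
    paths = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            paths.extend(leaf_fields(value, f"{prefix}{key}."))
    else:
        paths.append(prefix.rstrip("."))
    return paths

doc = json.loads("""
{
  "level": "ERROR",
  "eventProperties": {
    "logAction": {
      "statusCode": 500,
      "request": {"itemId": 648, "storeId": 13},
      "response": {"exception": {"Message": "...", "HResult": -2146233079}}
    }
  }
}
""")

for path in leaf_fields(doc):
    print(path)
# e.g. eventProperties.logAction.request.itemId
```

Even this trimmed excerpt yields six distinct field paths; the full exception document, with its recursively nested InnerException objects, multiplies that quickly.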
Here is my Filebeat configuration (minus the comments):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - d:/clients/company-here/rpms/logs/rpmsdev/*.json
  json.keys_under_root: true
  json.add_error_key: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.logstash:
  hosts: ["localhost:5044"]
Here is my current Logstash pipeline configuration:
input {
  beats {
    port => "5044"
  }
}

filter {
  date {
    match => [ "@timestamp", "ISO8601" ]
  }
}

output {
  stdout {
    #codec => rubydebug
  }
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
Here is a sample of a single log message that I am sending (one line of my log file). Note that the JSON is completely dynamic and can change depending on what is being logged:
{
"@timestamp": "2018-09-06T14:29:32.128",
"level": "ERROR",
"logger": "RPMS.WebAPI.Filters.LogExceptionAttribute",
"message": "Log Exception: RPMS.WebAPI.Entities.LogAction",
"eventProperties": {
"logAction": {
"logActionId": 26268916,
"performedByUserId": "b36778be-6181-4b69-a0fe-e3a975ddcdd7",
"performedByUserName": "test.sga.danny@domain.net",
"performedByFullName": "Mike Manley",
"controller": "RpmsToMainframeOperations",
"action": "UpdateStoreItemPricing",
"actionDescription": "Exception while updating store item pricing for store item with storeItemId: 146926. An error occurred while sending the request. InnerException: Unable to connect to the remote server InnerException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.1.1.133:8800",
"url": "http://localhost:49399/api/RpmsToMainframeOperations/UpdateStoreItemPricing/146926",
"verb": "PUT",
"statusCode": 500,
"status": "Internal Server Error - Exception",
"request": {
"itemId": 648,
"storeId": 13,
"storeItemId": 146926,
"changeType": "price",
"book": "C",
"srpCode": "",
"multi": 0,
"price": "1.27",
"percent": 40,
"keepPercent": false,
"keepSrp": false
},
"response": {
"exception": {
"ClassName": "System.Net.Http.HttpRequestException",
"Message": "An error occurred while sending the request.",
"Data": null,
"InnerException": {
"ClassName": "System.Net.WebException",
"Message": "Unable to connect to the remote server",
"Data": null,
"InnerException": {
"NativeErrorCode": 10060,
"ClassName": "System.Net.Sockets.SocketException",
"Message": "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond",
"Data": null,
"InnerException": null,
"HelpURL": null,
"StackTraceString": " at System.Net.Sockets.Socket.InternalEndConnect(IAsyncResult asyncResult)\r\n at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)\r\n at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)",
"RemoteStackTraceString": null,
"RemoteStackIndex": 0,
"ExceptionMethod": "8\nInternalEndConnect\nSystem, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\nSystem.Net.Sockets.Socket\nVoid InternalEndConnect(System.IAsyncResult)",
"HResult": -2147467259,
"Source": "System",
"WatsonBuckets": null
},
"HelpURL": null,
"StackTraceString": " at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult, TransportContext& context)\r\n at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)",
"RemoteStackTraceString": null,
"RemoteStackIndex": 0,
"ExceptionMethod": "8\nEndGetRequestStream\nSystem, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\nSystem.Net.HttpWebRequest\nSystem.IO.Stream EndGetRequestStream(System.IAsyncResult, System.Net.TransportContext ByRef)",
"HResult": -2146233079,
"Source": "System",
"WatsonBuckets": null
},
"HelpURL": null,
"StackTraceString": " at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\r\n at RPMS.WebAPI.Infrastructure.RpmsToMainframe.RpmsToMainframeOperationsManager.<PerformOperationInternalAsync>d__14.MoveNext() in D:\Century\Clients\PigglyWiggly\RPMS\PWADC.RPMS\RPMSDEV\RPMS.WebAPI\Infrastructure\RpmsToMainframe\RpmsToMainframeOperationsManager.cs:line 114\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\r\n at RPMS.WebAPI.Infrastructure.RpmsToMainframe.RpmsToMainframeOperationsManager.<PerformOperationAsync>d__13.MoveNext() in D:\Century\Clients\PigglyWiggly\RPMS\PWADC.RPMS\RPMSDEV\RPMS.WebAPI\Infrastructure\RpmsToMainframe\RpmsToMainframeOperationsManager.cs:line 96\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()\r\n at RPMS.WebAPI.Controllers.RpmsToMainframe.RpmsToMainframeOperationsController.<UpdateStoreItemPricing>d__43.MoveNext() in D:\Century\Clients\PigglyWiggly\RPMS\PWADC.RPMS\RPMSDEV\RPMS.WebAPI\Controllers\RpmsToMainframe\RpmsToMainframeOperationsController.cs:line 537\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task 
task)\r\n at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at 
System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__3.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.AuthenticationFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.ExceptionFilterResult.<ExecuteAsync>d__6.MoveNext()",
"RemoteStackTraceString": null,
"RemoteStackIndex": 0,
"ExceptionMethod": "8\nThrowForNonSuccess\nmscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\nSystem.Runtime.CompilerServices.TaskAwaiter\nVoid ThrowForNonSuccess(System.Threading.Tasks.Task)",
"HResult": -2146233088,
"Source": "mscorlib",
"WatsonBuckets": null,
"SafeSerializationManager": {
"m_serializedStates": [{
}]
},
"CLR_SafeSerializationManager_RealType": "System.Net.Http.HttpRequestException, System.Net.Http, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
}
},
"performedAt": "2018-09-06T14:29:32.1195316-05:00"
}
},
"logAction": "RPMS.WebAPI.Entities.LogAction"
}
I never did find a way to limit the depth of automatic field creation. I also posted my question in the Elastic forums but never received an answer. I have, however, learned more about Logstash since that post.
My eventual solution was to extract the JSON properties that I needed as fields, and then use the GREEDYDATA pattern in a grok filter to dump the remaining properties into a single unextractedJson field, so that I can still query the values inside that field in Elasticsearch.
Here is my new Filebeat configuration (minus the comments):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - d:/clients/company-here/rpms/logs/rpmsdev/*.json
  #json.keys_under_root: true
  json.add_error_key: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.logstash:
  hosts: ["localhost:5044"]
Note that I commented out the json.keys_under_root setting; with it disabled, Filebeat places each JSON-formatted log entry under a json field in the event it sends to Logstash.
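With keys_under_root disabled, the event that reaches Logstash looks roughly like this (a hypothetical sketch; the exact Beats metadata fields vary by version):

```json
{
  "@timestamp": "2018-09-13T18:36:45.376Z",
  "source": "d:/clients/company-here/rpms/logs/rpmsdev/actionsCurrent.json",
  "json": {
    "time": "2018-09-13T13:36:45.376",
    "level": "DEBUG",
    "logger": "RPMS.WebAPI.Filters.LogActionAttribute",
    "eventProperties": { "logAction": { "statusCode": 200 } }
  }
}
```

This is why the filter configuration addresses the log's properties with nested field references of the form [json][...].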
Here is a snippet of my new Logstash pipeline configuration:
#...
filter {

  ###########################################################################
  # common date time extraction
  date {
    match => ["[json][time]", "ISO8601"]
    remove_field => ["[json][time]"]
  }

  ###########################################################################
  # configuration for the actions log
  if [source] =~ /actionsCurrent.json/ {

    if ("" in [json][eventProperties][logAction][performedByUserName]) {
      mutate {
        add_field => {
          "performedByUserName" => "%{[json][eventProperties][logAction][performedByUserName]}"
          "performedByFullName" => "%{[json][eventProperties][logAction][performedByFullName]}"
        }
        remove_field => [
          "[json][eventProperties][logAction][performedByUserName]",
          "[json][eventProperties][logAction][performedByFullName]"]
      }
    }

    mutate {
      add_field => {
        "logFile" => "actions"
        "logger" => "%{[json][logger]}"
        "level" => "%{[json][level]}"
        "performedAt" => "%{[json][eventProperties][logAction][performedAt]}"
        "verb" => "%{[json][eventProperties][logAction][verb]}"
        "url" => "%{[json][eventProperties][logAction][url]}"
        "controller" => "%{[json][eventProperties][logAction][controller]}"
        "action" => "%{[json][eventProperties][logAction][action]}"
        "actionDescription" => "%{[json][eventProperties][logAction][actionDescription]}"
        "statusCode" => "%{[json][eventProperties][logAction][statusCode]}"
        "status" => "%{[json][eventProperties][logAction][status]}"
      }
      remove_field => [
        "[json][logger]",
        "[json][level]",
        "[json][eventProperties][logAction][performedAt]",
        "[json][eventProperties][logAction][verb]",
        "[json][eventProperties][logAction][url]",
        "[json][eventProperties][logAction][controller]",
        "[json][eventProperties][logAction][action]",
        "[json][eventProperties][logAction][actionDescription]",
        "[json][eventProperties][logAction][statusCode]",
        "[json][eventProperties][logAction][status]",
        "[json][logAction]",
        "[json][message]"
      ]
    }

    mutate {
      convert => {
        "statusCode" => "integer"
      }
    }

    grok {
      match => { "json" => "%{GREEDYDATA:unextractedJson}" }
      remove_field => ["json"]
    }
  }
# ...
Note that the add_field configuration options in the mutate commands extract properties into named fields, and the remove_field options then remove those properties from the JSON. At the end of the filter snippet, notice that the grok command gobbles up the rest of the JSON and places it in an unextractedJson field. Finally, and most importantly, I remove the json field that Filebeat provided. That last bit keeps me from exposing all of the JSON data to Elasticsearch/Kibana as index fields.
This solution takes log entries that look like this:
{ "time": "2018-09-13T13:36:45.376", "level": "DEBUG", "logger": "RPMS.WebAPI.Filters.LogActionAttribute", "message": "Log Action: RPMS.WebAPI.Entities.LogAction", "eventProperties": {"logAction": {"logActionId":26270372,"performedByUserId":"83fa1d72-fac2-4184-867e-8c2935a262e6","performedByUserName":"rpmsadmin@domain.net","performedByFullName":"Super Admin","clientIpAddress":"::1","controller":"Account","action":"Logout","actionDescription":"Logout.","url":"http://localhost:49399/api/Account/Logout","verb":"POST","statusCode":200,"status":"OK","request":null,"response":null,"performedAt":"2018-09-13T13:36:45.3707739-05:00"}}, "logAction": "RPMS.WebAPI.Entities.LogAction" }
and turns them into Elasticsearch documents that look like this:
{
"_index": "actions-2018.09.13",
"_type": "doc",
"_id": "xvA41GUBIzzhuC5epTZG",
"_version": 1,
"_score": null,
"_source": {
"level": "DEBUG",
"tags": [
"beats_input_raw_event"
],
"@timestamp": "2018-09-13T18:36:45.376Z",
"status": "OK",
"unextractedJson": "{\"eventProperties\"=>{\"logAction\"=>{\"performedByUserId\"=>\"83fa1d72-fac2-4184-867e-8c2935a262e6\", \"logActionId\"=>26270372, \"clientIpAddress\"=>\"::1\"}}}",
"action": "Logout",
"source": "d:\path\actionsCurrent.json",
"actionDescription": "Logout.",
"offset": 136120,
"@version": "1",
"verb": "POST",
"statusCode": 200,
"controller": "Account",
"performedByFullName": "Super Admin",
"logger": "RPMS.WebAPI.Filters.LogActionAttribute",
"input": {
"type": "log"
},
"url": "http://localhost:49399/api/Account/Logout",
"logFile": "actions",
"host": {
"name": "Development5"
},
"prospector": {
"type": "log"
},
"performedAt": "2018-09-13T13:36:45.3707739-05:00",
"beat": {
"name": "Development5",
"hostname": "Development5",
"version": "6.4.0"
},
"performedByUserName": "rpmsadmin@domain.net"
},
"fields": {
"@timestamp": [
"2018-09-13T18:36:45.376Z"
],
"performedAt": [
"2018-09-13T18:36:45.370Z"
]
},
"sort": [
1536863805376
]
}
A depth limit can be set per index directly in Elasticsearch.
Elasticsearch field mapping documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html#mapping-limit-settings
From the docs:
index.mapping.depth.limit
The maximum depth for a field, which is measured as the number of inner objects. For instance, if all fields are defined at the root object level, then the depth is 1. If there is one object mapping, then the depth is 2, etc. Default is 20.
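As a sketch (adjust the index name and values to your own setup), this setting, along with the index.max_docvalue_fields_search setting from the error message in the question, can be applied by sending a body like the following with PUT to the index's _settings endpoint:

```json
{
  "index.mapping.depth.limit": 2,
  "index.max_docvalue_fields_search": 200
}
```

Keep in mind that a document whose mapping would exceed index.mapping.depth.limit is rejected at index time with an error rather than silently flattened, so this caps the mapping but does not trim the documents themselves.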
Related answer: Limiting the nested fields in Elasticsearch