Elasticsearch 7: "failed to put mappings on indices" for type [events] and "Rejecting mapping update"

We have a new cluster running Elasticsearch 7.3.2. We use rsyslog to ship client logs to the Elasticsearch nodes.

I updated the logstash template from version 6 to version 7 by removing the `events` type mappings, since mapping types are deprecated in version 7.
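For reference, a minimal sketch of what the typeless 7.x template body looks like. The template name, fields, and host below are illustrative assumptions, not the actual template used here:

```shell
# Hypothetical sketch of a typeless ES 7 index template (names/fields illustrative).
# In 6.x, mappings were nested under a type name:
#   "mappings": { "events": { "properties": { ... } } }
# In 7.x the type level is removed entirely:
TEMPLATE='{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "severity":   { "type": "keyword" }
    }
  }
}'
# Push it to the cluster (adjust host/port for your environment):
# curl -s -X PUT 'http://localhost:9200/_template/logstash' \
#   -H 'Content-Type: application/json' -d "$TEMPLATE"
echo "$TEMPLATE"
```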

Here is the error I am running into:

[2020-03-17T09:13:08,861][DEBUG][o.e.c.s.MasterService    ] [prod-apm-elasticsearch103.example.com] processing [put-mapping[events]]: took [2ms] no change in cluster state
[2020-03-17T09:13:08,964][DEBUG][o.e.c.s.MasterService    ] [prod-apm-elasticsearch103.example.com] processing [put-mapping[events]]: execute
[2020-03-17T09:13:08,967][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [prod-apm-elasticsearch103.example.com] failed to put mappings on indices [[[logstash-2020.03.17/7_uGNP-iSCOxGczC2_xvfA]]], type [events]
java.lang.IllegalArgumentException: Rejecting mapping update to [logstash-2020.03.17] as the final mapping would have more than 1 type: [_doc, events]
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:272) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:238) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310) ~[elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.3.2.jar:7.3.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.3.2.jar:7.3.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:835) [?:?]
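The exception itself points at the cause: an index created under ES 7 already has the single mapping type `_doc`, while each bulk request from omelasticsearch still names the legacy type `events`, and a 7.x index allows only one type. A sketch of the conflicting bulk action metadata (index and type names are taken from the log above; the exact wire format is an assumption):

```shell
# What the bulk action metadata effectively looks like (illustrative):
# the index already holds type "_doc", but the writer asks for "events",
# so ES 7 rejects the implied mapping update ("more than 1 type").
META='{"index":{"_index":"logstash-2020.03.17","_type":"events"}}'
echo "$META"
# The index's existing (single) type can be confirmed with:
# curl -s 'http://localhost:9200/logstash-2020.03.17/_mapping?pretty'
```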

Here is the rsyslog forwarder configuration:

#This is the same as the default template, but allows for tags longer than 32 characters.
#See http://www.rsyslog.com/sende-messages-with-tags-larger-than-32-characters/ for an explanation
template (name="LongTagForwardFormat" type="string" string="<%PRI%>%TIMEGENERATED:::date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%")

# this is for index names to be like: logstash-YYYY.MM.DD
template(name="logstash-index" type="list") {
  constant(value="logstash-")
  property(name="timereported" dateFormat="rfc3339" position.from="1" position.to="4")
  constant(value=".")
  property(name="timereported" dateFormat="rfc3339" position.from="6" position.to="7")
  constant(value=".")
  property(name="timereported" dateFormat="rfc3339" position.from="9" position.to="10")
}


action(type="mmjsonparse")

template(name="plain-syslog" type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"containerId\":\"") property(name="hostname")
      constant(value="\",\"host\":\"")        property(name="$myhostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"tag\":\"")         property(name="programname" format="json") #name of process
      constant(value="\",")
      # $!all-json begins with "{"; position.from="2" drops it, so its
      # trailing "}" closes the object opened above
      property(name="$!all-json" position.from="2")
}


# ship logs to Elasticsearch, contingent on having an applog_es_server defined
local0.* action(type="omelasticsearch"
  template="plain-syslog"
  #template="logstash"
  searchIndex="logstash-index"
  dynSearchIndex="on"
  #asyncrepl="off"
  bulkmode="on"
  queue.dequeuebatchsize="250"
  queue.type="linkedlist"
  queue.filename="syslog-elastic"
  queue.maxdiskspace="1G"
  queue.highwatermark="10000"
  queue.lowwatermark="5000"
  queue.size="2000000"
  queue.timeoutEnqueue="0"
  queue.timeoutshutdown="5000"
  queue.saveonshutdown="on"
  action.resumeretrycount="1"
  server=["prod-routing101.example.com:9200"]
)

Output of `rsyslogd -v`:

rsyslogd 8.1908.0 (aka 2019.08) compiled with:
  PLATFORM:                                x86_64-pc-linux-gnu
  PLATFORM (lsb_release -d):
  FEATURE_REGEXP:                          Yes
  GSSAPI Kerberos 5 support:               No
  FEATURE_DEBUG (debug build, slow code):  No
  32bit Atomic operations supported:       Yes
  64bit Atomic operations supported:       Yes
  memory allocator:                        system default
  Runtime Instrumentation (slow code):     No
  uuid support:                            Yes
  systemd support:                         No
  Config file:                             /etc/rsyslog.conf
  PID file:                                /var/run/rsyslogd.pid
  Number of Bits in RainerScript integers: 64

Please help me resolve this issue.

For information about the template I am using, see

Thanks in advance.

This started working after I added searchType="_doc" to the rsyslog conf (the applog forwarder):

# ship logs to Elasticsearch, contingent on having an applog_es_server defined
local0.* action(type="omelasticsearch"
  template="plain-syslog"
  #template="logstash"
  searchIndex="logstash-index"
  dynSearchIndex="on"
  searchType="_doc"
  #asyncrepl="off"
  bulkmode="on"
  queue.dequeuebatchsize="250"
  queue.type="linkedlist"
  ...
  ...
  ...
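With searchType="_doc", the type omelasticsearch writes in each bulk action matches the one type an ES 7-created index already has, so no conflicting mapping update is attempted. A sketch of the bulk metadata after the change (index name from the log; exact wire format is an assumption):

```shell
# Bulk action metadata as sent after adding searchType="_doc" (illustrative):
# only "_doc" is ever referenced, which matches the single type of the
# ES 7 index, so the write is accepted instead of being rejected.
META='{"index":{"_index":"logstash-2020.03.17","_type":"_doc"}}'
echo "$META"
```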