Configure Rsyslog (Docker->TCP->Rsyslog->ElasticSearch)

I'm new to rsyslog, remote logging, and Elasticsearch.

I have set up a Python script (running in a Docker container) that sends log records to $HOST:$PORT over TCP.
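For reference, a minimal sketch of such a sender (the hostname, level, and module names here are placeholders; the record layout follows the sample record `[Apr 25 12:00]$CONTAINER_HOSTNAME:INFO:Package.Module.Sub-Module:Hello World` used below):

```python
import socket
from datetime import datetime

def format_record(hostname, level, module, message):
    # Matches the layout: [Apr 25 12:00]myhost:INFO:Package.Module:Hello World
    stamp = datetime.now().strftime("%b %d %H:%M")
    return f"[{stamp}]{hostname}:{level}:{module}:{message}\n"

def send_record(host, port, record):
    # One short-lived TCP connection per record; a long-running
    # script would keep the connection open instead.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(record.encode("utf-8"))

record = format_record("myhost", "INFO", "Package.Module.Sub-Module", "Hello World")
# send_record("127.0.0.1", 514, record)  # $HOST, $PORT
```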

I have installed rsyslog together with the mmnormalize and omelasticsearch modules.

Now I'd like to understand what my rsyslog.conf (on the host) should look like in order to collect the logs (coming from 172.17.0.0/16) into Elasticsearch.

Thanks!

Here is how I solved the problem:

# /etc/rsyslog.d/docker.rb
version=2
# My sample record
# [Apr 25 12:00]$CONTAINER_HOSTNAME:INFO:Package.Module.Sub-Module:Hello World
#
# Here there is the rule to parse the log records into trees
rule=:[%date:char-to:]%]%hostname:char-to::%:%level:char-to::%:%file:char-to::%:%message:rest%
#
# alternative to set date field in rfc3339 format
# rule=:[%date:date-rfc3339%]%hostname:char-to::%:%level:char-to::%:%file:char-to::%:%message:rest%
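The rule splits the record at the `]` and `:` delimiters. An equivalent parse, sketched here as a Python regex just to show which fields come out of the sample record:

```python
import re

# Equivalent of the rulebase's char-to fields: match everything
# up to the next delimiter, then %message:rest% takes the remainder.
PATTERN = re.compile(
    r"\[(?P<date>[^\]]+)\]"   # %date:char-to:]%
    r"(?P<hostname>[^:]+):"   # %hostname:char-to::%
    r"(?P<level>[^:]+):"      # %level:char-to::%
    r"(?P<file>[^:]+):"       # %file:char-to::%
    r"(?P<message>.*)"        # %message:rest%
)

sample = "[Apr 25 12:00]myhost:INFO:Package.Module.Sub-Module:Hello World"
tree = PATTERN.match(sample).groupdict()
# tree == {'date': 'Apr 25 12:00', 'hostname': 'myhost', 'level': 'INFO',
#          'file': 'Package.Module.Sub-Module', 'message': 'Hello World'}
```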

# /etc/rsyslog.conf
module(load="mmnormalize")
module(load="omelasticsearch")
module(load="imtcp")

# apply the specified ruleset to log records arriving on address:port
input(type="imtcp"
      address="127.0.0.1" # $HOST
      port="514"          # $PORT
      ruleset="docker-rule")

# define the ruleset with two actions: parse the log record into a tree
# rooted at $! ($!child-node!grandchild-node...), then add the parsed tree
# to the elasticsearch index 'docker-logs' as JSON (shaped by a template)
ruleset(name="docker-rule"){
    action(type="mmnormalize"
           rulebase="/etc/rsyslog.d/docker.rb"
           useRawMsg="on"
           path="$!")
    action(type="omelasticsearch"
           template="docker-template"
           searchIndex="docker-logs"
           bulkmode="on"
           action.resumeretrycount="-1")
}

# define the template:
# 'constant' statements insert the JSON delimiters such as '{' or ','
# as well as the field names; 'property' statements insert the values
# from the parsed tree into the fields named by the preceding constants
# the result is a JSON record like:
# { "@timestamp":"foo",
#   "hostname":"bar",
#   "level":"foo",
#   "file":"bar",
#   "message":"foo"
# }
template(name="docker-template" type="list"){
    constant(value="{")
        constant(value="\"@timestamp\":")
            constant(value="\"")
                # timereported is used instead of the parsed '$!date'
                # because kibana would treat '$!date' as a string, not a date;
                # this is the only field not taken from the parsed tree
                property(name="timereported" dateFormat="rfc3339")
            constant(value="\"")
        constant(value=",")
        constant(value="\"hostname\":")
            constant(value="\"")
                property(name="$!hostname")
            constant(value="\"")
        constant(value=",")
        constant(value="\"level\":")
            constant(value="\"")
                property(name="$!level")
            constant(value="\"")
        constant(value=",")
        constant(value="\"file\":")
            constant(value="\"")
                property(name="$!file")
            constant(value="\"")
        constant(value=",")
        constant(value="\"message\":")
            constant(value="\"")
                property(name="$!message")
            constant(value="\"")
    constant(value="}")
}
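For clarity, the document this template emits is equivalent to the following, built here with `json.dumps` purely for illustration (the field values are placeholders):

```python
import json

# Placeholder values standing in for timereported and the parsed tree
doc = {
    "@timestamp": "2017-04-25T12:00:00+00:00",  # timereported, rfc3339
    "hostname": "myhost",                  # $!hostname
    "level": "INFO",                       # $!level
    "file": "Package.Module.Sub-Module",   # $!file
    "message": "Hello World",              # $!message
}
print(json.dumps(doc))
```

One caveat on the design: the template concatenates raw values, so a `"` inside the message would break the JSON; rsyslog's `format="json"` option on the `property` statement can escape values if that matters for your logs.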

Next you can install Kibana and "Configure an index pattern": just set "Index name or pattern" to "docker-logs" and "Time-field name" to "@timestamp".

Note that there is no check on the origin of the logs (172.17.0.0/16); every log record sent to $HOST:$PORT will be inserted into the Elasticsearch index, as long as it parses correctly.
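If you do want to restrict the origin to the Docker subnet, rsyslog's legacy $AllowedSender directive is one option; a sketch, to be checked against your rsyslog version's documentation:

# /etc/rsyslog.conf, before the imtcp input is defined
$AllowedSender TCP, 127.0.0.1, 172.17.0.0/16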