What does a typical ElasticSearch/Logstash/Kibana deployment model look like?
As a newcomer to Docker and Elasticsearch, I'm trying to design a deployment model for one of my projects that runs Elasticsearch in containers.
I have several application servers, each producing some logs, and I want to collect all of these logs in one place. Here is my idea.
Every application server runs Filebeat to push its log data to a Logstash server (in a Docker image). That Logstash server then forwards the logs to an Elasticsearch Docker image that also has Kibana.
Does this make sense? Is it OK to have Logstash in one image and Elasticsearch/Kibana in another? What are the pros and cons of this approach? What are some alternative ways to structure this?
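To make the Filebeat part of this idea concrete, a minimal `filebeat.yml` on each application server might look like the sketch below. The log path and the Logstash host/port are placeholders, not values from the question:

```yaml
# Hypothetical filebeat.yml sketch: ship local log files to a central
# Logstash instance over the Beats protocol. Paths and hostnames are
# assumptions; replace them with your own.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log              # assumed application log location

output.logstash:
  hosts: ["logstash.example.com:5044"]    # assumed Logstash endpoint (default Beats input port)
```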
Docker's philosophy is that one container should do one thing, and one thing only. So I would go with one Docker image for Elasticsearch, one for Kibana, and one for Logstash, and tie them together with Docker Compose (a minimal compose sketch follows the quoted guidelines below).
Each container should have only one concern
Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers. For instance, a web application stack might consist of three separate containers, each with its own unique image, to manage the web application, database, and an in-memory cache in a decoupled manner.
You may have heard that there should be “one process per container”. While this mantra has good intentions, it is not necessarily true that there should be only one operating system process per container. In addition to the fact that containers can now be spawned with an init process, some programs might spawn additional processes of their own accord. For instance, Celery can spawn multiple worker processes, or Apache might create a process per request. While “one process per container” is frequently a good rule of thumb, it is not a hard and fast rule. Use your best judgment to keep containers as clean and modular as possible.
If containers depend on each other, you can use Docker container networks to ensure that these containers can communicate.
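As a rough illustration, a `docker-compose.yml` along these lines could wire the three containers together. The image tags, port mappings, and single-node Elasticsearch settings are assumptions for a small development setup, not a production recipe:

```yaml
# Hypothetical docker-compose.yml sketch: one container per concern
# (Elasticsearch, Logstash, Kibana). Versions and settings are assumptions.
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node        # single-node dev setup, not for production
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data

  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    ports:
      - "5044:5044"                       # Beats input from the application servers
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline   # assumed local pipeline config
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  esdata:
```

Compose places all three services on a shared default network, so Kibana and Logstash can reach Elasticsearch by its service name (`http://elasticsearch:9200`); that is the container-to-container communication the quoted guidelines refer to.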