Why does Elasticsearch Cluster JVM Memory Pressure keep increasing?
The JVM memory pressure on my AWS Elasticsearch cluster keeps increasing. The pattern I have seen over the last 3 days is an increase of about 1.1% every hour. This is on one of the 3 master nodes I have configured.
All other metrics seem to be within normal ranges. CPU is under 10%, and there is almost no indexing or search activity.
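For reference, I am watching the metric in CloudWatch roughly like the sketch below. This is a minimal sketch only: the region, domain name, and account ID are placeholders for my setup, and I am assuming MasterJVMMemoryPressure in the AWS/ES namespace is the relevant metric for the dedicated master nodes.

```python
import datetime
import boto3

# Hypothetical region, domain name, and account ID; substitute your own.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.datetime.now(datetime.timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ES",
    MetricName="MasterJVMMemoryPressure",
    Dimensions=[
        {"Name": "DomainName", "Value": "my-es-domain"},
        {"Name": "ClientId", "Value": "123456789012"},
    ],
    StartTime=now - datetime.timedelta(days=3),
    EndTime=now,
    Period=3600,             # one datapoint per hour
    Statistics=["Maximum"],  # peak JVM memory pressure per hour
)

# Print the hourly datapoints in chronological order.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
```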
I have tried clearing the fielddata cache for all indices as mentioned in this document (roughly the call sketched below), but that did not help.
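The cache-clear call I ran looked roughly like this. It is a minimal sketch that assumes the standard Elasticsearch `_cache/clear` API on the domain endpoint; the endpoint URL is a placeholder, and it assumes the domain accepts unsigned HTTP requests (open or IP-based access policy).

```python
import requests

# Hypothetical AWS Elasticsearch domain endpoint; substitute your own.
ENDPOINT = "https://my-es-domain.us-east-1.es.amazonaws.com"

# Clear only the fielddata cache on every index in the cluster.
resp = requests.post(f"{ENDPOINT}/_cache/clear", params={"fielddata": "true"})
resp.raise_for_status()
print(resp.json())  # shows how many shards acknowledged the clear
```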
Can anyone help me understand what might be causing this?
Got this answer from AWS Support:
I checked the particular metric and can also see the JVM increasing over the last few days. However, I do not think this is an issue, as JVM is expected to increase over time. Also, garbage collection in ES runs once the JVM reaches 75% (currently it is around 69%), after which you would see a drop in the JVM metric of your cluster. If the JVM is continuously above 75% and not coming down after GCs, that is a problem and should be investigated.
The other thing you mentioned, that clearing the fielddata cache for all indices did not help in reducing JVM: that is because the dedicated master nodes do not hold any index data or their related caches. Clearing caches should help in reducing JVM on the data nodes.
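Following up on that last point, a quick way to see where the heap pressure actually sits (dedicated masters vs. data nodes) is the `_cat/nodes` API. A minimal sketch, again assuming a placeholder domain endpoint that accepts unsigned requests:

```python
import requests

# Hypothetical domain endpoint; substitute your own.
ENDPOINT = "https://my-es-domain.us-east-1.es.amazonaws.com"

# node.role distinguishes master (m) from data (d) nodes;
# heap.percent is each node's current JVM heap usage.
resp = requests.get(
    f"{ENDPOINT}/_cat/nodes",
    params={"v": "true", "h": "name,node.role,heap.percent,ram.percent"},
)
resp.raise_for_status()
print(resp.text)
```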