Heap size of AWS ElasticSearch cluster

I have an AWS ElasticSearch t2.medium instance running with 2 nodes, with barely any load on it. Still, it keeps crashing all the time.

For the JVMMemoryPressure metric I see the graph below:

When I go to Kibana, I see the following error message:

Questions:

  1. Am I reading this correctly, that the machine only has 64 MB of memory available, rather than the 4 GB that should come with this instance type? Is there somewhere else to verify the absolute amount of heap memory, other than in Kibana only when it errors out?
  2. If so, how can I change this behaviour?
  3. If this is normal, where can I find possible causes of ElasticSearch crashing when memory usage hits 100%? There is very little load on my instance.

In the instance's logs I see a lot of warnings like the ones below. They don't give me any clue as to where to start debugging the issue.

[2018-08-15T07:36:37,021][WARN ][r.suppressed ] path: __PATH__ params:
{}

org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [__PATH__ master];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.handleBlockExceptions(TransportBulkAction.java:387) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.doRun(TransportBulkAction.java:273) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.action.bulk.TransportBulkAction$BulkOperation.onTimeout(TransportBulkAction.java:421) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:317) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:244) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:578) [elasticsearch-6.0.1.jar:6.0.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-6.0.1.jar:6.0.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_172]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]

[2018-08-15T07:36:37,691][WARN ][o.e.d.z.ZenDiscovery ] [U1DMgyE] not enough master nodes discovered during pinging (found [[Candidate{node={U1DMgyE}{U1DMgyE1Rn2gId2aRgRDtw}{F-tqTFGDRZaovQF8ILC44w}{__IP__}{__IP__}{__AMAZON_INTERNAL__, __AMAZON_INTERNAL__}, clusterStateVersion=207939}]], but needed [2]), pinging again

[2018-08-15T07:36:42,303][WARN ][o.e.t.n.Netty4Transport ] [U1DMgyE] write and flush on the network layer failed (channel: [id: 0x385d3b63, __PATH__ ! __PATH__])
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method) ~[?:1.8.0_172]
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51) ~[?:1.8.0_172]
at sun.nio.ch.IOUtil.write(IOUtil.java:148) ~[?:1.8.0_172]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504) ~[?:1.8.0_172]
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:432) ~[netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:856) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.forceFlush(AbstractNioChannel.java:368) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:638) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]

I have learned that this number is not correct; I don't know where it comes from. To get the correct memory usage, run the following query:

GET "<es_url>:9200/_nodes/stats"

If you are only after memory usage, use GET "<es_url>:9200/_cat/nodes?h=heap*" - it gives a more readable response, like the one below.

{
  "payload": [
    {
      "heap.current": "4.1gb",
      "heap.max": "15.9gb",
      "heap.percent": "25"
    },
    {
      "heap.current": "3.9gb",
      "heap.max": "15.9gb",
      "heap.percent": "24"
    },
    ...
  ]
}
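A rough sketch of fetching the same heap columns programmatically, again assuming Python's requests and a placeholder endpoint: format=json is a standard _cat option that makes the API return a plain JSON array (the payload wrapper shown above presumably comes from whatever client produced that response).

import requests

ES_URL = "https://<es_url>"  # placeholder: your AWS ES domain endpoint

# h= selects the heap columns; format=json returns JSON instead of plain text
resp = requests.get(ES_URL + "/_cat/nodes",
                    params={"h": "heap.current,heap.max,heap.percent",
                            "format": "json"})
resp.raise_for_status()

for row in resp.json():
    print(row["heap.current"], "/", row["heap.max"],
          "(" + row["heap.percent"] + "%)")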
That said, _nodes/stats also spells out all the other details.