java.lang.OutOfMemoryError: native memory exhausted

I am running a jar file that copies data from Oracle to a target server (ElasticSearch). I am running this jar on an AIX machine:

/oradata/slscrmit/tally> oslevel -s

7100-04-03-1642

uname -a
AIX mila 1 7 00F79AB04C00

I get this error when I run the jar file. The command used to run the initial load:

java -Xms3g -Xmx3g -Xmn1g -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:MetaspaceSize=500m -XX:MaxMetaspaceSize=500m -XX:SurvivorRatio=2 -jar -Dlog4j.configurationFile=file:log4j2.xml -Dfile.encoding=UTF-8 BoltESTally-1.4.3-Ver-1.0.jar
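Before digging into the dumps, it can help to confirm which java binary the shell actually resolves and whether it is a 32-bit or 64-bit build. A quick check on AIX (standard commands, nothing specific to this setup):

    which java      # path of the JVM the command above will pick up
    java -version   # the IBM build string normally indicates 32-bit vs 64-bit (e.g. ppc vs ppc64)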

====================错误===============

JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2017/12/15 08:24:21 - please wait.
JVMDUMP032I JVM requested System dump using '/oradata/slscrmit/tally/core.20171215.082421.39781194.0001.dmp' in response to an event
Note: "Enable full CORE dump" in smit is set to FALSE and as a result there will be limited threading information in core file.
JVMDUMP010I System dump written to /oradata/slscrmit/tally/core.20171215.082421.39781194.0001.dmp
JVMDUMP032I JVM requested Heap dump using '/oradata/slscrmit/tally/heapdump.20171215.082421.39781194.0002.phd' in response to an event
JVMDUMP010I Heap dump written to /oradata/slscrmit/tally/heapdump.20171215.082421.39781194.0002.phd
JVMDUMP032I JVM requested Java dump using '/oradata/slscrmit/tally/javacore.20171215.082421.39781194.0003.txt' in response to an event
JVMDUMP010I Java dump written to /oradata/slscrmit/tally/javacore.20171215.082421.39781194.0003.txt
JVMDUMP032I JVM requested Snap dump using '/oradata/slscrmit/tally/Snap.20171215.082421.39781194.0004.trc' in response to an event
JVMDUMP010I Snap dump written to /oradata/slscrmit/tally/Snap.20171215.082421.39781194.0004.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
Dec 15, 2017 8:24:23 AM org.elasticsearch.transport.netty.NettyInternalESLogger warn
WARNING: Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: native memory exhausted
    at sun.misc.Unsafe.allocateDBBMemory(Native Method)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:127)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
    at org.elasticsearch.common.netty.channel.socket.nio.SocketReceiveBufferAllocator.newBuffer(SocketReceiveBufferAllocator.java:64)
    at org.elasticsearch.common.netty.channel.socket.nio.SocketReceiveBufferAllocator.get(SocketReceiveBufferAllocator.java:41)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:62)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(Thread.java:785)
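Note that the allocation that fails is ByteBuffer.allocateDirect, i.e. off-heap native memory requested by Netty, not a Java heap allocation. One way to look at the native-memory picture, assuming the IBM J9 javacore written above contains the usual MEMINFO/NATIVEMEMINFO sections:

    # Pull the memory sections out of the javacore the JVM just wrote
    # (section names are the usual IBM J9 ones; they can vary between releases)
    grep -n -e NATIVEMEMINFO -e MEMINFO /oradata/slscrmit/tally/javacore.20171215.082421.39781194.0003.txt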

=============================================

SVMON output:

     Pid Command          Inuse      Pin     Pgsp  Virtual 64-bit Mthrd  16MB
24315106 java            796276    11137        0   779119      N     Y     N

 PageSize                Inuse        Pin       Pgsp    Virtual
 s    4 KB               18356        225          0       1199
 m   64 KB               48620        682          0      48620
 L   16 MB                   0          0          0          0
 S   16 GB                   0          0          0          0

Vsid      Esid Type Description              PSize  Inuse   Pin Pgsp Virtual

169bfe6      4 work shared memory segment       m   4096     0    0    4096
 84ba04      e work shared memory segment       m   4096     0    0    4096
1025d8c      7 work shared memory segment       m   4096     0    0    4096
13225a5      6 work shared memory segment       m   4096     0    0    4096
119a981      c work shared memory segment       m   4096     0    0    4096
 14349c      8 work shared memory segment       m   4096     0    0    4096
1c2ed4e      d work shared memory segment       m   4096     0    0    4096
 2e9eaf      9 work shared memory segment       m   4096     0    0    4096
 7854f1      5 work shared memory segment       m   4096     0    0    4096
 f2b266      f work shared memory segment       m   4095     0    0    4095
181650e        work shared memory segment       m   4090     0    0    4090
1d42b52      3 work working storage             m   2681     0    0    2681
  20002      0 work kernel segment              m    743   681    0     743
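For reference, a per-process breakdown like the one above is what svmon produces for the java process; the exact flags used here are not shown, so this is just the typical invocation:

    # Per-process memory report for the java process (Pid 24315106 in the table above)
    svmon -P 24315106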

=============================================

There is plenty of free memory on my AIX machine; here is the vmstat output:

System configuration: lcpu=128 mem=256512MB

kthr     memory              page              faults        cpu
 r  b       avm       fre  re  pi  po  fr  sr  cy    in     sy    cs us sy id wa
 6  1  27345787  25280860   0   0   0   0   0   0  1747  28990 59128  8  3 89  0
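The snapshot above is a standard AIX vmstat interval report; an invocation along these lines produces it (interval and count are illustrative, not taken from the original session):

    # 2-second interval, 5 samples - both values are illustrative
    vmstat 2 5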

NON-PROD:!:_mila:/oradata/slscrmit/tally> oslevel -s
7100-04-03-1642

NON-PROD:!:_mila:/oradata/slscrmit/tally> uname -a
AIX mila 1 7 00F79AB04C00

There is also space in the filesystem: /dev/slscrmit_oradt 2118.50 2024.41 94.09 96% /oradata/slscrmit

I was able to resolve this issue:

Root cause: a 32-bit JVM has a technical limitation that prevents the Java heap and native memory from growing beyond 2GB. The setenv.sh file contained "export JAVA_HOME=/usr/java8/bin", which points to the 32-bit JVM. In a 32-bit process the Java heap and all native allocations (including the direct ByteBuffers Netty allocates) compete for the same limited address space, which is why the failure surfaces as "native memory exhausted" at ByteBuffer.allocateDirect rather than as a Java heap OutOfMemoryError.

Solution: point to the 64-bit JVM. Also check that the AIX kernel bit mode is 64-bit, using the command getconf KERNEL_BITMODE, and verify the 64-bit JVM version with the command:

java -d64 -version

Correct setenv.sh on all AIX machines to point to the 64-bit JVM: export JAVA_HOME=/usr/java8_64. Make sure the other entries in this file refer to this JAVA_HOME.
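A minimal sketch of what the corrected setenv.sh entries might look like; only the JAVA_HOME line comes from the fix above, the PATH line is an assumption about how the rest of the file uses it:

    # setenv.sh - point to the 64-bit IBM JVM on AIX
    export JAVA_HOME=/usr/java8_64
    export PATH=$JAVA_HOME/bin:$PATH   # assumed: make the 64-bit java the one the shell picks up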

Now run the jar, passing command-line arguments with the required heap size. I was able to run it with a 10g heap.
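For illustration only, the original command with the heap raised to 10g might look like the following; keeping the remaining flags unchanged is an assumption, not something stated in the fix:

    java -Xms10g -Xmx10g -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC \
         -Dlog4j.configurationFile=file:log4j2.xml -Dfile.encoding=UTF-8 \
         -jar BoltESTally-1.4.3-Ver-1.0.jar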