Spark worker dies after running for some duration

I am running a Spark Streaming job.

My cluster configuration:

Spark version - 1.6.1
Spark node config:
cores - 4
memory - 6.8 GB (out of 8 GB)
number of nodes - 3

For my job, I give 6 GB of memory per node and 3 total cores.
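
For reference, on a standalone cluster these limits are typically passed at submit time roughly as below; the master URL, application class, and jar name are placeholders, not details from my actual job:

    # Illustrative submit command matching the resources described above.
    # Master URL, class, and jar are hypothetical placeholders.
    spark-submit \
      --master spark://<master-host>:7077 \
      --executor-memory 6g \
      --total-executor-cores 3 \
      --class com.example.MyStreamingJob \
      my-streaming-job.jar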

After the job has been running for an hour, I get the following error in the worker logs:

    Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f53b496a000, 262144, 0) failed; error='Cannot allocate memory' (errno=12)
    #
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (mmap) failed to map 262144 bytes for committing reserved memory.
    # An error report file with more information is saved as:
    # /usr/local/spark/sbin/hs_err_pid1622.log

And I don't see any errors in work-dir/app-id/stderr.

What xm* (i.e. -Xms/-Xmx) settings are generally recommended for running a Spark worker?

How can I debug this issue further?

PS: I started my worker and master with the default settings.

Update:

I see my executors being added and removed frequently because of the error "cannot allocate memory".

Logs:

    16/06/24 12:53:47 INFO MemoryStore: Block broadcast_53 stored as values in memory (estimated size 14.3 KB, free 440.8 MB)
    16/06/24 12:53:47 INFO BlockManager: Found block rdd_145_1 locally
    16/06/24 12:53:47 INFO BlockManager: Found block rdd_145_0 locally
    Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f3440743000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)

I ran into the same situation. I found the reason in the official documentation, which says:

In general, Spark can run well with anywhere from 8 GB to hundreds of gigabytes of memory per machine. In all cases, we recommend allocating only at most 75% of the memory for Spark; leave the rest for the operating system and buffer cache.

Your machine has 8 GB of memory and 6 GB of it is given to the worker node. So if the operating system uses more than 2 GB, there is not enough memory left for the worker node and the worker is lost. *Just check how much memory the OS will use and allocate the remaining memory to the worker node.*
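
A minimal sketch of that check and adjustment, assuming a standalone cluster where the worker is configured through conf/spark-env.sh (the 5g and 1g values are only illustrative, not figures from this answer):

    # On each worker machine, check how much memory the OS and other processes
    # are already using before deciding what to give Spark:
    free -m

    # Then, in conf/spark-env.sh on each worker, cap what Spark may use so that
    # room is left for the OS, the buffer cache, and the JVM's native (off-heap)
    # allocations -- the failing os::commit_memory call above is such a native
    # allocation, not a Java heap allocation. Values below are illustrative.
    export SPARK_WORKER_MEMORY=5g   # total memory the worker may hand to executors
    export SPARK_DAEMON_MEMORY=1g   # heap (-Xmx) of the worker/master daemons themselves

Restart the worker after changing spark-env.sh so the new limits take effect.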