spark2 + yarn - NullPointerException while preparing AM container
I'm trying to run
pyspark --master yarn
- Spark version: 2.0.0
- Hadoop version: 2.7.2
- The Hadoop/YARN web UI starts successfully
Here is what happens:
16/08/15 10:00:12 DEBUG Client: Using the default MR application classpath: $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
16/08/15 10:00:12 INFO Client: Preparing resources for our AM container
16/08/15 10:00:12 DEBUG Client:
16/08/15 10:00:12 DEBUG DFSClient: /user/mispp/.sparkStaging/application_1471254869164_0006: masked=rwxr-xr-x
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #8
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #8
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: mkdirs took 14ms
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #9
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #9
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: setPermission took 10ms
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #10
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #10
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: getFileInfo took 2ms
16/08/15 10:00:12 INFO Client: Deleting staging directory hdfs://sm/user/mispp/.sparkStaging/application_1471254869164_0006
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp sending #11
16/08/15 10:00:12 DEBUG Client: IPC Client (1933573135) connection to sm/192.168.29.71:8020 from mispp got value #11
16/08/15 10:00:12 DEBUG ProtobufRpcEngine: Call: delete took 14ms
16/08/15 10:00:12 ERROR SparkContext: Error initializing SparkContext.
java.lang.NullPointerException
at scala.collection.mutable.ArrayOps$ofRef$.newBuilder$extension(ArrayOps.scala:190)
at scala.collection.mutable.ArrayOps$ofRef.newBuilder(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:246)
at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
at scala.collection.mutable.ArrayOps$ofRef.filter(ArrayOps.scala:186)
at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources.apply(Client.scala:484)
at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources.apply(Client.scala:480)
at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:74)
at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:480)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:834)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:167)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:240)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:236)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
16/08/15 10:00:12 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.server.Server@69e507eb
16/08/15 10:00:12 DEBUG Server: Graceful shutdown org.spark_project.jetty.server.Server@69e507eb by
yarn-site.xml:
(the last property is something I found online, so I added it to see if it would help)
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>sm:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>sm:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>sm:8050</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>/home/mispp/hadoop-2.7.2/share/hadoop/yarn</value>
</property>
</configuration>
.bashrc:
export HADOOP_PREFIX=/home/mispp/hadoop-2.7.2
export PATH=$PATH:$HADOOP_PREFIX/bin
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export YARN_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
Any idea why this happens?
It's set up in 3 LXD containers (master + two compute nodes) on a server with 16GB of RAM.
Usually (though obviously not always), when you see
ERROR SparkContext: Error initializing SparkContext.
while using YARN, it means the Spark application could not start because it could not get enough resources (again, usually memory). So that's the first thing to check.
You could paste your spark-defaults.conf here. Also, in case you haven't noticed, the default value of spark.executor.memory is 1g. You could try overriding that value, e.g.
pyspark --executor-memory 256m
and see if it starts.
Also, your yarn-site.xml has no resource configuration (e.g. yarn.nodemanager.resource.memory-mb), so you may not have allocated enough resources to YARN. Given the size of your machine, you'd be better off setting these values explicitly, for example as sketched below.
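As a rough sketch, a resource section along these lines could be added to yarn-site.xml (the numbers are illustrative assumptions for a 16GB host shared by three LXD containers, not values from the original answer):
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>   <!-- assumed: memory each NodeManager may hand out to containers -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>    <!-- assumed: smallest container YARN will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>   <!-- assumed: largest single container request -->
</property>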
Given the location of the error in the Spark 2.0.0 code:
I suspect the error is caused by a misconfigured spark.yarn.jars. Per the documentation at http://spark.apache.org/docs/2.0.0/running-on-yarn.html#spark-properties, I would double-check that the value of this setting is correct in your configuration.
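For example, a minimal sketch of how the property might be set in spark-defaults.conf (the HDFS path below is a hypothetical placeholder; the host and port are taken from the logs above):
# assumed path -- adjust to wherever the Spark jars actually live on HDFS
spark.yarn.jars  hdfs://sm:8020/user/spark/jars/*.jar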
I just upvoted @tinfoiled's answer, but I want to comment here on the syntax of the spark.yarn.jars property (note the trailing 's'), because it took me a long time to figure out.
The correct syntax (which the OP already knew) is:
spark.yarn.jars=hdfs://xxx:9000/user/spark/share/lib/*.jar
At first I actually left off the *.jar at the end, and the result was "not being able to load ApplicationMaster". I tried all sorts of combinations, but nothing worked. In fact, I posted a question on SO about this very issue.
I wasn't even sure whether what I was doing was correct, but the OP's question and @tinfoiled's answer gave me some confidence, and I was finally able to make use of this property.
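As a rough sketch of how that HDFS directory could be populated in the first place (paths are illustrative assumptions, reusing the xxx placeholder from above):
# create the directory and upload Spark's own jars to it
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put $SPARK_HOME/jars/*.jar /user/spark/share/lib/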