Resource Manager Has No Nodes
Edit: I looked at YARN Resourcemanager not connecting to nodemanager, but that solution did not work for me. I've attached the section of the node manager log where it makes a connection to the resource manager:
[main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at /0.0.0.0:8031
2016-06-17 19:01:04,697 INFO [main] nodemanager.NodeStatusUpdaterImpl (NodeStatusUpdaterImpl.java:getNMContainerStatuses(429)) - Sending out 0 NM container statuses: []
2016-06-17 19:01:04,701 INFO [main] nodemanager.NodeStatusUpdaterImpl (NodeStatusUpdaterImpl.java:registerWithRM(268)) - Registering with RM using containers :[]
2016-06-17 19:01:05,815 INFO [main] ipc.Client (Client.java:handleConnectionFailure(867)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-17 19:01:06,816 INFO [main] ipc.Client (Client.java:handleConnectionFailure(867)) - Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
For some reason it says it is connecting to 0.0.0.0. When I ssh into one of the data nodes and ping resource-manager I get a response, so the hostname is being resolved.
This leads me to believe that an option in my yarn-site.xml is incorrect, because my nodes are trying to connect to 0.0.0.0:8031 instead of resource-manager:8031.
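One way to test that theory is to check what the file actually sets for the resource-tracker address: when the property is missing, or the file is never read at all, YARN falls back to its built-in default of 0.0.0.0:8031, which is exactly what the log shows. A rough sketch (the real conf path varies by install, so `$HADOOP_CONF_DIR/yarn-site.xml` is an assumption; the demo runs against a throwaway copy):

```shell
# Extract a property value from a yarn-site.xml. The grep/sed pair assumes the
# <name> and <value> lines are adjacent, as they are in the file posted below.
# On a real node, point this at "$HADOOP_CONF_DIR/yarn-site.xml" instead.
cat > /tmp/yarn-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>resource-manager:8031</value>
  </property>
</configuration>
EOF

grep -A1 'yarn.resourcemanager.resource-tracker.address' /tmp/yarn-site-demo.xml \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
# prints resource-manager:8031; an empty result would mean the daemon falls
# back to the 0.0.0.0:8031 default seen in the log
```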
I am running a Cloudera hadoop cluster on docker, and I'm having an issue where the Yarn resource manager cannot see the other nodes. They are set up as follows:
Node 1 - Namenode (hadoop-hdfs-namenode)
Node 2 - Secondary namenode (hadoop-hdfs-secondarynamenode)
Node 3 - Yarn resource manager (hadoop-yarn-resourcemanager)
Node 4 - Datanode and node manager (hadoop-hdfs-datanode, hadoop-yarn-nodemanager)
Node 5 - Datanode and node manager (hadoop-hdfs-datanode, hadoop-yarn-nodemanager)
When I go to namenode:50070 I can see both nodes. However, when I go to resource-manager:8088 it shows that I have zero nodes. The yarn-site.xml file on each of my nodes is as follows:
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>resource-manager:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>resource-manager:8030</value>
  </property>
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
      $HADOOP_CONF_DIR,
      $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
      $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
      $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
      $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
    </value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///data/1/yarn/local,file:///data/2/yarn/local,file:///data/3/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///data/1/yarn/logs,file:///data/2/yarn/logs,file:///data/3/yarn/logs</value>
  </property>
  <property>
    <name>yarn.log.aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <description>Where to aggregate logs</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>hdfs://namenode:8020/var/log/hadoop-yarn/apps</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>resource-manager:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>resource-manager:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>resource-manager:8033</value>
  </property>
  <property>
    <description>
      Number of seconds after an application finishes before the nodemanager's
      DeletionService will delete the application's localized file directory
      and log directory.
      To diagnose Yarn application problems, set this property's value large
      enough (for example, to 600 = 10 minutes) to permit examination of these
      directories. After changing the property's value, you must restart the
      nodemanager in order for it to have an effect.
      The roots of Yarn applications' work directories is configurable with
      the yarn.nodemanager.local-dirs property (see below), and the roots
      of the Yarn applications' log directories is configurable with the
      yarn.nodemanager.log-dirs property (see also below).
    </description>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>600</value>
  </property>
</configuration>
Does anyone know why this is the case?
Thanks for reading.
Specifying:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master-1</value>
</property>
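Setting yarn.resourcemanager.hostname works because every yarn.resourcemanager.*.address property in yarn-default.xml derives its default from that hostname: ${yarn.resourcemanager.hostname}:8032 for the RM address, :8030 for the scheduler, :8031 for the resource tracker, :8033 for admin, and :8088 for the web UI. So once the hostname is set, the five explicit address entries become optional and the RM-related config can shrink to just:

```xml
<!-- The scheduler, resource-tracker, admin and webapp addresses all default
     to ports on master-1 once the hostname is specified. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master-1</value>
</property>
```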
As the edit suggested, it seemed that the yarn-site.xml was not being picked up and only the default values were taking effect. I fixed this by copying the yarn-site.xml file, as the root user, into every directory on the machine. I then ran the node manager so it would error when reading the file, since it was not running under the root user. The log directed me to the location where it expected the file, which was a yarn-specific directory rather than the general hadoop directory.
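A general check for this failure mode (one copy of yarn-site.xml shadowing the copy the daemon actually reads) is to checksum every copy on the box and look for divergence. A sketch using throwaway directories in place of the real conf locations, since the real paths (/etc/hadoop/conf, a yarn-specific conf dir, and so on) depend on how the packages were installed:

```shell
# Two stand-in conf dirs: one holding a stale/empty yarn-site.xml, one holding
# the live copy. Differing checksums flag the stale file.
mkdir -p /tmp/confdemo/hadoop /tmp/confdemo/hadoop-yarn
printf '<configuration/>\n' > /tmp/confdemo/hadoop/yarn-site.xml
printf '<configuration><!-- real settings --></configuration>\n' \
  > /tmp/confdemo/hadoop-yarn/yarn-site.xml

# On a real node you would run: find /etc -name yarn-site.xml -exec md5sum {} +
find /tmp/confdemo -name yarn-site.xml -exec md5sum {} + | sort -k2
```

Two lines with different hashes mean the copies have diverged, and the daemon may be reading the stale one.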