Apache Hadoop 2.7.3, Socket Timeout Error
I am facing the same issue as in the link below:
Hadoop, Socket Timeout Error
Could you help me resolve this? I have the same problem with an Apache Hadoop 2.7.3 installation on EC2. Do the properties mentioned in the link need to be added to both the NameNode and DataNode configuration files? If so, which .xml files are they? Thanks in advance.
Also, judging by the error below, the application is trying to reach an internal EC2 IP. Do I need to open any ports? The web UI mentions port 8042.
All the nodes, the NodeManager, and the ResourceManager (RM) show as running in jps.
When I try to run the MapReduce example, the NameNode reports this error:
Job job_1506038808044_0002 failed with state FAILED due to: Application application_1506038808044_0002 failed 2 times due to Error launching appattempt_1506038808044_0002_000002. Got exception: org.apache.hadoop.net.ConnectTimeoutException: Call From ip-172-31-1-10/172.31.1.10 to ip-172-31-5-59.ec2.internal:43555 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=ip-172-31-5-59.ec2.internal/172.31.5.59:43555]
Finally, while the job is running, the RM web UI keeps showing the following message:
State: waiting for AM container to be allocated, launched and register with RM.
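The error above means the TCP connection to the NodeManager port never completed within the 20-second window, which usually points at a firewall or security-group rule rather than Hadoop itself. A quick, hedged way to check reachability of any host:port pair from a cluster node (the hostname and port below are just the values from the error, not something you must use) is a small Python probe:

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connect timeouts, refusals, and unreachable hosts
        return False

# Example, using the NodeManager address from the error message:
# probe("ip-172-31-5-59.ec2.internal", 43555)
```

If this returns False from the NameNode host, the port is blocked or the hostname does not resolve, and no Hadoop-side timeout tuning will help until that is fixed.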
Thanks,
Asha
After trying the solution from Hadoop, Socket Timeout Error (the link in my question) and adding the following to the hdfs-site.xml file, I resolved the issue by allowing all ICMP and UDP traffic between the EC2 instances in their security-group rules, so that they can ping each other.
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/datanode</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/usr/local/hadoop/hadoop_work/hdfs/namesecondary</value>
</property>
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
</property>
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>2000000</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <value>2000000</value>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
  <description>Whether datanodes should use datanode hostnames when
    connecting to other datanodes for data transfer.
  </description>
</property>
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.rpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the service RPC server will bind to. If this optional address is
    set, it overrides only the hostname portion of dfs.namenode.servicerpc-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node listen on all interfaces by
    setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTP server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.http-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTP server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>
<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
  <description>
    The actual address the HTTPS server will bind to. If this optional address
    is set, it overrides only the hostname portion of dfs.namenode.https-address.
    It can also be specified per name node or name service for HA/Federation.
    This is useful for making the name node HTTPS server listen on all
    interfaces by setting it to 0.0.0.0.
  </description>
</property>
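After editing hdfs-site.xml, a small sanity check is to parse it back and confirm the timeout and hostname properties actually took. This is just an illustrative sketch, not part of Hadoop; it assumes the properties sit inside the usual top-level &lt;configuration&gt; element, and the inline sample stands in for the real file:

```python
import xml.etree.ElementTree as ET

def list_properties(xml_text: str) -> dict:
    """Map property names to values from a <configuration> document."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

# Inline stand-in for /usr/local/hadoop/etc/hadoop/hdfs-site.xml
sample = """<configuration>
  <property><name>dfs.socket.timeout</name><value>2000000</value></property>
  <property><name>dfs.client.use.datanode.hostname</name><value>true</value></property>
</configuration>"""

props = list_properties(sample)
print(props["dfs.socket.timeout"])  # → 2000000
```

A parse error here also catches the classic mistake of pasting properties outside the &lt;configuration&gt; element, which Hadoop silently ignores.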