hadoop namenode, datanode, secondarynamenode are not starting up

I just downloaded the hadoop-0.20 tar and extracted it. I set JAVA_HOME and HADOOP_HOME. I modified core-site.xml, hdfs-site.xml, and mapred-site.xml.

I started the services and ran jps:

 jps
 Jps
 JobTracker
 TaskTracker

I checked the log. It says:

 2015-02-11 18:07:52,278 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

 /************************************************************
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 0.20.0
 STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
 ************************************************************/
  2015-02-11 18:07:52,341 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
    at   org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:175)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:955)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:964)

    2015-02-11 18:07:52,346 INFO   org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
   /************************************************************
   SHUTDOWN_MSG: Shutting down NameNode at   scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
   ************************************************************/

What am I doing wrong?

My conf files are as follows:

core-site.xml

 <configuration>
  <property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:8020</value>
  </property>
 </configuration>

hdfs-site.xml

 <configuration>
  <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>
 <!-- Immediately exit safemode as soon as one DataNode checks in.
   On a multi-node cluster, these configurations must be removed.  -->
 <property>
   <name>dfs.safemode.extension</name>
   <value>0</value>
  </property>
  <property>
   <name>dfs.safemode.min.datanodes</name>
   <value>1</value>
  </property>
  <property>
   <name>hadoop.tmp.dir</name>
   <value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
   </property>
   <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value>
   </property>

  </configuration>

mapred-site.xml

  <configuration>
   <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
   </property>
  </configuration>

Any ideas?

This is what I see on the console when starting start-dfs.sh:
 localhost: starting secondarynamenode, logging to /root/hadoop/hadoop-0.20.0/bin/../logs/hadoop-root-secondarynamenode- hostname.out
 localhost: Exception in thread "main" java.lang.NullPointerException
 localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>   (SecondaryNameNode.java:115)
 localhost:      at   org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)

I'm not using version 0.20.0, but are you sure the key in core-site.xml is fs.defaultFS? In core-default.xml it seems to be named fs.default.name.
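
If that's the problem, a minimal core-site.xml for the 0.20 line would look like the sketch below; it keeps the localhost:8020 address from your file and only changes the property name:

 <configuration>
  <property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:8020</value>
  </property>
 </configuration>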

My guess is that you haven't set up your Hadoop cluster correctly. Please follow these steps:

Step 1: Start by setting up .bashrc:

vi $HOME/.bashrc

Put the following lines at the end of the file (change the Hadoop home to yours):

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
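
After saving the file, reload it in your current shell so the new variables and aliases take effect (assuming you are using bash):

 $ source $HOME/.bashrc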

Step 2: Edit hadoop-env.sh as follows:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun

Step 3: Now create the directory and set the required ownership and permissions:

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp

Step 4: Edit core-site.xml (these properties go inside the <configuration> element):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>

Step 5: Edit mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>

Step 6: Edit hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Finally, format your HDFS (you need to do this the first time you set up a Hadoop cluster):

 $ /usr/local/hadoop/bin/hadoop namenode -format
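
Once the format succeeds, start the daemons and check with jps that they are all up (a sketch; the paths assume HADOOP_HOME=/usr/local/hadoop as above):

 $ /usr/local/hadoop/bin/start-dfs.sh      # NameNode, DataNode, SecondaryNameNode
 $ /usr/local/hadoop/bin/start-mapred.sh   # JobTracker, TaskTracker
 $ jps

jps should now list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker in addition to Jps itself.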

Hope this helps.