Error when executing hdfs zkfc command

I am trying to run HDFS with three NameNode machines, two DataNode machines, and one client machine.

When I execute `hdfs zkfc –formatZK` I get the fatal error below. I don't know why, because I have set up this cluster before and it worked, but now it does not:

16/01/21 15:05:14 INFO zookeeper.ZooKeeper: Session: 0x25264b6c3d90000 closed
16/01/21 15:05:14 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x25264b6c3d90000
16/01/21 15:05:14 INFO zookeeper.ClientCnxn: EventThread shut down
16/01/21 15:05:14 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: –formatZK
    at org.apache.hadoop.ha.ZKFailoverController.badArg(ZKFailoverController.java:251)
    at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:214)
    at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)
    at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:172)
    at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
    at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)
    at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)

I can run the following commands without any errors:

namenode1$ `hadoop-daemon.sh start journalnode`
namenode2$ `hadoop-daemon.sh start journalnode`
namenode3$ `hadoop-daemon.sh start journalnode`

namenode1$ `hadoop namenode -format`
namenode1$ `hadoop-daemon.sh start namenode`

namenode2$ `hadoop namenode -bootstrapStandby`
namenode2$ `hadoop-daemon.sh start namenode`

namenode1$ `hadoop-daemon.sh start zkfc`
namenode2$ `hadoop-daemon.sh start zkfc`
namenode3$ `hadoop-daemon.sh start zkfc`

But when I open the web UI at namenode1:50070 it shows the node as standby, and namenode2:50070 does too. I also tried `hdfs haadmin -getServiceState` with nn01 and nn02, and both report standby.
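For reference, this is the sequence I expect to use to get an active NameNode elected once `-formatZK` succeeds (a sketch assuming the nn01/nn02 service IDs and hostnames from my configuration below):

namenode1$ `hdfs zkfc -formatZK`

namenode1$ `hadoop-daemon.sh start zkfc`
namenode2$ `hadoop-daemon.sh start zkfc`

namenode1$ `hdfs haadmin -getServiceState nn01`
namenode1$ `hdfs haadmin -getServiceState nn02`

After the last two commands, one NameNode should report active and the other standby.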

My configuration is as follows:

/etc/hosts

127.0.0.1 localhost
172.16.8.191 name1
172.16.8.192 name2
172.16.8.193 name3
172.16.8.202 data1
172.16.8.203 data2
172.16.8.204 client1

zoo.cfg

tickTime=2000
dataDir=/opt/ZooData
clientPort=2181
initLimit=5
syncLimit=2
server.1=172.16.8.191:2888:3888
server.2=172.16.8.192:2888:3888
server.3=172.16.8.193:2888:3888

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
 <property>
    <name>fs.default.name</name>
    <value>hdfs://auto-ha</value>
 </property>
</configuration>

hdfs-site.xml

<configuration>
     <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
     <property>
        <name>dfs.name.dir</name>
        <value>file:///hdfs/name</value>
    </property>
     <property>
        <name>dfs.data.dir</name>
        <value>file:///hdfs/data</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
     <property>
        <name>dfs.nameservices</name>
        <value>auto-ha</value>
     </property>
     <property>
        <name>dfs.ha.namenodes.auto-ha</name>
        <value>nn01,nn02</value>
     </property>
     <property>
        <name>dfs.namenode.rpc-address.auto-ha.nn01</name>
        <value>name1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.auto-ha.nn01</name>
        <value>name1:50070</value>
     </property>
    <property>
        <name>dfs.namenode.rpc-address.auto-ha.nn02</name>
        <value>name2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.auto-ha.nn02</name>
        <value>name2:50070</value>
     </property>
     <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://name1:8485;name2:8485;name3:8485/auto-ha</value>
     </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hdfs/journalnode</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/vagrant/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled.auto-ha</name>
        <value>true</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>name1:2181,name2:2181,name3:2181</value>
    </property>
</configuration>

In zoo.cfg your ZooKeeper clientPort is 2181, but in hdfs-site.xml you have set the quorum port to 3000 (try changing it to 2181):

  <property>
       <name>ha.zookeeper.quorum</name>
        <value>172.16.8.191:3000,172.16.8.192:3000</value>
  </property>
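If it is not clear which port ZooKeeper is really listening on, you can send the `ruok` four-letter command to each server (a quick check, assuming `nc` is installed on the NameNode machines); a healthy node answers `imok`:

namenode1$ `echo ruok | nc name1 2181`
namenode1$ `echo ruok | nc name2 2181`
namenode1$ `echo ruok | nc name3 2181`

If these answer on 2181 but the same check against port 3000 does not, the quorum address in hdfs-site.xml is pointing at the wrong port.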

When I copied the command "hdfs zkfc –formatZK" from Microsoft Word, the dash was longer than the one you actually have to type in the terminal: Word had replaced the plain hyphen with an en dash.

Word command: hdfs zkfc –formatZK

Real command: hdfs zkfc -formatZK
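An easy way to see the difference is to dump the bytes of the pasted command (a quick check, assuming a shell with `od` available); the en dash is a three-byte UTF-8 sequence, while a real hyphen is the single byte `-`:

`echo 'hdfs zkfc –formatZK' | od -c`

In the output, the dash copied from Word shows up as `342 200 223` (the UTF-8 encoding of the en dash) instead of a plain `-`.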