HDFS write resulting in " CreateSymbolicLink error (1314): A required privilege is not held by the client."
Trying to execute a sample MapReduce program from Apache Hadoop. The following exception occurs while the MapReduce job is running. I have already tried hdfs dfs -chmod 777 /, but that did not resolve the issue.
15/03/10 13:13:10 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with
ToolRunner to remedy this.
15/03/10 13:13:10 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
15/03/10 13:13:10 INFO input.FileInputFormat: Total input paths to process : 2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: number of splits:2
15/03/10 13:13:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1425973278169_0001
15/03/10 13:13:12 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
15/03/10 13:13:12 INFO impl.YarnClientImpl: Submitted application application_1425973278169_0001
15/03/10 13:13:12 INFO mapreduce.Job: The url to track the job: http://B2ML10803:8088/proxy/application_1425973278169_0001/
15/03/10 13:13:12 INFO mapreduce.Job: Running job: job_1425973278169_0001
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 running in uber mode : false
15/03/10 13:13:18 INFO mapreduce.Job: map 0% reduce 0%
15/03/10 13:13:18 INFO mapreduce.Job: Job job_1425973278169_0001 failed with state FAILED due to: Application application_1425973278169_0001 failed 2 times due
to AM Container for appattempt_1425973278169_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://B2ML10803:8088/proxy/application_1425973278169_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1425973278169_0001_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.
Stack trace:
ExitCodeException exitCode=1: CreateSymbolicLink error (1314): A required privilege is not held by the client.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Shell output:
1 file(s) moved.
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
15/03/10 13:13:18 INFO mapreduce.Job: Counters: 0
I don't know the cause of the error, but reformatting the NameNode helped me resolve it on Windows 8.
- Delete all old logs. Clean out the folders C:\hadoop\logs and C:\hadoop\logs\userlogs
- Clean out the folders C:\hadoop\data\dfs\datanode and C:\hadoop\data\dfs\namenode.
Reformat the NameNode by invoking this command in administrator mode:
c:\hadoop\bin>hdfs namenode -format
See this for a solution and this for an explanation. Basically, symbolic links can be a security risk, and UAC is designed to prevent users (even those belonging to the Administrators group) from creating symbolic links unless they are running in elevated mode.
Long story short, try reformatting your NameNode and launching Hadoop and all Hadoop jobs from an elevated command prompt.
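The failure above comes down to the container launcher being denied the right to create symbolic links. A quick way to confirm whether your current process has that right is a minimal sketch like the following (a hypothetical standalone check, not part of Hadoop; it uses only the standard java.nio.file API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkCheck {
    public static void main(String[] args) throws IOException {
        // Create a scratch directory, a target file, and attempt a symlink.
        Path dir = Files.createTempDirectory("symlink-check");
        Path target = Files.createFile(dir.resolve("target.txt"));
        Path link = dir.resolve("link.txt");
        try {
            Files.createSymbolicLink(link, target);
            System.out.println("symlink ok: " + Files.isSymbolicLink(link));
        } catch (IOException | UnsupportedOperationException e) {
            // On Windows this fails with error 1314 when the process lacks
            // the symlink privilege - the same root cause as the
            // container-launch failure above.
            System.out.println("symlink denied: " + e.getMessage());
        }
    }
}
```

If this prints "symlink denied" from a normal prompt but "symlink ok: true" from an elevated one, the UAC explanation above applies to your setup.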
Win 8.1 + Hadoop 2.7.0 (built from sources)
- Run the command prompt in administrator mode
- Execute etc\hadoop\hadoop-env.cmd
- Run sbin\start-dfs.cmd
- Run sbin\start-yarn.cmd
- Now try to run your job
I ran into exactly the same problem recently. I tried reformatting the namenode, but it didn't work, and I believe it cannot solve the problem permanently. With reference to @aoetalks, I solved this on Windows Server 2012 R2 by looking into the Local Group Policy.
In summary, try the following steps:
- Open the Local Group Policy (press Win+R to open "Run..." and enter gpedit.msc)
- Expand "Computer Configuration" - "Windows Settings" - "Security Settings" - "Local Policies" - "User Rights Assignment"
- Find "Create symbolic links" on the right and check whether your user is listed. If not, add your user to it.
- This takes effect at the next logon, so log out and log back in.
If this still doesn't work, it is probably because you are using an Administrator account. In that case you have to disable User Account Control: Run all administrators in Admin Approval Mode in the same directory (i.e. User Rights Assignment in Group Policy), then restart the computer for it to take effect.
Reference: https://superuser.com/questions/104845/permission-to-make-symbolic-links-in-windows-7
I ran into the same problem as you. We solved it by checking the Java environment.
- Check the java version and javac version.
- Make sure every machine in the cluster has the same Java environment.
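To make that per-machine check scriptable, the running JVM's version and install location can be read from standard system properties. This is a small sketch you could run on each node and compare (the property names are standard Java; nothing here is Hadoop-specific):

```java
public class JavaEnvCheck {
    public static void main(String[] args) {
        // Print the properties that should match across all cluster nodes.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
    }
}
```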
On Windows, change the configuration in hdfs-site.xml to
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///C:/hadoop-2.7.2/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///C:/hadoop-2.7.2/data/datanode</value>
</property>
</configuration>
Open cmd in administrator mode and run the commands:
- stop-all.cmd
- hdfs namenode -format
- start-all.cmd
Then run the final jar in administrator mode:
hadoop jar C:\Hadoop_Demo\wordCount\target\wordCount-0.0.1-SNAPSHOT.jar file:///C:/Hadoop/input.txt file:///C:/Hadoop/output
I solved the same problem. Use "Run as administrator" when you launch "Command Prompt".