java.io.IOException: No FileSystem for scheme: hdfs
I am using the Cloudera Quickstart VM CDH5.3.0 (in terms of parcels) with Spark 1.2.0 and $SPARK_HOME=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark, and I am submitting the Spark application using the command:
./bin/spark-submit --class <Spark_App_Main_Class_Name> --master spark://localhost.localdomain:7077 --deploy-mode client --executor-memory 4G ../apps/<Spark_App_Target_Jar_Name>.jar
Spark_App_Main_Class_Name.scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.mllib.util.MLUtils

object Spark_App_Main_Class_Name {

  def main(args: Array[String]) {
    val hConf = new SparkConf()
      .set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
      .set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
    val sc = new SparkContext(hConf)
    val data = MLUtils.loadLibSVMFile(sc, "hdfs://localhost.localdomain:8020/analytics/data/mllib/sample_libsvm_data.txt")
    ...
  }
}
But I am getting a ClassNotFoundException for org.apache.hadoop.hdfs.DistributedFileSystem while submitting the application in client mode:
[cloudera@localhost bin]$ ./spark-submit --class Spark_App_Main_Class_Name --master spark://localhost.localdomain:7077 --deploy-mode client --executor-memory 4G ../apps/Spark_App_Target_Jar_Name.jar
15/11/30 09:46:34 INFO SparkContext: Spark configuration:
spark.app.name=Spark_App_Main_Class_Name
spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
spark.eventLog.dir=hdfs://localhost.localdomain:8020/user/spark/applicationHistory
spark.eventLog.enabled=true
spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
spark.executor.memory=4G
spark.jars=file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/bin/../apps/Spark_App_Target_Jar_Name.jar
spark.logConf=true
spark.master=spark://localhost.localdomain:7077
spark.yarn.historyServer.address=http://localhost.localdomain:18088
15/11/30 09:46:34 WARN Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 10.113.234.150 instead (on interface eth12)
15/11/30 09:46:34 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/11/30 09:46:34 INFO SecurityManager: Changing view acls to: cloudera
15/11/30 09:46:34 INFO SecurityManager: Changing modify acls to: cloudera
15/11/30 09:46:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cloudera); users with modify permissions: Set(cloudera)
15/11/30 09:46:35 INFO Slf4jLogger: Slf4jLogger started
15/11/30 09:46:35 INFO Remoting: Starting remoting
15/11/30 09:46:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.113.234.150:59473]
15/11/30 09:46:35 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@10.113.234.150:59473]
15/11/30 09:46:35 INFO Utils: Successfully started service 'sparkDriver' on port 59473.
15/11/30 09:46:36 INFO SparkEnv: Registering MapOutputTracker
15/11/30 09:46:36 INFO SparkEnv: Registering BlockManagerMaster
15/11/30 09:46:36 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20151130094636-8c3d
15/11/30 09:46:36 INFO MemoryStore: MemoryStore started with capacity 267.3 MB
15/11/30 09:46:38 INFO HttpFileServer: HTTP File server directory is /tmp/spark-7d1f2861-a568-4919-8f7e-9a9fe6aab2b4
15/11/30 09:46:38 INFO HttpServer: Starting HTTP Server
15/11/30 09:46:38 INFO Utils: Successfully started service 'HTTP file server' on port 50003.
15/11/30 09:46:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/11/30 09:46:38 INFO SparkUI: Started SparkUI at http://10.113.234.150:4040
15/11/30 09:46:39 INFO SparkContext: Added JAR file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/bin/../apps/Spark_App_Target_Jar_Name.jar at http://10.113.234.150:50003/jars/Spark_App_Target_Jar_Name.jar with timestamp 1448894799228
15/11/30 09:46:39 INFO AppClient$ClientActor: Connecting to master spark://localhost.localdomain:7077...
15/11/30 09:46:40 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151130094640-0000
15/11/30 09:46:41 INFO NettyBlockTransferService: Server created on 56458
15/11/30 09:46:41 INFO BlockManagerMaster: Trying to register BlockManager
15/11/30 09:46:41 INFO BlockManagerMasterActor: Registering block manager 10.113.234.150:56458 with 267.3 MB RAM, BlockManagerId(<driver>, 10.113.234.150, 56458)
15/11/30 09:46:41 INFO BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2047)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2578)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.spark.util.FileLogger.<init>(FileLogger.scala:90)
at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:63)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:352)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:92)
at Spark_App_Main_Class_Name$.main(Spark_App_Main_Class_Name.scala:22)
at Spark_App_Main_Class_Name.main(Spark_App_Main_Class_Name.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1953)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2045)
... 16 more
The Spark application seems to be unable to map HDFS, because initially I was getting the error:
Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.spark.util.FileLogger.<init>(FileLogger.scala:90)
at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:63)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:352)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:92)
at LogisticRegressionwithBFGS$.main(LogisticRegressionwithBFGS.scala:21)
at LogisticRegressionwithBFGS.main(LogisticRegressionwithBFGS.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
and I had added "fs.hdfs.impl" and "fs.file.impl" to the Spark configuration settings, following hadoop No FileSystem for scheme: file.
You need to have the hadoop-hdfs-2.x jars (maven link) in your classpath. While submitting your application, mention the additional jar location using the --jars option of spark-submit.
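For example, on the CDH parcel layout from the question, the submit command could look roughly like the line below; the hadoop-hdfs jar path and version shown here are assumptions, so adjust them to whatever your installation actually contains:
./bin/spark-submit --class <Spark_App_Main_Class_Name> --master spark://localhost.localdomain:7077 --deploy-mode client --executor-memory 4G --jars /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop-hdfs/hadoop-hdfs-2.5.0-cdh5.3.0.jar ../apps/<Spark_App_Target_Jar_Name>.jar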
On another note, you should ideally be moving to CDH 5.5, which ships Spark 1.5.
After some detailed searching on this problem and different attempts, I found that the issue basically seems to be due to the hadoop-hdfs jars being unavailable: while submitting the Spark application, the dependent jars could not be found even after using maven-assembly-plugin or the maven-jar-plugin/maven-dependency-plugin combination.
With the maven-jar-plugin/maven-dependency-plugin combination, the main class jar and the dependency jars were being created, but supplying the dependency jar with the --jars option still resulted in the same error, as follows:
./spark-submit --class Spark_App_Main_Class_Name --master spark://localhost.localdomain:7077 --deploy-mode client --executor-memory 4G --jars ../apps/Spark_App_Target_Jar_Name-dep.jar ../apps/Spark_App_Target_Jar_Name.jar
Using maven-shade-plugin, as suggested by "krookedking" in hadoop-no-filesystem-for-scheme-file, seems to hit the problem at the right place: creating a single jar file comprising the main class and all the dependent classes eliminated the classpath issue.
My final working spark-submit command is the following:
./spark-submit --class Spark_App_Main_Class_Name --master spark://localhost.localdomain:7077 --deploy-mode client --executor-memory 4G ../apps/Spark_App_Target_Jar_Name.jar
The maven-shade-plugin in my project pom.xml is as follows:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.2</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <filters>
          <filter>
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
Note: the excludes in the filter are there to get rid of java.lang.SecurityException: Invalid signature file digest for Manifest main attributes.
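The ServicesResourceTransformer also matters for this particular error: hadoop-common and hadoop-hdfs each ship their own META-INF/services/org.apache.hadoop.fs.FileSystem descriptor, and without the transformer only one of them survives in the uber jar. With it, the merged descriptor in the shaded jar lists the implementations from both modules, roughly along these lines (illustrative only; the real file contains more entries depending on the Hadoop version):
org.apache.hadoop.fs.LocalFileSystem
org.apache.hadoop.hdfs.DistributedFileSystem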
I faced the same problem while running Spark code from my IDE and accessing a remote HDFS.
So I set the following configuration, and it got resolved.
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.api.java.JavaSparkContext;

// Register the HDFS and local filesystem implementations on the Hadoop configuration used by Spark
JavaSparkContext jsc = new JavaSparkContext(conf);
Configuration hadoopConfig = jsc.hadoopConfiguration();
hadoopConfig.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
hadoopConfig.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
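For a Scala application like the one in the question, a minimal equivalent sketch (assuming an already-created SparkContext named sc) would be:
val hadoopConfig = sc.hadoopConfiguration
hadoopConfig.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
hadoopConfig.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)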