NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities while reading s3 Data with spark
I want to run a simple Spark job on my local development machine (via IntelliJ) that reads data from Amazon S3.
My build.sbt file:
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.3.1",
"org.apache.spark" %% "spark-sql" % "2.3.1",
"com.amazonaws" % "aws-java-sdk" % "1.11.407",
"org.apache.hadoop" % "hadoop-aws" % "3.1.1"
)
My code snippet:
val spark = SparkSession
.builder
.appName("test")
.master("local[2]")
.getOrCreate()
spark
.sparkContext
.hadoopConfiguration
.set("fs.s3n.impl","org.apache.hadoop.fs.s3native.NativeS3FileSystem")
val schema_p = ...
val df = spark
.read
.schema(schema_p)
.parquet("s3a:///...")
I get the following exception:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access0(URLClassLoader.java:73)
at java.net.URLClassLoader.run(URLClassLoader.java:368)
at java.net.URLClassLoader.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2093)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2058)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2152)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2580)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
at org.apache.hadoop.fs.FileSystem.access0(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:354)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:606)
at Test$.delayedEndpoint$Test(Test.scala:27)
at Test$delayedInit$body.apply(Test.scala:4)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main.apply(App.scala:76)
at scala.App$$anonfun$main.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at Test$.main(Test.scala:4)
at Test.main(Test.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.StreamCapabilities
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 41 more
When I replace s3a:/// with s3:///, I get a different error: No FileSystem for scheme: s3.
Since I am new to AWS, I don't know whether I should use s3:///, s3a:/// or s3n:///. I have already set up my AWS credentials with the aws-cli.
There is no Spark installation on my machine.
Thanks in advance for your help.
I would start by looking at the S3A troubleshooting docs:
Do not attempt to "drop in" a newer version of the AWS SDK than the one the Hadoop version was built with. Whatever problem you have, changing the AWS SDK version will not fix things, only change the stack traces you see.
Whatever version of the hadoop- JARs you have on your local Spark installation, you need exactly the same version of hadoop-aws, and exactly the same version of the AWS SDK that hadoop-aws was built with. Try mvnrepository for the details.
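For comparison, here is a sketch of what an aligned build.sbt could look like with the versions from the question and answers below (check mvnrepository for the exact versions that match your Hadoop release; hadoop-aws should pull in the aws-java-sdk-bundle it was built against transitively, so the separate aws-java-sdk entry can be dropped):
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"    % "2.3.1",
  "org.apache.spark" %% "spark-sql"     % "2.3.1",
  // hadoop-common and hadoop-aws must be the exact same version;
  // hadoop-aws brings in the matching aws-java-sdk-bundle on its own
  "org.apache.hadoop" % "hadoop-common" % "3.1.1",
  "org.apache.hadoop" % "hadoop-aws"    % "3.1.1"
)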
For me, in addition to the above, adding the following dependency to my pom.xml also solved the problem:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>3.1.1</version>
</dependency>
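Since the question builds with sbt rather than Maven, the equivalent entry in build.sbt would presumably be:
"org.apache.hadoop" % "hadoop-common" % "3.1.1"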
In my case, I fixed it by picking the right dependency versions:
"org.apache.spark" % "spark-core_2.11" % "2.4.0",
"org.apache.spark" % "spark-sql_2.11" % "2.4.0",
"org.apache.hadoop" % "hadoop-common" % "3.2.1",
"org.apache.hadoop" % "hadoop-aws" % "3.2.1"