
Unable to save RDD on local filesystem on Windows 10

I have a Scala/Spark program that validates the XML files in an input directory and then writes a report to another path given as an input parameter (a local filesystem path where the report should be written).

As per the stakeholders' requirement, the program is to run on local machines, so I am using Spark in local mode. Everything was fine until now, and I was saving my report to a file using the code below:

// Write the report as a single CSV file with a header row
dataframe.repartition(1)
    .write
    .option("header", "true")
    .mode("overwrite")
    .csv(reportPath)
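For context, the session itself is created in local mode. A minimal sketch of that setup (the builder options and the app name here are assumptions for illustration, not from the original program):

    import org.apache.spark.sql.SparkSession

    // Local-mode session: master("local[*]") keeps all execution on the
    // one machine, as the stakeholders require. The app name is illustrative.
    val spark = SparkSession.builder()
      .appName("XmlReportValidator")
      .master("local[*]")
      .getOrCreate()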

However, writing the file this way requires winutils to be installed/configured on the machine that runs my program.

Given that we take Cloudera updates frequently, and we bump the jars in the pom file to the latest versions after each one, changing winutils after every update is an overhead. I have therefore been asked to remove the dependency on winutils.
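For what it's worth, one way to remove the winutils dependency entirely is to keep Spark for the processing but write the final report on the driver with plain java.nio, bypassing Hadoop's FileSystem layer altogether. A minimal sketch, assuming the report is small enough to collect into driver memory (the file name report.csv is made up, and the naive comma-joining does no CSV quoting):

    import java.nio.charset.StandardCharsets
    import java.nio.file.{Files, Paths}
    import scala.collection.JavaConverters._

    // Collect the report to the driver and write it with java.nio, so no
    // Hadoop FileSystem (and hence no winutils) is involved. Files.write
    // truncates an existing file by default, mirroring mode("overwrite").
    val header = dataframe.columns.mkString(",")
    val rows   = dataframe.collect().map(_.mkString(","))
    Files.createDirectories(Paths.get(reportPath))
    Files.write(
      Paths.get(reportPath, "report.csv"),
      (header +: rows).toSeq.asJava,
      StandardCharsets.UTF_8)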

After a quick Google search and coming across a suggestion, I decided to change the code above to:

// Fall back to the RDD API and write the report as plain text
val outputRdd = dataframe.rdd
val count = outputRdd.count()
println("\nCount is: " + count + "\n")
println("\nOutput path is: " + reportPath + "\n")
outputRdd.coalesce(1).saveAsTextFile(reportPath)

However, when I run the code now, I get this error:

Count is: 15


Output path is: C:\codingdir\test\report

Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.hadoop.mapred.JobContextImpl.<init>(Lorg/apache/hadoop/mapred/JobConf;Lorg/apache/hadoop/mapreduce/JobID;)V from class org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil
    at org.apache.spark.internal.io.HadoopMapRedWriteConfigUtil.createJobContext(SparkHadoopWriter.scala:178)
    at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:67)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset.apply$mcV$sp(PairRDDFunctions.scala:1096)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset.apply(PairRDDFunctions.scala:1094)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset.apply(PairRDDFunctions.scala:1094)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile.apply$mcV$sp(PairRDDFunctions.scala:1067)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile.apply(PairRDDFunctions.scala:1032)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile.apply(PairRDDFunctions.scala:1032)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile.apply$mcV$sp(PairRDDFunctions.scala:958)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile.apply(PairRDDFunctions.scala:958)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile.apply(PairRDDFunctions.scala:958)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:957)
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile.apply$mcV$sp(RDD.scala:1499)
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile.apply(RDD.scala:1478)
    at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile.apply(RDD.scala:1478)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1478)
    at com.optus.dcoe.hawk.XmlParser$.delayedEndpoint$com$optus$dcoe$hawk$XmlParser(XmlParser.scala:120)
    at com.optus.dcoe.hawk.XmlParser$delayedInit$body.apply(XmlParser.scala:16)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main.apply(App.scala:76)
    at scala.App$$anonfun$main.apply(App.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
    at scala.App$class.main(App.scala:76)
    at com.optus.dcoe.hawk.XmlParser$.main(XmlParser.scala:16)
    at com.optus.dcoe.hawk.XmlParser.main(XmlParser.scala)

I have tried changing the value of the reportPath variable to C:\codingdir\test\report, file://C:/codingdir/test/report, and the other values suggested in the linked posts and other links, but I still get the same error.

I also found these articles about java.lang.IllegalAccessError, but I am not sure how to resolve the error:
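From the stack trace, saveAsTextFile still goes through Hadoop's mapred API (saveAsHadoopFile), so the Hadoop dependency is still in play, and an IllegalAccessError like this typically means two incompatible versions of a Hadoop class are on the classpath. A small diagnostic sketch (hypothetical, not part of my program) that prints which jar the offending class is actually loaded from:

    // Print the jar that supplies JobContextImpl at runtime; comparing it
    // against the expected cdh6 artifact can expose a conflicting
    // mapreduce jar. getCodeSource can be null, hence the Option wrapper.
    val source = Option(
        classOf[org.apache.hadoop.mapred.JobContextImpl]
          .getProtectionDomain.getCodeSource)
      .map(_.getLocation)
    println(s"JobContextImpl loaded from: $source")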

Can someone please help me resolve this issue?

Some notes on the environment:

The HADOOP_HOME environment variable related to winutils has been removed.
The winutils entry has been removed from the PATH variable.
I am using Java 8 on Windows 10 (all users of the program are on similar laptops).
The Spark version is 2.4.0-cdh6.2.1.

Finally found the problem: it was caused by some unneeded mapreduce-related dependencies. Those have now been removed, and I have moved on to a different error.
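In case it helps anyone with the same symptom: running mvn dependency:tree on the project is a common way to spot which transitive artifact drags in the conflicting org.apache.hadoop.mapred classes, so the duplicate can be excluded or removed from the pom.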