Combining entire column of Arrays into one Array

I have this DataFrame, and I would like to combine all of the Arrays in the data column into one big Array, separate from the DataFrame.

Scala and the DataFrame API are still quite new to me, but I gave it a shot:

import scala.collection.mutable.{ListBuffer, WrappedArray}

case class Tile(data: Array[Int])

val ta = Tile(Array(1,2))
val tb = Tile(Array(3,4))
val tc = Tile(Array(5,6))

val df = ListBuffer(ta, tb, tc).toDF()

// Combine contents of DF into one array
val result = new Array[Int](6)
var offset = 0
val combine = (t: WrappedArray[Int]) => {
    Array.copy(t, 0, result, offset, t.length)
    offset += t.length
}

df.foreach(r => combine(r(0).asInstanceOf[WrappedArray[Int]]))

df.show()
+------+
|  data|
+------+
|[1, 1]|
|[2, 2]|
|[3, 3]|
+------+

When I run this, I get the following error:

16/08/23 11:21:32 ERROR executor.Executor: Exception in task 0.0 in stage 17.0 (TID 17)
scala.MatchError: WrappedArray(1, 1) (of class scala.collection.mutable.WrappedArray$ofRef)
at scala.runtime.ScalaRunTime$.array_apply(ScalaRunTime.scala:71)
at scala.Array$.slowcopy(Array.scala:81)
at scala.Array$.copy(Array.scala:107)
at $line150.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun.apply(<console>:32)
at $line150.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun.apply(<console>:31)
at $line190.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun.apply(<console>:46)
at $line190.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun.apply(<console>:46)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$foreach$$anonfun$apply.apply(RDD.scala:912)
at org.apache.spark.rdd.RDD$$anonfun$foreach$$anonfun$apply.apply(RDD.scala:912)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1869)
at org.apache.spark.SparkContext$$anonfun$runJob.apply(SparkContext.scala:1869)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:74

Can anyone point me in the right direction? Thanks!

When using Spark, you can't accumulate data with foreach the way you normally would in plain Scala. Spark distributes the work across executors, so the function you pass in must be Serializable, and any changes it makes to driver-side variables (such as result and offset here) are applied to executor-local copies and never make it back to the driver.
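As a minimal sketch of that pitfall (the counter variable is just illustrative), a local var mutated inside foreach stays unchanged on the driver, because each task works on its own deserialized copy of the closure:

var counter = 0
sc.parallelize(1 to 100).foreach(x => counter += x)
// Still prints 0 on the driver: each executor incremented its own copy of counter
println(counter)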

If you still want to do things in a similar way, use an Accumulator, which does support Spark's distributed execution model:

import org.apache.spark.rdd.RDD

val myRdd: RDD[List[Int]] = sc.parallelize(List(List(1,2), List(3,4), List(5,6)))

val acc = sc.collectionAccumulator[Int]("MyAccumulator")

myRdd.foreach(l => l.foreach(i => acc.add(i)))

Or, in your case:

case class Tile(data: Array[Int])

val myRdd: RDD[Tile] = sc.parallelize(List(
  Tile(Array(1,2)),
  Tile(Array(3,4)),
  Tile(Array(5,6))
))

val acc = sc.collectionAccumulator[Int]("MyAccumulator")

myRdd.foreach(t => t.data.foreach(i => acc.add(i)))
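
Once the foreach has run, the accumulated elements can be read back on the driver via acc.value, which for collectionAccumulator is a java.util.List. A rough sketch of converting it to an Array[Int] (note that element order across partitions is not guaranteed):

import scala.collection.JavaConverters._

// acc.value is a java.util.List[Int]; convert it to a Scala collection, then to an Array.
// The order of elements is not deterministic, since partitions finish in arbitrary order.
val combined: Array[Int] = acc.value.asScala.toArray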