Spark: convert RDD[Row] to DataFrame where one of the columns in the row is a list

I have an RDD[Row] where each row contains the following data:

[guid, List(peopleObjects)]  
["123", List(peopleObjects1, peopleObjects2, peopleObjects3)]

I want to convert it to a DataFrame.
I'm using the following code:

val personStructureType = new StructType()
    .add(StructField("guid", StringType, true))
    .add(StructField("personList", StringType, true))  
val personDF = hiveContext.createDataFrame(personRDD, personStructureType)

Should I be using a different data type in my schema instead of StringType?

If my list is just a string it works, but when it's a list I get the following error:

scala.MatchError: List(personObject1, personObject2, personObject3) (of class scala.collection.immutable.$colon$colon)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:295)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:294)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter.apply(CatalystTypeConverters.scala:401)
    at org.apache.spark.sql.SQLContext$$anonfun.apply(SQLContext.scala:445)
    at org.apache.spark.sql.SQLContext$$anonfun.apply(SQLContext.scala:445)
    at scala.collection.Iterator$$anon.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon.next(Iterator.scala:328)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:219)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745) 

It's not entirely clear what you're trying to do, but the better approach is to create a case class, map the rows of the RDD to the case class, and then call toDF.

Something like this:

case class MyClass(guid: Int, peopleObjects: List[String])

import sqlContext.implicits._  // needed for .toDF outside the spark-shell

val rdd = sc.parallelize(Array((123, List("a", "b")), (1232, List("b", "d"))))

val df = rdd.map(r => MyClass(r._1, r._2)).toDF
df.show
+----+-------------+
|guid|peopleObjects|
+----+-------------+
| 123|       [a, b]|
|1232|       [b, d]|
+----+-------------+
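
To answer the schema question directly: letting the case class drive the schema gives the list column an ArrayType(StringType) rather than StringType. You can confirm with printSchema (output roughly as it appears in a Spark 1.x shell):

df.printSchema
root
 |-- guid: integer (nullable = false)
 |-- peopleObjects: array (nullable = true)
 |    |-- element: string (containsNull = true)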

Or you can do it the long-hand way, without using a case class, like this:

val df = sqlContext.createDataFrame(
  rdd.map(r => Row(r._1, r._2)),
  StructType(Array(
    StructField("guid",IntegerType),
    StructField("peopleObjects", ArrayType(StringType))
  ))
)
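
If the elements of your list are custom objects rather than plain strings, the same case-class approach still works: nest a case class inside the outer one and Spark maps it to an array of structs. A minimal sketch, assuming a hypothetical PeopleObject with a single name field (adjust to whatever fields your real peopleObjects actually have):

case class PeopleObject(name: String)
case class PersonRecord(guid: String, peopleObjects: List[PeopleObject])

val peopleRdd = sc.parallelize(Seq(
  ("123", List(PeopleObject("a"), PeopleObject("b"))),
  ("456", List(PeopleObject("c")))
))

val peopleDF = peopleRdd.map { case (guid, people) => PersonRecord(guid, people) }.toDF
peopleDF.printSchema
// root
//  |-- guid: string (nullable = true)
//  |-- peopleObjects: array (nullable = true)
//  |    |-- element: struct (containsNull = true)
//  |    |    |-- name: string (nullable = true)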