ClassCastException while deserializing with Java's native readObject from Spark driver

I have two Spark jobs, A and B, where A must run before B. The output of A must be readable both from the Spark job B and from standalone, non-Spark Scala code.

I am currently using Java's native serialization together with Scala case classes.

From the Spark job A:

val model = ALSFactorizerModel(...)

context.writeSerializable(resultOutputPath, model)

The serialization method:

def writeSerializable[T <: Serializable](path: String, obj: T): Unit = {
  val writer: OutputStream = ... // Google Cloud Storage dependant
  val oos: ObjectOutputStream = new ObjectOutputStream(writer)
  oos.writeObject(obj)
  oos.close()
  writer.close()
}
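
The writer above is elided ("Google Cloud Storage dependant"); as a purely illustrative sketch, and assuming the Hadoop FileSystem API (which the GCS connector implements) is available, such a stream could be opened like this (the openWriter helper is hypothetical, not the code actually used):

import java.io.OutputStream
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical helper: open a writable stream for a gs:// path through the
// Hadoop FileSystem abstraction (the GCS connector registers the "gs" scheme).
def openWriter(path: String, conf: Configuration): OutputStream = {
  val fs = FileSystem.get(new URI(path), conf)
  fs.create(new Path(path))
}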

From the Spark job B, or from any standalone non-Spark Scala code:

val lastFactorizerModel: ALSFactorizerModel =
  context.readSerializable[ALSFactorizerModel](ALSFactorizer.resultOutputPath)

with the deserialization method:

def readSerializable[T <: Serializable](path: String): T = {
  val is : InputStream = ... // Google Cloud Storage dependant
  val ois = new ObjectInputStream(is)
  val model: T = ois
    .readObject()
    .asInstanceOf[T]
  ois.close()
  is.close()

  model
}
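
The input stream is likewise elided; a matching hypothetical sketch for the read side, under the same Hadoop FileSystem assumption:

import java.io.InputStream
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical counterpart of openWriter: open a readable stream for a gs:// path.
def openReader(path: String, conf: Configuration): InputStream = {
  val fs = FileSystem.get(new URI(path), conf)
  fs.open(new Path(path))
}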

The (nested) case classes:

ALSFactorizerModel:

package mycompany.algo.als.common.io.model.factorizer

import mycompany.data.item.ItemStore

@SerialVersionUID(1L)
final case class ALSFactorizerModel(
  knownItems:       Array[ALSFeaturedKnownItem],
  unknownItems:     Array[ALSFeaturedUnknownItem],
  rank:             Int,
  modelTS:          Long,
  itemRepositoryTS: Long,
  stores:           Seq[ItemStore]
)

ItemStore:

package mycompany.data.item

@SerialVersionUID(1L)
final case class ItemStore(
  id:     String,
  tenant: String,
  name:   String,
  index:  Int
)

The output:

The exception:

java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field mycompany.algo.als.common.io.model.factorizer.ALSFactorizerModel.stores of type scala.collection.Seq in instance of mycompany.algo.als.common.io.model.factorizer.ALSFactorizerModel
  at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
  at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2251)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
  at mycompany.fs.gcs.SimpleGCSFileSystem.readSerializable(SimpleGCSFileSystem.scala:71)
  at mycompany.algo.als.batch.strategy.ALSClusterer$.run(ALSClusterer.scala:38)
  at mycompany.batch.SinglePredictorEbapBatch$$anonfun.apply(SinglePredictorEbapBatch.scala:55)
  at mycompany.batch.SinglePredictorEbapBatch$$anonfun.apply(SinglePredictorEbapBatch.scala:55)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1(Future.scala:24)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
  at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
  at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
  at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
  at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
  at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Am I missing something? Should I configure Dataproc/Spark to support Java serialization for this code?

I submit the job with --jars <path to my fatjar> and have never run into any other issue before. The Spark dependencies are not included in this JAR; they are in Provided scope.

Scala version: 2.11.8, Spark version: 2.0.2, SBT version: 0.13.13.

Thanks for your help.

Replacing stores: Seq[ItemStore] with stores: Array[ItemStore] solved the problem for us.
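
For reference, the fixed declaration would then read as follows (a sketch of the case class shown above; Java serialization writes arrays directly, with no SerializationProxy indirection like scala.collection.immutable.List has, which is presumably why the cast no longer fails):

@SerialVersionUID(1L)
final case class ALSFactorizerModel(
  knownItems:       Array[ALSFeaturedKnownItem],
  unknownItems:     Array[ALSFeaturedUnknownItem],
  rank:             Int,
  modelTS:          Long,
  itemRepositoryTS: Long,
  stores:           Array[ItemStore] // was: Seq[ItemStore]
)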

Alternatively, we could have used a different class loader for the serialization/deserialization operations.
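
A minimal sketch of that alternative, assuming the thread's context classloader is the one that loaded the fat jar (ClassLoaderAwareObjectInputStream is our own name, not a library class): by default, ObjectInputStream resolves classes against the latest user-defined classloader on the call stack, which on a Spark driver is apparently not the loader that loaded the fat jar, so overriding resolveClass to use an explicit loader lets the List's serialization proxy resolve against the right classes.

import java.io.{InputStream, ObjectInputStream, ObjectStreamClass}

// Resolve deserialized classes through an explicit classloader, falling back
// to the default lookup when the class is not visible to that loader.
class ClassLoaderAwareObjectInputStream(in: InputStream, loader: ClassLoader)
    extends ObjectInputStream(in) {
  override protected def resolveClass(desc: ObjectStreamClass): Class[_] =
    try Class.forName(desc.getName, false, loader)
    catch { case _: ClassNotFoundException => super.resolveClass(desc) }
}

readSerializable would then construct the stream as new ClassLoaderAwareObjectInputStream(is, Thread.currentThread().getContextClassLoader) instead of a plain ObjectInputStream.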

Hope this helps.