Weirdness with Spark serialization
I ran into a problem with Spark while using the JavaPairRDD.repartitionAndSortWithinPartitions method. I have tried everything any sane person would think of, and finally wrote a snippet small enough to make the problem visible:
import java.io.Serializable;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

import org.apache.spark.HashPartitioner;
import org.apache.spark.Partitioner;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class Main {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("test").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        final List<String> list = Arrays.asList("I", "am", "totally", "baffled");
        final HashPartitioner partitioner = new HashPartitioner(2);

        doSomething(sc, list, partitioner, String.CASE_INSENSITIVE_ORDER);      // JDK comparator
        doSomething(sc, list, partitioner, Main::compareString);                // method reference
        doSomething(sc, list, partitioner, new StringComparator());             // plain comparator class
        doSomething(sc, list, partitioner, new SerializableStringComparator()); // comparator that is also Serializable
        doSomething(sc, list, partitioner, (s1, s2) -> Integer.compare(s1.charAt(0), s2.charAt(0))); // lambda
    }

    public static <T> void doSomething(JavaSparkContext sc, List<T> list, Partitioner partitioner, Comparator<T> comparator) {
        try {
            sc.parallelize(list)
              .mapToPair(elt -> new Tuple2<>(elt, elt))
              .repartitionAndSortWithinPartitions(partitioner, comparator)
              .count();
            System.out.println("success");
        } catch (Exception e) {
            System.out.println("failure");
        }
    }

    public static int compareString(String s1, String s2) {
        return Integer.compare(s1.charAt(0), s2.charAt(0));
    }

    public static class StringComparator implements Comparator<String> {
        @Override
        public int compare(String s1, String s2) {
            return Integer.compare(s1.charAt(0), s2.charAt(0));
        }
    }

    public static class SerializableStringComparator implements Comparator<String>, Serializable {
        @Override
        public int compare(String s1, String s2) {
            return Integer.compare(s1.charAt(0), s2.charAt(0));
        }
    }
}
Apart from the Spark logging, it outputs:
success
failure
failure
success
failure
The exception raised on the failures is always the same:
org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:483)
org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:240)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:150)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:158)
org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:58)
org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:39)
org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:835)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage.apply(DAGScheduler.scala:781)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage.apply(DAGScheduler.scala:780)
scala.collection.immutable.List.foreach(List.scala:318)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:780)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1193)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1192)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:847)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:778)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage.apply(DAGScheduler.scala:781)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$submitStage.apply(DAGScheduler.scala:780)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:780)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:762)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1362)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:48)
Now I have my fix: declare my custom comparator as Serializable (I checked the standard library code, and the case-insensitive string comparator is declared serializable, so that makes sense).
But why? Why should I not use lambdas here? I would have expected the second and the last call to work properly, since I was only using static methods and classes.
What I find especially weird is that I had registered with Kryo the classes I was trying to serialize, and the classes I had not registered could have been serialized easily with their default associated serializers (Kryo uses FieldSerializer as the default for most classes). However, the Kryo registrator is never executed before the task serialization fails.
My question did not make it clear why I was so baffled (namely, that the Kryo registration code never gets executed), so I have edited it to reflect that.
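For context, the post does not show the registration code itself; a registrator for the classes in the snippet above would typically look roughly like the sketch below (the class name MyKryoRegistrator is purely illustrative, not code from the post):

import com.esotericsoftware.kryo.Kryo;
import org.apache.spark.serializer.KryoRegistrator;

// Illustrative only: registers the comparators from the snippet above with Kryo.
public class MyKryoRegistrator implements KryoRegistrator {
    @Override
    public void registerClasses(Kryo kryo) {
        kryo.register(Main.StringComparator.class);
        kryo.register(Main.SerializableStringComparator.class);
    }
}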
I have found out that Spark uses two different serializers:
- one for serializing the tasks from the master to the slaves, called closureSerializer in the code (see SparkEnv.scala). At the date of this post it can only be set to JavaSerializer.
- one for serializing the actual data that gets processed, called serializer in SparkEnv. This one can be set to either JavaSerializer or KryoSerializer.
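For illustration, here is a minimal sketch of the configuration side: switching the data serializer to Kryo and wiring in a registrator such as the illustrative MyKryoRegistrator sketched earlier. Note that this has no effect on the closure serializer:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class KryoConfigSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("kryo-config-sketch")
                .setMaster("local")
                // Switches the data serializer ("serializer" in SparkEnv) to Kryo.
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                // Points Kryo at a registrator class (use its fully qualified name in a real project).
                .set("spark.kryo.registrator", "MyKryoRegistrator");

        JavaSparkContext sc = new JavaSparkContext(conf);
        // Tasks shipped from the driver to the executors still go through the
        // closure serializer (plain Java serialization), regardless of these settings.
        sc.stop();
    }
}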
Registering a class with Kryo does not ensure that it will always be serialized with Kryo: that depends on how you use it. As an example, the DAGScheduler only uses closureSerializer, so no matter how you configure serialization, any object manipulated by the DAGScheduler will at some point need to be Java-serializable (unless Spark enables Kryo serialization for closures in a later release).
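So, to keep the lambda or method-reference style from the snippet above while still satisfying the Java-serialization requirement, one workaround (a sketch, not something tested in the original post) is to cast them to an intersection type that also includes java.io.Serializable; the compiler then generates a serializable lambda that the closure serializer can handle:

import java.io.Serializable;
import java.util.Comparator;

public class SerializableLambdaSketch {

    public static void main(String[] args) {
        // The intersection cast makes the compiler emit a serializable lambda.
        Comparator<String> lambdaCmp = (Comparator<String> & Serializable)
                (s1, s2) -> Integer.compare(s1.charAt(0), s2.charAt(0));

        // The same trick works for a method reference.
        Comparator<String> methodRefCmp = (Comparator<String> & Serializable)
                SerializableLambdaSketch::compareString;

        System.out.println(lambdaCmp instanceof Serializable);    // true
        System.out.println(methodRefCmp instanceof Serializable); // true
    }

    static int compareString(String s1, String s2) {
        return Integer.compare(s1.charAt(0), s2.charAt(0));
    }
}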