Evolving a schema with Spark DataFrame
I am working with a Spark DataFrame that can load data from one of three different schema versions:
// Original
{ "A": {"B": 1 } }
// Adds "C"
{ "A": {"B": 1 }, "C": 2 }
// Adds "A.D"
{ "A": {"B": 1, "D": 3 }, "C": 2 }
I can handle the additional "C" by checking whether the schema contains a field "C" and, if it does not, adding a new column to the DataFrame. But I don't know how to create a field on the sub-object.
public void evolvingSchema() {
    String versionOne = "{ \"A\": {\"B\": 1 } }";
    String versionTwo = "{ \"A\": {\"B\": 1 }, \"C\": 2 }";
    String versionThree = "{ \"A\": {\"B\": 1, \"D\": 3 }, \"C\": 2 }";

    process(spark.getContext(), "1", versionOne);
    process(spark.getContext(), "2", versionTwo);
    process(spark.getContext(), "3", versionThree);
}
private static void process(JavaSparkContext sc, String version, String data) {
    SQLContext sqlContext = new SQLContext(sc);
    DataFrame df = sqlContext.read().json(sc.parallelize(Arrays.asList(data)));

    if (!Arrays.asList(df.schema().fieldNames()).contains("C")) {
        df = df.withColumn("C", org.apache.spark.sql.functions.lit(null));
    }

    // Not sure what to put here; fieldNames does not contain "A.D".
    try {
        df.select("C").collect();
    } catch (Exception e) {
        System.out.println("Failed to select C for version " + version);
    }

    try {
        df.select("A.D").collect();
    } catch (Exception e) {
        System.out.println("Failed to select A.D for version " + version);
    }
}
JSON sources are not particularly well suited to data with an evolving schema (what about Avro or Parquet instead? see the Parquet sketch after the note below), but the simple solution is to use the same schema for all sources and make new fields optional/nullable:
import org.apache.spark.sql.types.{StructType, StructField, LongType}

val schema = StructType(Seq(
  StructField("A", StructType(Seq(
    StructField("B", LongType, true),
    StructField("D", LongType, true)
  )), true),
  StructField("C", LongType, true)))
You can pass the schema to DataFrameReader like this:
val rddV1 = sc.parallelize(Seq("{ \"A\": {\"B\": 1 } }"))
val df1 = sqlContext.read.schema(schema).json(rddV1)
val rddV2 = sc.parallelize(Seq("{ \"A\": {\"B\": 1 }, \"C\": 2 }"))
val df2 = sqlContext.read.schema(schema).json(rddV2)
val rddV3 = sc.parallelize(Seq("{ \"A\": {\"B\": 1, \"D\": 3 }, \"C\": 2 }"))
val df3 = sqlContext.read.schema(schema).json(rddV3)
You'll get a structure that is consistent across the variants:
require(df1.schema == df2.schema && df2.schema == df3.schema)
Missing columns are automatically set to null:
df1.printSchema
// root
// |-- A: struct (nullable = true)
// | |-- B: long (nullable = true)
// | |-- D: long (nullable = true)
// |-- C: long (nullable = true)
df1.show
// +--------+----+
// | A| C|
// +--------+----+
// |[1,null]|null|
// +--------+----+
df2.show
// +--------+---+
// | A| C|
// +--------+---+
// |[1,null]| 2|
// +--------+---+
df3.show
// +-----+---+
// | A| C|
// +-----+---+
// |[1,3]| 2|
// +-----+---+
Note: This solution is data-source dependent. It may or may not work with other sources.
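To illustrate the Avro/Parquet remark above: the Parquet reader can merge compatible schemas at read time, which makes the hand-written schema unnecessary. A minimal sketch, assuming Spark 1.5+ (where the mergeSchema read option exists) and hypothetical output paths:

// Write each schema version to its own (hypothetical) Parquet directory.
DataFrame v1Df = sqlContext.read().json(
    sc.parallelize(Arrays.asList("{ \"A\": {\"B\": 1 } }")));
DataFrame v3Df = sqlContext.read().json(
    sc.parallelize(Arrays.asList("{ \"A\": {\"B\": 1, \"D\": 3 }, \"C\": 2 }")));
v1Df.write().parquet("/tmp/evolving/v1");
v3Df.write().parquet("/tmp/evolving/v3");

// mergeSchema unions the files' schemas, nested fields included;
// fields missing from a file come back as null.
DataFrame merged = sqlContext.read()
    .option("mergeSchema", "true")
    .parquet("/tmp/evolving/v1", "/tmp/evolving/v3");
merged.printSchema(); // root contains A.B, A.D and C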
zero323 has answered the question, but in Scala. This is the same thing in Java:
public void evolvingSchema() {
    String versionOne = "{ \"A\": {\"B\": 1 } }";
    String versionTwo = "{ \"A\": {\"B\": 1 }, \"C\": 2 }";
    String versionThree = "{ \"A\": {\"B\": 1, \"D\": 3 }, \"C\": 2 }";

    process(spark.getContext(), "1", versionOne);
    process(spark.getContext(), "2", versionTwo);
    process(spark.getContext(), "3", versionThree);
}
private static void process(JavaSparkContext sc, String version, String data) {
    StructType schema = DataTypes.createStructType(Arrays.asList(
        DataTypes.createStructField("A",
            DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("B", DataTypes.LongType, true),
                DataTypes.createStructField("D", DataTypes.LongType, true))), true),
        DataTypes.createStructField("C", DataTypes.LongType, true)));

    SQLContext sqlContext = new SQLContext(sc);
    DataFrame df = sqlContext.read().schema(schema).json(sc.parallelize(Arrays.asList(data)));

    try {
        df.select("C").collect();
    } catch (Exception e) {
        System.out.println("Failed to select C for version " + version);
    }

    try {
        df.select("A.D").collect();
    } catch (Exception e) {
        System.out.println("Failed to select A.D for version " + version);
    }
}
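And for the part of the question the fixed schema sidesteps (creating the missing field on the sub-object at runtime), the nested struct can be rebuilt in place. A minimal sketch, assuming Spark 1.4+ where functions.struct is available and withColumn replaces an existing column of the same name:

import static org.apache.spark.sql.functions.*;

// Inspect the nested struct's fields rather than the top-level fieldNames.
StructType aType = (StructType) df.schema().apply("A").dataType();
if (!Arrays.asList(aType.fieldNames()).contains("D")) {
    // Rebuild "A" from the existing "B" plus a null "D".
    df = df.withColumn("A", struct(
        df.col("A.B").alias("B"),
        lit(null).cast(DataTypes.LongType).alias("D")));
}
df.select("A.D").collect(); // now succeeds for every version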