Add an empty column to Spark DataFrame
As mentioned in many other places on the web, adding a new column to an existing DataFrame is not straightforward. Unfortunately, it is important to have this functionality (even though it is inefficient in a distributed environment), especially when trying to concatenate two DataFrames using unionAll.

What is the most elegant workaround for adding a null column to a DataFrame to facilitate a unionAll?
My version goes like this:
from pyspark.sql.types import StringType
from pyspark.sql.functions import UserDefinedFunction

# A UDF that ignores its input and always returns None
to_none = UserDefinedFunction(lambda x: None, StringType())
new_df = old_df.withColumn('new_column', to_none(old_df['any_col_from_old']))
All you need here is a literal and a cast:
from pyspark.sql.functions import lit
from pyspark.sql.types import StringType

new_df = old_df.withColumn('new_column', lit(None).cast(StringType()))
Full example:
from pyspark.sql import Row

row = Row("foo", "bar")  # Row factory with named fields
df = sc.parallelize([row(1, "2"), row(2, "3")]).toDF()
df.printSchema()
## root
## |-- foo: long (nullable = true)
## |-- bar: string (nullable = true)
new_df = df.withColumn('new_column', lit(None).cast(StringType()))
new_df.printSchema()
## root
## |-- foo: long (nullable = true)
## |-- bar: string (nullable = true)
## |-- new_column: string (nullable = true)
new_df.show()
## +---+---+----------+
## |foo|bar|new_column|
## +---+---+----------+
## | 1| 2| null|
## | 2| 3| null|
## +---+---+----------+
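To close the loop on the unionAll motivation from the question, here is a minimal sketch (df_other and the row3 factory are hypothetical names) showing how the padded DataFrame unions cleanly with one that already has the extra column:

# Hypothetical second DataFrame that already has all three columns
row3 = Row("foo", "bar", "new_column")
df_other = sc.parallelize([row3(3, "4", "x")]).toDF()

# With the null column in place, the schemas line up positionally
new_df.unionAll(df_other).show()
## +---+---+----------+
## |foo|bar|new_column|
## +---+---+----------+
## |  1|  2|      null|
## |  2|  3|      null|
## |  3|  4|         x|
## +---+---+----------+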
The Scala equivalent can be found here: Create new Dataframe with empty/null field values
I would cast lit(None) to NullType instead of StringType. That way, should we ever have to filter out non-null rows on that column, it can easily be done as follows:
from pyspark.sql import Row
from pyspark.sql.functions import lit, col
from pyspark.sql.types import NullType

row = Row("foo", "bar")
df = sc.parallelize([row(1, "2"), row(2, "3")]).toDF()
new_df = df.withColumn('new_column', lit(None).cast(NullType()))
new_df.printSchema()

new_df.filter(col("new_column").isNull()).show()     # both rows
new_df.filter(col("new_column").isNotNull()).show()  # empty result
Also, if you are casting to StringType, be careful not to use lit("None") (with quotes), since that stores the literal string "None" rather than a real null, so filtering with col("new_column").isNull() would match nothing.
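A quick illustration of that pitfall, reusing the two-row df from above (bad_df is a hypothetical name):

bad_df = df.withColumn('new_column', lit("None"))  # stores the string "None"

bad_df.filter(col("new_column").isNull()).count()   # 0 -- no real nulls present
bad_df.filter(col("new_column") == "None").count()  # 2 -- every row holds the string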
An option that doesn't require importing StringType:
df = df.withColumn('foo', F.lit(None).cast('string'))
Full example:
from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()
df = spark.range(1, 3).toDF('c')
df = df.withColumn('foo', F.lit(None).cast('string'))
df.printSchema()
# root
# |-- c: long (nullable = false)
# |-- foo: string (nullable = true)
df.show()
# +---+----+
# | c| foo|
# +---+----+
# | 1|null|
# | 2|null|
# +---+----+