How to join on multiple columns in Pyspark?
I am using Spark 1.3 and would like to join on multiple columns using the Python interface (SparkSQL).
The following works. I first register them as temp tables:
numeric.registerTempTable("numeric")
Ref.registerTempTable("Ref")
test = numeric.join(Ref, numeric.ID == Ref.ID, joinType='inner')
I now want to join them based on multiple columns.
I get a SyntaxError: invalid syntax with this:
test = numeric.join(Ref,
    numeric.ID == Ref.ID AND numeric.TYPE == Ref.TYPE AND
    numeric.STATUS == Ref.STATUS, joinType='inner')
You should use the & / | operators and pay attention to operator precedence (== has lower precedence than the bitwise AND and OR):
df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))
df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x3"))
df = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2))
df.show()
## +---+---+---+---+---+---+
## | x1| x2| x3| x1| x2| x3|
## +---+---+---+---+---+---+
## | 2| b|3.0| 2| b|0.0|
## +---+---+---+---+---+---+
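To make the precedence point concrete, here is a minimal sketch reusing df1 and df2 from above (the names are only illustrative). Without parentheses, Python parses df1.x1 == df2.x1 & df1.x2 == df2.x2 as the chained comparison df1.x1 == (df2.x1 & df1.x2) == df2.x2, which is not the intended join condition and, depending on the PySpark version, may fail with an error about converting a Column into a boolean:

# Parenthesize each equality before combining the conditions with &;
# otherwise & (bitwise AND) is evaluated before == and the expression
# becomes a chained comparison over Columns rather than a join condition.
cond = (df1.x1 == df2.x1) & (df1.x2 == df2.x2)
df1.join(df2, cond).show()  # same result as the join shown above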
An alternative approach would be:
df1 = sqlContext.createDataFrame(
    [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
    ("x1", "x2", "x3"))
df2 = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x4"))
df = df1.join(df2, ['x1', 'x2'])
df.show()
Output:
+---+---+---+---+
| x1| x2| x3| x4|
+---+---+---+---+
| 2| b|3.0|0.0|
+---+---+---+---+
The main advantage is that the columns on which the tables are joined are not duplicated in the output, reducing the risk of encountering errors such as org.apache.spark.sql.AnalysisException: Reference 'x1' is ambiguous, could be: x1#50L, x1#57L.
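For contrast, here is a small sketch of how that ambiguity surfaces with the expression-based join (illustrative only; the exact exception text varies by Spark version):

# The expression-based join keeps both x1 (and x2) columns, so referring
# to "x1" by name afterwards is ambiguous and Spark raises an AnalysisException.
joined = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2))
try:
    joined.select("x1").show()
except Exception as e:
    print(e)  # e.g. "Reference 'x1' is ambiguous, ..."

# Joining on a list of column names keeps a single copy of each join key.
df1.join(df2, ['x1', 'x2']).select("x1").show()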
As long as the columns from the two tables have different names (let's say that in the example above, df2 has the columns y1, y2 and y4), you could use the following syntax:
df = df1.join(df2.withColumnRenamed('y1','x1').withColumnRenamed('y2','x2'), ['x1','x2'])
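As a minimal end-to-end sketch of that rename-then-join approach (assuming a hypothetical df2_y built with the columns y1, y2, y4 mentioned above):

df2_y = sqlContext.createDataFrame(
    [(1, "f", -1.0), (2, "b", 0.0)], ("y1", "y2", "y4"))
# Rename the join keys so they match df1, then join on the shared names.
df = df1.join(
    df2_y.withColumnRenamed('y1', 'x1').withColumnRenamed('y2', 'x2'),
    ['x1', 'x2'])
df.show()

Another answer suggests passing a list of join conditions to the on parameter, which PySpark combines with a logical AND: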
test = numeric.join(Ref,
    on=[
        numeric.ID == Ref.ID,
        numeric.TYPE == Ref.TYPE,
        numeric.STATUS == Ref.STATUS
    ], how='inner')
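If your PySpark version does not accept a list of Column expressions for on (the question targets Spark 1.3, whose working example above uses the joinType keyword rather than how), a sketch of the equivalent is to fold the conditions into a single Column yourself:

from functools import reduce
from operator import and_

conds = [
    numeric.ID == Ref.ID,
    numeric.TYPE == Ref.TYPE,
    numeric.STATUS == Ref.STATUS,
]
# Combine the equality conditions into one Column with bitwise AND and
# pass it as a single join expression (positional arguments avoid the
# keyword-name differences between Spark versions).
test = numeric.join(Ref, reduce(and_, conds), 'inner')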