Why does joining two Spark DataFrames fail unless I add ".as('alias)" to both?
Suppose there are two Spark DataFrames that we want to join, for whatever reason:
val df1 = Seq(("A", 1), ("B", 2), ("C", 3)).toDF("agent", "in_count")
val df2 = Seq(("A", 2), ("C", 2), ("D", 2)).toDF("agent", "out_count")
It can be done with code like this:
val joinedDf = df1.as('d1).join(df2.as('d2), ($"d1.agent" === $"d2.agent"))
// Result:
joinedDf.show
+-----+--------+-----+---------+
|agent|in_count|agent|out_count|
+-----+--------+-----+---------+
| A| 1| A| 2|
| C| 3| C| 2|
+-----+--------+-----+---------+
Now, what I don't understand is: why does this only work when I use the aliases df1.as('d1) and df2.as('d2)? I could imagine a name clash between the columns if I wrote it straightforwardly as
val joinedDf = df1.join(df2, ($"df1.agent" === $"df2.agent")) // fails
But... I don't understand why I can't use .as(alias) on only one of the two DFs:
df1.as('d1).join(df2, ($"d1.agent" === $"df2.agent")).show()
It fails with
org.apache.spark.sql.AnalysisException: cannot resolve '`df2.agent`' given input columns: [agent, in_count, agent, out_count];;
'Join Inner, (agent#25 = 'df2.agent)
:- SubqueryAlias d1
: +- Project [_1#22 AS agent#25, _2#23 AS in_count#26]
: +- LocalRelation [_1#22, _2#23]
+- Project [_1#32 AS agent#35, _2#33 AS out_count#36]
+- LocalRelation [_1#32, _2#33]
Why is this last example invalid?
Hi. When you use an alias, the DataFrame is wrapped in a SubqueryAlias and becomes an org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [agent: string, in_count: int], so $"d1.agent" can be resolved against that alias. The Scala variable name df2, on the other hand, is invisible to Spark's analyzer: no alias named "df2" exists in the logical plan (notice that the plan in your error message contains SubqueryAlias d1 but nothing for df2), which is why $"df2.agent" cannot be resolved. If you want to join the DataFrames without aliases, you can do it like this:
scala> val joinedDf = df1.join(df2, (df1("agent") === df2("agent")))
joinedDf: org.apache.spark.sql.DataFrame = [agent: string, in_count: int ... 2 more fields]
scala> joinedDf.show
+-----+--------+-----+---------+
|agent|in_count|agent|out_count|
+-----+--------+-----+---------+
| A| 1| A| 2|
| C| 3| C| 2|
+-----+--------+-----+---------+
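If you prefer the $"alias.column" syntax, you also don't have to alias both sides. A minimal sketch (assuming the same spark-shell session as above, with spark.implicits._ in scope; joinedDf2 is just an illustrative name): alias only df1 and reference the other column through the df2 variable itself.

// $"d1.agent" resolves against SubqueryAlias d1; df2("agent") is a Column
// bound directly to df2's plan, so that side needs no alias at all.
val joinedDf2 = df1.as("d1").join(df2, $"d1.agent" === df2("agent"))
joinedDf2.show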
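Also worth noting: the joined result above contains two columns both named agent, so a plain $"agent" on the result would be ambiguous. With both sides aliased you can disambiguate when projecting; a sketch under the same assumptions (result is an illustrative name):

// Select one of the duplicate agent columns via its alias; in_count and
// out_count are unique across both inputs, so they need no qualifier.
val result = df1.as("d1")
  .join(df2.as("d2"), $"d1.agent" === $"d2.agent")
  .select($"d1.agent", $"in_count", $"out_count")
result.show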