PySpark error when converting DF column to list

I have a problem with my Spark script.

I have dataframe 2, which is a single-column dataframe. What I'm trying to achieve is to return only the rows of df1 whose user is in that list.

I've tried the following, but I get an error (also shown below).

Can anyone point me in the right direction?

    listx= df2.select('user2').collect()

    df_agg = df1\
        .coalesce(1000)\
        .filter((df1.dt == 20181029) &(df1.user.isin(listx)))\
        .select('list of fields')

    Traceback (most recent call last):
      File "/home/keenek1/indev/rax.py", line 31, in <module>
        .filter((df1.dt == 20181029) &(df1.imsi.isin(listx)))\
      File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/column.py", line 444, in isin
      File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/column.py", line 36, in _create_column_from_literal
      File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
      File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
      File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.functions.lit.
    : java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [234101953127315]
            at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:77)
            at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create.apply(literals.scala:163)
            at org.apache.spark.sql.catalyst.expressions.Literal$$anonfun$create.apply(literals.scala:163)
            at scala.util.Try.getOrElse(Try.scala:79)
            at org.apache.spark.sql.catalyst.expressions.Literal$.create(literals.scala:162)
            at org.apache.spark.sql.functions$.typedLit(functions.scala:113)
            at org.apache.spark.sql.functions$.lit(functions.scala:96)
            at org.apache.spark.sql.functions.lit(functions.scala)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Not sure this is the best answer, but: the root cause is that collect() returns a list of Row objects, while isin expects plain values, so lit() chokes on them.

    # two single-column dfs to try to replicate your example:
    df1 = spark.createDataFrame([{'a': 10}])
    df2 = spark.createDataFrame([{'a': 10}, {'a': 18}])
    l1 = df1.select('a').collect()
    # l1 = [Row(a=10)]  - not an accepted value for isin, it seems:
    df2.select('*').where(df2.a.isin(l1)).show()    # this WILL throw an error
    df2.select('*').where(df2.a.isin([10])).show()  # this will NOT throw an error

So something like this:

    from pyspark.sql import functions as F

    l2 = [item.a for item in l1]
    # l2 = [10]
    df2.where(F.col('a').isin(l2)).show()
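
For what it's worth, Row objects also support positional indexing, so this does the same thing (with the toy data above, only the matching row survives the filter):

    # same extraction, indexing the Row by position instead of by field name:
    l2 = [item[0] for item in l1]
    df2.where(F.col('a').isin(l2)).show()
    # +---+
    # |  a|
    # +---+
    # | 10|
    # +---+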

(Honestly this is a bit weird, but... there is a ticket to support isin with single-column dataframes.)

Hope this helps, good luck!

edit: this assumes the collected list is small :) For your example:

    listx = [item.user2 for item in df2.select('user2').collect()]
    df_agg = df1\
        .coalesce(1000)\
        .filter((df1.dt == 20181029) & (df1.user.isin(listx)))\
        .select('list of fields')
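
If df2 is too big to collect at all, a left semi join does the filtering on the cluster instead of the driver. A minimal sketch along those lines, reusing your column names (user, user2, dt) and the 'list of fields' placeholder:

    # filter df1 to users that appear in df2, without collecting anything:
    df_agg = df1\
        .coalesce(1000)\
        .filter(df1.dt == 20181029)\
        .join(df2, df1.user == df2.user2, 'leftsemi')\
        .select('list of fields')

A semi join only keeps df1's columns, so the final select works exactly as before.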