PySpark Dataframes: how to filter on multiple conditions with compact code?
If I have a list of column names and I want to filter rows where the values of those columns are greater than zero, can I do something like this?
columns = ['colA','colB','colC','colD','colE','colF']
new_df = df.filter(any([df[c]>0 for c in columns]))
This returns:
ValueError: Cannot convert column into bool: please use '&' for 'and',
'|' for 'or', '~' for 'not' when building DataFrame boolean
expressions
I guess I could sum those columns and filter on the resulting single column (since I don't have negative numbers); a rough sketch of that sum trick is below. But if I did have negatives, the trick wouldn't work, and in any case, if I had to filter those columns on some other condition that isn't expressible as a sum, how could I do what I want? Any ideas?
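Something like this is what I mean by the sum trick (a rough sketch, assuming the listed columns are all numeric and non-negative):
from functools import reduce
from operator import add
columns = ['colA', 'colB', 'colC', 'colD', 'colE', 'colF']
# Add the columns together and keep rows whose total is positive.
# This only matches "any column > 0" when no column can be negative.
new_df = df.filter(reduce(add, [df[c] for c in columns]) > 0)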
You can use the or_ operator instead:
from operator import or_
from functools import reduce
newdf = df.where(reduce(or_, (df[c] > 0 for c in df.columns)))
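If you only want to test the specific columns from the question rather than every column of the DataFrame, reduce over that list instead (a sketch assuming those column names exist in df):
from operator import or_
from functools import reduce
columns = ['colA', 'colB', 'colC', 'colD', 'colE', 'colF']
newdf = df.where(reduce(or_, (df[c] > 0 for c in columns)))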
Edit: A more Pythonic solution:
from pyspark.sql.functions import lit
def any_(*preds):
    # OR the predicate columns together, starting from a literal False
    cond = lit(False)
    for pred in preds:
        cond = cond | pred
    return cond
newdf = df.where(any_(*[df[c] > 0 for c in df.columns]))
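The same pattern gives you an "all columns" test as well; a hypothetical all_ counterpart (not part of the original answer) starts from lit(True) and combines with & instead of |:
from pyspark.sql.functions import lit
def all_(*preds):
    # AND the predicate columns together, starting from a literal True
    cond = lit(True)
    for pred in preds:
        cond = cond & pred
    return cond
newdf = df.where(all_(*[df[c] > 0 for c in df.columns]))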
Edit 2: Full example:
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.1.0-SNAPSHOT
      /_/
Using Python version 3.5.1 (default, Dec 7 2015 11:16:01)
SparkSession available as 'spark'.
In [1]: from pyspark.sql.functions import lit
In [2]: %paste
def any_(*preds):
    cond = lit(False)
    for pred in preds:
        cond = cond | pred
    return cond
## -- End pasted text --
In [3]: df = sc.parallelize([(1, 2, 3), (-1, -2, -3), (1, -1, 0)]).toDF()
In [4]: df.where(any_(*[df[c] > 0 for c in df.columns])).show()
# +---+---+---+
# | _1| _2| _3|
# +---+---+---+
# | 1| 2| 3|
# | 1| -1| 0|
# +---+---+---+
In [5]: df[any_(*[df[c] > 0 for c in df.columns])].show()
# +---+---+---+
# | _1| _2| _3|
# +---+---+---+
# | 1| 2| 3|
# | 1| -1| 0|
# +---+---+---+
In [6]: df.show()
# +---+---+---+
# | _1| _2| _3|
# +---+---+---+
# | 1| 2| 3|
# | -1| -2| -3|
# | 1| -1| 0|
# +---+---+---+
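As a side note (not part of the original answer), for the purely numeric "any column > 0" case you could also compare the row-wise maximum against zero; a sketch using pyspark.sql.functions.greatest, which requires at least two columns:
from pyspark.sql.functions import greatest
# If the largest value across the columns is positive, at least one column is > 0.
newdf = df.where(greatest(*[df[c] for c in df.columns]) > 0)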