How to retrieve unique values in each window in a pyspark dataframe

I have the following Spark dataframe:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('').getOrCreate()
df = spark.createDataFrame([(1, "a", "2"), (2, "b", "2"), (3, "c", "2"), (4, "d", "2"),
                            (5, "b", "3"), (6, "b", "3"), (7, "c", "2")],
                           ["nr", "column2", "quant"])
df.show()

which gives me:

+---+-------+------+
| nr|column2|quant |
+---+-------+------+
|  1|      a|     2|
|  2|      b|     2|
|  3|      c|     2|
|  4|      d|     2|
|  5|      b|     3|
|  6|      b|     3|
|  7|      c|     2|
+---+-------+------+

From every group of 3 rows (i.e., from each window, where the window size is 3) I want to retrieve the rows whose quant column value is unique, as shown in the picture below.

Here the red boxes mark the windows; from each window I keep only the green rows, where quant is unique:

The output I would like to get is the following:

+---+-------+------+
| nr|column2|quant |
+---+-------+------+
|  1|      a|     2|
|  4|      d|     2|
|  5|      b|     3|
|  7|      c|     2|
+---+-------+------+

I am new to Spark, so any help would be greatly appreciated. Thanks.

Assuming the records are grouped in threes based on the 'nr' column, this approach should work for you.

It uses a udf to decide whether a record should be selected, and lag to fetch the previous rows' quant values.

from pyspark.sql.functions import udf, col, lag
from pyspark.sql.types import BooleanType
from pyspark.sql.window import Window

def tag_selected(index, current_quant, prev_quant1, prev_quant2):
    if index % 3 == 1:  # first record in each group is always selected
        return True
    if index % 3 == 2 and current_quant != prev_quant1:  # second record is selected if the previous quant differs from the current one
        return True
    if index % 3 == 0 and current_quant != prev_quant1 and current_quant != prev_quant2:  # third record is selected if both previous quants differ from the current one
        return True
    return False

tag_selected_udf = udf(tag_selected, BooleanType())

df = spark.createDataFrame([(1, "a", "2"), (2, "b", "2"), (3, "c", "2"), (4, "d", "2"),
                            (5, "b", "3"), (6, "b", "3"), (7, "c", "2")],
                           ["nr", "column2", "quant"])

window = Window.orderBy("nr")  # no partitionBy: all rows go to a single partition, which is fine for small data

df = df.withColumn("prev_quant1", lag(col("quant"), 1, None).over(window)) \
       .withColumn("prev_quant2", lag(col("quant"), 2, None).over(window)) \
       .withColumn("selected",
                   tag_selected_udf(col('nr'), col('quant'), col('prev_quant1'), col('prev_quant2'))) \
       .filter(col('selected')).drop("prev_quant1", "prev_quant2", "selected")
df.show()

Result:

+---+-------+-----+
| nr|column2|quant|
+---+-------+-----+
|  1|      a|    2|
|  4|      d|    2|
|  5|      b|    3|
|  7|      c|    2|
+---+-------+-----+
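
If you would rather avoid a Python udf, the same result can be obtained with built-in functions only. Here is a minimal sketch of that alternative, starting again from the original df and assuming, as above, that nr is consecutive and starts at 1 (the grp and rn column names are just illustrative):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Assign each row to a group of 3 consecutive nr values: 0, 1, 2, ...
grouped = df.withColumn("grp", F.floor((F.col("nr") - 1) / 3))

# Within each (group, quant) pair keep only the row with the smallest nr,
# i.e. the first occurrence of each quant value inside the window.
w = Window.partitionBy("grp", "quant").orderBy("nr")
result = (grouped
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("grp", "rn")
          .orderBy("nr"))
result.show()

Like the udf version, this keeps the first occurrence of each quant value per window (nr 5 is kept even though quant 3 repeats at nr 6), but it stays entirely in built-in functions and so avoids the Python serialization overhead of a udf.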