How to select two rows from multiple rows of a column

I have data like this:

ID     | Race    | start | duration
-------|---------| ------| ---------
234    | 1010    | turtle| 100
235    | 1010    | turtle| 101
236    | 1010    | turtle| 99
237    | 1010    | rabbit| 199
238    | 1010    | rabbit| 201
239    | 1010    | rabbit| 85
240    | 9898    | rabbit| 185
241    | 9898    | rabbit| 205
242    | 9898    | rabbit| 505
243    | 9898    | turtle| 155
244    | 9898    | turtle| 104

From this I want to select the row with the smallest duration for each combination of race and start.

Example:

Given the data above, the result should be:

ID     | Race    | start | duration
-------|---------| ------| ---------
236    | 1010    | turtle| 99
239    | 1010    | rabbit| 85
240    | 9898    | rabbit| 185
244    | 9898    | turtle| 104

What I tried:

w = Window().partitionBy("race").orderBy(col("duration").desc())
(df
  .withColumn("rn", rowNumber().over(w))
  .where(col("rn") == 1)
  .select("race", "duration")).show()

However, while this does group the data, I don't get the desired result.

Hi, you should use rank instead of rowNumber, and make the window partition by both the "race" and "start" columns. Here is a code snippet that solves your problem:

import pyspark.sql.functions as F
from pyspark.sql import Window

# sqlContext is available in the PySpark shell; on Spark 2+ you can
# use spark.createDataFrame instead.
df = sqlContext.createDataFrame([[234, 1010, 'turtle', 100],
                                 [235, 1010, 'turtle', 101],
                                 [236, 1010, 'turtle', 99],
                                 [237, 1010, 'rabbit', 199],
                                 [238, 1010, 'rabbit', 201],
                                 [239, 1010, 'rabbit', 85],
                                 [240, 9898, 'rabbit', 185],
                                 [241, 9898, 'rabbit', 205],
                                 [242, 9898, 'rabbit', 505],
                                 [243, 9898, 'turtle', 155],
                                 [244, 9898, 'turtle', 104]
                                ], ['id', 'race', 'start', 'duration'])

# Partition by both grouping columns and order by duration ascending,
# so rank 1 is the fastest row in each (race, start) group.
w1 = Window().partitionBy("race", "start").orderBy(F.col("duration"))
(df
  .withColumn("rn", F.rank().over(w1))
  .where(F.col("rn") == 1)
  .select("id", "race", "start", "duration")
  .show())