Pyspark get time difference from timestamps within column level

I am trying to get the time difference "time_d" (in seconds) between consecutive timestamps within each "name" group in Pyspark.

+-------------------+----+
|      timestamplast|name|
+-------------------+----+
|2019-08-01 00:00:00|   1|
|2019-08-01 00:01:00|   1|
|2019-08-01 00:01:15|   1|
|2019-08-01 03:00:00|   2|
|2019-08-01 04:00:00|   2|
|2019-08-01 00:15:00|   3|
+-------------------+----+

The output should look like this:

+-------------------+----+------+
|      timestamplast|name|time_d|
+-------------------+----+------+
|2019-08-01 00:00:00|   1|     0|
|2019-08-01 00:01:00|   1|    60|
|2019-08-01 00:01:15|   1|    15|
|2019-08-01 03:00:00|   2|     0|
|2019-08-01 04:00:00|   2|  3600|
|2019-08-01 00:15:00|   3|     0|
+-------------------+----+------+

In Pandas this would be:

df['time_d'] = df.groupby("name")['timestamplast'].diff().fillna(pd.Timedelta(0)).dt.total_seconds()
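For reference, the per-group logic that this one-liner implements can be sketched in plain Python (a minimal sketch using the sample data above, not part of the original question):

```python
from datetime import datetime
from itertools import groupby

# Sample rows (timestamplast, name) from the table above
rows = [
    ("2019-08-01 00:00:00", 1),
    ("2019-08-01 00:01:00", 1),
    ("2019-08-01 00:01:15", 1),
    ("2019-08-01 03:00:00", 2),
    ("2019-08-01 04:00:00", 2),
    ("2019-08-01 00:15:00", 3),
]

def diff_seconds(rows):
    """For each name group, emit the gap in seconds to the previous row (0 for the first row)."""
    out = []
    for name, group in groupby(rows, key=lambda r: r[1]):
        prev = None
        for ts_str, _ in group:
            ts = datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S")
            out.append((ts_str, name, 0.0 if prev is None else (ts - prev).total_seconds()))
            prev = ts
    return out

for row in diff_seconds(rows):
    print(row)
```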

How is this done in Pyspark?

You can use the lag window function (partitioned by name) and then compute the difference using the timestamps in seconds (unix_timestamp).

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Order rows by timestamp within each name partition
w = Window.partitionBy("name").orderBy(F.col("timestamplast"))

# lag gives the previous row's timestamp in seconds; it is null on the
# first row of each partition, so fall back to a difference of 0 there
df.withColumn("time_d", F.lag(F.unix_timestamp("timestamplast")).over(w)) \
  .withColumn("time_d", F.when(F.col("time_d").isNotNull(),
                               F.unix_timestamp("timestamplast") - F.col("time_d"))
                         .otherwise(F.lit(0))) \
  .orderBy("name", "timestamplast").show()

#+-------------------+----+------+
#|      timestamplast|name|time_d|
#+-------------------+----+------+
#|2019-08-01 00:00:00|   1|     0|
#|2019-08-01 00:01:00|   1|    60|
#|2019-08-01 00:01:15|   1|    15|
#|2019-08-01 03:00:00|   2|     0|
#|2019-08-01 04:00:00|   2|  3600|
#|2019-08-01 00:15:00|   3|     0|
#+-------------------+----+------+