Group days into weeks with totals PySpark
I recently got help with a similar query, but I'd like to know how to do this in PySpark, since I'm new to it:
day bitcoin_total dash_total
2009-01-03 1 0
2009-01-09 14 0
2009-01-10 61 0
The ideal result would be the start date of each week (it could be Monday or Sunday, whichever works):
day bitcoin_total dash_total
2008-12-28 1 0
2009-01-04 75 0
The code below returns the week as a number, and the totals don't seem right. I can't seem to reproduce the totals that .agg(sum()) returns, and I can't even add the second total (dash_total). I've tried .col("dash_total").
Is there a way to group the days into weeks?
from pyspark.sql.functions import weekofyear, sum
(df
.groupBy(weekofyear("day").alias("date_by_week"))
.agg(sum("bitcoin_total"))
.orderBy("date_by_week")
.show())
I'm running Spark on Databricks.
Try this approach, using the date_sub and next_day functions in Spark.
Explanation:
date_sub(
    next_day(col("day"), "sunday"),  // get the date of the next Sunday
    7)                               // then subtract a week to get that week's start date
Example:
In PySpark:
from pyspark.sql.functions import col, date_sub, next_day, sum

df = sc.parallelize([("2009-01-03", "1", "0"),
                     ("2009-01-09", "14", "0"),
                     ("2009-01-10", "61", "0")]).toDF(["day", "bitcoin_total", "dash_total"])

(df
 .withColumn("week_strt_day", date_sub(next_day(col("day"), "sunday"), 7))
 .groupBy("week_strt_day")
 .agg(sum("bitcoin_total").cast("int").alias("bitcoin_total"),
      sum("dash_total").cast("int").alias("dash_total"))
 .orderBy("week_strt_day")
 .show())
Result:
+-------------+-------------+----------+
|week_strt_day|bitcoin_total|dash_total|
+-------------+-------------+----------+
| 2008-12-28| 1| 0|
| 2009-01-04| 75| 0|
+-------------+-------------+----------+
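Grouping on the derived week_strt_day (an actual date) rather than on weekofyear keeps weeks from different years separate, which is likely why the totals looked off in the original attempt; and since .agg() accepts several aggregate expressions at once, the dash_total sum goes in the same call.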
In Scala:
import org.apache.spark.sql.functions._

val df = Seq(("2009-01-03", "1", "0"),
             ("2009-01-09", "14", "0"),
             ("2009-01-10", "61", "0")).toDF("day", "bitcoin_total", "dash_total")

df.withColumn("week_strt_day", date_sub(next_day('day, "sunday"), 7))
  .groupBy("week_strt_day")
  .agg(sum("bitcoin_total").cast("int").alias("bitcoin_total"),
       sum("dash_total").cast("int").alias("dash_total"))
  .orderBy("week_strt_day")
  .show()
Result:
+-------------+-------------+----------+
|week_strt_day|bitcoin_total|dash_total|
+-------------+-------------+----------+
| 2008-12-28| 1| 0|
| 2009-01-04| 75| 0|
+-------------+-------------+----------+
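If a Monday-based week start is preferred instead (the question allows either), one alternative is date_trunc, available in Spark 2.3+, which snaps a date to the Monday of its week. A minimal PySpark sketch under that assumption, reusing the same df:

from pyspark.sql import functions as F

# date_trunc("week", ...) returns the Monday of the week as a timestamp;
# to_date() converts it back to a plain date for grouping.
(df
 .withColumn("week_strt_day", F.to_date(F.date_trunc("week", F.to_date("day"))))
 .groupBy("week_strt_day")
 .agg(F.sum("bitcoin_total").cast("int").alias("bitcoin_total"),
      F.sum("dash_total").cast("int").alias("dash_total"))
 .orderBy("week_strt_day")
 .show())

With the sample rows above this should give week starts of 2008-12-29 and 2009-01-05 rather than the Sunday-based dates shown in the result tables.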