Count distinct customers over rolling window partition

My question is similar to an existing one, but I have a rolling window partition.

My query looks like this, but COUNT(DISTINCT) is not supported as a window function in Redshift:

select p_date, seconds_read, 
count(distinct customer_id) over (order by p_date rows between unbounded preceding and current row) as total_cumulative_customer
from table_x

My goal is to count the cumulative number of distinct customers as of each date (hence the rolling window).

I tried using dense_rank() instead, but it fails because I can't use the window function like this:

select p_date, max(total_cumulative_customer) over ()
from (select p_date, seconds_read, 
      dense_rank() over (order by customer_id rows between unbounded preceding and current row) as total_cumulative_customer -- WILL FAIL HERE
      from table_x) t

Any workaround or different approach would be helpful!

Edit:

Sample input data

+------+----------+--------------+
| Cust |  p_date  | seconds_read |
+------+----------+--------------+
|    1 | 1-Jan-20 |           10 |
|    2 | 1-Jan-20 |           20 |
|    4 | 1-Jan-20 |           30 |
|    5 | 1-Jan-20 |           40 |
|    6 | 5-Jan-20 |           50 |
|    3 | 5-Jan-20 |           60 |
|    2 | 5-Jan-20 |           70 |
|    1 | 5-Jan-20 |           80 |
|    1 | 5-Jan-20 |           90 |
|    1 | 7-Jan-20 |          100 |
|    3 | 7-Jan-20 |          110 |
|    4 | 7-Jan-20 |          120 |
|    7 | 7-Jan-20 |          130 |
+------+----------+--------------+

Expected output

+----------+--------------------------+------------------+--------------------------------------------+
|  p_date  | total_distinct_cum_cust  | sum_seconds_read |                  Comment                   |
+----------+--------------------------+------------------+--------------------------------------------+
| 1-Jan-20 |                        4 |              100 | total distinct cust = 4 i.e. 1,2,4,5       |
| 5-Jan-20 |                        6 |              450 | total distinct cust = 6 i.e. 1,2,3,4,5,6   |
| 7-Jan-20 |                        7 |              910 | total distinct cust = 7 i.e. 1,2,3,4,5,6,7 |
+----------+--------------------------+------------------+--------------------------------------------+
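
For reference, the sample data can be recreated for testing with something like the following (a sketch; it assumes the customer column is actually named customer_id, as in the queries above, and uses ISO dates):

create table table_x (customer_id int, p_date date, seconds_read int);

insert into table_x (customer_id, p_date, seconds_read) values
    (1, '2020-01-01', 10),  (2, '2020-01-01', 20),  (4, '2020-01-01', 30),  (5, '2020-01-01', 40),
    (6, '2020-01-05', 50),  (3, '2020-01-05', 60),  (2, '2020-01-05', 70),  (1, '2020-01-05', 80),
    (1, '2020-01-05', 90),  (1, '2020-01-07', 100), (3, '2020-01-07', 110), (4, '2020-01-07', 120),
    (7, '2020-01-07', 130);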

One workaround is to use a correlated subquery:

select p_date, seconds_read, 
    (
        select count(distinct t1.customer_id) 
        from table_x t1
        where t1.p_date <= t.p_date
    ) as total_cumulative_customer
from table_x t

in place of this:

select p_date, seconds_read, 
       count(distinct customer_id) over (order by p_date rows between unbounded preceding and current row) as total_cumulative_customer
from table_x;
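
If you want exactly the grouped output shown in the question (one row per date, with the cumulative seconds as well), the same idea can be extended. A minimal sketch, keeping the correlated-subquery pattern and deduplicating to one row per date (whether Redshift accepts a given correlated pattern can depend on how it decorrelates it):

select distinct
       t.p_date,
       -- distinct customers seen up to and including this date
       (select count(distinct t1.customer_id)
        from table_x t1
        where t1.p_date <= t.p_date) as total_distinct_cum_cust,
       -- cumulative seconds read up to and including this date
       (select sum(t2.seconds_read)
        from table_x t2
        where t2.p_date <= t.p_date) as sum_seconds_read
from table_x t;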

You can do most of what you want with two levels of aggregation (note that this only produces rows for dates on which at least one new customer appears):

select min_p_date,
       -- count(*) = number of customers whose first p_date is min_p_date
       sum(count(*)) over (order by min_p_date rows between unbounded preceding and current row) as running_distinct_customers
from (select customer_id, min(p_date) as min_p_date
      from table_x
      group by customer_id
     ) c
group by min_p_date;

Summing the seconds read is a bit trickier, but you can use the same idea:

select p_date,
       sum(sum(seconds_read)) over (order by p_date rows between unbounded preceding and current row) as seconds_read,
       -- seqnum = 1 marks each customer's earliest row, so every customer is counted only once
       sum(sum(case when seqnum = 1 then 1 else 0 end)) over (order by p_date rows between unbounded preceding and current row) as running_distinct_customers
from (select customer_id, p_date, seconds_read,
             row_number() over (partition by customer_id order by p_date) as seqnum
      from table_x
     ) c
group by p_date;

I'd like to add that you can also do this with an explicit self-join, which in my opinion is more direct and readable than the subquery approach described in the other answer.

select 
  t1.p_date, 
  sum(t2.seconds_read) as sum_seconds_read, 
  count(distinct t2.customer_id) as distinct_cum_cust_totals
from
  -- one row per date on the left side, so seconds are not double-counted
  (select distinct p_date from table_x) t1
join
  table_x t2
on
  t2.p_date <= t1.p_date
group by
  t1.p_date

Most query planners will reduce the correlated subquery in the earlier solution to an efficient join like this one, so both solutions are usually fine. For the general case, though, I consider this the better approach, because some engines (such as BigQuery) do not allow correlated subqueries and force you to define the join explicitly in the query.