Apply multiple condition groupby + sort + sum to pandas dataframe rows

I have a dataframe with the following columns:

Account Number, Contact Date, Open Date

For each account that was opened, I've been asked to look back at all the correspondence that occurred within 30 days of that account's open date, and then assign points as follows:

Forty-twenty-forty: Attribute 40% (0.4 points) of the attribution to the first touch,
40% to the last touch, and divide the remaining 20% between all touches in between
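
(For concreteness, here is a tiny sketch of that split for n qualifying touches; the helper name is mine and the behaviour for 1 or 2 touches is not specified by the rule above.)

def split_points(n):
    # 40-20-40: first and last touch get 0.4 each, the rest split the remaining 0.2
    if n >= 3:
        return [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    return [None] * n  # 1- or 2-touch rule left undefined

print(split_points(4))  # [0.4, 0.1, 0.1, 0.4]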

So I know about the apply and groupby functions, but this is above my pay grade. I have to group by account with a condition that compares two columns against each other, I have to do that to get the total number of correspondences per account, and I imagine they also have to be sorted, because the next step of assigning points to each correspondence depends on the order in which they occurred.

I'd like to do this efficiently because I have a lot of rows. I know apply() can run quickly, but I'm bad with apply, and the row-level operation I'm trying to do is even somewhat complicated.

Any help is appreciated, as I'm not great with pandas.

Edit, as requested:

Acct, ContactDate, OpenDate, Points (what I need to calculate)
123, 1/1/2018, 1/1/2021, 0 (because correspondence not within 30 days of open)
123, 12/10/2020, 1/1/2021, 0.4 (first touch gets 0.4)
123, 12/11/2020, 1/1/2021, 0.2 (other 'touches' get 0.2/(num of touches-2) 'points')
123, 12/12/2020, 1/1/2021, 0.4 (last touch gets 0.4)
456, 1/1/2018, 1/1/2021, 0 (again, because correspondence not within 30 days of open)
456, 12/10/2020, 1/1/2021, 0.4 (first touch gets 0.4)
456, 12/11/2020, 1/1/2021, 0.1 (other 'touches' get 0.2/(num of touches-2) 'points')
456, 12/11/2020, 1/1/2021, 0.1 (other 'touches' get 0.2/(num of touches-2) 'points')
456, 12/12/2020, 1/1/2021, 0.4 (last touch gets 0.4)
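
For anyone wanting to reproduce this, a minimal sketch of loading that sample into a DataFrame with datetime columns (the read_csv/StringIO route and the month/day/year parsing are my assumptions):

import io
import pandas as pd

# Sample data from the question above; dates are month/day/year.
raw = """Acct,ContactDate,OpenDate
123,1/1/2018,1/1/2021
123,12/10/2020,1/1/2021
123,12/11/2020,1/1/2021
123,12/12/2020,1/1/2021
456,1/1/2018,1/1/2021
456,12/10/2020,1/1/2021
456,12/11/2020,1/1/2021
456,12/11/2020,1/1/2021
456,12/12/2020,1/1/2021"""

df = pd.read_csv(io.StringIO(raw), parse_dates=['ContactDate', 'OpenDate'])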

This returns a reduced dataframe, since it drops everything outside the 30-day window, and then merges the original df back in so that everything ends up in one df. It assumes your dates are already sorted correctly; otherwise, you may have to do that first before applying the function below.

from datetime import timedelta  # needed for the 30-day window comparison below

df['Points'] = 0  # add the Points column to the dataframe before analysis

# df.columns
# Index(['Acct', 'ContactDate', 'OpenDate', 'Points'], dtype='object')

def points(x):
    # keep only contacts that fall within 30 days of the account's open date
    newx = x.loc[(x['OpenDate'] - x['ContactDate']) <= timedelta(days=30)].copy()
    n = len(newx)
    pts = newx.columns.get_loc('Points')  # positional index of the Points column
    if n > 2:  # more than two qualifying contacts
        newx.iloc[0, pts] = 0.4               # first touch
        newx.iloc[-1, pts] = 0.4              # last touch
        newx.iloc[1:-1, pts] = 0.2 / (n - 2)  # middle touches split the remaining 0.2
        return newx
    elif n == 2:
        # edge-case logic here for exactly two touches
        return newx
    elif n == 1:
        # edge-case logic here for a single touch
        return newx

# groupby Acct then clean up the indices so it can be merged back into original df
dft = df.groupby('Acct', as_index=False).apply(points).reset_index().set_index('level_1').drop('level_0', axis=1)

# merge on index
df_points = df[['Acct', 'ContactDate', 'OpenDate']].merge(dft['Points'], how='left', left_index=True, right_index=True).fillna(0)

Output:

   Acct ContactDate   OpenDate  Points
0   123  2018-01-01 2021-01-01     0.0
1   123  2020-12-10 2021-01-01     0.4
2   123  2020-12-11 2021-01-01     0.2
3   123  2020-12-12 2021-01-01     0.4
4   456  2018-01-01 2021-01-01     0.0
5   456  2020-12-10 2021-01-01     0.4
6   456  2020-12-11 2021-01-01     0.1
7   456  2020-12-11 2021-01-01     0.1
8   456  2020-12-12 2021-01-01     0.4
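
If apply over many groups turns out to be too slow for your row counts, the same 40-20-40 logic can also be expressed without apply. This is a minimal sketch of my own (not part of the approach above), using groupby.transform and cumcount; it assumes rows are already sorted by ContactDate within each account and leaves the unspecified 1- and 2-touch cases at 0:

import pandas as pd

# qualifying touches: contact within 30 days of the open date (same filter as above)
sub = df.loc[(df['OpenDate'] - df['ContactDate']) <= pd.Timedelta(days=30)]
g = sub.groupby('Acct')

n = g['Acct'].transform('size')  # number of qualifying touches per account
pos = g.cumcount()               # 0-based position of each touch within its account

pts = pd.Series(0.0, index=sub.index)
pts[(pos == 0) | (pos == n - 1)] = 0.4   # first and last touch get 0.4 each
mid = (pos > 0) & (pos < n - 1)
pts[mid] = 0.2 / (n[mid] - 2)            # middle touches split the remaining 0.2
pts[n < 3] = 0.0                         # 1- or 2-touch rule not specified; leave at 0

df['Points'] = pts.reindex(df.index, fill_value=0.0)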