Large file to merge. How do I prevent duplicates in a pandas merge?

I have two dataframes whose merge creates a 50 GB file, which Python cannot handle. I can't even perform the merge in Python; I have to do it in SQLite.

This is what the two datasets look like.

First dataset:

        a_id c_consumed
    0    sam        oil
    1    sam      bread
    2    sam       soap
    3  harry      shoes
    4  harry        oil
    5  alice       eggs
    6  alice        pen
    7  alice    eggroll

Code to generate this dataset:

    df = pd.DataFrame({'a_id': 'sam sam sam harry harry alice alice alice'.split(),
               'c_consumed': 'oil bread soap shoes oil eggs pen eggroll'.split()})

Second dataset:

        a_id b_received brand_id type_received       date
    0    sam       soap     bill       edibles 2011-01-01
    1    sam        oil    chris       utility 2011-01-02
    2    sam      brush      dan       grocery 2011-01-01
    3  harry        oil    chris      clothing 2011-01-02
    4  harry      shoes    nancy       edibles 2011-01-03
    5  alice       beer    peter     breakfast 2011-01-03
    6  alice      brush      dan      cleaning 2011-01-02
    7  alice       eggs     jaju       edibles 2011-01-03

Code to generate this dataset:

    df_id = pd.DataFrame({'a_id': 'sam sam sam harry harry alice alice alice'.split(),
                          'b_received': 'soap oil brush oil shoes beer brush eggs'.split(),
                          'brand_id': 'bill chris dan chris nancy peter dan jaju'.split(),
                          'type_received': 'edibles utility grocery clothing edibles breakfast cleaning edibles'.split()})
    date3 = ['2011-01-01','2011-01-02','2011-01-01','2011-01-02','2011-01-03','2011-01-03','2011-01-02','2011-01-03']
    date3 = pd.to_datetime(date3)
    df_id['date'] = date3

I merge the datasets with this code:

    combined = pd.merge(df_id, df, on='a_id', how='left')

This is the resulting dataset:

         a_id b_received brand_id type_received       date c_consumed
    0     sam       soap     bill       edibles 2011-01-01        oil
    1     sam       soap     bill       edibles 2011-01-01      bread
    2     sam       soap     bill       edibles 2011-01-01       soap
    3     sam        oil    chris       utility 2011-01-02        oil
    4     sam        oil    chris       utility 2011-01-02      bread
    5     sam        oil    chris       utility 2011-01-02       soap
    6     sam      brush      dan       grocery 2011-01-01        oil
    7     sam      brush      dan       grocery 2011-01-01      bread
    8     sam      brush      dan       grocery 2011-01-01       soap
    9   harry        oil    chris      clothing 2011-01-02      shoes
    10  harry        oil    chris      clothing 2011-01-02        oil
    11  harry      shoes    nancy       edibles 2011-01-03      shoes
    12  harry      shoes    nancy       edibles 2011-01-03        oil
    13  alice       beer    peter     breakfast 2011-01-03       eggs
    14  alice       beer    peter     breakfast 2011-01-03        pen
    15  alice       beer    peter     breakfast 2011-01-03    eggroll
    16  alice      brush      dan      cleaning 2011-01-02       eggs
    17  alice      brush      dan      cleaning 2011-01-02        pen
    18  alice      brush      dan      cleaning 2011-01-02    eggroll
    19  alice       eggs     jaju       edibles 2011-01-03       eggs
    20  alice       eggs     jaju       edibles 2011-01-03        pen
    21  alice       eggs     jaju       edibles 2011-01-03    eggroll

I want to know whether each person consumed the product they received, and I need to keep the rest of the information because later I need to check whether consumption is affected by brand or product type. To do this I created a new column with the following code, which gives the result shown below.

Code:

    combined['output'] = (combined.groupby('a_id')
             .apply(lambda x: x['b_received'].isin(x['c_consumed']).astype('i4'))
             .reset_index(level='a_id', drop=True))

The resulting dataframe is:

         a_id b_received brand_id type_received       date c_consumed  output
    0     sam       soap     bill       edibles 2011-01-01        oil       1
    1     sam       soap     bill       edibles 2011-01-01      bread       1
    2     sam       soap     bill       edibles 2011-01-01       soap       1
    3     sam        oil    chris       utility 2011-01-02        oil       1
    4     sam        oil    chris       utility 2011-01-02      bread       1
    5     sam        oil    chris       utility 2011-01-02       soap       1
    6     sam      brush      dan       grocery 2011-01-01        oil       0
    7     sam      brush      dan       grocery 2011-01-01      bread       0
    8     sam      brush      dan       grocery 2011-01-01       soap       0
    9   harry        oil    chris      clothing 2011-01-02      shoes       1
    10  harry        oil    chris      clothing 2011-01-02        oil       1
    11  harry      shoes    nancy       edibles 2011-01-03      shoes       1
    12  harry      shoes    nancy       edibles 2011-01-03        oil       1
    13  alice       beer    peter     breakfast 2011-01-03       eggs       0
    14  alice       beer    peter     breakfast 2011-01-03        pen       0
    15  alice       beer    peter     breakfast 2011-01-03    eggroll       0
    16  alice      brush      dan      cleaning 2011-01-02       eggs       0
    17  alice      brush      dan      cleaning 2011-01-02        pen       0
    18  alice      brush      dan      cleaning 2011-01-02    eggroll       0
    19  alice       eggs     jaju       edibles 2011-01-03       eggs       1
    20  alice       eggs     jaju       edibles 2011-01-03        pen       1
    21  alice       eggs     jaju       edibles 2011-01-03    eggroll       1

As you can see, the values in the output column are wrong. What I actually want is a dataset more like this:

         a_id b_received brand_id c_consumed type_received       date  output
    0    sam       soap     bill        oil       edibles 2011-01-01       1
    1    sam        oil    chris        NaN       utility 2011-01-02       1
    2    sam      brush      dan       soap       grocery 2011-01-03       0
    3  harry        oil    chris      shoes      clothing 2011-01-04       1
    4  harry      shoes    nancy        oil       edibles 2011-01-05       1
    5  alice       beer    peter       eggs     breakfast 2011-01-06       0
    6  alice      brush      dan      brush      cleaning 2011-01-07       1
    7  alice       eggs     jaju        NaN       edibles 2011-01-08       1

I could deal with the duplicates after the merge using drop_duplicates, but the resulting dataframe is too large to merge in the first place.

I really need to handle the duplicates during or before the merge, because the merged dataframe is too big for Python to handle and gives me a memory error.

Any suggestions on how to improve my merge, or any other way to get the output column without merging?

In the end I only need the date and output columns to compute log odds and build a time series, but because of the file size I'm stuck at merging the files.
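One way to avoid the row explosion in the merge itself is to collapse the consumption frame to a single set of items per `a_id` before merging, so the merge key is unique on that side and the result stays at one row per received item. A minimal sketch using the example frames above (trimmed to the relevant columns), assuming the consumption table is the smaller of the two:

```python
import pandas as pd

df = pd.DataFrame({'a_id': 'sam sam sam harry harry alice alice alice'.split(),
                   'c_consumed': 'oil bread soap shoes oil eggs pen eggroll'.split()})
df_id = pd.DataFrame({'a_id': 'sam sam sam harry harry alice alice alice'.split(),
                      'b_received': 'soap oil brush oil shoes beer brush eggs'.split()})

# One set of consumed items per person; tiny compared to the raw frame.
consumed = df.groupby('a_id')['c_consumed'].agg(set).rename('consumed_set')

# The merge key is now unique on the right-hand side, so no rows are duplicated.
out = df_id.merge(consumed, left_on='a_id', right_index=True, how='left')

# Row-wise set membership (every a_id in df_id also appears in df here).
out['output'] = [int(b in s) for b, s in zip(out['b_received'], out['consumed_set'])]

print(out['output'].tolist())  # [1, 1, 0, 1, 1, 0, 0, 1]
```

The result has exactly `len(df_id)` rows instead of the per-person cross product, which is what blew the merged file up to 50 GB.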

Note that I performed two groupby operations to get the output table. I added b_received to the grouping keys, and I take the first value in the second groupby because at that grouping level all the values are identical.

    output = ((combined
               .groupby(['a_id', 'b_received'])
               .apply(lambda x: x['b_received'].isin(x['c_consumed'])
                                .astype(int)))
              .groupby(level=[0, 1])
              .first())

    output.name = 'output'

    >>> (df_id[['a_id', 'b_received', 'date']]
         .merge(output.reset_index(), on=['a_id', 'b_received']))
        a_id b_received       date  output
    0    sam       soap 2011-01-01       1
    1    sam        oil 2011-01-02       1
    2    sam      brush 2011-01-01       0
    3  harry        oil 2011-01-02       1
    4  harry      shoes 2011-01-03       1
    5  alice       beer 2011-01-03       0
    6  alice      brush 2011-01-02       0
    7  alice       eggs 2011-01-03       1
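If the received table itself is too large for memory, the same set-membership idea can be streamed with `pd.read_csv(chunksize=...)`, so the merged cross product is never materialised at all. A sketch with an in-memory `StringIO` standing in for the large CSV (a real run would pass a file path instead):

```python
import io
import pandas as pd

df = pd.DataFrame({'a_id': 'sam sam sam harry harry alice alice alice'.split(),
                   'c_consumed': 'oil bread soap shoes oil eggs pen eggroll'.split()})

# Per-person sets of consumed items; df is assumed small enough for memory.
consumed = df.groupby('a_id')['c_consumed'].agg(set).to_dict()

# Stand-in for the large file on disk (hypothetical contents).
big_csv = io.StringIO(
    'a_id,b_received\n'
    'sam,soap\nsam,oil\nsam,brush\n'
    'harry,oil\nharry,shoes\n'
    'alice,beer\nalice,brush\nalice,eggs\n')

results = []
# Stream the big table in chunks; each chunk gets its flag independently,
# so memory use is bounded by the chunk size.
for chunk in pd.read_csv(big_csv, chunksize=3):
    chunk['output'] = [int(b in consumed.get(a, set()))
                       for a, b in zip(chunk['a_id'], chunk['b_received'])]
    results.append(chunk)

out = pd.concat(results, ignore_index=True)
print(out['output'].tolist())  # [1, 1, 0, 1, 1, 0, 0, 1]
```

In practice each processed chunk could be appended straight to an output file instead of collected in a list, keeping only one chunk in memory at a time.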