How to join dynamically named columns into dictionary?

Given these data frames:

IncomingCount
-------------------------
Venue|Date    | 08 | 10 |
-------------------------
Hotel|20190101| 15 | 03 |
Beach|20190101| 93 | 45 |

OutgoingCount
-------------------------
Venue|Date    | 07 | 10 | 
-------------------------
Beach|20190101| 30 | 5  |
Hotel|20190103| 05 | 15 |

How can I merge (full outer join) the two tables so that I get a result like the one below, without having to manually loop over every row of both tables?

Dictionary:
[
 {"Venue":"Hotel", "Date":"20190101", "08":{ "IncomingCount":15 }, "10":{ "IncomingCount":03 } },
 {"Venue":"Beach", "Date":"20190101", "07":{ "OutgoingCount":30 }, "08":{ "IncomingCount":93 }, "10":{ "IncomingCount":45, "OutgoingCount":15 } },
 {"Venue":"Hotel", "Date":"20190103", "07":{ "OutgoingCount":05 }, "10":{ "OutgoingCount":15 } }
]

The conditions are:

  1. The Venue and Date columns act as the join condition.
  2. The other, numerically named columns are created dynamically.
  3. If a dynamic column does not exist, it is excluded (or included with None as the value).

This is what I have managed so far:

import pandas as pd
import numpy as np

dd1 = {'venue': ['hotel', 'beach'], 'date':['20190101', '20190101'], '08': [15, 93], '10':[3, 45]}
dd2 = {'venue': ['beach', 'hotel'], 'date':['20190101', '20190103'], '07': [30, 5], '10':[5, 15]}

df1 = pd.DataFrame(data=dd1)
df2 = pd.DataFrame(data=dd2)

# Prefix the dynamic (count) columns with their source so they stay distinct after the merge
df1.columns = [f"IncomingCount:{x}" if x not in ['venue', 'date'] else x for x in df1.columns]
df2.columns = [f"OutgoingCount:{x}" if x not in ['venue', 'date'] else x for x in df2.columns]

# Full outer join on venue/date, then drop the NaN entries each record picked up from the join
ll_dd = pd.merge(df1, df2, on=['venue', 'date'], how='outer').to_dict('records')
ll_dd = [{k: v for k, v in dd.items() if not pd.isnull(v)} for dd in ll_dd]

Output:

[{'venue': 'hotel',
  'date': '20190101',
  'IncomingCount:08': 15.0,
  'IncomingCount:10': 3.0},
 {'venue': 'beach',
  'date': '20190101',
  'IncomingCount:08': 93.0,
  'IncomingCount:10': 45.0,
  'OutgoingCount:07': 30.0,
  'OutgoingCount:10': 5.0},
 {'venue': 'hotel',
  'date': '20190103',
  'OutgoingCount:07': 5.0,
  'OutgoingCount:10': 15.0}]
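
One way to fold these prefixed keys back into the nested shape asked for at the top is a small post-processing pass over ll_dd. A minimal sketch (not part of the original attempt), assuming the counts can safely be cast back to int after the outer merge upcast them to float:

nested = []
for rec in ll_dd:
    out = {k: v for k, v in rec.items() if k in ('venue', 'date')}
    for k, v in rec.items():
        if k in ('venue', 'date'):
            continue
        direction, hour = k.split(':')                 # e.g. 'IncomingCount', '08'
        out.setdefault(hour, {})[direction] = int(v)   # assumption: cast back from float
    nested.append(out)
print(nested)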

It is a bit cumbersome, but it can be done with the create_map function in Spark.

Basically, split the columns into four groups: keys (Venue, Date), common (10), only incoming (08), only outgoing (07).

Then create a mapper for each group except the keys, mapping only what is available in that group. Apply the mapping, drop the old column, and rename the mapped column back to the old name.

Finally, convert all rows to dictionaries (via the DataFrame's rdd) and collect them.

from pyspark.sql import SparkSession
from pyspark.sql.functions import create_map, col, lit

spark = SparkSession.builder.appName('hotels_and_beaches').getOrCreate()

incoming_counts = spark.createDataFrame([('Hotel', 20190101, 15, 3), ('Beach', 20190101, 93, 45)], ['Venue', 'Date', '08', '10']).alias('inc')
outgoing_counts = spark.createDataFrame([('Beach', 20190101, 30, 5), ('Hotel', 20190103, 5, 15)], ['Venue', 'Date', '07', '10']).alias('out')

df = incoming_counts.join(outgoing_counts, on=['Venue', 'Date'], how='full')

outgoing_cols = {c for c in outgoing_counts.columns if c not in {'Venue', 'Date'}}
incoming_cols = {c for c in incoming_counts.columns if c not in {'Venue', 'Date'}}

common_cols = outgoing_cols.intersection(incoming_cols)

outgoing_cols = outgoing_cols.difference(common_cols)
incoming_cols = incoming_cols.difference(common_cols)

for c in common_cols:
    df = df.withColumn(
        c + '_new', create_map(
            lit('IncomingCount'), col('inc.{}'.format(c)),
            lit('OutgoingCount'), col('out.{}'.format(c)),
        )
    ).drop(c).withColumnRenamed(c + '_new', c)

for c in incoming_cols:
    df = df.withColumn(
        c + '_new', create_map(
            lit('IncomingCount'), col('inc.{}'.format(c)),
        )
    ).drop(c).withColumnRenamed(c + '_new', c)

for c in outgoing_cols:
    df = df.withColumn(
        c + '_new', create_map(
            lit('OutgoingCount'), col('out.{}'.format(c)),
        )
    ).drop(c).withColumnRenamed(c + '_new', c)

result = df.coalesce(1).rdd.map(lambda r: r.asDict()).collect()
print(result)

Result:

[{'Venue': 'Hotel', 'Date': 20190101, '10': {'OutgoingCount': None, 'IncomingCount': 3}, '08': {'IncomingCount': 15}, '07': {'OutgoingCount': None}},
 {'Venue': 'Hotel', 'Date': 20190103, '10': {'OutgoingCount': 15, 'IncomingCount': None}, '08': {'IncomingCount': None}, '07': {'OutgoingCount': 5}},
 {'Venue': 'Beach', 'Date': 20190101, '10': {'OutgoingCount': 5, 'IncomingCount': 45}, '08': {'IncomingCount': 93}, '07': {'OutgoingCount': 30}}]
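
Note that in this result, dynamic columns missing on one side are kept with None values, which matches the second option in condition 3. If they should be excluded instead, one possibility is a short post-processing pass over the collected result; a minimal sketch working on the result list above:

cleaned = []
for row in result:
    rec = {}
    for key, value in row.items():
        if isinstance(value, dict):
            # keep only the counts that are actually present
            value = {k: v for k, v in value.items() if v is not None}
            if not value:
                continue  # drop dynamic columns that carry no counts at all
        rec[key] = value
    cleaned.append(rec)
print(cleaned)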

The final result the OP expects is a list of dictionaries, in which all rows from the DataFrames that share the same Venue and Date have been merged together.

# Creating the DataFrames
df_Incoming = sqlContext.createDataFrame([('Hotel','20190101',15,3),('Beach','20190101',93,45)],('Venue','Date','08','10'))
df_Incoming.show()
+-----+--------+---+---+
|Venue|    Date| 08| 10|
+-----+--------+---+---+
|Hotel|20190101| 15|  3|
|Beach|20190101| 93| 45|
+-----+--------+---+---+
df_Outgoing = sqlContext.createDataFrame([('Beach','20190101',30,5),('Hotel','20190103',5,15)],('Venue','Date','07','10'))
df_Outgoing.show()
+-----+--------+---+---+
|Venue|    Date| 07| 10|
+-----+--------+---+---+
|Beach|20190101| 30|  5|
|Hotel|20190103|  5| 15|
+-----+--------+---+---+

The idea is to create a dictionary from each row and store all rows of a DataFrame as dictionaries in one big list. As the last step, we combine the dictionaries that share the same Venue and Date.

Since all rows of a DataFrame are stored as Row() objects, we use the collect() function to return all records as a list of Row() objects. Just to illustrate the output -

print(df_Incoming.collect())
[Row(Venue='Hotel', Date='20190101', 08=15, 10=3), Row(Venue='Beach', Date='20190101', 08=93, 10=45)]

However, since we want a list of dictionaries, we can convert them with a list comprehension -

list_Incoming = [row.asDict() for row in df_Incoming.collect()]
print(list_Incoming)
[{'10': 3, 'Date': '20190101', 'Venue': 'Hotel', '08': 15}, {'10': 45, 'Date': '20190101', 'Venue': 'Beach', '08': 93}]

However, since the numeric columns need to have the form "08":{ "IncomingCount":15 } rather than "08":15, we use a dictionary comprehension to convert them accordingly -

list_Incoming = [ {k:v if k in ['Venue','Date'] else {'IncomingCount':v} for k,v in dict_element.items()} for dict_element in list_Incoming]
print(list_Incoming)
[{'10': {'IncomingCount': 3}, 'Date': '20190101', 'Venue': 'Hotel', '08': {'IncomingCount': 15}}, {'10': {'IncomingCount': 45}, 'Date': '20190101', 'Venue': 'Beach', '08': {'IncomingCount': 93}}]

Similarly, we do the same for OutgoingCount -

list_Outgoing = [row.asDict() for row in df_Outgoing.collect()]
list_Outgoing = [ {k:v if k in ['Venue','Date'] else {'OutgoingCount':v} for k,v in dict_element.items()} for dict_element in list_Outgoing]
print(list_Outgoing)
[{'10': {'OutgoingCount': 5}, 'Date': '20190101', 'Venue': 'Beach', '07': {'OutgoingCount': 30}}, {'10': {'OutgoingCount': 15}, 'Date': '20190103', 'Venue': 'Hotel', '07': {'OutgoingCount': 5}}]
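
Since the same comprehension is applied twice with only the label changing, it could optionally be factored into a small helper. This is a sketch only; nest_counts is a hypothetical name, not part of the original answer:

def nest_counts(df, label):
    # Wrap every non-key value as {label: value} for each collected Row
    return [{k: v if k in ('Venue', 'Date') else {label: v} for k, v in row.asDict().items()}
            for row in df.collect()]

list_Incoming = nest_counts(df_Incoming, 'IncomingCount')
list_Outgoing = nest_counts(df_Outgoing, 'OutgoingCount')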

Final step: Now that we have created the necessary lists of dictionaries, we need to combine them based on Venue and Date.

from copy import deepcopy
def merge_lists(list_Incoming, list_Outgoing):
    # create dictionary from list_Incoming:
    dict1 = {(record['Venue'], record['Date']): record  for record in list_Incoming}

    #compare elements in list_Outgoing to those on list_Incoming:

    result = {}
    for record in list_Outgoing:
        ckey = record['Venue'], record['Date']
        new_record = deepcopy(record)
        if ckey in dict1:
            for key, value in dict1[ckey].items():
                if key in ('Venue', 'Date'):
                    # Do not merge these keys
                    continue
                # Dict's "setdefault" finds a key/value, and if it is missing
                # creates a new one with the second parameter as value
                new_record.setdefault(key, {}).update(value)

        result[ckey] = new_record

    # Add values from list_Incoming that were not matched in list_Outgoing:
    for key, value in dict1.items():
        if key not in result:
            result[key] = deepcopy(value)

    return list(result.values())

res = merge_lists(list_Incoming, list_Outgoing)
print(res)
[{'10': {'OutgoingCount': 5, 'IncomingCount': 45}, 
  'Date': '20190101', 
  'Venue': 'Beach', 
  '08': {'IncomingCount': 93}, 
  '07': {'OutgoingCount': 30}
 },

 {'10': {'OutgoingCount': 15}, 
   'Date': '20190103', 
   'Venue': 'Hotel', 
   '07': {'OutgoingCount': 5}
 }, 

 {'10': {'IncomingCount': 3}, 
  'Date': '20190101', 
  'Venue': 'Hotel', 
  '08': {'IncomingCount': 15}
 }]