How to find the intersection of two RDDs by keys in PySpark?

I have two RDDs:

rdd1 = sc.parallelize([("www.page1.html", "word1"), ("www.page2.html", "word1"), 
    ("www.page1.html", "word3")])

rdd2 = sc.parallelize([("www.page1.html", 7.3), ("www.page2.html", 1.25), 
    ("www.page3.html", 5.41)])

intersection_rdd = rdd1.keys().intersection(rdd2.keys())       

When I do this, all I get is the intersection of the keys, i.e. (www.page1.html, www.page2.html).

But I need the keys along with the values from both RDDs. The output should look like this:

[www.page1.html, (word1, word3, 7.3)]

[www.page2.html, (word1, 1.25)]

For example, you can cogroup and filter:

## This depends on an empty pyspark.resultiterable.ResultIterable
## evaluating to False

intersection_rdd = rdd1.cogroup(rdd2).filter(lambda x: x[1][0] and x[1][1])
intersection_rdd.map(lambda x: (x[0], (list(x[1][0]), list(x[1][1])))).collect()

## [('www.page1.html', (['word1', 'word3'], [7.3])),
##  ('www.page2.html', (['word1'], [1.25]))]
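To get exactly the shape asked for (one flat tuple of values per key), the two grouped iterables can be chained together after the filter. Here is a minimal pure-Python sketch of that cogroup-filter-flatten logic, so it can be followed without a running SparkContext; the `cogroup` helper is an illustrative stand-in for `RDD.cogroup`, not PySpark code:

```python
from itertools import chain

# Toy stand-ins for the contents of the two RDDs
rdd1 = [("www.page1.html", "word1"), ("www.page2.html", "word1"),
        ("www.page1.html", "word3")]
rdd2 = [("www.page1.html", 7.3), ("www.page2.html", 1.25),
        ("www.page3.html", 5.41)]

def cogroup(a, b):
    """Group both datasets by key, mimicking RDD.cogroup."""
    keys = {k for k, _ in a} | {k for k, _ in b}
    return [(k, ([v for k2, v in a if k2 == k],
                 [v for k2, v in b if k2 == k]))
            for k in sorted(keys)]

# Keep only keys present in both datasets, then flatten the
# two value lists into one tuple per key.
result = [(k, tuple(chain(vs1, vs2)))
          for k, (vs1, vs2) in cogroup(rdd1, rdd2)
          if vs1 and vs2]
# result: [('www.page1.html', ('word1', 'word3', 7.3)),
#          ('www.page2.html', ('word1', 1.25))]
```

In PySpark the same flattening step would be a `map` over the filtered `cogroup` result.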

Since you applied the set operation to the keys only, your output contains only the keys.

rdd1.union(rdd2).groupByKey().mapValues(tuple).collect()

Union:

('www.page1.html', 'word1')
('www.page2.html', 'word1')
('www.page1.html', 'word3')
('www.page1.html', 7.3)
('www.page2.html', 1.25)
('www.page3.html', 5.41)

GroupByKey (after mapValues(tuple)):

('www.page1.html', ('word1', 'word3', 7.3))
('www.page2.html', ('word1', 1.25))
('www.page3.html', (5.41,))
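Note that the union/groupByKey route also keeps www.page3.html, which appears in only one RDD. An inner join restricts the result to keys present in both. A pure-Python sketch of what `rdd1.groupByKey().mapValues(list).join(rdd2)` would yield (the names `grouped`, `scores`, and `joined` are illustrative, not PySpark APIs):

```python
rdd1 = [("www.page1.html", "word1"), ("www.page2.html", "word1"),
        ("www.page1.html", "word3")]
rdd2 = [("www.page1.html", 7.3), ("www.page2.html", 1.25),
        ("www.page3.html", 5.41)]

# Group rdd1's words by key, mimicking groupByKey().mapValues(list)
grouped = {}
for k, v in rdd1:
    grouped.setdefault(k, []).append(v)

# An inner join keeps only keys present on both sides,
# so www.page3.html drops out.
scores = dict(rdd2)
joined = sorted((k, (words, scores[k]))
                for k, words in grouped.items() if k in scores)
# joined: [('www.page1.html', (['word1', 'word3'], 7.3)),
#          ('www.page2.html', (['word1'], 1.25))]
```

This keeps the word lists and scores in separate slots, which is often easier to work with downstream than one flattened tuple.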