Spark: Counting co-occurrence - Algorithm for efficient multi-pass filtering of huge collections

I have a table with two columns, books and readers, where books and readers are book IDs and reader IDs respectively:

   books readers
1:     1      30
2:     2      10
3:     3      20
4:     1      20
5:     1      10
6:     2      30

A record book = 1, reader = 30 means that the book with id = 1 was read by the reader with id = 30. For every pair of books I need to count the number of readers who read both books, using this algorithm:

for each book
  for each reader of the book
    for each other_book in books of the reader
      increment common_reader_count ((book, other_book), cnt)

The advantage of this algorithm is that it requires far fewer operations than counting every 2-combination of all books.
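
For concreteness, here is a minimal non-Spark sketch of that triple loop over in-memory collections (the names readersOfBook, booksOfReader and commonReaderCount are only for illustration); the two lookup maps mirror the two groupings described next:

// A sketch of the counting loop on plain Scala collections (illustration only).
val recs = Seq((1, 30), (2, 10), (3, 20), (1, 20), (1, 10), (2, 30))

// book -> readers of that book, and reader -> books of that reader
val readersOfBook = recs.groupBy(_._1).mapValues(_.map(_._2))
val booksOfReader = recs.groupBy(_._2).mapValues(_.map(_._1))

// (book, other_book) -> number of co-readings
val commonReaderCount =
  scala.collection.mutable.Map.empty[(Int, Int), Int].withDefaultValue(0)

for {
  (book, readers) <- readersOfBook          // for each book
  reader          <- readers                // for each reader of the book
  otherBook       <- booksOfReader(reader)  // for each other_book of the reader
} commonReaderCount((book, otherBook)) += 1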

To implement the algorithm above, I organize the data into two groupings: 1) keyed by book, an RDD containing the readers of each book, and 2) keyed by reader, an RDD containing the books read by each reader, as in the following program:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.log4j.Logger
import org.apache.log4j.Level

object Small {

  case class Book(book: Int, reader: Int)
  case class BookPair(book1: Int, book2: Int, cnt:Int)

  val recs = Array(
    Book(book = 1, reader = 30),
    Book(book = 2, reader = 10),
    Book(book = 3, reader = 20),
    Book(book = 1, reader = 20),
    Book(book = 1, reader = 10),
    Book(book = 2, reader = 30))

  def main(args: Array[String]) {
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    // set up environment
    val conf = new SparkConf()
      .setAppName("Test")
      .set("spark.executor.memory", "2g")
    val sc = new SparkContext(conf)
    val data = sc.parallelize(recs)

    val bookMap = data.map(r => (r.book, r))
    val bookGrps = bookMap.groupByKey

    val readerMap = data.map(r => (r.reader, r))
    val readerGrps = readerMap.groupByKey

    // *** Calculate book pairs
    // Iterate book groups 
    val allBookPairs = bookGrps.map(bookGrp => bookGrp match {
      case (book, recIter) =>
        // Iterate over this book's records (one record per reader)
        recIter.toList.map(rec => {
          // Find readers for this book
          val aReader = rec.reader
          // Find all books (including this one) that this reader read
          val allReaderBooks = readerGrps.filter(readerGrp => readerGrp match {
            case (reader2, recIter2) => reader2 == aReader
          })
          val bookPairs = allReaderBooks.map(readerTuple => readerTuple match {
            case (reader3, recIter3) => recIter3.toList.map(rec => ((book, rec.book), 1))
          })
          bookPairs
        })

    })
    val x = allBookPairs.flatMap(identity)
    val y = x.map(rdd => rdd.first)
    val z = y.flatMap(identity)
    val p = z.reduceByKey((cnt1, cnt2) => cnt1 + cnt2)
    val result = p.map(bookPair => bookPair match {
      case((book1, book2),cnt) => BookPair(book1, book2, cnt)
    } )

    val resultCsv = result.map(pair => resultToStr(pair))
    resultCsv.saveAsTextFile("./result.csv")
  }

  def resultToStr(pair: BookPair): String = {
    val sep = "|"
    pair.book1 + sep + pair.book2 + sep + pair.cnt
  }
}

This implementation actually turns into a different, inefficient algorithm!:

for each book
  find each reader of the book scanning all readers every time!
    for each other_book in books of the reader
      increment common_reader_count ((book, other_book), cnt)

This contradicts the main goal of the algorithm above, because instead of decreasing the number of operations it increases it: finding the books of a reader requires filtering all readers for every book. So the number of operations is ~ N * M, where N is the number of users and M is the number of books.

Questions:

  1. Is there any way to implement the original algorithm in Spark without filtering the complete reader collection for every book?
  2. Are there any other algorithms to compute book pair counts efficiently?
  3. Also, when actually running this code I get a filter exception whose cause I cannot figure out. Any ideas?

Please see the exception log below:

15/05/29 18:24:05 WARN util.Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 10.0.2.15 instead (on interface eth0)
15/05/29 18:24:05 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/05/29 18:24:09 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/29 18:24:10 INFO Remoting: Starting remoting
15/05/29 18:24:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.0.2.15:38910]
15/05/29 18:24:10 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@10.0.2.15:38910]
15/05/29 18:24:12 ERROR executor.Executor: Exception in task 0.0 in stage 6.0 (TID 4)
java.lang.NullPointerException
    at org.apache.spark.rdd.RDD.filter(RDD.scala:282)
    at Small$$anonfun$$anonfun$apply.apply(Small.scala:58)
    at Small$$anonfun$$anonfun$apply.apply(Small.scala:54)
    at scala.collection.TraversableLike$$anonfun$map.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map.apply(TraversableLike.scala:244)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at Small$$anonfun.apply(Small.scala:54)
    at Small$$anonfun.apply(Small.scala:51)
    at scala.collection.Iterator$$anon.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon.hasNext(Iterator.scala:371)
    at org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:137)
    at org.apache.spark.Aggregator.combineValuesByKey(Aggregator.scala:58)
    at org.apache.spark.shuffle.hash.HashShuffleWriter.write(HashShuffleWriter.scala:55)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Update:

This code:

val df = sc.parallelize(Array((1,30),(2,10),(3,20),(1,10),(2,30))).toDF("books","readers")
val results = df.join(
    df.select($"books" as "r_books", $"readers" as "r_readers"),
    $"readers" === $"r_readers" and $"books" < $"r_books"
  )
  .groupBy($"books", $"r_books")
  .agg($"books", $"r_books", count($"readers"))

gives the following result:

books r_books COUNT(readers)
1     2       2     

So COUNT here is the number of times the two books (here, books 1 and 2) were read together, i.e. the number of such pairs.
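
For reference, this snippet assumes the usual Spark SQL imports are already in scope. A minimal sketch of what it needs, assuming a Spark 1.x SQLContext as suggested by the log above, would be:

import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.count   // the count() used inside agg(...)

val sqlContext = new SQLContext(sc)           // sc is the existing SparkContext
import sqlContext.implicits._                 // enables toDF(...) and the $"..." column syntax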

This sort of thing is a lot easier if you convert the original RDD to a DataFrame:

val df = sc.parallelize(
  Array((1,30),(2,10),(3,20),(1,10), (2,30))
).toDF("books","readers")

Once you do that, just do a self-join on the DataFrame to make book pairs, then count how many readers have read each book pair:

val results = df.join(
  df.select($"books" as "r_books", $"readers" as "r_readers"), 
  $"readers" === $"r_readers" and $"books" < $"r_books"
).groupBy(
  $"books", $"r_books"
).agg(
  $"books", $"r_books", count($"readers")
)

As for further explanation of that join: note that I am joining df back to itself, a self-join: df.join(df.select(...), ...). What this does is stitch together a first book, $"books", with a second book, $"r_books", read by the same reader, $"readers" === $"r_readers". But if you joined only on $"readers" === $"r_readers", every book would also be joined back to itself. Instead, I use $"books" < $"r_books" to ensure that the ordering within each book pair is always (<lower_id>,<higher_id>).

Once you do the join, you get a DataFrame with one row for every reader of every book pair. The groupBy and agg functions then do the actual counting of readers per book pairing.
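
If you want to inspect that intermediate shape, you can materialize the join on its own before grouping; a small sketch (joined is just an illustrative name):

val joined = df.join(
  df.select($"books" as "r_books", $"readers" as "r_readers"),
  $"readers" === $"r_readers" and $"books" < $"r_books"
)
joined.show()
// With the sample data this should hold one row per co-reading: readers 30 and 10
// each read books 1 and 2, giving two rows with (books = 1, r_books = 2), while
// reader 20 only read book 3 and contributes no row.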

Incidentally, if a reader read the same book twice, I believe you would end up double counting there, which may or may not be what you want. If it is not, just change count($"readers") to countDistinct($"readers").
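
For instance, the de-duplicating variant of the same query might look like this (distinctResults and the column alias are illustrative names):

import org.apache.spark.sql.functions.countDistinct

val distinctResults = df.join(
    df.select($"books" as "r_books", $"readers" as "r_readers"),
    $"readers" === $"r_readers" and $"books" < $"r_books"
  )
  .groupBy($"books", $"r_books")
  .agg(countDistinct($"readers") as "distinct_readers")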

If you want to learn more about the agg functions count() and countDistinct(), plus a bunch of other fun stuff, check out the scaladoc for org.apache.spark.sql.functions.
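
If you would rather stay at the RDD level (question 1 above), a minimal sketch of the same counting idea is to group the records by reader once, emit a ((book, other_book), 1) pair for every pair of books inside each reader's group, and sum with reduceByKey. No RDD is referenced inside another RDD's closure, so no per-book filtering is needed. The names byReader and pairCounts are illustrative, and data is the RDD of Book records from the program in the question.

// Group by reader once, then enumerate book pairs within each reader's group.
val byReader = data.map(r => (r.reader, r.book)).groupByKey()

val pairCounts = byReader
  .flatMap { case (_, books) =>
    val bs = books.toList
    // Skip (book, book) self-pairs here; drop the guard to keep them,
    // as the original code in the question does.
    for (b1 <- bs; b2 <- bs if b1 != b2) yield ((b1, b2), 1)
  }
  .reduceByKey(_ + _)   // ((book, other_book), common_reader_count)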