Mocking HTable data for a unit test of a Spark job

I have a Scala Spark job that reads from HBase, like this:

val hBaseRDD = sc.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
  classOf[org.apache.hadoop.hbase.client.Result])
val uniqueAttrs = calculateFreqLocation(hBaseRDD)
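
For context, conf here would typically be an HBase configuration with the source table set for TableInputFormat; a minimal sketch (the table name is made up for illustration):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

val conf = HBaseConfiguration.create()
// hypothetical table name, purely for illustration
conf.set(TableInputFormat.INPUT_TABLE, "attributes_table")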

I am trying to write a unit test for the function calculateFreqLocation:

def calculateFreqLocation(inputRDD: RDD[(ImmutableBytesWritable, Result)]): Map[String, Map[(String, String, String), Long]] = {
  val valueType = classOf[Array[Attribute]]
  val family = "cf_attributes".getBytes()
  val qualifier = "attributes".getBytes()

  // parse each row's JSON attributes, flatten to (uuid, attributeName, attributeValue)
  // and keep only the location attributes
  val rdd7 = inputRDD
    .map(kv => (getUUID(kv._1.get()).toString(),
      objectMapper.readValue(new String(kv._2.getValue(family, qualifier)), valueType)))
    .flatMap(flattenRow)
    .filter(t => location_attributes.contains(t._2))

  // count each triple, then keep the most frequent value per UUID and attribute name
  val countByUUID = rdd7.countByValue().groupBy(_._1._1)
  val countByUUIDandKey = countByUUID.map(kv => (kv._1, kv._2.groupBy(_._1._2)))
  val uniqueAttrs = countByUUIDandKey.map(uuidmap => (uuidmap._1, uuidmap._2.map(keymap => keymap._2.maxBy(_._2))))
  return uniqueAttrs
}
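
To make the aggregation easier to follow, here is a minimal sketch of the same countByValue / groupBy / maxBy pipeline on plain Scala collections, assuming flattenRow yields (uuid, attributeName, attributeValue) triples (the UUIDs and values below are made up):

object FreqLocationSketch {
  def main(args: Array[String]): Unit = {
    // flattened rows: (uuid, attributeName, attributeValue)
    val flattened = Seq(
      ("uuid-1", "Current_Location_Ip_Address", "10.0.0.1"),
      ("uuid-1", "Current_Location_Ip_Address", "10.0.0.1"),
      ("uuid-1", "Current_Location_Ip_Address", "10.0.0.2"),
      ("uuid-2", "Current_Location_Ip_Address", "10.0.0.3"))

    // equivalent of countByValue: how often each triple occurs
    val counts: Map[(String, String, String), Long] =
      flattened.groupBy(identity).map { case (k, v) => (k, v.size.toLong) }

    // group by uuid, then by attribute name, and keep the most frequent value
    val uniqueAttrs = counts.groupBy(_._1._1).map { case (uuid, perUuid) =>
      uuid -> perUuid.groupBy(_._1._2).map { case (_, perName) => perName.maxBy(_._2) }
    }

    // Map(uuid-1 -> Map((uuid-1,Current_Location_Ip_Address,10.0.0.1) -> 2),
    //     uuid-2 -> Map((uuid-2,Current_Location_Ip_Address,10.0.0.3) -> 1))
    println(uniqueAttrs)
  }
}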

This computes the unique attributes for each UUID. My unit test tries to recreate the HTable data and then pass the RDD to the function to check whether the output matches:

@RunWith(classOf[JUnitRunner])
class FrequentLocationTest extends SparkJobSpec {
    "Frequent Location calculation" should {

    def longToBytes(x: Long): Array[Byte] = {
      return ByteBuffer.allocate(java.lang.Long.SIZE / java.lang.Byte.SIZE).putLong(x).array
    }
    val currTimestamp = System.currentTimeMillis / 1000
    val UUID_1 = UUID.fromString("123456aa-8f07-4190-8c40-c7e78b91a646")
    val family = "cf_attributes".getBytes()
    val column = "attributes".getBytes()
    val row = "[{'name':'Current_Location_Ip_Address', 'value':'123.456.123.248'}]"

    val resultRow = Array(new KeyValue(row.getBytes(), family, column, null))

    val key = "851971aa-8f07-4190-8c40-c7e78b91a646".getBytes() ++ longToBytes(currTimestamp)
    val input = Seq((key,row))
    val correctOutput = Map(
      ("851971aa-8f07-4190-8c40-c7e78b91a646" -> Map(("123456aa-8f07-4190-8c40-c7e78b91a646","Current_Location_Ip_Address","123.456.123.248") -> 1))
      )

    "case 1 : return with correct output (frequent location calculation)" in {
      val inputRDD = sc.makeRDD(input, 1)
      val hadoopRdd = new HadoopRDD(sc,
        sc.broadcast(new SerializableWritable(new Configuration()))
          .asInstanceOf[Broadcast[SerializableWritable[Configuration]]],
        null, classOf[InputFormat[ImmutableBytesWritable, Result]],
        classOf[ImmutableBytesWritable], classOf[Result], 1)

      val finalInputRdd = hadoopRdd.union(inputRDD.map(kv =>
        (new ImmutableBytesWritable(kv._1), new Result(Array(new KeyValue(kv._2.getBytes(), family, column, null))))))

      val resultMap = FrequentLocation.calculateFreqLocation(finalInputRdd)
      resultMap == correctOutput
      //val customCorr = new FrequentLocation().calculateFreqLocation(inputRDD)
      //freqLocationMap must_== correctOutput
    }
  }
}

What I get is org.apache.spark.SparkException: Task not serializable. I'm starting to understand that this is because ImmutableBytesWritable and the other HTable classes cannot be serialized by Spark across nodes. The code above actually reaches into Spark's developer API (building a HadoopRDD by hand), but there is no way to actually populate it with data. How can I test this? I need to hand this function a HadoopRDD instance that contains data, or an instance of RDD[(ImmutableBytesWritable, Result)]. I initially created this RDD by hand and got the same error; I then switched to using .map() to build it from raw binary/text. Any help would be greatly appreciated!

Answering with my own findings, to give some guidance to anyone else stuck with a similar stack: Spark running over HBase.

If you have followed most of the Spark unit testing tutorials, you probably ended up with a class like this:

abstract class SparkJobSpec extends SpecificationWithJUnit with BeforeAfterExample {

  @transient var sc: SparkContext = _

  def beforeAll = {
    System.clearProperty("spark.driver.port")
    System.clearProperty("spark.hostPort")

    val conf = new SparkConf()
      .setMaster("local")
      .setAppName("test")
      //this kryo stuff is of utter importance
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .registerKryoClasses(Array(classOf[org.apache.hadoop.hbase.client.Result],classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable]))
      //.setJars(Seq(System.getenv("JARS")))
    sc = new SparkContext(conf)
  }

  def afterAll = {
    if (sc != null) {
      sc.stop()
      sc = null
      System.clearProperty("spark.driver.port")
      System.clearProperty("spark.hostPort")
    }
  }

  def before = {}

  def after = {}

  override def map(fs: => Fragments) = Step(beforeAll) ^ super.map(fs) ^ Step(afterAll)

}

The solution to the problem I posted actually has two parts:

  • The Task not serializable exception is easy to fix by adding with Serializable (shown below) to the unit test suite class as well as to the class holding the original Spark job. Apparently passing RDDs between classes effectively serializes the enclosing class, or something along those lines? I'm not sure why, but it helped.

  • The biggest problem I ran into is that sparkContext.newAPIHadoopRDD() is a great method, but it returns an RDD of the rather odd shape RDD[(ImmutableBytesWritable, Result)]. Neither type is Serializable, and when you call a function in your Spark job with a hand-built RDD of this shape, it really does complain about it. The key is to set .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer").registerKryoClasses(Array(classOf[org.apache.hadoop.hbase.client.Result], classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable])) on your SparkConf (see the condensed sketch after this list). For some reason I did not need to do this in the original Spark program. I'm not sure whether that is because Spark does something on its own in my QA cluster, or maybe because I never pass that RDD outside the process, so it never has to be serialized.
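
Putting the two fixes together, the relevant lines look roughly like this (a condensed sketch of what the SparkJobSpec above and the final test class below already contain):

// 1) make the test suite class (and the class holding the Spark job) Serializable
class FrequentLocationTest extends SparkJobSpec with Serializable { /* ... */ }

// 2) register the HBase types with Kryo on the test's SparkConf
val conf = new SparkConf()
  .setMaster("local")
  .setAppName("test")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(
    classOf[org.apache.hadoop.hbase.client.Result],
    classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable]))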

The final unit test code actually ends up looking very simple!

@RunWith(classOf[JUnitRunner])
class FrequentLocationTest extends SparkJobSpec with Serializable {

  "Frequent Location calculation" should {
    //some UUID generator stuff here
    val resultRow = Array(new KeyValue(
      Bytes.add(longToBytes(UUID_1.getMostSignificantBits()), longToBytes(UUID_1.getLeastSignificantBits())),
      family, column, row.getBytes()))
    val input = Seq((new ImmutableBytesWritable(key), new Result(resultRow)))
    val correctOutput = Map(
      ("851971aa-8f07-4190-8c40-c7e78b91a646" -> Map(("851971aa-8f07-4190-8c40-c7e78b91a646", "Current_Location_Ip_Address", "123.456.234.456") -> 1))
    )

    "case 1 : return with correct output (frequent location calculation)" in {
      val inputRDD = sc.makeRDD(input, 1)
      val resultMap = FrequentLocation.calculateFreqLocation(inputRDD)
      resultMap == correctOutput
    }
  }
}