Spark repartitionAndSortWithinPartitions with tuples

I'm trying to partition HBase rows following this example: https://www.opencore.com/blog/2016/10/efficient-bulk-load-of-hbase-using-spark/

However, my data is already stored as `(String, String, String)` tuples, where the first element is the row key, the second the column name, and the third the column value.

I tried writing an implicit ordering so that the OrderedRDD implicits would kick in:

implicit val caseInsensitiveOrdering: Ordering[(String, String, String)] = new Ordering[(String, String, String)] {
  override def compare(x: (String, String, String), y: (String, String, String)): Int = ???
}

But `repartitionAndSortWithinPartitions` is still not available on the RDD. Is there any way to use this method with these tuples?

`repartitionAndSortWithinPartitions` is only defined on pair RDDs, so the RDD must have keys and values, not just values. For example:

import org.apache.spark.HashPartitioner
import org.apache.spark.rdd.RDD

val data = List((("5", "6", "1"), 1))
val rdd: RDD[((String, String, String), Int)] = sparkContext.parallelize(data)

// A real comparison instead of a constant stub: sort case-insensitively
// by row key, then column name, then column value.
implicit val caseInsensitiveOrdering = new Ordering[(String, String, String)] {
  override def compare(x: (String, String, String), y: (String, String, String)): Int =
    Seq(x._1.compareToIgnoreCase(y._1),
        x._2.compareToIgnoreCase(y._2),
        x._3.compareToIgnoreCase(y._3)).find(_ != 0).getOrElse(0)
}

// The method takes a Partitioner, e.g. a HashPartitioner:
rdd.repartitionAndSortWithinPartitions(new HashPartitioner(4))
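
Since the question starts from a flat `RDD[(String, String, String)]`, the missing step is to split each triple into a key and a value. A sketch of one way to do this, keying by `(rowKey, column)` and keeping the cell value as the value (`sparkContext` and `numPartitions` are assumed to exist in scope; the sample data is made up):

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.rdd.RDD

// Flat (rowKey, column, value) triples as in the question.
val cells: RDD[(String, String, String)] =
  sparkContext.parallelize(Seq(("r1", "cf:a", "1"), ("r1", "cf:b", "2")))

// Re-key into a pair RDD so the OrderedRDD implicits apply.
val keyed: RDD[((String, String), String)] =
  cells.map { case (rowKey, column, value) => ((rowKey, column), value) }

// Case-insensitive ordering on both key components, built from
// Java's CASE_INSENSITIVE_ORDER comparator.
implicit val kvOrdering: Ordering[(String, String)] = Ordering.Tuple2(
  Ordering.comparatorToOrdering(String.CASE_INSENSITIVE_ORDER),
  Ordering.comparatorToOrdering(String.CASE_INSENSITIVE_ORDER))

keyed.repartitionAndSortWithinPartitions(new HashPartitioner(numPartitions))
```

Note that a default `Ordering` for string tuples already exists, so the implicit is only needed when the default lexicographic, case-sensitive order is not what you want.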