How do I extract values from a kafka row via spark under structured streaming?

Given a DataFrame that I pull from Kafka, how do I extract values from it via pattern matching?

The DataFrame:

df = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
  .option("subscribe", "topic1") \
  .option("startingOffsets", "earliest") \
  .load()

My problem is that the schema looks like this:

df.printSchema()

root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)

That binary type is something I can't pattern match on.

Question: How would I extract that value and then parse it?
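As an aside: if the Kafka payload is plain UTF-8 text (e.g. JSON) rather than Avro, you can skip Avro decoding entirely, either by casting the column in SQL with `df.selectExpr("CAST(value AS STRING)")` or by decoding the bytes yourself. A minimal sketch of the byte-level decode (assuming a UTF-8 payload; `valueToString` is a hypothetical helper, not from the answer below):

```scala
import java.nio.charset.StandardCharsets

// Decode a raw Kafka value (Array[Byte]) into a String, assuming UTF-8 text.
def valueToString(raw: Array[Byte]): String =
  new String(raw, StandardCharsets.UTF_8)
```

With `import spark.implicits._` in scope, this can be applied as `df.select("value").as[Array[Byte]].map(valueToString)`.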

I assume you are using Avro messages. You can try the snippet below (I don't know exactly what you are trying to pattern match here). The `decodeAndParseObject` function uses Twitter's Bijection API, which has the following dependency:

<!-- https://mvnrepository.com/artifact/com.twitter/bijection-avro -->
<dependency>
    <groupId>com.twitter</groupId>
    <artifactId>bijection-avro_2.10</artifactId>
    <version>0.7.0</version>
</dependency>

import spark.implicits._ // needed for the Array[Byte] encoder

val ds = df.select("value").as[Array[Byte]].map(x => decodeAndParseObject(x))

where

import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import com.twitter.bijection.Injection
import com.twitter.bijection.avro.GenericAvroCodecs

/**
 * Decode and parse the binary payload against your schema... your logic goes here.
 */
def decodeAndParseObject(message: Array[Byte]): GenericRecord = {
  val schema = new Schema.Parser().parse("yourschemahere")

  val recordInjection: Injection[GenericRecord, Array[Byte]] =
    GenericAvroCodecs.toBinary(schema)

  val record: GenericRecord = recordInjection.invert(message).get
  println(record.getSchema)
  record.getSchema.getFields.toArray().foreach(println)
  println("\n\n\n\n\n\n Record " + record.toString.replaceAll(",", "\n"))

  // Get the column and do pattern matching....
  // Prepare another generic record.... I'm leaving it blank here...
  record
}

Update: You can take the generic record above, fetch the column you want to use with record.get("yourcolumn"), and do a Scala pattern match (match/case) on it.
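As a sketch of what that pattern match might look like: `GenericRecord.get` returns `AnyRef` (Avro strings come back as `org.apache.avro.util.Utf8`, which is a `CharSequence`), so you dispatch on the runtime type. The cases below are hypothetical, for illustration only:

```scala
// Hypothetical: dispatch on the runtime type of a field pulled from a GenericRecord.
def describe(field: AnyRef): String = field match {
  case null                 => "missing"
  case s: CharSequence      => s"string: $s" // covers Avro's Utf8 as well as String
  case i: java.lang.Integer => s"int: $i"
  case l: java.lang.Long    => s"long: $l"
  case other                => s"other: ${other.getClass.getSimpleName}"
}
```

Inside `decodeAndParseObject` you would call something like `describe(record.get("yourcolumn"))`.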