Convert column containing values as List to Array

I have a Spark dataframe like the following:

+------------------------------------------------------------------------+
|                              domains                                   |
+------------------------------------------------------------------------+
|["0b3642ab5be98c852890aff03b3f83d8","4d7a5a24426749f3f17dee69e13194a9", |
| "9d0f74269019ad82ae82cc7a7f2b5d1b","0b113db8e20b2985d879a7aaa43cecf6", |
| "d095db19bd909c1deb26e0a902d5ad92","f038deb6ade0f800dfcd3138d82ae9a9", |
| "ab192f73b9db26ec2aca2b776c4398d2","ff9cf0599ae553d227e3f1078957a5d3", |
| "aa717380213450746a656fe4ff4e4072","f3346928db1c6be0682eb9307e2edf38", |
| "806a006b5e0d220c2cf714789828ecf7","9f6f8502e71c325f2a6f332a76d4bebf", |
| "c0cb38016fb603e89b160e921eced896","56ad547c6292c92773963d6e6e7d5e39"] |
+------------------------------------------------------------------------+

It contains a column whose values are lists. I want to convert it to an Array[String], e.g.:

Array("0b3642ab5be98c852890aff03b3f83d8","4d7a5a24426749f3f17dee69e13194a9", "9d0f74269019ad82ae82cc7a7f2b5d1b","0b113db8e20b2985d879a7aaa43cecf6", "d095db19bd909c1deb26e0a902d5ad92","f038deb6ade0f800dfcd3138d82ae9a9", 
"ab192f73b9db26ec2aca2b776c4398d2","ff9cf0599ae553d227e3f1078957a5d3",
"aa717380213450746a656fe4ff4e4072","f3346928db1c6be0682eb9307e2edf38",
"806a006b5e0d220c2cf714789828ecf7","9f6f8502e71c325f2a6f332a76d4bebf",
"c0cb38016fb603e89b160e921eced896","56ad547c6292c92773963d6e6e7d5e39")

I tried the following code, but it did not give the expected result:

DF.select("domains").as[String].collect()

Instead, I got this:

[Ljava.lang.String;@7535f28 ...

Is there any way to achieve this?

You can first explode your domains column and then collect it, as follows:

import org.apache.spark.sql.functions.{col, explode}

val result: Array[String] = DF.select(explode(col("domains"))).as[String].collect()

You can then print the result array using the mkString method:

println(result.mkString("[", ", ", "]"))

The Array[String] you got is actually exactly what you expected. [Ljava.lang.String;@7535f28 is a type descriptor used internally in JVM bytecode: [ stands for an array, and Ljava.lang.String stands for the class java.lang.String. What you saw is simply the default toString of a Java array, which does not print its elements.
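A minimal pure-Scala sketch (no Spark needed) showing where that descriptor comes from; the values here are placeholders:

```scala
val arr: Array[String] = Array("a", "b")

// The JVM class name of a String array is the descriptor itself.
println(arr.getClass.getName)          // prints "[Ljava.lang.String;"

// Arrays inherit Object.toString, so printing one shows
// the descriptor plus a hash code, not the elements.
println(arr)                           // e.g. [Ljava.lang.String;@7535f28

// mkString renders the actual contents.
println(arr.mkString("[", ", ", "]"))  // prints "[a, b]"
```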

If you want to print the array values as a string, you can use the .mkString() function.

import spark.implicits._

val data = Seq((Seq(
  "0b3642ab5be98c852890aff03b3f83d8", "4d7a5a24426749f3f17dee69e13194a9",
  "9d0f74269019ad82ae82cc7a7f2b5d1b", "0b113db8e20b2985d879a7aaa43cecf6",
  "d095db19bd909c1deb26e0a902d5ad92", "f038deb6ade0f800dfcd3138d82ae9a9")))

val df = spark.sparkContext.parallelize(data).toDF("domains")
// df: org.apache.spark.sql.DataFrame = [domains: array<string>]

val array_values = df.select("domains").as[String].collect()
// array_values: Array[String] = Array([0b3642ab5be98c852890aff03b3f83d8, 4d7a5a24426749f3f17dee69e13194a9, 9d0f74269019ad82ae82cc7a7f2b5d1b, 0b113db8e20b2985d879a7aaa43cecf6, d095db19bd909c1deb26e0a902d5ad92, f038deb6ade0f800dfcd3138d82ae9a9])

val string_value = array_values.mkString(",")

print(string_value)
// [0b3642ab5be98c852890aff03b3f83d8, 4d7a5a24426749f3f17dee69e13194a9, 9d0f74269019ad82ae82cc7a7f2b5d1b, 0b113db8e20b2985d879a7aaa43cecf6, d095db19bd909c1deb26e0a902d5ad92, f038deb6ade0f800dfcd3138d82ae9a9]

You can see the same behavior if you create a plain array:

scala> val array_values : Array[String] = Array("value1", "value2")
array_values: Array[String] = Array(value1, value2)

scala> print(array_values)
[Ljava.lang.String;@70bf2681

scala> array_values.foreach(println)
value1
value2
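As an alternative to exploding, a sketch that decodes each row of the column as Seq[String] and flattens the collected rows (assuming the domains column has type array&lt;string&gt; and DF is the dataframe from the question):

```scala
import spark.implicits._

// Each row of the array<string> column decodes to a Seq[String];
// collect() yields Array[Seq[String]], and flatten merges the rows
// into a single Array[String].
val domains: Array[String] =
  DF.select("domains").as[Seq[String]].collect().flatten

println(domains.mkString("[", ", ", "]"))
```

This avoids the shuffle-free but row-multiplying explode step when you only need the values client-side.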