Check the number of unique values in each column of a matrix in Spark
I have a CSV file currently stored as a DataFrame in Spark:
scala> df
res11: org.apache.spark.sql.DataFrame = [2013-03-25 12:49:36.000: string, OES_PSI603_EC1: string, 250.3315__SI: string, 250.7027__SI: string, 251.0738__SI: string, 251.4448__SI: string, 251.8159__SI: string, 252.1869__SI: string, 252.5579__SIF: string, 252.9288__SI: string, 253.2998__SIF: string, 253.6707__SIF: string, 254.0415__CI2: string, 254.4124__CI2: string, 254.7832__CI2: string, 255.154: string, 255.5248__NO: string, 255.8955__NO: string, 256.2662__NO: string, 256.6369: string, 257.0075: string, 257.3782: string, 257.7488: string, 258.1193: string, 258.4899: string, 258.8604__NO: string, 259.2309__NO: string, 259.6013__NO: string, 259.9717__N2: string, 260.3421__N2: string, 260.7125__N2: string, 261.4531: string, 261.8234: string, 262.1937: string, 262.5639: string, 262.9341: s...
scala>
I want to count the number of unique elements in each column. How can I do this?
You can use the countDistinct function on each column.
For example, in PySpark:
>>> df = spark.createDataFrame([(1, 1), (1, 3), (2, 1), (3, 2), (3, 3)], ["user_id", "genre_id"])
>>> df.show()
+-------+--------+
|user_id|genre_id|
+-------+--------+
| 1| 1|
| 1| 3|
| 2| 1|
| 3| 2|
| 3| 3|
+-------+--------+
>>> import pyspark.sql.functions as F
>>> df.select([F.countDistinct(cn).alias("c_{0}".format(cn)) for cn in df.columns]).show()
+---------+----------+
|c_user_id|c_genre_id|
+---------+----------+
| 3| 3|
+---------+----------+
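If you want to do the same from the Scala shell where your DataFrame lives, here is a minimal sketch (assuming df is the DataFrame shown above; the name distinctCounts is just illustrative):

import org.apache.spark.sql.functions.{col, countDistinct}

// Build one countDistinct aggregate per column. The backticks matter:
// several of the column names contain dots (e.g. "250.3315__SI"),
// which Spark would otherwise parse as nested-field access.
val distinctCounts = df.select(
  df.columns.map(c => countDistinct(col(s"`$c`")).alias(s"c_$c")): _*
)
distinctCounts.show()

Note that countDistinct computes an exact count; for a DataFrame as wide as yours, approx_count_distinct is a cheaper alternative when an approximate answer is acceptable.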