How to count the number of missing values in each row of a DataFrame in Spark Scala?

I want to count the number of missing values in each row of a DataFrame in Spark Scala.

Code:

val samplesqlDF = spark.sql("SELECT * FROM sampletable")

samplesqlDF.show()

Input DataFrame:

    +------+-----+--------+-----------+
    | name | age | degree | Place     |
    +------+-----+--------+-----------+
    | Ram  |     | MCA    | Bangalore |
    |      | 25  |        |           |
    |      | 26  | BE     |           |
    | Raju | 21  | Btech  | Chennai   |
    +------+-----+--------+-----------+

The output DataFrame (with a row-level count) should look like:

    +------+-----+--------+-----------+----------+
    | name | age | degree | Place     | rowcount |
    +------+-----+--------+-----------+----------+
    | Ram  |     | MCA    | Bangalore | 1        |
    |      | 25  |        |           | 3        |
    |      | 26  | BE     |           | 2        |
    | Raju | 21  | Btech  | Chennai   | 0        |
    +------+-----+--------+-----------+----------+

I am a beginner in Scala and Spark. Thanks in advance.

It looks like you want to get the null count dynamically, without hardcoding column names. Check this out:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(("Ram",null,"MCA","Bangalore"), (null,"25",null,null),
             (null,"26","BE",null), ("Raju","21","Btech","Chennai")).toDF("name","age","degree","Place")
df.show(false)
// Add a <column>_null flag (1 if null, else 0) alongside every column
val df2 = df.columns.foldLeft(df)((acc, c) => acc.withColumn(c + "_null", when(col(c).isNull, 1).otherwise(0)))
df2.createOrReplaceTempView("student")
// Build the query dynamically: sum the flag columns into null_count
val sql_str_null = df.columns.map(x => x + "_null").mkString(" ", "+", " as null_count ")
val sql_str_full = df.columns.mkString("select ", ",", " , " + sql_str_null + " from student")
spark.sql(sql_str_full).show(false)
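For reference, with the four columns above the two generated strings expand to (just the mkString results, nothing new):

sql_str_null: " name_null+age_null+degree_null+Place_null as null_count "
sql_str_full: "select name,age,degree,Place ,  name_null+age_null+degree_null+Place_null as null_count  from student"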

Output:

+----+----+------+---------+----------+
|name|age |degree|Place    |null_count|
+----+----+------+---------+----------+
|Ram |null|MCA   |Bangalore|1         |
|null|25  |null  |null     |3         |
|null|26  |BE    |null     |2         |
|Raju|21  |Btech |Chennai  |0         |
+----+----+------+---------+----------+
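If you would rather not register a temp view, the same dynamically built expressions can be passed to selectExpr (a sketch against the df2 above; the result is identical):

val exprs = df.columns :+ (df.columns.map(_ + "_null").mkString("+") + " as null_count")
df2.selectExpr(exprs: _*).show(false)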

It is also possible to check for empty strings ("") as well, and without using foldLeft, just to prove the point:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(("Ram",null,"MCA","Bangalore"), (null,"25",null,""),
             (null,"26","BE",null), ("Raju","21","Btech","Chennai")).toDF("name","age","degree","place")

// Per row, count the columns that are null or ""
val null_counter = Seq("name", "age", "degree", "place")
  .map(x => when(col(x) === "" || col(x).isNull, 1).otherwise(0))
  .reduce(_ + _)

val df2 = df.withColumn("nulls_cnt", null_counter)
df2.show(false)

returns:

 +----+----+------+---------+---------+
 |name|age |degree|place    |nulls_cnt|
 +----+----+------+---------+---------+
 |Ram |null|MCA   |Bangalore|1        |
 |null|25  |null  |         |3        |
 |null|26  |BE    |null     |2        |
 |Raju|21  |Btech |Chennai  |0        |
 +----+----+------+---------+---------+
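The hardcoded column list can also be derived from the schema, so the same trick works for any DataFrame (a minimal sketch, same semantics as above):

val null_counter_dyn = df.columns
  .map(x => when(col(x) === "" || col(x).isNull, 1).otherwise(0))
  .reduce(_ + _)
df.withColumn("nulls_cnt", null_counter_dyn).show(false)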

A simplified version, suggested by @stack0114106, is:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(("Ram",null,"MCA","Bangalore"), (null,"25",null,null),
             (null,"26","BE",null), ("Raju","21","Btech","Chennai"))
  .toDF("name","age","degree","Place")
  .withColumn("null_count", lit(0))

// Walk the columns, incrementing null_count whenever the current column is null
// (null_count itself is visited too, but lit(0) is never null, so that pass is a no-op)
val df2 = df.columns.foldLeft(df)((acc, c) =>
  acc.withColumn("null_count",
    when(col(c).isNull, $"null_count" + 1).otherwise($"null_count")))
df2.show(false)

The output is:

+----+----+------+---------+----------+
|name|age |degree|Place    |null_count|
+----+----+------+---------+----------+
|Ram |null|MCA   |Bangalore|1         |
|null|25  |null  |null     |3         |
|null|26  |BE    |null     |2         |
|Raju|21  |Btech |Chennai  |0         |
+----+----+------+---------+----------+
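Note that each withColumn in the foldLeft adds another projection to the logical plan, so on very wide DataFrames the reduce-based single expression from the previous answer tends to scale better. A minimal sketch of that form for nulls only (assuming df holds just the four raw columns):

val null_count_expr = df.columns.map(c => when(col(c).isNull, 1).otherwise(0)).reduce(_ + _)
df.withColumn("null_count", null_count_expr).show(false)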