Spark DataFrame and renaming multiple columns (Java)

Is there a better way to add a prefix to, or rename, all (or several) columns of a given SparkSQL DataFrame at once than calling dataFrame.withColumnRenamed() multiple times?

An example would be if I want to detect changes (using a full outer join). Then I am left with two DataFrames that have the same structure.

I suggest using the select() method for this. In fact, withColumnRenamed() itself uses select() under the hood. Here is an example of how to rename multiple columns:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

val someDataframe: DataFrame = ...

val initialColumnNames = Seq("a", "b", "c")
val renamedColumns = initialColumnNames.map(name => col(name).as(s"renamed_$name"))
someDataframe.select(renamedColumns : _*)
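
Since the question asks specifically about Java, here is a minimal Java sketch of the same select()-based idea; the withSuffix helper name and the suffix parameter are my own illustration, not part of the original answer:

import static org.apache.spark.sql.functions.col;

import java.util.Arrays;
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Appends the given suffix to every column name using a single select()
public static Dataset<Row> withSuffix(Dataset<Row> df, String suffix) {
    Column[] renamed = Arrays.stream(df.columns())
            .map(name -> col(name).as(name + suffix))
            .toArray(Column[]::new);
    return df.select(renamed);
}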

I just found the answer:

df1_r = df1.select(*(col(x).alias(x + '_df1') for x in df1.columns))

(see the end of the accepted answer)

var newsales_var = newsales
for (a <- 0 until newsales_var.columns.length) {
  // Replace parentheses in the column name and trim trailing whitespace
  val new_c = newsales_var.columns(a).replace('(', '_').replace(')', ' ').trim
  newsales_var = newsales_var.withColumnRenamed(newsales_var.columns(a), new_c)
}

I think this method can help you:

public static Dataset<Row> renameDataFrame(Dataset<Row> dataset) {
    // Rename every column from snake_case to camelCase, one column at a time
    for (String column : dataset.columns()) {
        dataset = dataset.withColumnRenamed(column, SystemUtils.underscoreToCamelCase(column));
    }
    return dataset;
}

public static String underscoreToCamelCase(String underscoreName) {
    StringBuilder result = new StringBuilder();
    if (underscoreName != null && !underscoreName.isEmpty()) {
        boolean upperCaseNext = false;
        for (int i = 0; i < underscoreName.length(); i++) {
            char ch = underscoreName.charAt(i);
            if (ch == '_') {
                // Drop the underscore and capitalize the next character
                upperCaseNext = true;
            } else if (upperCaseNext) {
                result.append(Character.toUpperCase(ch));
                upperCaseNext = false;
            } else {
                result.append(ch);
            }
        }
    }
    return result.toString();
}
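
For completeness, a short usage sketch. It assumes both methods above sit in a utility class named SystemUtils (the answer only references underscoreToCamelCase that way), that spark is an existing SparkSession, and that the input path and column names are placeholders of my own:

// Columns such as "user_id" or "first_name" become "userId" and "firstName"
Dataset<Row> df = spark.read().parquet("/path/to/input");
Dataset<Row> camelCased = SystemUtils.renameDataFrame(df);
camelCased.printSchema();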