How do I write a DataFrame (obtained from a Hive table) into a Hadoop SequenceFile and RCFile?

I can already write it in other formats such as ORC and Parquet natively, and in CSV and Avro using the following additional dependencies from Databricks:

    <dependency>
        <groupId>com.databricks</groupId>
        <artifactId>spark-csv_2.10</artifactId>
        <version>1.5.0</version>
    </dependency>
    <dependency>
        <groupId>com.databricks</groupId>
        <artifactId>spark-avro_2.10</artifactId>
        <version>2.0.1</version>
    </dependency>

Sample code:

    SparkContext sc = new SparkContext(conf);
    HiveContext hc = new HiveContext(sc);
    DataFrame df = hc.table(hiveTableName);
    df.printSchema();
    DataFrameWriter writer = df.repartition(1).write();

    if ("ORC".equalsIgnoreCase(hdfsFileFormat)) {
        writer.orc(outputHdfsFile);

    } else if ("PARQUET".equalsIgnoreCase(hdfsFileFormat)) {
        writer.parquet(outputHdfsFile);

    } else if ("TEXTFILE".equalsIgnoreCase(hdfsFileFormat)) {
        writer.format("com.databricks.spark.csv").option("header", "true").save(outputHdfsFile);

    } else if ("AVRO".equalsIgnoreCase(hdfsFileFormat)) {
        writer.format("com.databricks.spark.avro").save(outputHdfsFile);
    }

Is there any way to write the DataFrame into a Hadoop SequenceFile or RCFile?

You can use void saveAsObjectFile(String path) to save an RDD as a SequenceFile of serialized objects. So in your case you have to retrieve the RDD from the DataFrame first (note that javaRDD() is a method call):

    JavaRDD<Row> rdd = df.javaRDD();
    rdd.saveAsObjectFile(outputHdfsFile);
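
Note that saveAsObjectFile stores Java-serialized objects inside the SequenceFile, so the output is really only readable back through objectFile(path) in Spark. If you want a plain SequenceFile with Writable key/value pairs that other Hadoop tools can read, one option is to map each Row to a Writable pair yourself and use saveAsHadoopFile with SequenceFileOutputFormat. A minimal sketch, assuming NullWritable keys and a comma-joined text rendering of each row (adjust the rendering to your schema):

    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.SequenceFileOutputFormat;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.function.PairFunction;
    import org.apache.spark.sql.Row;
    import scala.Tuple2;

    // Turn each Row into a (NullWritable, Text) pair so the output is a
    // standard SequenceFile rather than Java-serialized objects.
    JavaPairRDD<NullWritable, Text> pairs = df.javaRDD().mapToPair(
            new PairFunction<Row, NullWritable, Text>() {
                @Override
                public Tuple2<NullWritable, Text> call(Row row) {
                    // Render the row as comma-separated text.
                    return new Tuple2<>(NullWritable.get(), new Text(row.mkString(",")));
                }
            });
    pairs.saveAsHadoopFile(outputHdfsFile, NullWritable.class, Text.class,
            SequenceFileOutputFormat.class);

For RCFile there is no built-in DataFrame writer, but since you already have a HiveContext you can let Hive do the writing: register the DataFrame as a temporary table and create a table STORED AS RCFILE from it. A sketch, where tmp_df and rcfile_table are hypothetical names:

    // Expose the DataFrame to HiveQL, then let Hive materialize it as RCFile.
    df.registerTempTable("tmp_df");
    hc.sql("CREATE TABLE rcfile_table STORED AS RCFILE AS SELECT * FROM tmp_df");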