How to convert or save a CSV file into a txt file using PySpark

I am learning PySpark, but I don't know how to save the sum of an RDD's values to a file. I have tried the following code without success:

counts = rdd.flatMap(lambda line: line.split(",")) \
             .map(lambda word: (word, 1)) \
             .reduceByKey(lambda a, b: a + b)

k=counts.keys().saveAsTextFile("out/out_1_2a.txt")
sc.parallelize(counts.values().sum()).saveAsTextFile('out/out_1_3.txt')

Although I can save the keys to a file, I cannot save the sum of the values. The error I get is: "TypeError: 'int' object is not iterable"

Can someone help?

Try the logic below -

counts = rdd.flatMap(lambda line: line.split(",")) \
             .map(lambda word: (word, 1)) \
             .reduceByKey(lambda a, b: a + b)

cnt_sum = counts.values().sum()

sc.parallelize([cnt_sum]).coalesce(1).saveAsTextFile("<path>/filename.txt")
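The key change is wrapping `cnt_sum` in a list: `sc.parallelize` needs an iterable it can split into partitions, while `counts.values().sum()` returns a plain int. A plain-Python sketch of the same failure (no Spark required, with `42` standing in for the computed sum):

```python
total = 42  # stands in for counts.values().sum(), which returns a plain int

# sc.parallelize(data) iterates over `data` to split it into partitions,
# so passing a bare int fails exactly the way iter() does:
try:
    iter(total)
    raised = False
except TypeError:
    raised = True  # 'int' object is not iterable

assert raised
assert list(iter([total])) == [42]  # wrapping the int in a list fixes it
```

Note also that `saveAsTextFile` creates a *directory* named `filename.txt` containing part files (with `coalesce(1)`, a single `part-00000`), not a single text file.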

More concisely - and note that `len(rdd.collect())` would pull every element to the driver, so `count()` is preferable:

count = rdd.flatMap(lambda x: x.split(",")).count()
sc.parallelize([count]).coalesce(1).saveAsTextFile("<path>/filename.txt")
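To sanity-check the Spark pipeline on a small sample before running the full job, the same word count and total can be computed in plain Python (the sample lines here are made up for illustration):

```python
from collections import Counter

lines = ["a,b,a", "c,a"]  # stands in for the RDD's contents

# flatMap(split) then map/reduceByKey, in plain Python:
words = [w for line in lines for w in line.split(",")]
counts = Counter(words)

print(dict(counts))          # {'a': 3, 'b': 1, 'c': 1}  <- the (word, n) pairs
print(sum(counts.values()))  # 5  <- the total the question is trying to save
```

The total is simply the number of words, which is why `rdd.flatMap(...).count()` gives the same answer without building the pairs at all.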