PySpark: Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space

I am trying to convert categorical values into numeric ones with StringIndexer, OneHotEncoder, and VectorAssembler so that I can apply K-means clustering in PySpark. Here is my code:

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml.clustering import KMeans

indexers = [
    StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c))
    for c in columnList
]

encoders = [
    OneHotEncoder(dropLast=False,
                  inputCol=indexer.getOutputCol(),
                  outputCol="{0}_encoded".format(indexer.getOutputCol()))
    for indexer in indexers
]

assembler = VectorAssembler(inputCols=[encoder.getOutputCol() for encoder in encoders], outputCol="features")


pipeline = Pipeline(stages=indexers + encoders + [assembler])
model = pipeline.fit(df)
transformed = model.transform(df)

kmeans = KMeans().setK(2).setFeaturesCol("features").setPredictionCol("prediction")
kMeansPredictionModel = kmeans.fit(transformed)

predictionResult = kMeansPredictionModel.transform(transformed)
predictionResult.show(5)

I get Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space. How can I allocate more heap space from within the code, or, better: is allocating more space even wise? Can I limit my program to the number of threads and the amount of heap space that are actually available?

I ran into the same problem. Increasing the number of processes the user is allowed to spawn helped. Run, for example:

ulimit -u 4096
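If raising the process limit alone does not resolve the OutOfMemoryError, the driver and executor heap sizes can also be increased when the application is submitted. A minimal sketch using `spark-submit` (the `4g` values and the script name `my_kmeans_job.py` are illustrative placeholders, not recommendations; note that `spark.driver.memory` must be set before the driver JVM starts, so setting it inside the script via `SparkSession.builder.config` has no effect in client mode):

```shell
# Raise driver and executor heap at submit time (values are examples).
# --driver-memory maps to spark.driver.memory, --executor-memory to spark.executor.memory.
spark-submit \
  --driver-memory 4g \
  --executor-memory 4g \
  my_kmeans_job.py
```

The same settings can be made permanent in `conf/spark-defaults.conf` (`spark.driver.memory 4g`), which avoids having to repeat the flags on every submission.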