pyspark.sql.utils.IllegalArgumentException: u'Field "features" does not exist.'
I am trying to run a random forest classifier and evaluate the model using cross-validation. I work with pySpark. The input CSV file is loaded as a Spark DataFrame.
But I have run into a problem while building the model.
Here is the code.
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.mllib.evaluation import BinaryClassificationMetrics
sc = SparkContext()
sqlContext = SQLContext(sc)
trainingData =(sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true")
.option("inferSchema", "true")
.load("/PATH/CSVFile"))
numFolds = 10
rf = RandomForestClassifier(numTrees=100, maxDepth=5, maxBins=5, labelCol="V5409",featuresCol="features",seed=42)
evaluator = MulticlassClassificationEvaluator().setLabelCol("V5409").setPredictionCol("prediction").setMetricName("accuracy")
paramGrid = ParamGridBuilder().build()
pipeline = Pipeline(stages=[rf])
crossval = CrossValidator(
estimator=pipeline,
estimatorParamMaps=paramGrid,
evaluator=evaluator,
numFolds=numFolds)
model = crossval.fit(trainingData)
print accuracy
I am getting the error below:
Traceback (most recent call last):
File "SparkDF.py", line 41, in <module>
model = crossval.fit(trainingData)
File "/usr/local/spark-2.1.1/python/pyspark/ml/base.py", line 64, in fit
return self._fit(dataset)
File "/usr/local/spark-2.1.1/python/pyspark/ml/tuning.py", line 236, in _fit
model = est.fit(train, epm[j])
File "/usr/local/spark-2.1.1/python/pyspark/ml/base.py", line 64, in fit
return self._fit(dataset)
File "/usr/local/spark-2.1.1/python/pyspark/ml/pipeline.py", line 108, in _fit
model = stage.fit(dataset)
File "/usr/local/spark-2.1.1/python/pyspark/ml/base.py", line 64, in fit
return self._fit(dataset)
File "/usr/local/spark-2.1.1/python/pyspark/ml/wrapper.py", line 236, in _fit
java_model = self._fit_java(dataset)
File "/usr/local/spark-2.1.1/python/pyspark/ml/wrapper.py", line 233, in _fit_java
return self._java_obj.fit(dataset._jdf)
File "/home/hadoopuser/anaconda2/lib/python2.7/site-packages/py4j/java_gateway.py", line 1160, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/spark-2.1.1/python/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'Field "features" does not exist.'
hadoopuser@rackserver-PowerEdge-R220:~/workspace/RandomForest_CV$
Please help me solve this problem in pySpark.
Thank you.
Here are the details of the dataset.
No, I don't have a dedicated features column. Below is the output of trainingData.take(5), which shows the first 5 rows of the dataset.
[Row(V4366=0.0, V4460=0.232, V4916=-0.017, V1495=-0.104, V1639=0.005, V1967=-0.008, V3049=0.177, V3746=-0.675, V3869=-3.451, V524=0.004, V5409=0), Row(V4366=0.0, V4460=0.111, V4916=-0.003, V1495=-0.137, V1639=0.001, V1967=-0.01, V3049=0.01, V3746=-0.867, V3869=-2.759, V524=0.0, V5409=0), Row(V4366=0.0, V4460=-0.391, V4916=-0.003, V1495=-0.155, V1639=-0.006, V1967=-0.019, V3049=-0.706, V3746=0.166, V3869=0.189, V524=0.001, V5409=0), Row(V4366=0.0, V4460=0.098, V4916=-0.012, V1495=-0.108, V1639=0.005, V1967=-0.002, V3049=0.033, V3746=-0.787, V3869=-0.926, V524=0.002, V5409=0), Row(V4366=0.0, V4460=0.026, V4916=-0.004, V1495=-0.139, V1639=0.003, V1967=-0.006, V3049=-0.045, V3746=-0.208, V3869=-0.782, V524=0.001, V5409=0)]
Here V4366 through V524 are the features, and V5409 is the class label.
Spark dataframes are not used like that in Spark ML; all your features have to be vectors in a single column, usually named features. Here is how you can do it, using the 5 rows you have provided as an example:
spark.version
# u'2.2.0'
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
# your sample data:
temp_df = spark.createDataFrame([Row(V4366=0.0, V4460=0.232, V4916=-0.017, V1495=-0.104, V1639=0.005, V1967=-0.008, V3049=0.177, V3746=-0.675, V3869=-3.451, V524=0.004, V5409=0), Row(V4366=0.0, V4460=0.111, V4916=-0.003, V1495=-0.137, V1639=0.001, V1967=-0.01, V3049=0.01, V3746=-0.867, V3869=-2.759, V524=0.0, V5409=0), Row(V4366=0.0, V4460=-0.391, V4916=-0.003, V1495=-0.155, V1639=-0.006, V1967=-0.019, V3049=-0.706, V3746=0.166, V3869=0.189, V524=0.001, V5409=0), Row(V4366=0.0, V4460=0.098, V4916=-0.012, V1495=-0.108, V1639=0.005, V1967=-0.002, V3049=0.033, V3746=-0.787, V3869=-0.926, V524=0.002, V5409=0), Row(V4366=0.0, V4460=0.026, V4916=-0.004, V1495=-0.139, V1639=0.003, V1967=-0.006, V3049=-0.045, V3746=-0.208, V3869=-0.782, V524=0.001, V5409=0)])
# Row fields built from kwargs are sorted alphabetically in Spark 2.x, so V5409 (the label) comes last:
trainingData = temp_df.rdd.map(lambda x: (Vectors.dense(x[0:-1]), x[-1])).toDF(["features", "label"])
trainingData.show()
# +--------------------+-----+
# | features|label|
# +--------------------+-----+
# |[-0.104,0.005,-0....| 0|
# |[-0.137,0.001,-0....| 0|
# |[-0.155,-0.006,-0...| 0|
# |[-0.108,0.005,-0....| 0|
# |[-0.139,0.003,-0....| 0|
# +--------------------+-----+
After that, your pipeline should run fine (I assume you do indeed have multi-class classification, since your sample contains only 0's as labels), with the only change being the label column in your rf and evaluator, as follows:
rf = RandomForestClassifier(numTrees=100, maxDepth=5, maxBins=5, labelCol="label",featuresCol="features",seed=42)
evaluator = MulticlassClassificationEvaluator().setLabelCol("label").setPredictionCol("prediction").setMetricName("accuracy")
Finally, print accuracy will not work - you'll need model.avgMetrics instead.
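For completeness, here is a minimal sketch (my own addition, reusing model, evaluator, and trainingData from above) of how you could read out the cross-validated accuracy and score data with the fitted model:
# avgMetrics holds one averaged metric per ParamGrid entry; the grid here is empty, so a single accuracy value
print(model.avgMetrics)
# transform() scores with the best model found during cross-validation
predictions = model.transform(trainingData)
print(evaluator.evaluate(predictions))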
I would like to add my 5 cents to desertnaut's answer - as for now (Spark 2.2.0) there is the quite handy VectorAssembler class, which handles the transformation of multiple columns into one vector column. The code then looks like this:
from pyspark.sql import Row
from pyspark.ml.feature import VectorAssembler
# your sample data:
temp_df = spark.createDataFrame([Row(V4366=0.0, V4460=0.232, V4916=-0.017, V1495=-0.104, V1639=0.005, V1967=-0.008, V3049=0.177, V3746=-0.675, V3869=-3.451, V524=0.004, V5409=0), Row(V4366=0.0, V4460=0.111, V4916=-0.003, V1495=-0.137, V1639=0.001, V1967=-0.01, V3049=0.01, V3746=-0.867, V3869=-2.759, V524=0.0, V5409=0), Row(V4366=0.0, V4460=-0.391, V4916=-0.003, V1495=-0.155, V1639=-0.006, V1967=-0.019, V3049=-0.706, V3746=0.166, V3869=0.189, V524=0.001, V5409=0), Row(V4366=0.0, V4460=0.098, V4916=-0.012, V1495=-0.108, V1639=0.005, V1967=-0.002, V3049=0.033, V3746=-0.787, V3869=-0.926, V524=0.002, V5409=0), Row(V4366=0.0, V4460=0.026, V4916=-0.004, V1495=-0.139, V1639=0.003, V1967=-0.006, V3049=-0.045, V3746=-0.208, V3869=-0.782, V524=0.001, V5409=0)])
assembler = VectorAssembler(
inputCols=['V4366', 'V4460', 'V4916', 'V1495', 'V1639', 'V1967', 'V3049', 'V3746', 'V3869', 'V524'],
outputCol='features')
trainingData = assembler.transform(temp_df)
trainingData.show()
# +------+------+------+------+------+------+-----+------+------+-----+-----+--------------------+
# | V1495| V1639| V1967| V3049| V3746| V3869|V4366| V4460| V4916| V524|V5409| features|
# +------+------+------+------+------+------+-----+------+------+-----+-----+--------------------+
# |-0.104| 0.005|-0.008| 0.177|-0.675|-3.451| 0.0| 0.232|-0.017|0.004| 0|[0.0,0.232,-0.017...|
# |-0.137| 0.001| -0.01| 0.01|-0.867|-2.759| 0.0| 0.111|-0.003| 0.0| 0|[0.0,0.111,-0.003...|
# |-0.155|-0.006|-0.019|-0.706| 0.166| 0.189| 0.0|-0.391|-0.003|0.001| 0|[0.0,-0.391,-0.00...|
# |-0.108| 0.005|-0.002| 0.033|-0.787|-0.926| 0.0| 0.098|-0.012|0.002| 0|[0.0,0.098,-0.012...|
# |-0.139| 0.003|-0.006|-0.045|-0.208|-0.782| 0.0| 0.026|-0.004|0.001| 0|[0.0,0.026,-0.004...|
# +------+------+------+------+------+------+-----+------+------+-----+-----+--------------------+
This way it can easily be integrated as a processing step in your pipeline.
Another important difference here is that the new features column is appended to the dataframe.
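To make that concrete, here is a minimal sketch (my own addition, reusing assembler and temp_df from above) of plugging the assembler in as the first pipeline stage; since VectorAssembler only appends a features column, the label column keeps its original name V5409:
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(numTrees=100, maxDepth=5, maxBins=5, labelCol="V5409", featuresCol="features", seed=42)
# the assembler runs first, so the classifier sees the appended 'features' column
pipeline = Pipeline(stages=[assembler, rf])
model = pipeline.fit(temp_df)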