CrossValidator does not support VectorUDT as label in spark-ml

I'm running into a problem with ml.CrossValidator in Scala Spark when using a one-hot encoder.

Here is my code:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{CountVectorizer, OneHotEncoder, StringIndexer, Tokenizer}

// split the subject text into tokens
val tokenizer = new Tokenizer().
                    setInputCol("subjects").
                    setOutputCol("subject")

//CountVectorizer / TF
val countVectorizer = new CountVectorizer().
                        setInputCol("subject").
                        setOutputCol("features")

// convert string into numerical values
val labelIndexer = new StringIndexer().
                        setInputCol("labelss").
                        setOutputCol("labelsss")

// convert numerical values to a one-hot encoded vector
val labelEncoder = new OneHotEncoder().
                   setInputCol("labelsss").
                   setOutputCol("label")

val logisticRegression = new LogisticRegression()

val pipeline = new Pipeline().setStages(Array(tokenizer,countVectorizer,labelIndexer,labelEncoder,logisticRegression))

which gives me this error:

cv: org.apache.spark.ml.tuning.CrossValidator = cv_8cc1ae985e39
java.lang.IllegalArgumentException: requirement failed: Column label must be of type NumericType but was actually of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7.

I don't know how to fix it.

I need the one-hot encoder because my labels are categorical.

Thank you for helping me :)

There is actually no need to use an OneHotEncoder/OneHotEncoderEstimator on the label (target variable), and you actually shouldn't: it creates a vector (of type org.apache.spark.ml.linalg.VectorUDT), which is exactly what the error is complaining about.
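To see where the VectorUDT comes from, it is enough to run the encoder stage on its own. A minimal sketch, where indexedDf stands for a hypothetical DataFrame produced by the StringIndexer stage from the question (with the indexed column labelsss):

import org.apache.spark.ml.feature.OneHotEncoder

// apply only the encoder stage; indexedDf is a stand-in for the StringIndexer output
val encoded = new OneHotEncoder().
                  setInputCol("labelsss").
                  setOutputCol("label").
                  transform(indexedDf)

encoded.schema("label").dataType
// org.apache.spark.ml.linalg.VectorUDT@... -- a vector, not the NumericType the estimator expects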

A StringIndexer alone is enough to mark your label as categorical.

Let's verify this with a small example:

import org.apache.spark.ml.feature.StringIndexer

val df = Seq((0, "a"),(1, "b"),(2, "c"),(3, "a"),(4, "a"),(5, "c")).toDF("category", "text")
// df: org.apache.spark.sql.DataFrame = [category: int, text: string]

val indexer = new StringIndexer().setInputCol("category").setOutputCol("categoryIndex").fit(df)
// indexer: org.apache.spark.ml.feature.StringIndexerModel = strIdx_cf691c087e1d

val indexed = indexer.transform(df)
// indexed: org.apache.spark.sql.DataFrame = [category: int, text: string ... 1 more field]

indexed.schema.map(_.metadata).foreach(println)
// {}
// {}
// {"ml_attr":{"vals":["4","5","1","0","2","3"],"type":"nominal","name":"categoryIndex"}}

As you can see, the StringIndexer actually attaches metadata to that column (categoryIndex) and marks it as nominal, a.k.a. categorical.

You can also notice that the column's attributes contain the list of category values.
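Applied to the pipeline from the question, this means the OneHotEncoder stage can simply be dropped and the StringIndexer output used directly as the label. A sketch reusing the question's column names (LogisticRegression reads the column named label by default):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{CountVectorizer, StringIndexer, Tokenizer}

val tokenizer = new Tokenizer().
                    setInputCol("subjects").
                    setOutputCol("subject")

val countVectorizer = new CountVectorizer().
                        setInputCol("subject").
                        setOutputCol("features")

// StringIndexer alone produces a numeric label column tagged as nominal
val labelIndexer = new StringIndexer().
                        setInputCol("labelss").
                        setOutputCol("label")

// no OneHotEncoder stage: LogisticRegression expects a NumericType label
val logisticRegression = new LogisticRegression()

val pipeline = new Pipeline().setStages(Array(tokenizer, countVectorizer, labelIndexer, logisticRegression))

The CrossValidator can then be built on top of this pipeline exactly as before.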

There is more about this in my other answer on How to handle categorical features with spark-ml?

Concerning data preparation and metadata in spark-ml, I strongly recommend that you read the following entry:

https://github.com/awesome-spark/spark-gotchas/blob/5ad4c399ffd2821875f608be8aff9f1338478444/06_data_preparation.md

Disclaimer: I'm a co-author of the entry behind that link.

Note (excerpt from the documentation):

Because this existing OneHotEncoder is a stateless transformer, it is not usable on new data where the number of categories may differ from the training data. In order to fix this, a new OneHotEncoderEstimator was created that produces an OneHotEncoderModel when fitting. For more detail, please see SPARK-13030.

OneHotEncoder has been deprecated in 2.3.0 and will be removed in 3.0.0. Please use OneHotEncoderEstimator instead.
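For feature columns (as opposed to the label), the replacement looks roughly like this. A sketch against the Spark 2.3 API, reusing the indexed DataFrame from the example above; categoryVec is just an illustrative output column name:

import org.apache.spark.ml.feature.OneHotEncoderEstimator

// fitting produces an OneHotEncoderModel that remembers the category sizes,
// so it behaves consistently on new data
val encoder = new OneHotEncoderEstimator().
                  setInputCols(Array("categoryIndex")).
                  setOutputCols(Array("categoryVec"))

val encoderModel = encoder.fit(indexed)
val encodedFeatures = encoderModel.transform(indexed)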