Spark2 - LogisticRegression training finished but the result is not converged because: line search failed

I get the following error while training a logistic regression classifier:

2016-08-16 20:50:23,833 ERROR [main] optimize.LBFGS (Logger.scala:error(27)) - Failure! Resetting history: breeze.optimize.FirstOrderException: Line search zoom failed
2016-08-16 20:50:24,009 INFO  [main] optimize.StrongWolfeLineSearch (Logger.scala:info(11)) - Line search t: 0.9 fval: 0.4515497761131565 rhs: 0.45154977611314895 cdd: 3.4166889881493167E-16

The program then keeps running for a while, after which I hit this error:

2016-08-16 20:50:24,365 ERROR [main] optimize.LBFGS (Logger.scala:error(27)) - Failure again! Giving up and returning. Maybe the objective is just poorly behaved?
2016-08-16 20:50:24,367 WARN  [main] classification.LogisticRegression (Logging.scala:logWarning(66)) - LogisticRegression training finished but the result is not converged because: line search failed!
2016-08-16 20:50:27,143 INFO  [main] optimize.StrongWolfeLineSearch (Logger.scala:info(11)) - Line search t: 0.4496001808762097 fval: 0.5641490068577 rhs: 0.6931115872739131 cdd: 0.01924752705390458
2016-08-16 20:50:27,143 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 0.4496
2016-08-16 20:50:27,144 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Val and Grad Norm: 0.564149 (rel: 0.186) 0.622296
2016-08-16 20:50:27,181 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 1.000
2016-08-16 20:50:27,181 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Val and Grad Norm: 0.484949 (rel: 0.140) 0.285684
2016-08-16 20:50:27,226 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 1.000
2016-08-16 20:50:27,226 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Val and Grad Norm: 0.458425 (rel: 0.0547) 0.0789000
2016-08-16 20:50:27,263 INFO  [main] optimize.LBFGS (Logger.scala:info(11)) - Step Size: 1.000

But then the training continues.

Even though the training appears to finish successfully (I get a model, make predictions on a test set, validate the classifier, and so on), this error worries me. Any idea what it means? Any suggestions on how to get rid of it? (I am using 10,000 as the maximum number of iterations.)

The problem lies in the LBFGS optimizer that the logistic regression algorithm uses.

This error typically shows up when the gradient is wrong or the convergence tolerance is set too tight.

In my case, I was configuring the algorithm as follows:

import org.apache.spark.ml.classification.LogisticRegression

val lr = new LogisticRegression()
  .setFitIntercept(true)
  .setRegParam(0.3)
  .setMaxIter(100000)
  .setTol(0.0)
  .setStandardization(true)
  .setWeightCol("classWeightCol")
  .setLabelCol("label")
  .setFeaturesCol("features")

The convergence tolerance for the iterations was set to 0 (setTol(0.0)). The Spark documentation states:

"Smaller value will lead to higher accuracy with the cost of more iterations. Default is 1E-6. "

But once I changed the setter to setTol(0.1), the line search error stopped appearing.
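As a minimal sketch of that change, reusing the lr estimator defined above (0.1 is simply the value that happened to work in my case; the Spark default of 1E-6 is usually a more sensible starting point):

// Relax the convergence tolerance so LBFGS can stop cleanly
// before the line search degenerates.
lr.setTol(0.1)   // or lr.setTol(1e-6) to restore the default behaviour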

Another option when the model does not converge is to increase the number of iterations.
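Whichever knob you turn, you can verify afterwards whether the optimizer actually converged by inspecting the training summary. A sketch, assuming a larger iteration budget and a training DataFrame named trainingData (both are placeholder choices, not values from the original setup):

// Give the optimizer a larger iteration budget (hypothetical value).
lr.setMaxIter(200000)

val model = lr.fit(trainingData)
val summary = model.summary

// One objective value is recorded per LBFGS iteration.
println(s"Iterations used: ${summary.totalIterations}")
println(s"Last objective values: ${summary.objectiveHistory.takeRight(5).mkString(", ")}")

// If totalIterations equals maxIter, or the last objective values are still
// decreasing noticeably, the model has not really converged yet.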