How to create/customize your own scorer function in scikit-learn?

I am using Support Vector Regression as an estimator in GridSearchCV. But I want to change the error function: instead of using the default (R-squared: coefficient of determination), I would like to define my own custom error function.

I tried to make one with make_scorer, but it didn't work.

I read the documentation and found that it is possible to create custom estimators, but I don't need to remake the entire estimator - only the error/scoring function.

I think I can do it by defining a callable as a scorer, like it says in the docs.

But I don't know how to use it with my estimator - SVR in my case. Would I have to switch to a classifier (such as SVC)? And how would I use it?
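
As I understand the docs, such a callable takes (estimator, X, y) and returns a single number where higher is better; a minimal sketch with a placeholder metric (not my real error function, which is below):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def neg_max_abs_error(estimator, X, y):
    # scikit-learn calls a callable scorer as scorer(estimator, X, y);
    # negate the error so that greater really is better
    return -np.max(np.abs(y - estimator.predict(X)))

X_demo = np.random.rand(40, 1)
y_demo = np.sin(X_demo).ravel()
print(cross_val_score(SVR(), X_demo, y_demo, cv=4, scoring=neg_max_abs_error))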

My custom error function is as follows:

import numpy as np

def my_custom_loss_func(X_train_scaled, Y_train_scaled):
    error, M = 0, 0
    for i in range(len(Y_train_scaled)):
        z = Y_train_scaled[i] - M
        error_i = 0  # default when none of the cases below applies
        if X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) > 0:
            error_i = (abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z))
        elif X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) < 0:
            error_i = -(abs((Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z)))
        elif X_train_scaled[i] > M and Y_train_scaled[i] < M:
            error_i = -(abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(-z))
        error += error_i  # accumulate inside the loop, not after it
    return error

The variable M isn't null/zero; I've just set it to zero here for simplicity.

Could anyone show an example application of this custom scoring function? Thanks for your help!

As you can see, this is done by using make_scorer (docs).

from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.svm import SVR

import numpy as np

rng = np.random.RandomState(1)

def my_custom_loss_func(X_train_scaled, Y_train_scaled):
    # make_scorer calls this as func(y_true, y_pred), so X_train_scaled
    # receives the true targets and Y_train_scaled the predictions
    error, M = 0, 0
    for i in range(len(Y_train_scaled)):
        z = Y_train_scaled[i] - M
        error_i = 0  # default when none of the cases below applies
        if X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) > 0:
            error_i = (abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z))
        elif X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) < 0:
            error_i = -(abs((Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z)))
        elif X_train_scaled[i] > M and Y_train_scaled[i] < M:
            error_i = -(abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(-z))
        error += error_i  # accumulate inside the loop, not after it
    return error

# Generate sample data
X = 5 * rng.rand(10000, 1)
y = np.sin(X).ravel()

# Add noise to targets
y[::5] += 3 * (0.5 - rng.rand(X.shape[0] // 5))

train_size = 100

my_scorer = make_scorer(my_custom_loss_func, greater_is_better=True)

svr = GridSearchCV(SVR(kernel='rbf', gamma=0.1),
                   scoring=my_scorer,
                   cv=5,
                   param_grid={"C": [1e0, 1e1, 1e2, 1e3],
                               "gamma": np.logspace(-2, 2, 5)})

svr.fit(X[:train_size], y[:train_size])

print(svr.best_params_)
print(svr.score(X[train_size:], y[train_size:]))
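
Note that because a scorer was passed via scoring, the final svr.score(...) call evaluates the held-out data with my_custom_loss_func rather than SVR's default R² score.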

Jamie has a fleshed-out example, but here's an example using make_scorer straight from the scikit-learn documentation:

import numpy as np
from sklearn.metrics import make_scorer

def my_custom_loss_func(ground_truth, predictions):
    diff = np.abs(ground_truth - predictions).max()
    return np.log(1 + diff)

# loss_func will negate the return value of my_custom_loss_func,
#  which will be np.log(2), 0.693, given the values for ground_truth
#  and predictions defined below.
loss  = make_scorer(my_custom_loss_func, greater_is_better=False)
score = make_scorer(my_custom_loss_func, greater_is_better=True)
ground_truth = [[1], [1]]
predictions  = [0, 1]
from sklearn.dummy import DummyClassifier
clf = DummyClassifier(strategy='most_frequent', random_state=0)
clf = clf.fit(ground_truth, predictions)
loss(clf, ground_truth, predictions)   # -0.6931...

score(clf, ground_truth, predictions)  # 0.6931...

When you convert a function with sklearn.metrics.make_scorer, the convention is that custom functions ending in _score return a value to maximize, while functions ending in _loss or _error return a value to minimize. You control this behavior with the greater_is_better parameter of make_scorer: set it to True for scorers where higher values are better, and to False for scorers where lower values are better (make_scorer then negates the value so that the result is always maximized). With that set correctly, GridSearchCV optimizes in the appropriate direction for your custom scorer.
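
To see the negation in action, here is a minimal sketch (mean_squared_error and DummyRegressor are stand-ins for illustration, not part of the original answer):

import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import make_scorer, mean_squared_error

# an _error metric: lower is better, so greater_is_better=False
mse_scorer = make_scorer(mean_squared_error, greater_is_better=False)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 2.0, 3.0])
reg = DummyRegressor(strategy='mean').fit(X, y)  # always predicts 2.0

print(mean_squared_error(y, reg.predict(X)))  # 0.666..., the raw loss
print(mse_scorer(reg, X, y))                  # -0.666..., negated so higher is better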

You can then turn your function into a scorer as follows:

import numpy as np
from sklearn.metrics import make_scorer

def custom_loss_func(X_train_scaled, Y_train_scaled):
    # make_scorer calls this as func(y_true, y_pred), so X_train_scaled
    # receives the true targets and Y_train_scaled the predictions
    error, M = 0, 0
    for i in range(len(Y_train_scaled)):
        z = Y_train_scaled[i] - M
        error_i = 0  # default when none of the cases below applies
        if X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) > 0:
            error_i = (abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z))
        elif X_train_scaled[i] > M and Y_train_scaled[i] > M and (X_train_scaled[i] - Y_train_scaled[i]) < 0:
            error_i = -(abs((Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(z)))
        elif X_train_scaled[i] > M and Y_train_scaled[i] < M:
            error_i = -(abs(Y_train_scaled[i] - X_train_scaled[i]))**(2*np.exp(-z))
        error += error_i  # accumulate inside the loop, not after it
    return error


custom_scorer = make_scorer(custom_loss_func, greater_is_better=True)

Then pass custom_scorer to GridSearchCV like any other scoring function: clf = GridSearchCV(estimator, param_grid, scoring=custom_scorer).
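
Putting it together with the asker's SVR setup, a minimal runnable sketch that reuses custom_scorer from above (the toy data and parameter grid are illustrative, not from the original answer):

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = 5 * rng.rand(200, 1)
y = np.sin(X).ravel()

clf = GridSearchCV(SVR(kernel='rbf'),
                   param_grid={"C": [1, 10, 100],
                               "gamma": [0.01, 0.1, 1.0]},
                   scoring=custom_scorer,
                   cv=5)
clf.fit(X, y)
print(clf.best_params_)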