How to change the performance metric from accuracy to precision, recall and other metrics in the code below?

As a beginner with scikit-learn, I am trying to classify the iris dataset, but I am having trouble changing the scoring metric from scoring='accuracy' to others like precision, recall, f1 etc. in the cross-validation step. Below is the full working code example (it is enough to start reading from # Test options and evaluation metric).

# Load libraries
import pandas
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import model_selection # for command model_selection.cross_val_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC



# Load dataset
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)


# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
Y = array[:,4]
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)


# Test options and evaluation metric
seed = 7
scoring = 'accuracy'


# Below, we build and evaluate 6 different models
# Spot Check Algorithms
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))


# evaluate each model in turn: we calculate the cv-scores, their mean and std for each model
results = []
names = []
for name, model in models:
    # below, we do k-fold cross-validation
    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

Now, apart from scoring='accuracy', I would also like to evaluate other performance metrics for this multiclass classification problem. But when I use scoring='precision', it raises:

ValueError: Target is multiclass but average='binary'. Please choose another average setting.

My questions are:

1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification - is that correct? If yes, then which command(s) should replace scoring='accuracy' in the code above?

2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross-validation, what commands should I type?

3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find:

ValueError: 'balanced_accuracy' is not a valid scoring value.

Why is this happening, when the model evaluation documentation (https://scikit-learn.org/stable/modules/model_evaluation.html) clearly lists balanced_accuracy as a scoring method? I am quite confused here, so actual code showing how to evaluate these other performance metrics would be greatly appreciated! Thanks in advance!

1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification - is that correct?

No; precision and recall are of course applicable to multi-class problems, too - see the docs for precision & recall.
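For instance, here is a minimal sketch (with made-up multiclass labels, not taken from the question) showing that these metrics do work for more than two classes once an averaging scheme is specified:

from sklearn.metrics import precision_score, recall_score

# toy multiclass labels (3 classes), purely for illustration - not the iris data
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 1, 0, 1, 1]

# average='macro' computes the metric per class and takes the unweighted mean;
# the default average='binary' would raise the ValueError shown in the question
print(precision_score(y_true, y_pred, average='macro'))
print(recall_score(y_true, y_pred, average='macro'))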

If yes, then, which command(s) should replace scoring='accuracy' in the code above?

The issue arises because, as you can see from the documentation links I have provided above, the default setting for these metrics is binary classification (average='binary'). In your case of multi-class classification, you need to specify which exact "version" of the particular metric you are interested in (there is more than one); have a look at the relevant page of the scikit-learn documentation, but some valid options for the scoring parameter could be:

'precision_macro'
'precision_micro'
'precision_weighted'
'recall_macro'
'recall_micro'
'recall_weighted'

The documentation link above even contains an example of using 'recall_macro' with the iris data - be sure to check it.
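As a concrete illustration, here is a minimal sketch of the evaluation loop from the question with the scoring string swapped to one of the multiclass-aware options (it assumes the models list, X_train, Y_train and seed already defined above):

# same evaluation loop as in the question, only the scoring string changes
scoring = 'precision_macro'   # or 'recall_macro', 'f1_macro', ...

for name, model in models:
    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
    print("%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()))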

2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross validation, what commands should I type?

This is not exactly trivial, but you can see a way of doing it in my answer to another question.
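A rough sketch of the idea (not the exact code from that answer), assuming the X_train, Y_train and seed already defined in the question's script: run the fold loop manually, fit the model inside it, and call the metric functions on each fold's predictions.

from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier()              # or any other model from the list above
kfold = KFold(n_splits=10, random_state=seed)

for i, (train_idx, test_idx) in enumerate(kfold.split(X_train)):
    model.fit(X_train[train_idx], Y_train[train_idx])
    pred = model.predict(X_train[test_idx])
    print("Fold %d" % i)
    print(confusion_matrix(Y_train[test_idx], pred))
    print("precision: %f" % precision_score(Y_train[test_idx], pred, average='macro'))
    print("recall:    %f" % recall_score(Y_train[test_idx], pred, average='macro'))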

3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find:

   ValueError: 'balanced_accuracy' is not a valid scoring value.

This is because you are probably using an older version of scikit-learn. balanced_accuracy became available only in v0.20 - you can verify that it is not available in v0.18. Upgrade your scikit-learn to v0.20 and you should be fine.
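For example, a quick sketch of the version check and of the usage once you are on v0.20+ (assuming the rest of the script above is unchanged):

import sklearn
print(sklearn.__version__)    # 'balanced_accuracy' requires 0.20 or newer

# on v0.20+ the new scoring string is used exactly like 'accuracy' was:
scoring = 'balanced_accuracy'
cv_results = model_selection.cross_val_score(DecisionTreeClassifier(), X_train, Y_train,
                                             cv=10, scoring=scoring)
print(cv_results.mean())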