Discrepancy between KFold on the one hand and KFold with shuffle=True and RepeatedKFold on the other hand in sklearn

I am comparing KFold and RepeatedKFold using sklearn version 0.22. According to the documentation, RepeatedKFold "Repeats K-Fold n times with different randomization in each repetition." One would expect that running RepeatedKFold with only 1 repeat (n_repeats=1) would produce essentially the same results as KFold.

I ran a simple comparison:

import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import StratifiedKFold, KFold, RepeatedKFold, RepeatedStratifiedKFold
from sklearn import metrics

X, y = load_digits(return_X_y=True)

classifier = SGDClassifier(loss='hinge', penalty='elasticnet',  fit_intercept=True)
scorer = metrics.accuracy_score
results = []
n_splits = 5
kf = KFold(n_splits=n_splits)
for train_index, test_index in kf.split(X, y):
    x_train, y_train = X[train_index], y[train_index]
    x_test, y_test = X[test_index], y[test_index]
    classifier.fit(x_train, y_train)
    results.append(scorer(y_test, classifier.predict(x_test)))
print('KFold')
print('mean = ', np.mean(results))
print('std = ', np.std(results))
print()

results = []
n_repeats = 1
rkf = RepeatedKFold(n_splits=n_splits, n_repeats = n_repeats)
for train_index, test_index in rkf.split(X, y):
    x_train, y_train = X[train_index], y[train_index]
    x_test, y_test = X[test_index], y[test_index]
    classifier.fit(x_train, y_train)
    results.append(scorer(y_test, classifier.predict(x_test)))
print('RepeatedKFold')
print('mean = ', np.mean(results))
print('std = ', np.std(results))

The output is

KFold
mean =  0.9082079851439182
std =  0.04697225962068869

RepeatedKFold
mean =  0.9493562364593006
std =  0.017732595698953055

I repeated this experiment enough times to see that the difference is statistically significant.
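One way to see where the discrepancy might come from is to compare the splits themselves rather than the scores: KFold defaults to shuffle=False and returns contiguous folds, whereas RepeatedKFold always shuffles. A minimal sketch (the array size and seed here are arbitrary):

```python
import numpy as np
from sklearn.model_selection import KFold, RepeatedKFold

X = np.arange(100).reshape(-1, 1)

# Default KFold does not shuffle: each test fold is a contiguous block of indices
kf_test_folds = [test for _, test in KFold(n_splits=5).split(X)]
print(kf_test_folds[0])  # indices 0..19

# RepeatedKFold always shuffles, even with n_repeats=1
rkf = RepeatedKFold(n_splits=5, n_repeats=1, random_state=42)
rkf_test_folds = [test for _, test in rkf.split(X)]
print(rkf_test_folds[0])  # a random subset of indices, not a contiguous block
```

So even with a single repeat, RepeatedKFold is equivalent to a shuffled KFold, not to the default one.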

I have read and re-read the documentation to see whether I am missing something, to no avail.

Incidentally, the same holds for StratifiedKFold and RepeatedStratifiedKFold:

StratifiedKFold
mean =  0.9159935004642525
std =  0.026687786392525545

RepeatedStratifiedKFold
mean =  0.9560476632621479
std =  0.014405630805910506

For this dataset, StratifiedKFold agrees with KFold; RepeatedStratifiedKFold agrees with RepeatedKFold.
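That agreement makes sense, since stratification only controls the class balance within each fold and says nothing about shuffling. A toy sketch with a perfectly balanced label vector (the arrays here are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((20, 1))
y = np.array([0] * 10 + [1] * 10)  # 10 samples of each class

# Each test fold preserves the overall class proportions: 2 of each class
for _, test_index in StratifiedKFold(n_splits=5).split(X, y):
    print(np.bincount(y[test_index]))  # [2 2] in every fold
```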

UPDATE Following the suggestion from @Dan and @SergeyBushmanov, I included shuffle and random_state:

def run_nfold(X,y, classifier, scorer, cv,  n_repeats):
    results = []
    for n in range(n_repeats):
        for train_index, test_index in cv.split(X, y):
            x_train, y_train = X[train_index], y[train_index]
            x_test, y_test = X[test_index], y[test_index]
            classifier.fit(x_train, y_train)
            results.append(scorer(y_test, classifier.predict(x_test)))    
    return results
kf = KFold(n_splits=n_splits)
results_kf = run_nfold(X,y, classifier, scorer, kf, 10)
print('KFold mean = ', np.mean(results_kf))

kf_shuffle = KFold(n_splits=n_splits, shuffle=True, random_state = 11)
results_kf_shuffle = run_nfold(X,y, classifier, scorer, kf_shuffle, 10)
print('KFold Shuffled mean = ', np.mean(results_kf_shuffle))

rkf = RepeatedKFold(n_splits=n_splits, n_repeats = n_repeats, random_state = 111)
results_kf_repeated = run_nfold(X,y, classifier, scorer, rkf, 10)
print('RepeatedKFold mean = ', np.mean(results_kf_repeated))

which produces

KFold mean =  0.9119255648406066
KFold Shuffled mean =  0.9505304859176724
RepeatedKFold mean =  0.950754100897555

Furthermore, using the Kolmogorov-Smirnov test:

from scipy.stats import ks_2samp

print('Compare KFold with KFold shuffled results')
print(ks_2samp(results_kf, results_kf_shuffle))
print('Compare RepeatedKFold with KFold shuffled results')
print(ks_2samp(results_kf_repeated, results_kf_shuffle))

shows that shuffled KFold and RepeatedKFold (it appears to shuffle by default; you were right, @Dan) are statistically identical, while the default non-shuffled KFold produces statistically significantly lower results:

Compare KFold with KFold shuffled results
Ks_2sampResult(statistic=0.66, pvalue=1.3182765881237494e-10)

Compare RepeatedKFold with KFold shuffled results
Ks_2sampResult(statistic=0.14, pvalue=0.7166468440414822)

Now, note that I used different random_state values for KFold and RepeatedKFold. So the answer, or rather the partial answer, is that the difference in results is due to shuffling vs. non-shuffling. This makes sense, because while using a different random_state changes the exact splits, it should not change the statistical properties, such as the mean over many runs.

I am now puzzled as to why shuffling has this effect. I changed the title of the question to reflect this confusion (I hope this does not break any Stack Overflow rules, but I did not want to create another question).
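One plausible explanation, which is an assumption worth checking rather than anything stated in the docs, is that the samples in load_digits are not in random order, so the contiguous folds produced by an unshuffled KFold can have class distributions that differ systematically between train and test. A quick diagnostic is to compare the per-fold class counts with and without shuffling:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import KFold

X, y = load_digits(return_X_y=True)

for name, cv in [('unshuffled', KFold(n_splits=5)),
                 ('shuffled', KFold(n_splits=5, shuffle=True, random_state=0))]:
    # rows = folds, columns = digit classes 0-9
    counts = np.array([np.bincount(y[test], minlength=10)
                       for _, test in cv.split(X)])
    print(name)
    print(counts)
```

If the unshuffled rows are noticeably less uniform than the shuffled ones, that would account for the lower unshuffled scores.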

UPDATE I agree with @SergeyBushmanov's suggestion. I posted it as a new question.

To make the RepeatedKFold results similar to KFold's, you have to:

np.random.seed(42)
n = np.random.choice([0,1],10,p=[.5,.5])
kf = KFold(2, shuffle=True, random_state=42)
list(kf.split(n))
[(array([2, 3, 4, 6, 9]), array([0, 1, 5, 7, 8])),
 (array([0, 1, 5, 7, 8]), array([2, 3, 4, 6, 9]))]
kfr = RepeatedKFold(n_splits=2, n_repeats=1, random_state=42)
list(kfr.split(n))
[(array([2, 3, 4, 6, 9]), array([0, 1, 5, 7, 8])),
 (array([0, 1, 5, 7, 8]), array([2, 3, 4, 6, 9]))]

RepeatedKFold uses KFold to generate the folds; you just have to make sure both have the same random_state.
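A quick sanity check of this claim, comparing the two splitters fold by fold (the split indices depend only on the length of the array, so any 10-element input gives the same result as the example above):

```python
import numpy as np
from sklearn.model_selection import KFold, RepeatedKFold

n = np.arange(10)  # only the length matters for the split indices

kf = KFold(n_splits=2, shuffle=True, random_state=42)
rkf = RepeatedKFold(n_splits=2, n_repeats=1, random_state=42)

# With the same random_state, the single repeat of RepeatedKFold
# reproduces the shuffled KFold splits exactly
for (tr_a, te_a), (tr_b, te_b) in zip(kf.split(n), rkf.split(n)):
    assert np.array_equal(tr_a, tr_b) and np.array_equal(te_a, te_b)
print('splits identical')
```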