How to perform feature selection (rfecv) in cross validation in sklearn

I want to run recursive feature elimination with cross validation (rfecv) inside a 10-fold cross validation in sklearn (i.e. with cross_val_predict or cross_validate).

Since rfecv already has cross validation in its name, I am not sure how to combine it with another cross validation loop. My current code is below.

from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV

clf = RandomForestClassifier(random_state=0, class_weight="balanced")

k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

rfecv = RFECV(estimator=clf, step=1, cv=k_fold)

Please show me how to use the data X and y with rfecv in a 10-fold cross validation.

I am happy to provide more details if needed.

To perform feature selection with RFE and then fit a random forest with 10-fold cross validation, you can proceed as follows:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.feature_selection import RFE

rf = RandomForestClassifier(random_state = 0, class_weight="balanced")
rfe = RFE(estimator=rf, step=1)

Now transform the original X by fitting RFE:

X_new = rfe.fit_transform(X, y)

Here are the feature rankings (with only 4 features this is not a big issue):

rfe.ranking_
# array([2, 3, 1, 1])
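As an aside, rfe.support_ gives the boolean mask behind that ranking (True exactly where ranking_ == 1), which you can map back to the iris feature names; a minimal sketch:

```python
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

iris = datasets.load_iris()
X, y = iris.data, iris.target

rf = RandomForestClassifier(random_state=0, class_weight="balanced")
rfe = RFE(estimator=rf, step=1)  # default keeps half the features (2 of 4)
rfe.fit(X, y)

# support_ is a boolean mask over the original columns (True where ranking_ == 1)
selected = [name for name, keep in zip(iris.feature_names, rfe.support_) if keep]
print(selected)
```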

Now split into training and test data, and perform cross validation together with a grid search using GridSearchCV (the two are usually combined):

X_train, X_test, y_train, y_test = train_test_split(X_new,y,train_size=0.7)

k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

param_grid = {
                 'n_estimators': [5, 10, 15, 20],
                 'max_depth': [2, 5, 7, 9]
             }

grid_clf = GridSearchCV(rf, param_grid, cv=k_fold)
grid_clf.fit(X_train, y_train)

y_pred = grid_clf.predict(X_test)

confusion_matrix(y_test, y_pred)

array([[17,  0,  0],
       [ 0, 11,  0],
       [ 0,  3, 14]], dtype=int64)
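After the fit, the grid search object records which parameter combination won and its mean cross-validated accuracy; a self-contained sketch (the random_state passed to train_test_split is my addition, purely for reproducibility):

```python
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV

iris = datasets.load_iris()
X, y = iris.data, iris.target

rf = RandomForestClassifier(random_state=0, class_weight="balanced")
X_new = RFE(estimator=rf, step=1).fit_transform(X, y)

# random_state here is my addition, only to make the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X_new, y, train_size=0.7, random_state=0)

k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
param_grid = {
    'n_estimators': [5, 10, 15, 20],
    'max_depth': [2, 5, 7, 9]
}

grid_clf = GridSearchCV(rf, param_grid, cv=k_fold)
grid_clf.fit(X_train, y_train)

# best_params_ holds the winning combination; best_score_ its mean CV accuracy
print(grid_clf.best_params_)
print(grid_clf.best_score_)
```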

To use recursive feature elimination together with a predefined k_fold, you should use RFE, not RFECV:

from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data
y = iris.target

k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = RandomForestClassifier(random_state = 0, class_weight="balanced")
selector = RFE(clf, n_features_to_select=5, step=1)

cv_acc = []

for train_index, val_index in k_fold.split(X, y):
    selector.fit(X[train_index], y[train_index])
    pred = selector.predict(X[val_index])
    acc = accuracy_score(y[val_index], pred)
    cv_acc.append(acc)

cv_acc
# result:
[1.0,
 0.9333333333333333,
 0.9333333333333333,
 1.0,
 0.9333333333333333,
 0.9333333333333333,
 0.8666666666666667,
 1.0,
 0.8666666666666667,
 0.9333333333333333]
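The manual loop above can be written more compactly with a Pipeline passed to cross_val_score, which also fits the selector on each fold's training part only (and addresses the original cross_validate question); a sketch where n_features_to_select=2 is my illustrative choice rather than the value used above:

```python
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

iris = datasets.load_iris()
X, y = iris.data, iris.target

clf = RandomForestClassifier(random_state=0, class_weight="balanced")

# The selector is refit on each fold's training part, so no information
# from the validation part leaks into the feature selection
pipe = Pipeline([
    ('rfe', RFE(estimator=clf, n_features_to_select=2, step=1)),
    ('rf', clf),
])

k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=k_fold)
print(scores.mean())
```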