How to pass two estimator objects to sklearn's GridSearchCV so that they have the same parameters in each step?
I am trying to tune hyperparameters for my estimator using sklearn's GridSearchCV.

In the first step, the estimator is used for SequentialFeatureSelection, a custom library that performs wrapper-based feature selection. This means iteratively adding new features and determining the ones with which the estimator performs best. Hence, the SequentialFeatureSelection method requires my estimator. The library is programmed to work well with sklearn, so I integrate it as the first step of the GridSearchCV pipeline to transform the features down to the selected ones.

In the second step, I want to use exactly the same classifier with exactly the same parameters to fit and predict the outcome. However, with the parameter grid I can only set parameters either on the classifier passed to SequentialFeatureSelector or on 'clf', and I cannot guarantee that they are always the same.

Finally, with the selected features and selected parameters, I want to predict on the previously held-out test set.

At the bottom of the page of the SFS library they show how to use SFS with GridSearchCV, but there the KNN algorithm used to select the features and the one used for prediction also use different parameters. When I check for myself after training SFS and GridSearchCV, the parameters are never the same, even when I use clf.clone() as suggested. Here is my code:
import sklearn.pipeline
import sklearn.tree
import sklearn.model_selection
import mlxtend.feature_selection

def sfs(x, y):
    x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(
        x, y, test_size=0.2, random_state=0)
    clf = sklearn.tree.DecisionTreeClassifier()
    param_grid = {
        "sfs__estimator__max_depth": [5]
    }
    sfs = mlxtend.feature_selection.SequentialFeatureSelector(
        clone_estimator=True,  # Clone like in the tutorial
        estimator=clf,
        k_features=10,
        forward=True,
        floating=False,
        scoring='accuracy',
        cv=3,
        n_jobs=1)
    pipe = sklearn.pipeline.Pipeline([('sfs', sfs), ("clf", clf)])
    gs = sklearn.model_selection.GridSearchCV(
        estimator=pipe,
        param_grid=param_grid,
        scoring='accuracy',
        n_jobs=1,
        cv=3,
        refit=True)
    gs = gs.fit(x_train, y_train)
    # Both estimators should have depth 5!
    print("SFS Final Estimator Depth: " + str(gs.best_estimator_.named_steps.sfs.estimator.max_depth))
    print("CLF Final Estimator Depth: " + str(gs.best_estimator_._final_estimator.max_depth))
    # Evaluate...
    y_test_pred = gs.predict(x_test)
    # Accuracy etc...
The question is: how do I make sure that the two estimators always have the same parameters set within the same pipeline?

Thanks!
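To illustrate what I mean, here is a minimal sketch (using sklearn's built-in SequentialFeatureSelector in place of the mlxtend one, since the nesting is identical). GridSearchCV clones the whole pipeline before fitting, and cloning turns the single shared clf object into two independent copies, so a grid key that reaches one copy never reaches the other:

```python
# Minimal sketch of why the two steps diverge. GridSearchCV clones the whole
# pipeline before fitting, and cloning turns the single shared clf object into
# two independent copies, so a grid key that reaches one copy never reaches
# the other. sklearn's built-in SequentialFeatureSelector stands in for the
# mlxtend one here; the nesting is the same.
from sklearn.base import clone
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier()
pipe = Pipeline([
    ("sfs", SequentialFeatureSelector(estimator=clf, n_features_to_select=2)),
    ("clf", clf),  # same object twice -- but only until the pipeline is cloned
])

cloned = clone(pipe)  # this is what GridSearchCV does internally
cloned.set_params(sfs__estimator__max_depth=5)  # the grid key from my param_grid

print(cloned.get_params()["sfs__estimator__max_depth"])  # 5
print(cloned.get_params()["clf__max_depth"])             # None: untouched default
```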
I found a solution in which I override some methods of the SequentialFeatureSelector (SFS) class, so that its estimator is also used for prediction after the transform. This is done by introducing a custom SFS class, 'CSequentialFeatureSelector', which overrides the following methods of SFS:

In the fit(self, X, y) method, not only is the normal fit performed, but self.estimator is also fitted on the transformed data, so that predict and predict_proba can be implemented for the SFS class.

I implemented predict and predict_proba methods for the SFS class that call the predict and predict_proba methods of the fitted self.estimator.

Hence, I am left with only one estimator, used both for SFS and for prediction.

Here is part of the code:
import sklearn.pipeline
import sklearn.tree
import sklearn.model_selection
import mlxtend.feature_selection

class CSequentialFeatureSelector(mlxtend.feature_selection.SequentialFeatureSelector):
    def predict(self, X):
        X = self.transform(X)
        return self.estimator.predict(X)

    def predict_proba(self, X):
        X = self.transform(X)
        return self.estimator.predict_proba(X)

    def fit(self, X, y):
        # fit_helper is the 'old' fit method, which I copied and renamed to fit_helper
        self.fit_helper(X, y)
        # Also fit the inner estimator on the transformed data, so this object
        # can predict on its own
        self.estimator.fit(self.transform(X), y)
        return self

def sfs(x, y):
    x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(
        x, y, test_size=0.2, random_state=0)
    clf = sklearn.tree.DecisionTreeClassifier()
    param_grid = {
        "sfs__estimator__max_depth": [3, 4, 5]
    }
    # Instantiate the custom subclass, not the base SFS class
    sfs = CSequentialFeatureSelector(
        clone_estimator=True,
        estimator=clf,
        k_features=10,
        forward=True,
        floating=False,
        scoring='accuracy',
        cv=3,
        n_jobs=1)
    # Now only one object in the pipeline (in fact the pipeline is not even needed anymore)
    pipe = sklearn.pipeline.Pipeline([('sfs', sfs)])
    gs = sklearn.model_selection.GridSearchCV(
        estimator=pipe,
        param_grid=param_grid,
        scoring='accuracy',
        n_jobs=1,
        cv=3,
        refit=True)
    gs = gs.fit(x_train, y_train)
    print("SFS Final Estimator Depth: " + str(gs.best_estimator_.named_steps.sfs.estimator.max_depth))
    y_test_pred = gs.predict(x_test)
    # Evaluate performance of y_test_pred
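For completeness, another way to keep the two steps in lockstep without subclassing (a sketch I have not tried against the mlxtend pipeline): GridSearchCV accepts param_grid as a list of dicts and explores each dict separately, so enumerating one single-combination dict per depth forces both parameter paths of the original two-step pipeline to always take the same value:

```python
# Hypothetical paired grid for the original two-step pipeline ('sfs' + 'clf'):
# each dict in the list pins both parameter paths to the same depth, so
# GridSearchCV never tries a combination where the two estimators differ.
depths = [3, 4, 5]
param_grid = [
    {"sfs__estimator__max_depth": [d], "clf__max_depth": [d]}
    for d in depths
]
print(param_grid)
```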