
Python 3 and Sklearn: difficulty using a non-sklearn model as a sklearn model

The code below works. I simply have a routine that runs a cross-validation scheme using a linear model previously defined in sklearn, and I have no problem with that. My problem is this: if I replace model=linear_model.LinearRegression() with model=RBF('multiquadric') (see the two model lines in __main__), it no longer works. So my problem is really in class RBF, where I try to mimic a sklearn model.

If I make that replacement, I get the following error:

/home/daniel/anaconda3/lib/python3.7/site-packages/sklearn/model_selection/_validation.py:536: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. Details: 
ValueError: All arrays must be equal length.

  FitFailedWarning)

1) Should I define a score function in class RBF?

2) If so, how? I am lost here. Since I inherit from BaseEstimator and RegressorMixin, I expected this to be handled internally.

3) Is anything else missing?
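For context on question 2: RegressorMixin does supply a default score method (R²) built on top of predict, so no custom scorer is needed once fit and predict behave correctly. A minimal sketch with a made-up trivial regressor:

```python
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin

class MeanRegressor(BaseEstimator, RegressorMixin):
    """Toy estimator: always predicts the training-set mean."""
    def fit(self, X, y):
        self.mean_ = np.mean(y)
        return self  # sklearn convention: fit returns self
    def predict(self, X):
        return np.full(len(X), self.mean_)

X = np.zeros((4, 1))
y = np.array([1.0, 2.0, 3.0, 4.0])
# score() comes from RegressorMixin and computes R^2 via predict()
print(MeanRegressor().fit(X, y).score(X, y))  # 0.0 (predicting the mean)
```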

from sklearn import datasets
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn import model_selection
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from scipy.interpolate import Rbf
np.random.seed(0)
from sklearn.base import BaseEstimator, RegressorMixin



class RBF(BaseEstimator, RegressorMixin):
    def __init__(self,function):
        self.function=function
    def fit(self,x,y):
        self.rbf = Rbf(x, y,function=self.function)
    def predict(self,x):   
        return self.rbf(x)    


if __name__ == "__main__":
    # Load Data
    targetName='HousePrice'
    data=datasets.load_boston()
    featuresNames=list(data.feature_names)
    featuresData=data.data
    targetData = data.target
    df=pd.DataFrame(featuresData,columns=featuresNames)
    df[targetName]=targetData
    independent_variable_list=featuresNames
    dependent_variable=targetName
    X=df[independent_variable_list].values
    y=np.squeeze(df[[dependent_variable]].values)    
    # Model Definition    
    model=linear_model.LinearRegression()
    #model=RBF('multiquadric')    
    # Cross validation routine
    number_splits=5
    score_list=['neg_mean_squared_error','neg_mean_absolute_error','r2']
    kfold = model_selection.KFold(n_splits=number_splits,shuffle=True, random_state=0)
    scalar = StandardScaler()
    pipeline = Pipeline([('transformer', scalar), ('estimator', model)])
    results = model_selection.cross_validate(pipeline, X, y, cv=kfold, scoring=score_list,return_train_score=True)
    for score in score_list:
        print(score+':')
        print('Train: '+'Mean',np.mean(results['train_'+score]),'Standard Error',np.std(results['train_'+score]))
        print('Test: '+'Mean',np.mean(results['test_'+score]),'Standard Error',np.std(results['test_'+score]))

Let's look at the documentation here:

*args : arrays

x, y, z, …, d, where x, y, z, … are the coordinates of the nodes and d is the array of values at the nodes

So it takes variable-length arguments, and the last argument is the array of values, which is y in your case. Argument k is the kth coordinate of all the data points (and the same holds for the other arguments x, y, z, …).
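To make that calling convention concrete, here is a toy sketch with made-up random data: each coordinate axis is passed as its own 1-D array, followed by the values.

```python
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.RandomState(0)
X = rng.rand(5, 2)   # 5 data points with 2 coordinates each
y = rng.rand(5)      # one value per point

# One positional argument per coordinate axis, then the values: Rbf(x0, x1, d)
rbf = Rbf(X[:, 0], X[:, 1], y, function='multiquadric')

# With the default smooth=0 the interpolant passes through the nodes exactly
print(np.allclose(rbf(X[:, 0], X[:, 1]), y))  # True
```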

Based on the documentation, your code should be:

from sklearn import datasets
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn import model_selection
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from scipy.interpolate import Rbf
np.random.seed(0)
from sklearn.base import BaseEstimator, RegressorMixin

class RBF(BaseEstimator, RegressorMixin):
    def __init__(self,function):
        self.function=function
    def fit(self, X, y):
        self.rbf = Rbf(*X.T, y, function=self.function)
        return self  # sklearn convention: fit returns the fitted estimator

    def predict(self, X):
        return self.rbf(*X.T)


# Load Data
data=datasets.load_boston()

X = data.data
y = data.target


number_splits=5
score_list=['neg_mean_squared_error','neg_mean_absolute_error','r2']

kfold = model_selection.KFold(n_splits=number_splits,shuffle=True, random_state=0)
scalar = StandardScaler()

model = RBF(function='multiquadric')

pipeline = Pipeline([('transformer', scalar), ('estimator', model)])

results = model_selection.cross_validate(pipeline, X, y, cv=kfold, scoring=score_list,return_train_score=True)

for score in score_list:
    print(score+':')
    print('Train: '+'Mean',np.mean(results['train_'+score]),'Standard Error',np.std(results['train_'+score]))
    print('Test: '+'Mean',np.mean(results['test_'+score]),'Standard Error',np.std(results['test_'+score]))

Output:

neg_mean_squared_error:
Train: Mean -1.552450953914355e-20 Standard Error 7.932530906290208e-21
Test: Mean -23.007377210596463 Standard Error 4.254629143836107
neg_mean_absolute_error:
Train: Mean -9.398502208736061e-11 Standard Error 2.4673749061941226e-11
Test: Mean -3.1319779583728673 Standard Error 0.2162343985534446
r2:
Train: Mean 1.0 Standard Error 0.0
Test: Mean 0.7144217179633185 Standard Error 0.08526294242760363

Why *X.T: as we saw, each argument corresponds to one coordinate axis across all data points, so we transpose X and then use the * operator to unpack each sub-array and pass it as a separate argument to the variadic function.
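A tiny sketch of what *X.T does, using an assumed 3×2 array:

```python
import numpy as np

X = np.arange(6).reshape(3, 2)  # 3 samples, 2 features
print(X.T.shape)                # (2, 3): one row per coordinate axis

def n_args(*args):
    return len(args)

# Unpacking X.T passes one 1-D array per axis as a separate argument
print(n_args(*X.T))             # 2
```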

It looks like the latest implementation also has a mode parameter, which lets us pass the value array as an N-D array directly.