Using categorical variables in statsmodels OLS class
I would like to use the statsmodels OLS class to create a multiple regression model. Consider the following dataset:
import statsmodels.api as sm
import pandas as pd
import numpy as np
dict = {'industry': ['mining', 'transportation', 'hospitality', 'finance', 'entertainment'],
'debt_ratio':np.random.randn(5), 'cash_flow':np.random.randn(5) + 90}
df = pd.DataFrame.from_dict(dict)
x = df[['debt_ratio', 'industry']]
y = df['cash_flow']
def reg_sm(x, y):
    x = np.array(x).T
    x = sm.add_constant(x)
    results = sm.OLS(endog=y, exog=x).fit()
    return results
When I run the following code:
reg_sm(x, y)
I get the following error:
TypeError: '>=' not supported between instances of 'float' and 'str'
I have tried converting the industry variable to a categorical type, but I still get the error. I'm at a loss.
You are on the right path with converting to a categorical dtype. However, once you convert the DataFrame to a NumPy array, you get an object dtype (the NumPy array is uniformly typed as a whole). This means the individual values are still str underneath, which the regression definitely won't like.
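To make that object-dtype point concrete, here is a minimal REPL sketch (a toy two-row frame for illustration, not the question's data):
>>> import numpy as np
>>> import pandas as pd
>>> toy = pd.DataFrame({'industry': ['mining', 'finance'], 'debt_ratio': [0.5, 1.2]})
>>> np.array(toy).dtype          # mixed columns collapse to a single object dtype
dtype('O')
>>> type(np.array(toy)[0, 0])    # the underlying values are still plain Python str
<class 'str'>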
What you probably want to do is dummify this feature. Rather than factorizing it, which would effectively treat the variable as continuous, you want to keep some semblance of categorization:
>>> import statsmodels.api as sm
>>> import pandas as pd
>>> import numpy as np
>>> np.random.seed(444)
>>> data = {
... 'industry': ['mining', 'transportation', 'hospitality', 'finance', 'entertainment'],
... 'debt_ratio':np.random.randn(5),
... 'cash_flow':np.random.randn(5) + 90
... }
>>> data = pd.DataFrame.from_dict(data)
>>> data = pd.concat((
... data,
... pd.get_dummies(data['industry'], drop_first=True)), axis=1)
>>> # You could also use data.drop('industry', axis=1)
>>> # in the call to pd.concat()
>>> data
industry debt_ratio cash_flow finance hospitality mining transportation
0 mining 0.357440 88.856850 0 0 1 0
1 transportation 0.377538 89.457560 0 0 0 1
2 hospitality 1.382338 89.451292 0 1 0 0
3 finance 1.175549 90.208520 1 0 0 0
4 entertainment -0.939276 90.212690 0 0 0 0
Now you have dtypes that statsmodels can work with. The purpose of drop_first is to avoid the dummy variable trap:
>>> y = data['cash_flow']
>>> x = data.drop(['cash_flow', 'industry'], axis=1)
>>> sm.OLS(y, x).fit()
<statsmodels.regression.linear_model.RegressionResultsWrapper object at 0x115b87cf8>
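Note that sm.OLS(y, x).fit() only returns the results wrapper shown above; to actually inspect the coefficients, keep a reference and print the summary. A small follow-up sketch on the same y and x (there is no intercept here unless you first add one with sm.add_constant(x)):
>>> results = sm.OLS(y, x).fit()
>>> print(results.summary())   # coefficient table: debt_ratio plus one row per dummy column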
Lastly, just a small pointer: it helps to avoid naming references with names that shadow built-in object types, such as dict.
I also had this problem, with a lot of columns that needed to be treated as categorical, which made dummifying them quite annoying. Converting to string didn't work for me either.
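As a side note for that many-columns case: pd.get_dummies can also take the whole DataFrame plus a columns= list, which dummifies every listed column in one call. A minimal sketch with hypothetical column names:
import pandas as pd

# Toy frame with two hypothetical categorical columns
df = pd.DataFrame({
    'industry':   ['mining', 'finance', 'finance'],
    'region':     ['north', 'south', 'north'],
    'debt_ratio': [0.5, 1.2, 0.9],
})

# One call replaces each listed column with its dummy columns
df_dummies = pd.get_dummies(df, columns=['industry', 'region'], drop_first=True)
print(df_dummies)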
For anyone looking for a solution without one-hot encoding the data, the R-style formula interface provides a nice way of doing it:
import statsmodels.formula.api as smf
import pandas as pd
import numpy as np
dict = {'industry': ['mining', 'transportation', 'hospitality', 'finance', 'entertainment'],
'debt_ratio':np.random.randn(5), 'cash_flow':np.random.randn(5) + 90}
df = pd.DataFrame.from_dict(dict)
x = df[['debt_ratio', 'industry']]
y = df['cash_flow']
# NB. unlike sm.OLS, an intercept term is included here by default
smf.ols(formula="cash_flow ~ debt_ratio + C(industry)", data=df).fit()
Reference:
https://www.statsmodels.org/stable/example_formulas.html#categorical-variables
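If you also need to control which category serves as the baseline, patsy's Treatment contrast can be passed inside C(). A sketch reusing the frame from above, with 'finance' as an arbitrarily chosen reference level:
# Pick the reference level explicitly instead of the default (first level)
results = smf.ols(
    formula="cash_flow ~ debt_ratio + C(industry, Treatment(reference='finance'))",
    data=df,
).fit()
print(results.summary())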
This is just another example of a similar case with categorical variables; it gives results that agree with a statistics course taught in R (at Hanken, Finland).
import wooldridge as woo
import statsmodels.formula.api as smf
import numpy as np
df = woo.dataWoo('beauty')
print(df.describe())
df['abvavg'] = (df['looks']>=4).astype(int) # good looking
df['belavg'] = (df['looks']<=2).astype(int) # bad looking
df_female = df[df['female']==1]
df_male = df[df['female']==0]
results_female = smf.ols(formula='np.log(wage) ~ belavg + abvavg', data=df_female).fit()
print(f"FEMALE results, summary \n {results_female.summary()}")
results_male = smf.ols(formula='np.log(wage) ~ belavg + abvavg', data=df_male).fit()
print(f"MALE results, summary \n {results_male.summary()}")
Regards, Markus