How to use LabelBinarizer to one hot encode both train and test correctly
Suppose I have a training set like this:
Name | day
------------
First | 0
Second | 1
Third | 1
Forth | 2
and a test set that does not contain all of those names or days, like this:
Name | day
------------
First | 2
Second | 1
Forth | 0
I have the following code to transform these columns into encoded features:
features_to_encode = ['Name', 'day']
label_final = pd.DataFrame()
for feature in features_to_encode:
    label_campaign = LabelBinarizer()
    label_results = label_campaign.fit_transform(df[feature])
    label_results = pd.DataFrame(label_results, columns=label_campaign.classes_)
    label_final = pd.concat([label_final, label_results], axis=1)
df_encoded = label_final.join(df)
which produces the following output on the training set (which is fine):
First | Second | Third | Forth | 0 | 1 | 2 |
-----------------------------------------------
1 | 0 | 0 | 0 | 1 | 0 | 0 |
0 | 1 | 0 | 0 | 0 | 1 | 0 |
0 | 0 | 1 | 0 | 0 | 1 | 0 |
0 | 0 | 0 | 1 | 0 | 0 | 1 |
However, when I run this on the test data (new data), I get mismatched features if the test data does not contain exactly the same names and days as the training data. So if I run similar code on this test sample, I get:
First | Second | Forth | 0 | 1 | 2 |
--------------------------------------
1 | 0 | 0 | 0 | 0 | 1 |
0 | 1 | 0 | 0 | 1 | 0 |
0 | 0 | 1 | 1 | 0 | 0 |
What can I do to keep the same transformation fitted on the training data and apply it correctly to the test data, producing this desired output:
First | Second | Third | Forth | 0 | 1 | 2 |
-----------------------------------------------
1 | 0 | 0 | 0 | 0 | 0 | 1 |
0 | 1 | 0 | 0 | 0 | 1 | 0 |
0 | 0 | 0 | 1 | 1 | 0 | 0 |
I have tried adding a dictionary to capture the fit_transform results, but I am not sure whether that works or what to do with it afterwards:
features_to_encode = ['Name', 'day']
label_final = pd.DataFrame()
labels = {}  # ------------------------------------------------------> TRIED THIS
for feature in features_to_encode:
    label_campaign = LabelBinarizer()
    label_results = label_campaign.fit_transform(df[feature])
    labels[feature] = label_results  # ------------------------------> WITH THIS
    label_results = pd.DataFrame(label_results, columns=label_campaign.classes_)
    label_final = pd.concat([label_final, label_results], axis=1)
df_encoded = label_final.join(df)
Any help is appreciated. Thanks =)
Something like this should work. I usually work with dataframes until the very last step, since they are easier to handle. X should be the test dataframe right before prediction, and original_cols should be the list of your training set's columns. Let me know if it works for you.
def normalize_X(X, original_cols):
    missing_cols = set(original_cols) - set(X.columns)
    extra_cols = set(X.columns) - set(original_cols)
    if len(missing_cols) > 0:
        print(f'missing columns: {", ".join(map(str, missing_cols))}')
        for col in missing_cols:
            X[col] = 0
    if len(extra_cols) > 0:
        print(f'Columns to drop: {", ".join(map(str, extra_cols))}')
        X = X.drop(columns=extra_cols)
    X = X[original_cols]
    return X
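As a quick self-contained check (the helper is restated in condensed form so the snippet runs on its own, and the column names mirror the question's example), applying it to test dummies that are missing Third:

```python
import pandas as pd

def normalize_X(X, original_cols):
    # add train-only columns as all-zero, drop test-only columns, fix the order
    missing_cols = set(original_cols) - set(X.columns)
    extra_cols = set(X.columns) - set(original_cols)
    for col in missing_cols:
        X[col] = 0
    if extra_cols:
        X = X.drop(columns=extra_cols)
    return X[original_cols]

# one-hot output produced on the test set alone, with no 'Third' column
X_test = pd.DataFrame({'First': [1, 0, 0], 'Second': [0, 1, 0], 'Forth': [0, 0, 1]})
train_cols = ['First', 'Second', 'Third', 'Forth']
X_fixed = normalize_X(X_test, train_cols)
print(list(X_fixed.columns))  # ['First', 'Second', 'Third', 'Forth']
```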
Using pd.CategoricalDtype and pd.get_dummies
names_cat = pd.CategoricalDtype(['First', 'Second', 'Third', 'Forth'])
days_cat = pd.CategoricalDtype([0, 1, 2, 3, 4])
dumb_names = pd.get_dummies(df.Name.astype(names_cat))
dumb_names.columns = dumb_names.columns.to_numpy()
dumb_days = pd.get_dummies(df.day.astype(days_cat))
dumb_days.columns = dumb_days.columns.to_numpy()
First Second Third Forth 0 1 2 3 4
0 1 0 0 0 0 0 1 0 0
1 0 1 0 0 0 1 0 0 0
2 0 0 0 1 1 0 0 0 0
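The benefit of pinning the categories in the dtype is that the test frame comes out with the full column set even for values it never contains. A minimal self-contained sketch under the same assumed dtypes (here df stands in for the question's test set):

```python
import pandas as pd

names_cat = pd.CategoricalDtype(['First', 'Second', 'Third', 'Forth'])
days_cat = pd.CategoricalDtype([0, 1, 2, 3, 4])

df = pd.DataFrame({'Name': ['First', 'Second', 'Forth'], 'day': [2, 1, 0]})
dummies = pd.concat(
    [pd.get_dummies(df.Name.astype(names_cat)),
     pd.get_dummies(df.day.astype(days_cat))],
    axis=1,
)
print(dummies.shape)  # (3, 9) -- 'Third', 3 and 4 appear as all-zero columns
```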
Using LabelBinarizer.classes_
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelBinarizer
lb_0 = LabelBinarizer()
lb_1 = LabelBinarizer()
lb_0.classes_ = ['First', 'Second', 'Third', 'Forth']
lb_1.classes_ = [0, 1, 2, 3, 4]
a = lb_0.transform(df.Name)
b = lb_1.transform(df.day)
data = np.column_stack([a, b])
idx = df.index
col = np.concatenate([lb_0.classes_, lb_1.classes_])
result = pd.DataFrame(data, idx, col)
result
First Second Third Forth 0 1 2 3 4
0 1 0 0 0 0 0 1 0 0
1 0 1 0 0 0 1 0 0 0
2 0 0 0 1 1 0 0 0 0
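A closely related pattern, sketched here as an assumption rather than part of the answer above, is to fit the binarizers on the training columns once and reuse the fitted objects on the test columns; note that fit sorts the learned classes alphabetically, so the column order differs from the manual ordering above:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelBinarizer

df_train = pd.DataFrame({'Name': ['First', 'Second', 'Third', 'Forth'], 'day': [0, 1, 1, 2]})
df_test = pd.DataFrame({'Name': ['First', 'Second', 'Forth'], 'day': [2, 1, 0]})

lb_name = LabelBinarizer().fit(df_train.Name)  # classes_ learned from train only
lb_day = LabelBinarizer().fit(df_train.day)

a = lb_name.transform(df_test.Name)  # 'Third' never appears, but keeps its column
b = lb_day.transform(df_test.day)
cols = np.concatenate([lb_name.classes_, lb_day.classes_.astype(str)])
result = pd.DataFrame(np.column_stack([a, b]), df_test.index, cols)
print(list(result.columns))  # ['First', 'Forth', 'Second', 'Third', '0', '1', '2']
```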
Using reindex
cols = ['First', 'Second', 'Third', 'Forth', 0, 1, 2]
result = pd.concat(map(pd.get_dummies, map(df.get, df)), axis=1)
result.reindex(columns=cols, fill_value=0)
First Second Third Forth 0 1 2
0 1 0 0 0 0 0 1
1 0 1 0 0 0 1 0
2 0 0 0 1 1 0 0
Another approach, perhaps better suited to cases where different variables share common values, and where you plan to automate the encoding of multiple columns:
df_train = pd.DataFrame({'Name': ['First', 'Second', 'Third', 'Fourth'], 'Day': [2,1,1,2]})
df_test = pd.DataFrame({'Name': ['First', 'Second', 'Fourth'], 'Day': [2,1,0]})
categories = []
cols_to_encode = ['Name', 'Day']
# Union of all values in both training and testing datasets:
for col in cols_to_encode:
    categories.append(list(set(df_train[col].unique().tolist() + df_test[col].unique().tolist())))
# Sorts the class names under each variable
for cat in categories:
    cat.sort()
for col_name, cat in zip(cols_to_encode, categories):
    df_test[col_name] = pd.Categorical(df_test[col_name], categories=cat)
df_test = pd.get_dummies(df_test, columns=cols_to_encode)
df_test
>>
   Name_First  Name_Fourth  Name_Second  Name_Third  Day_0  Day_1  Day_2
0           1            0            0           0      0      0      1
1           0            0            1           0      0      1      0
2           0            1            0           0      1      0      0
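To get matching train and test matrices from this approach, the same category lists can be applied to both frames before get_dummies. A condensed sketch of that step, using the same toy data:

```python
import pandas as pd

df_train = pd.DataFrame({'Name': ['First', 'Second', 'Third', 'Fourth'], 'Day': [2, 1, 1, 2]})
df_test = pd.DataFrame({'Name': ['First', 'Second', 'Fourth'], 'Day': [2, 1, 0]})
cols_to_encode = ['Name', 'Day']

# sorted union of the values seen in either frame, per column
categories = [sorted(set(df_train[c]) | set(df_test[c])) for c in cols_to_encode]

for frame in (df_train, df_test):
    for col, cat in zip(cols_to_encode, categories):
        frame[col] = pd.Categorical(frame[col], categories=cat)

train_enc = pd.get_dummies(df_train, columns=cols_to_encode)
test_enc = pd.get_dummies(df_test, columns=cols_to_encode)
print(list(train_enc.columns) == list(test_enc.columns))  # True
```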