Why is my validation accuracy so much lower when I switch from doing all in-memory learning to a data generator?
I have a dataset with 2 columns:
1.) A column of strings made up of 21 different letters.
2.) A category column: each of these strings is associated with a number from 1-7.
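For concreteness, a couple of rows might look like this (the sequences and labels below are made up purely for illustration):

import pandas as pd

# hypothetical rows: a 'Sequence' string and its class label 'ec_lvl_1' (1-7)
example_df = pd.DataFrame({
    'Sequence': ['MKTAYIAKQR', 'GAVLIWPFSTCY'],
    'ec_lvl_1': [3, 1]
})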
Using the code below, I first do the integer encoding.
# imports assumed for completeness (adjust the paths to your own Keras setup)
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical, Sequence
from tensorflow.keras.callbacks import EarlyStopping

codes = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L',
         'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y']

def create_dict(codes):
    char_dict = {}
    for index, val in enumerate(codes):
        char_dict[val] = index + 1
    return char_dict

def integer_encoding(data):
    """
    - Encodes code sequence to integer values.
    - 20 common amino acids are taken into consideration
      and rest 4 are categorized as 0.
    """
    encode_list = []
    for row in data['Sequence'].values:
        row_encode = []
        for code in row:
            row_encode.append(char_dict.get(code, 0))
        encode_list.append(np.array(row_encode))
    return encode_list
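As a quick sanity check of what this produces, here is a hypothetical example (the sequence 'ACDX' is made up; it assumes pandas is imported and char_dict has been built with create_dict):

import pandas as pd

char_dict = create_dict(codes)
# 'A' -> 1, 'C' -> 2, 'D' -> 3, and the unknown letter 'X' -> 0
example = pd.DataFrame({'Sequence': ['ACDX']})
print(integer_encoding(example))   # [array([1, 2, 3, 0])]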
With this code, I do the integer encoding and then the one-hot encoding, all in memory.
char_dict = create_dict(codes)
train_encode = integer_encoding(balanced_train_df.reset_index())
val_encode = integer_encoding(val_df.reset_index())
train_pad = pad_sequences(train_encode, maxlen=max_length, padding='post', truncating='post')
val_pad = pad_sequences(val_encode, maxlen=max_length, padding='post', truncating='post')
train_ohe = to_categorical(train_pad)
val_ohe = to_categorical(val_pad)
Then I train my learner like this.
es = EarlyStopping(monitor='val_loss', patience=3, verbose=1)
history2 = model2.fit(
    train_ohe, y_train,
    epochs=50, batch_size=64,
    validation_data=(val_ohe, y_val),
    callbacks=[es]
)
That gets my validation accuracy to plateau at around 86%.
Even the first epoch looks like this:
Train on 431403 samples, validate on 50162 samples
Epoch 1/50
431403/431403 [==============================] - 187s 434us/sample - loss: 1.3532 - accuracy: 0.6947 - val_loss: 0.9443 - val_accuracy: 0.7730
Note the validation accuracy of 77% after the first epoch.
But because my dataset is rather large, I end up consuming 50+ GB of memory, since I load the entire dataset into memory and keep every transformed copy of it in memory as well.
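As a rough back-of-envelope check of that footprint, assuming max_length is 660 (the same length I use for dim in the generator below) and that to_categorical yields float32 arrays:

# one-hot training tensor: 431403 sequences x 660 positions x 21 classes x 4 bytes
train_ohe_bytes = 431403 * 660 * 21 * 4
print(train_ohe_bytes / 1e9)   # ~23.9 GB for train_ohe alone

# val_ohe, the padded integer arrays, and the temporary copies made during the
# transformations come on top of that, so 50+ GB is easy to reach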
To learn in a more memory-efficient way, I introduced a data generator, like so:
class DataGenerator(Sequence):
    'Generates data for Keras'
    def __init__(self, list_IDs, data_col, labels, batch_size=32, dim=(32,32,32), n_channels=1,
                 n_classes=10, shuffle=True):
        'Initialization'
        self.dim = dim
        self.batch_size = batch_size
        self.data_col_name = data_col
        self.labels = labels
        self.list_IDs = list_IDs
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.list_IDs) / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # Find list of IDs
        list_IDs_temp = [self.list_IDs[k] for k in indexes]
        # Generate data
        X, y = self.__data_generation(list_IDs_temp)
        return X, y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
        # Initialization
        X = np.empty((self.batch_size, *self.dim))
        y = np.empty(self.batch_size, dtype=int)
        # Generate data
        for i, ID in enumerate(list_IDs_temp):
            # Store sample
            # Read sequence string and convert to array
            # of padded categorical data in array
            int_encode_dt = integer_encoding(integer_encoding([balanced_train_df.loc[ID, self.data_col_name]]))
            padded_dt = pad_sequences(int_encode_dt, maxlen=660, padding='post', truncating='post')
            categorical_dt = to_categorical(padded_dt)
            X[i,] = categorical_dt
            # Store class
            y[i] = self.labels[ID]-1
        return X, to_categorical(y, num_classes=self.n_classes)
The code is adapted from here: https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
Then I kick off training like this:
params = {'dim': (660, 21),  # sequences are at most 660 long; 20 common amino acids plus the 0 class for padding/others = 21 channels
          'batch_size': 32,
          'n_classes': 7,
          'n_channels': 1,
          'shuffle': False}

training_generator = DataGenerator(balanced_train_df.index, 'Sequence', balanced_train_df['ec_lvl_1'], **params)
validate_generator = DataGenerator(val_df.index, 'Sequence', val_df['ec_lvl_1'], **params)

# Early Stopping
es = EarlyStopping(monitor='val_loss', patience=3, verbose=1)

history2 = model2.fit(
    training_generator,
    validation_data=validate_generator,
    use_multiprocessing=True,
    workers=6,
    epochs=50,
    callbacks=[es]
)
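For reference on the step counts: with batch_size=32 and 431403 training samples, the generator's __len__ comes out to floor(431403 / 32) = 13481 batches per epoch, which is exactly the step count shown in the log below.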
The problem is that with the data generator my validation accuracy never gets above 15%.
Epoch 1/10
13469/13481 [============================>.] - ETA: 0s - loss: 2.0578 - accuracy: 0.1427
13481/13481 [==============================] - 242s 18ms/step - loss: 2.0578 - accuracy: 0.1427 - val_loss: 1.9447 - val_accuracy: 0.0919
Notice that the validation accuracy is only 9%.
My question is: why is this happening? One thing I can't explain:
When I do all-in-memory learning, I set the batch size to 32 or 64, yet the number shown per epoch is still about 431k (the total number of training samples). But when I use the data generator, I get a much smaller number, roughly 431k samples / batch size. Is this telling me that the in-memory case isn't really using the batch_size parameter? Thanks for any explanation.
A series of silly mistakes caused this difference, and they all live in this one line:
int_encode_dt = integer_encoding(integer_encoding([balanced_train_df.loc[ID, self.data_col_name]]))
Mistake 1: I should have passed in the dataframe I want to process, so that I could feed in either the training or the validation data. The way I was doing it before, even when I thought I was passing in the validation data, I was still using the training data.
Mistake 2: I was integer-encoding my data twice (doh!)
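For completeness, here is a minimal sketch of what that line turns into after the fix. It assumes two things that are my additions rather than part of the original class: the generator gets a dataframe constructor argument (balanced_train_df for the training generator, val_df for the validation one), and a small encode_one helper that integer-encodes a single sequence string. I also pin num_classes=21 so every sample one-hot encodes to the same width.

def encode_one(sequence):
    # integer-encode one sequence string; letters outside the 20 codes map to 0
    return np.array([char_dict.get(code, 0) for code in sequence])

# in DataGenerator.__init__ (new argument, my addition):
#     self.dataframe = dataframe   # balanced_train_df or val_df, passed in explicitly

# in DataGenerator.__data_generation:
int_encode_dt = encode_one(self.dataframe.loc[ID, self.data_col_name])     # encode once, not twice
padded_dt = pad_sequences([int_encode_dt], maxlen=660, padding='post', truncating='post')
categorical_dt = to_categorical(padded_dt, num_classes=21)                 # shape (1, 660, 21)
X[i,] = categorical_dt[0]

With the dataframe passed in explicitly, the validation generator no longer silently reads from balanced_train_df, and each sequence is encoded exactly once, which is what brings the generator run back in line with the in-memory results.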