Validation loss increases and validation accuracy decreases every epoch in my RNN model
I am working on abusive and violent content detection. When I train my model, the training log looks like this:
Train on 9087 samples, validate on 2125 samples
Epoch 1/5
9087/9087 [==============================] - 33s 4ms/step - loss: 0.3193 - accuracy: 0.8603 - val_loss: 0.2314 - val_accuracy: 0.9322
Epoch 2/5
9087/9087 [==============================] - 33s 4ms/step - loss: 0.1787 - accuracy: 0.9440 - val_loss: 0.2039 - val_accuracy: 0.9356
Epoch 3/5
9087/9087 [==============================] - 32s 4ms/step - loss: 0.1148 - accuracy: 0.9637 - val_loss: 0.2569 - val_accuracy: 0.9180
Epoch 4/5
9087/9087 [==============================] - 33s 4ms/step - loss: 0.0805 - accuracy: 0.9738 - val_loss: 0.3409 - val_accuracy: 0.9047
Epoch 5/5
9087/9087 [==============================] - 36s 4ms/step - loss: 0.0599 - accuracy: 0.9795 - val_loss: 0.3661 - val_accuracy: 0.9082
As the log shows, the training loss keeps falling and training accuracy keeps rising, while from the second epoch onward the validation loss rises and validation accuracy falls.
Model code:
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout
from keras import optimizers

model = Sequential()
model.add(Embedding(8941, 256, input_length=20))
model.add(LSTM(32, dropout=0.1, recurrent_dropout=0.1))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(4, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.Adam(lr=0.001),
              metrics=['accuracy'])
# fit expects (features, labels); the validation pair is passed the same way
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=5,
                    verbose=1,
                    validation_data=(x_test, y_test))
Any help would be appreciated.
This really depends on your data, but it looks like the model overfits the training set quickly (after the second epoch).
Try:
- lowering your learning rate
- increasing the batch size
- adding regularization
- increasing the dropout rates
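As a sketch of the first, third, and fourth suggestions, keeping the layer sizes from the question's model (the specific L2 strength, dropout rates, and learning rate below are assumed starting points, not tuned values):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Embedding(8941, 256),
    LSTM(32, dropout=0.3, recurrent_dropout=0.3),               # raised from 0.1
    Dense(32, activation='relu', kernel_regularizer=l2(1e-4)),  # added L2 penalty
    Dropout(0.5),                                               # raised from 0.4
    Dense(4, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy',
              optimizer=Adam(learning_rate=1e-4),               # lowered from 1e-3
              metrics=['accuracy'])
```

Adding an `EarlyStopping` callback that monitors `val_loss` would also stop training around epoch 2, before the overfitting sets in.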
Also, you seem to be using binary_crossentropy
while your model produces a length-4 output for each sample: model.add(Dense(4, activation='sigmoid'))
This can also cause problems.
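A sketch of the fix, assuming the 4 outputs are mutually exclusive classes (one label per sample); if each sample can carry several labels at once, the original sigmoid + binary_crossentropy head is actually correct:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(8941, 256),
    LSTM(32),
    # softmax makes the 4 outputs a single probability distribution
    Dense(4, activation='softmax'),
])
model.compile(loss='categorical_crossentropy',  # expects one-hot labels
              optimizer='adam',
              metrics=['accuracy'])
# With integer class labels instead of one-hot vectors, use
# loss='sparse_categorical_crossentropy' instead.
```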