Keras 1D CNN always predicts the same result even if accuracy is high on training set

My 1D CNN's validation accuracy is stuck at 0.5, and that's because it always outputs the same prediction on a balanced dataset. Meanwhile, its training accuracy keeps increasing and the loss decreases as expected.

Strangely, if I run model.evaluate() on my training set (whose accuracy was close to 1 in the last epoch), the accuracy is also 0.5. How can the accuracy here differ so much from the training accuracy of the last epoch? I have also tried training and evaluating with a batch size of 1, but the problem persists.

I've been searching for different solutions for quite a while now, but still nothing. Possible problems I have already looked into:

  1. My dataset is properly balanced and shuffled;
  2. My labels are correct;
  3. Tried adding fully connected layers;
  4. Tried adding/removing dropout from the fully connected layers;
  5. Tried the same architecture, but with 1 neuron and a sigmoid activation in the last layer;
  6. Tried changing the learning rate (went down to 0.0001, but still the same problem).

Here's my code:

import pathlib
import numpy as np
import ipynb.fs.defs.preprocessDataset as preprocessDataset
import pickle
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras import Input
from tensorflow.keras.layers import Conv1D, BatchNormalization, Activation, MaxPooling1D, Flatten, Dropout, Dense
from tensorflow.keras.optimizers import SGD

main_folder = pathlib.Path.cwd().parent
datasetsFolder = f'{main_folder}\\datasets'  # escape the backslash: '\d' is an invalid escape sequence
trainDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Train',datasetsFolder)
testDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Test',datasetsFolder)

X_train,Y_train,Names_train=trainDataset[0],trainDataset[1],trainDataset[2]
X_test,Y_test,Names_test=testDataset[0],testDataset[1],testDataset[2]

model = Sequential()

model.add(Input(shape=X_train.shape[1:]))

model.add(Conv1D(16, 61, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(32, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dropout(0.5))

model.add(Dense(200))
model.add(Activation('relu'))

model.add(Dense(2))
model.add(Activation('softmax'))

opt = SGD(learning_rate=0.01)

model.compile(loss='binary_crossentropy',optimizer=opt,metrics=['accuracy'])

model.summary()

model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))

model.evaluate(X_train,Y_train)

Here's model.fit():

model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))

Epoch 1/10
914/914 [==============================] - 277s 300ms/step - loss: 0.6405 - accuracy: 0.6543 - val_loss: 7.9835 - val_accuracy: 0.5000
Epoch 2/10
914/914 [==============================] - 270s 295ms/step - loss: 0.3997 - accuracy: 0.8204 - val_loss: 19.8981 - val_accuracy: 0.5000
Epoch 3/10
914/914 [==============================] - 273s 298ms/step - loss: 0.2976 - accuracy: 0.8730 - val_loss: 1.9558 - val_accuracy: 0.5002
Epoch 4/10
914/914 [==============================] - 278s 304ms/step - loss: 0.2897 - accuracy: 0.8776 - val_loss: 20.2678 - val_accuracy: 0.5000
Epoch 5/10
914/914 [==============================] - 277s 303ms/step - loss: 0.2459 - accuracy: 0.8991 - val_loss: 5.4945 - val_accuracy: 0.5000
Epoch 6/10
914/914 [==============================] - 268s 294ms/step - loss: 0.2008 - accuracy: 0.9181 - val_loss: 32.4579 - val_accuracy: 0.5000
Epoch 7/10
914/914 [==============================] - 271s 297ms/step - loss: 0.1695 - accuracy: 0.9317 - val_loss: 14.9538 - val_accuracy: 0.5000
Epoch 8/10
914/914 [==============================] - 276s 302ms/step - loss: 0.1423 - accuracy: 0.9452 - val_loss: 1.4420 - val_accuracy: 0.4988
Epoch 9/10
914/914 [==============================] - 266s 291ms/step - loss: 0.1261 - accuracy: 0.9497 - val_loss: 4.3830 - val_accuracy: 0.5005
Epoch 10/10
914/914 [==============================] - 272s 297ms/step - loss: 0.1142 - accuracy: 0.9548 - val_loss: 1.6054 - val_accuracy: 0.5009

Here's model.evaluate():

model.evaluate(X_train,Y_train)

914/914 [==============================] - 35s 37ms/step - loss: 1.7588 - accuracy: 0.5009

Here's model.summary():

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 4096, 16)          992       
_________________________________________________________________
batch_normalization (BatchNo (None, 4096, 16)          64        
_________________________________________________________________
activation (Activation)      (None, 4096, 16)          0         
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 2048, 16)          0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 2048, 32)          1568      
_________________________________________________________________
batch_normalization_1 (Batch (None, 2048, 32)          128       
_________________________________________________________________
activation_1 (Activation)    (None, 2048, 32)          0         
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 1024, 32)          0         
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 1024, 64)          6208      
_________________________________________________________________
batch_normalization_2 (Batch (None, 1024, 64)          256       
_________________________________________________________________
activation_2 (Activation)    (None, 1024, 64)          0         
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 512, 64)           0         
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 512, 64)           12352     
_________________________________________________________________
batch_normalization_3 (Batch (None, 512, 64)           256       
_________________________________________________________________
activation_3 (Activation)    (None, 512, 64)           0         
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 256, 64)           0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 256, 64)           12352     
_________________________________________________________________
batch_normalization_4 (Batch (None, 256, 64)           256       
_________________________________________________________________
activation_4 (Activation)    (None, 256, 64)           0         
_________________________________________________________________
flatten (Flatten)            (None, 16384)             0         
_________________________________________________________________
dropout (Dropout)            (None, 16384)             0         
_________________________________________________________________
dense (Dense)                (None, 200)               3277000   
_________________________________________________________________
activation_5 (Activation)    (None, 200)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 402       
_________________________________________________________________
activation_6 (Activation)    (None, 2)                 0         
=================================================================
Total params: 3,311,834
Trainable params: 3,311,354
Non-trainable params: 480
_________________________________________________________________

... also tried with sigmoid but the issue persists ...

You don't want to be "trying out" activation functions or loss functions on a well-defined problem statement. It looks like you are mixing up single-label multi-class and multi-label multi-class architectures.

Your output is a 2-class multi-class output with a softmax activation, which is fine, but binary_crossentropy only makes sense in a multi-class setting when the problem is multi-label.
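
To make the distinction concrete, here is a small made-up illustration of the two label formats (the arrays are examples, not taken from your dataset):

import numpy as np

# Single-label multi-class: each sample belongs to exactly one class, so every
# one-hot row sums to 1. This pairs with softmax + categorical_crossentropy.
y_single_label = np.array([[1, 0],
                           [0, 1],
                           [1, 0]])

# Multi-label: a sample can belong to several classes at once, so the columns
# are independent 0/1 indicators. This pairs with sigmoid + binary_crossentropy.
y_multi_label = np.array([[1, 1],
                          [0, 1],
                          [0, 0]])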

What you probably want is categorical_crossentropy. Additionally, I would suggest focal loss if there were a class imbalance, but it looks like you have a 50/50 class ratio, so it isn't necessary.
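
A minimal sketch of that change, assuming Y_train and Y_test are already one-hot encoded with shape (n_samples, 2); if they are integer class indices, use sparse_categorical_crossentropy instead:

opt = SGD(learning_rate=0.01)

# Same 2-neuron softmax output, but now paired with the matching loss
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])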

Keep in mind that the accuracy metric is resolved based on the loss you use! Check the different metric classes here. When you use binary_crossentropy, the accuracy used is BinaryAccuracy, and with categorical_crossentropy it uses CategoricalAccuracy.
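
If you want to rule out any surprises from that string resolution, you can pin the metric class explicitly instead of passing the 'accuracy' string:

from tensorflow.keras.metrics import CategoricalAccuracy

# Spelling the metric out means Keras no longer has to infer it from the loss
model.compile(loss='categorical_crossentropy', optimizer=opt,
              metrics=[CategoricalAccuracy()])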

Check out this chart for details on what to use in which type of problem statement.


Apart from that, your network has a bottleneck at the Flatten() and Dense() layers: the number of trainable parameters there is very large relative to the rest of the network. I would suggest adding another CNN layer to bring the number of filters up to 128 while making the sequence even smaller, and reducing the number of neurons in that dense layer.

98.9% (3,277,000/3,311,354) of all of your trainable parameters reside between the Flatten and Dense layer! Not a great architectural choice!
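
As a rough sketch of that suggestion (the exact filter and neuron counts here are my assumptions, picked only to shrink the Flatten output), the fifth convolutional block onwards could look like this:

# Fifth block now pools as well: (None, 256, 64) -> (None, 128, 64)
model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

# Extra block pushes the filters to 128 on a shorter sequence: (None, 64, 128)
model.add(Conv1D(128, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Flatten())           # 64 * 128 = 8192 units instead of 16384
model.add(Dropout(0.5))
model.add(Dense(64))           # ~0.5M parameters here instead of ~3.3M
model.add(Activation('relu'))

model.add(Dense(2))
model.add(Activation('softmax'))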

Apart from the points above, your model's results depend entirely on the data itself, and without knowing the data I can't help much more.

The solution to my problem was to implement Batch Renormalization: BatchNormalization(renorm=True). In addition, normalizing the inputs helped improve the overall performance of the neural network.
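
For reference, a minimal sketch of both parts of that fix. The z-score normalization below is just one common choice (an assumption, since the dataset loading code isn't shown); the important details are renorm=True and reusing the training-set statistics on the test set:

# Standardize inputs using statistics computed on the training set only
mean = X_train.mean(axis=0)
std = X_train.std(axis=0) + 1e-8    # epsilon guards against zero variance
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std

# ...and swap every plain BatchNormalization() in the model for:
model.add(BatchNormalization(renorm=True))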