Why is my deep learning model not changing its loss or its accuracy?
I am trying to train a CNN model on 2030 preprocessed eye images. The shape of my input data is (2030, 200, 200, 1). Initially the dataset had 1527 images; I then used imblearn.over_sampling.RandomOverSampler to increase the dataset size.
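For reference, the oversampling step looked roughly like this (a minimal sketch, assuming the images are flattened for imblearn and reshaped back afterwards; the variable names x, y, img_rows, and img_cols are placeholders, not code from the question):

import numpy as np
from imblearn.over_sampling import RandomOverSampler

# imblearn expects 2-D input, so flatten each 200x200 image into a feature vector
x_flat = x.reshape(len(x), -1)                    # (1527, 40000)

ros = RandomOverSampler(random_state=42)
x_res, y_res = ros.fit_resample(x_flat, y)        # duplicates minority-class samples

# restore the image shape expected by the CNN
x_res = x_res.reshape(-1, img_rows, img_cols, 1)  # back to (N, 200, 200, 1)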
I built the model with Keras; here is my model:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                 input_shape=(img_cols, img_rows, 1)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

optimizer = tf.keras.optimizers.SGD(learning_rate=0.000001)
model.compile(optimizer=optimizer, loss='binary_crossentropy',
              metrics=[tf.keras.metrics.SpecificityAtSensitivity(0.5),
                       tf.keras.metrics.SensitivityAtSpecificity(0.5),
                       'accuracy'])

# Augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    horizontal_flip=True,
    vertical_flip=True,
    width_shift_range=0.3,
    height_shift_range=0.5,
    rotation_range=10,
    zoom_range=0.2
)
test_datagen = ImageDataGenerator(rescale=1./255)

train_data = train_datagen.flow(x_train, y_train)
test_data = test_datagen.flow(x_test, y_test)

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
                                                 factor=0.9,
                                                 patience=2,
                                                 min_lr=0.0000000000000000001)

history = model.fit(train_data, epochs=10, batch_size=32,
                    validation_data=test_data, callbacks=[reduce_lr])
I have trained the model with different parameters (batch sizes of 32, 64, 128, 256, 512, and 1024; adding convolutional layers with 128 and 256 filters; lowering and raising the learning rate; using callbacks; changing the dense layer size to 32, 64, ..., 1024), but I always get a learning process like the following:
Epoch 1/10
51/51 [==============================] - 14s 238ms/step - loss: 0.6962 - specificity_at_sensitivity_15: 0.4548 - sensitivity_at_specificity_15: 0.4777 - accuracy: 0.4969 - val_loss: 0.6957 - val_specificity_at_sensitivity_15: 0.4112 - val_sensitivity_at_specificity_15: 0.3636 - val_accuracy: 0.4852 - lr: 1.0000e-04
Epoch 2/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6945 - specificity_at_sensitivity_15: 0.4829 - sensitivity_at_specificity_15: 0.4615 - accuracy: 0.5018 - val_loss: 0.6949 - val_specificity_at_sensitivity_15: 0.4467 - val_sensitivity_at_specificity_15: 0.3206 - val_accuracy: 0.4877 - lr: 1.0000e-04
Epoch 3/10
51/51 [==============================] - 12s 227ms/step - loss: 0.6955 - specificity_at_sensitivity_15: 0.4328 - sensitivity_at_specificity_15: 0.4082 - accuracy: 0.5043 - val_loss: 0.6945 - val_specificity_at_sensitivity_15: 0.5584 - val_sensitivity_at_specificity_15: 0.5167 - val_accuracy: 0.4852 - lr: 1.0000e-04
Epoch 4/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6971 - specificity_at_sensitivity_15: 0.4034 - sensitivity_at_specificity_15: 0.4256 - accuracy: 0.5049 - val_loss: 0.6941 - val_specificity_at_sensitivity_15: 0.4010 - val_sensitivity_at_specificity_15: 0.3923 - val_accuracy: 0.4852 - lr: 1.0000e-04
Epoch 5/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6954 - specificity_at_sensitivity_15: 0.4670 - sensitivity_at_specificity_15: 0.4640 - accuracy: 0.4969 - val_loss: 0.6938 - val_specificity_at_sensitivity_15: 0.5584 - val_sensitivity_at_specificity_15: 0.5407 - val_accuracy: 0.4729 - lr: 1.0000e-04
Epoch 6/10
51/51 [==============================] - 12s 227ms/step - loss: 0.6972 - specificity_at_sensitivity_15: 0.4352 - sensitivity_at_specificity_15: 0.3883 - accuracy: 0.4791 - val_loss: 0.6935 - val_specificity_at_sensitivity_15: 0.4772 - val_sensitivity_at_specificity_15: 0.3206 - val_accuracy: 0.4729 - lr: 1.0000e-04
Epoch 7/10
51/51 [==============================] - 12s 227ms/step - loss: 0.6943 - specificity_at_sensitivity_15: 0.4474 - sensitivity_at_specificity_15: 0.4814 - accuracy: 0.5031 - val_loss: 0.6933 - val_specificity_at_sensitivity_15: 0.3604 - val_sensitivity_at_specificity_15: 0.4880 - val_accuracy: 0.4729 - lr: 1.0000e-04
Epoch 8/10
51/51 [==============================] - 12s 225ms/step - loss: 0.6974 - specificity_at_sensitivity_15: 0.4609 - sensitivity_at_specificity_15: 0.4355 - accuracy: 0.4926 - val_loss: 0.6930 - val_specificity_at_sensitivity_15: 0.5279 - val_sensitivity_at_specificity_15: 0.5885 - val_accuracy: 0.4655 - lr: 1.0000e-04
Epoch 9/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6945 - specificity_at_sensitivity_15: 0.4425 - sensitivity_at_specificity_15: 0.4777 - accuracy: 0.5031 - val_loss: 0.6929 - val_specificity_at_sensitivity_15: 0.4619 - val_sensitivity_at_specificity_15: 0.3876 - val_accuracy: 0.4655 - lr: 1.0000e-04
Epoch 10/10
51/51 [==============================] - 12s 226ms/step - loss: 0.6977 - specificity_at_sensitivity_15: 0.4389 - sensitivity_at_specificity_15: 0.4367 - accuracy: 0.4766 - val_loss: 0.6927 - val_specificity_at_sensitivity_15: 0.6091 - val_sensitivity_at_specificity_15: 0.5024 - val_accuracy: 0.4951 - lr: 1.0000e-04
And evaluating on the test data generated from x_test (2% of the 2030 images) gives:
13/13 [==============================] - 1s 69ms/step - loss: 0.6927 - specificity_at_sensitivity_15: 0.6091 - sensitivity_at_specificity_15: 0.5024 - accuracy: 0.4951
Accuracy score is : 0.4950738847255707
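For context, the evaluation and the printed score were produced along these lines (a minimal sketch, assuming sklearn.metrics.accuracy_score on thresholded sigmoid outputs; the exact evaluation code is not shown in the question):

from sklearn.metrics import accuracy_score

# Keras evaluation on the rescaled test generator
model.evaluate(test_data)

# accuracy computed from thresholded sigmoid probabilities
y_prob = model.predict(x_test / 255.0)
y_pred = (y_prob > 0.5).astype(int).ravel()
print('Accuracy score is :', accuracy_score(y_test, y_pred))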
How can I improve my accuracy score? I have tried every approach I could think of, but the maximum I could reach was 53%. Similar code I have seen online achieves 76%. This is a medical imaging project, and I believe a better accuracy should be attainable.
If you change the optimizer from SGD to Adam, you will get better accuracy, along with making a few other changes such as adding more convolution layers, increasing the learning rate, and removing some dropout layers, as follows:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(16, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (5, 5), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (5, 5), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, loss='binary_crossentropy',
              metrics=[tf.keras.metrics.SpecificityAtSensitivity(0.5),
                       tf.keras.metrics.SensitivityAtSpecificity(0.5),
                       'accuracy'])

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                                 patience=2, min_lr=0.00001)

history = model.fit(train_dataset, epochs=10, batch_size=32,
                    validation_data=validation_dataset, callbacks=[reduce_lr])
Output:
Epoch 1/10
63/63 [==============================] - 9s 104ms/step - loss: 0.7631 - specificity_at_sensitivity_2: 0.5760 - sensitivity_at_specificity_2: 0.5700 - accuracy: 0.5445 - val_loss: 0.6751 - val_specificity_at_sensitivity_2: 0.7760 - val_sensitivity_at_specificity_2: 0.7460 - val_accuracy: 0.5050 - lr: 0.0010
Epoch 2/10
63/63 [==============================] - 5s 77ms/step - loss: 0.6570 - specificity_at_sensitivity_2: 0.7260 - sensitivity_at_specificity_2: 0.7030 - accuracy: 0.6030 - val_loss: 0.6652 - val_specificity_at_sensitivity_2: 0.7480 - val_sensitivity_at_specificity_2: 0.6920 - val_accuracy: 0.5990 - lr: 0.0010
Epoch 3/10
63/63 [==============================] - 4s 57ms/step - loss: 0.6277 - specificity_at_sensitivity_2: 0.7920 - sensitivity_at_specificity_2: 0.7650 - accuracy: 0.6565 - val_loss: 0.6696 - val_specificity_at_sensitivity_2: 0.6960 - val_sensitivity_at_specificity_2: 0.6820 - val_accuracy: 0.5930 - lr: 0.0010
Epoch 4/10
63/63 [==============================] - 4s 56ms/step - loss: 0.6163 - specificity_at_sensitivity_2: 0.8080 - sensitivity_at_specificity_2: 0.7830 - accuracy: 0.6570 - val_loss: 0.6330 - val_specificity_at_sensitivity_2: 0.8320 - val_sensitivity_at_specificity_2: 0.7840 - val_accuracy: 0.6520 - lr: 0.0010
Epoch 5/10
63/63 [==============================] - 4s 58ms/step - loss: 0.5710 - specificity_at_sensitivity_2: 0.8710 - sensitivity_at_specificity_2: 0.8420 - accuracy: 0.6995 - val_loss: 0.5940 - val_specificity_at_sensitivity_2: 0.8600 - val_sensitivity_at_specificity_2: 0.8420 - val_accuracy: 0.7030 - lr: 0.0010
Epoch 6/10
63/63 [==============================] - 4s 58ms/step - loss: 0.5426 - specificity_at_sensitivity_2: 0.8930 - sensitivity_at_specificity_2: 0.8790 - accuracy: 0.7250 - val_loss: 0.6158 - val_specificity_at_sensitivity_2: 0.8740 - val_sensitivity_at_specificity_2: 0.8360 - val_accuracy: 0.7060 - lr: 0.0010
Epoch 7/10
63/63 [==============================] - 4s 60ms/step - loss: 0.4991 - specificity_at_sensitivity_2: 0.9260 - sensitivity_at_specificity_2: 0.9100 - accuracy: 0.7550 - val_loss: 0.5927 - val_specificity_at_sensitivity_2: 0.8760 - val_sensitivity_at_specificity_2: 0.8460 - val_accuracy: 0.7280 - lr: 0.0010
Epoch 8/10
63/63 [==============================] - 4s 58ms/step - loss: 0.4597 - specificity_at_sensitivity_2: 0.9480 - sensitivity_at_specificity_2: 0.9300 - accuracy: 0.7885 - val_loss: 0.6473 - val_specificity_at_sensitivity_2: 0.8900 - val_sensitivity_at_specificity_2: 0.8260 - val_accuracy: 0.7320 - lr: 0.0010
Epoch 9/10
63/63 [==============================] - 4s 58ms/step - loss: 0.4682 - specificity_at_sensitivity_2: 0.9500 - sensitivity_at_specificity_2: 0.9310 - accuracy: 0.7900 - val_loss: 0.5569 - val_specificity_at_sensitivity_2: 0.9080 - val_sensitivity_at_specificity_2: 0.8880 - val_accuracy: 0.7330 - lr: 0.0010
Epoch 10/10
63/63 [==============================] - 4s 60ms/step - loss: 0.3974 - specificity_at_sensitivity_2: 0.9740 - sensitivity_at_specificity_2: 0.9600 - accuracy: 0.8155 - val_loss: 0.6180 - val_specificity_at_sensitivity_2: 0.9180 - val_sensitivity_at_specificity_2: 0.8940 - val_accuracy: 0.7540 - lr: 0.0010