Keras - Validation Loss and Accuracy stuck at 0

I am trying to train a simple 2-layer fully connected neural network for binary classification in TensorFlow Keras. I split my data into an 80-20 train/validation split using sklearn's train_test_split().

When I call model.fit(X_train, y_train, validation_data=[X_val, y_val]), it shows a validation loss and accuracy of 0 for all epochs, even though training itself works fine.

However, when I evaluate the model on the validation set afterwards, the output is non-zero.

Can someone explain why I am getting this 0 loss / 0 accuracy on validation? Thanks for the help.

Here is a complete sample (MCVE) that reproduces the error: https://colab.research.google.com/drive/1P8iCUlnD87vqtuS5YTdoePcDOVEKpBHr?usp=sharing
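For context, a minimal sketch of that setup (with dummy data standing in for the actual dataset; the layer sizes and optimizer are placeholders, not necessarily what the Colab notebook uses):

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Dummy data in place of the real dataset from the notebook.
X = np.random.rand(500, 20).astype("float32")
y = np.random.randint(0, 2, size=500)

# 80-20 train/validation split, as in the question.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

# Simple 2-layer fully connected network for binary classification.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Passing validation_data as a *list* reproduces the zero val_loss / val_accuracy.
model.fit(X_train, y_train, validation_data=[X_val, y_val], epochs=10, batch_size=64)

# evaluate() on the same validation data returns non-zero loss and accuracy.
print(model.evaluate(X_val, y_val))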

  • If you use keras instead of tf.keras, everything works fine.

  • With tf.keras, I even tried validation_data = [X_train, y_train], and it also gives zero accuracy.

Here is a demonstration:

model.fit(X_train, y_train, validation_data=[X_train.to_numpy(), y_train.to_numpy()],
          epochs=10, batch_size=64)

Epoch 1/10
8/8 [==============================] - 0s 6ms/step - loss: 0.7898 - accuracy: 0.6087 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/10
8/8 [==============================] - 0s 6ms/step - loss: 0.6710 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 3/10
8/8 [==============================] - 0s 5ms/step - loss: 0.6748 - accuracy: 0.6500 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 4/10
8/8 [==============================] - 0s 6ms/step - loss: 0.6716 - accuracy: 0.6370 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 5/10
8/8 [==============================] - 0s 6ms/step - loss: 0.6085 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 6/10
8/8 [==============================] - 0s 6ms/step - loss: 0.6744 - accuracy: 0.6326 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 7/10
8/8 [==============================] - 0s 6ms/step - loss: 0.6102 - accuracy: 0.6522 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 8/10
8/8 [==============================] - 0s 6ms/step - loss: 0.7032 - accuracy: 0.6109 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 9/10
8/8 [==============================] - 0s 5ms/step - loss: 0.6283 - accuracy: 0.6717 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 10/10
8/8 [==============================] - 0s 5ms/step - loss: 0.6120 - accuracy: 0.6652 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00

So there is clearly something wrong with the tensorflow implementation of fit.

I dug through the source code, and this seems to be the part that handles validation_data:

...
...
        # Run validation.
        if validation_data and self._should_eval(epoch, validation_freq):
          val_x, val_y, val_sample_weight = (
              data_adapter.unpack_x_y_sample_weight(validation_data))
          val_logs = self.evaluate(
              x=val_x,
              y=val_y,
              sample_weight=val_sample_weight,
              batch_size=validation_batch_size or batch_size,
              steps=validation_steps,
              callbacks=callbacks,
              max_queue_size=max_queue_size,
              workers=workers,
              use_multiprocessing=use_multiprocessing,
              return_dict=True)
          val_logs = {'val_' + name: val for name, val in val_logs.items()}
          epoch_logs.update(val_logs)

It internally calls model.evaluate, and since we have already established that evaluate works fine, I realized that the only possible culprit was unpack_x_y_sample_weight.

So, I looked into its implementation:

def unpack_x_y_sample_weight(data):
  """Unpacks user-provided data tuple."""
  if not isinstance(data, tuple):
    return (data, None, None)
  elif len(data) == 1:
    return (data[0], None, None)
  elif len(data) == 2:
    return (data[0], data[1], None)
  elif len(data) == 3:
    return (data[0], data[1], data[2])

  raise ValueError("Data not understood.")

It is crazy, but if you just pass a tuple instead of a list, everything works fine thanks to the isinstance check inside unpack_x_y_sample_weight. (With a list, your labels get lost after this step and the data somehow gets patched up inside evaluate, so validation runs without any meaningful labels. This looks like a bug, but the documentation does clearly state to pass a tuple.)
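To see exactly what that isinstance check does with a list versus a tuple, here is a quick standalone illustration (a copy of the function quoted above, applied to dummy numpy arrays):

import numpy as np

def unpack_x_y_sample_weight(data):
    """Standalone copy of the function quoted above, for illustration."""
    if not isinstance(data, tuple):
        return (data, None, None)
    elif len(data) == 1:
        return (data[0], None, None)
    elif len(data) == 2:
        return (data[0], data[1], None)
    elif len(data) == 3:
        return (data[0], data[1], data[2])
    raise ValueError("Data not understood.")

X_val = np.random.rand(10, 3)
y_val = np.random.randint(0, 2, size=10)

# A list is not a tuple, so the whole list is returned as `x` and
# `y` comes back as None -- the validation labels are silently dropped.
x, y, _ = unpack_x_y_sample_weight([X_val, y_val])
print(type(x), y)        # <class 'list'> None

# A tuple is unpacked as documented into (x, y, sample_weight).
x, y, _ = unpack_x_y_sample_weight((X_val, y_val))
print(x.shape, y.shape)  # (10, 3) (10,)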

The following code gives the correct validation accuracy and loss:

model.fit(X_train, y_train, validation_data=(X_train.to_numpy(), y_train.to_numpy()),
          epochs=10, batch_size=64)

Epoch 1/10
8/8 [==============================] - 0s 7ms/step - loss: 0.5832 - accuracy: 0.6696 - val_loss: 0.6892 - val_accuracy: 0.6674
Epoch 2/10
8/8 [==============================] - 0s 7ms/step - loss: 0.6385 - accuracy: 0.6804 - val_loss: 0.8984 - val_accuracy: 0.5565
Epoch 3/10
8/8 [==============================] - 0s 7ms/step - loss: 0.6822 - accuracy: 0.6391 - val_loss: 0.6556 - val_accuracy: 0.6739
Epoch 4/10
8/8 [==============================] - 0s 6ms/step - loss: 0.6276 - accuracy: 0.6609 - val_loss: 1.0691 - val_accuracy: 0.5630
Epoch 5/10
8/8 [==============================] - 0s 7ms/step - loss: 0.7048 - accuracy: 0.6239 - val_loss: 0.6474 - val_accuracy: 0.6326
Epoch 6/10
8/8 [==============================] - 0s 7ms/step - loss: 0.6545 - accuracy: 0.6500 - val_loss: 0.6659 - val_accuracy: 0.6043
Epoch 7/10
8/8 [==============================] - 0s 7ms/step - loss: 0.5796 - accuracy: 0.6913 - val_loss: 0.6891 - val_accuracy: 0.6435
Epoch 8/10
8/8 [==============================] - 0s 7ms/step - loss: 0.5915 - accuracy: 0.6891 - val_loss: 0.5307 - val_accuracy: 0.7152
Epoch 9/10
8/8 [==============================] - 0s 7ms/step - loss: 0.5571 - accuracy: 0.7000 - val_loss: 0.5465 - val_accuracy: 0.6957
Epoch 10/10
8/8 [==============================] - 0s 7ms/step - loss: 0.7133 - accuracy: 0.6283 - val_loss: 0.7046 - val_accuracy: 0.6413

So, since this seems to be a bug, I have just opened a related issue on the Tensorflow GitHub repo:

https://github.com/tensorflow/tensorflow/issues/39370
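Until that is resolved, the practical workaround is simply to make sure validation_data is passed as a tuple rather than a list, for example:

# Workaround: pass validation_data as a tuple, not a list.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=10, batch_size=64)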