Can I use MSE as the loss function with one-hot label encoding in a classification problem?

from keras.datasets import mnist
from keras import models, layers
from keras.utils import to_categorical

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
 
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,))) 
network.add(layers.Dense(10, activation='softmax')) 

network.compile(optimizer='rmsprop',
                loss='mean_squared_error',
                metrics=['accuracy'])


# flatten the 28x28 images into 784-element vectors and scale pixels to [0, 1]
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255

test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255

# one-hot encode the integer labels
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

network.fit(train_images, train_labels, epochs=5, batch_size=128)

test_loss, test_acc = network.evaluate(test_images, test_labels, batch_size=128)

print("test_acc: ", test_acc)
Epoch 1/5
60000/60000 [==============================] - 2s 41us/step - loss: 0.2600 - acc: 0.9244
Epoch 2/5
60000/60000 [==============================] - 2s 34us/step - loss: 0.1055 - acc: 0.9679
Epoch 3/5
60000/60000 [==============================] - 2s 33us/step - loss: 0.0688 - acc: 0.9791
Epoch 4/5
60000/60000 [==============================] - 2s 35us/step - loss: 0.0504 - acc: 0.9848
Epoch 5/5
60000/60000 [==============================] - 2s 38us/step - loss: 0.0373 - acc: 0.9889
10000/10000 [==============================] - 0s 18us/step
test_acc:  0.9791

The training process seems to work fine, but I don't know how the MSE is calculated here. In this case, does Keras (or TensorFlow) automatically convert the label encoding to one-hot when computing the MSE?

You have already manually converted the labels to one-hot encoding with:

train_labels = to_categorical(train_labels)

Since your softmax layer contains 10 nodes, I assume you intend to classify 10 labels, which means train_labels looks like:

[
 [0,0,0,1,0,0,0,0,0,0],...  <--- One of these per training row
]

See the Keras documentation for to_categorical.
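As a quick sanity check, here is a minimal sketch (using the same keras.utils import as in your code) of what to_categorical produces for a single integer label:

from keras.utils import to_categorical

# label 3 becomes a 10-element one-hot vector
print(to_categorical([3], num_classes=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]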

The softmax output for such a row might look something like this (the values here are purely illustrative; a real softmax output sums to 1):

[0.033,0.45,0.01,0.9,0,0,0.5,0.4,0.3,0.95]

As described in this resource:

The softmax function will output a probability of class membership for each class label and attempt to best approximate the expected target for a given input.

For example, if the integer encoded class 1 was expected for one example, the target vector would be:

[0, 1, 0]

The softmax output might look as follows, which puts the most weight on class 1 and less weight on the other classes.

[0.09003057 0.66524096 0.24472847]
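For reference, the quoted vector is exactly what a softmax produces for the logits [1.0, 3.0, 2.0] (hypothetical values chosen here only to reproduce the quoted output), as this minimal numpy sketch shows:

import numpy as np

logits = np.array([1.0, 3.0, 2.0])  # hypothetical logits for illustration
probs = np.exp(logits) / np.exp(logits).sum()  # standard softmax
print(probs)  # [0.09003057 0.66524096 0.24472847]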

The mean squared error is then calculated between these two arrays: the true labels y_true as produced by to_categorical, and the predicted labels y_pred as output by the softmax of your network.

Based on the TensorFlow source code for MSE, this works as follows:

  1. First compute the difference between y_true and y_pred, then square the result, i.e. with the two vectors from above:
import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.ops import math_ops

# one-hot ground truth vs. the illustrative softmax output from above
y_true = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
y_pred = [0.033, 0.45, 0.01, 0.9, 0, 0, 0.5, 0.4, 0.3, 0.95]

# element-wise (y_pred - y_true)^2
math_ops.squared_difference(y_pred, y_true)


<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([1.0890000e-03, 2.0249999e-01, 9.9999997e-05, 1.0000004e-02,
       0.0000000e+00, 0.0000000e+00, 2.5000000e-01, 1.6000001e-01,
       9.0000004e-02, 9.0249997e-01], dtype=float32)>
  2. Then take the mean of the result:
K.mean(math_ops.squared_difference(y_pred, y_true))
<tf.Tensor: shape=(), dtype=float32, numpy=0.1616189>
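You can verify that value by hand; a minimal pure-Python check of the same computation:

y_true = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
y_pred = [0.033, 0.45, 0.01, 0.9, 0, 0, 0.5, 0.4, 0.3, 0.95]

# mean of the element-wise squared differences
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
print(mse)  # ~0.1616189, matching the tensor above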

Obviously this is just for a single example, but MSE handles multi-dimensional computations in exactly the same way, as in the simplified example below:

>>> y_true = [[1,0],[0,1]]
>>> y_pred = [[0.95,0.03],[0.3,0.8]]
>>> K.mean(math_ops.squared_difference(y_pred, y_true))

<tf.Tensor: shape=(), dtype=float32, numpy=0.03335>
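For what it's worth, the same number comes out of the public Keras loss class (a minimal usage sketch, assuming TensorFlow 2.x eager execution):

import tensorflow as tf

y_true = [[1, 0], [0, 1]]
y_pred = [[0.95, 0.03], [0.3, 0.8]]

# the loss class corresponding to loss='mean_squared_error' in compile()
mse = tf.keras.losses.MeanSquaredError()
print(mse(y_true, y_pred).numpy())  # ~0.03335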

You can see that the result is a single number each time; that is your loss.