fashion_mnist Data ML Accuracy Score is only 0.1

I am new to ML and am trying the typical fashion_mnist classification. The problem is that after I run the code, the accuracy score is only 0.1 and the loss is below 0. So I guess the model is not learning, but I don't know what the problem is. Thanks!

from tensorflow.keras.datasets import fashion_mnist 
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

x_train = x_train.astype('float32')
print(type(x_train))
x_train = x_train.reshape(60000,784)
x_train = x_train / 255.0
x_test = x_test.reshape(60000,784)
x_test = x_test / 255.0


from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation="sigmoid", input_shape=(784,)))
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer='sgd', loss="binary_crossentropy", metrics=["accuracy"])

model.fit(
    x_train,
    y_train,
    epochs=10,
    batch_size=1000)

Output:

There are multiple issues with your code -

  1. You get an error when reshaping the test set; it must be x_test = x_test.reshape(10000,784), since the test set has only 10000 images.
  2. You are using a sigmoid activation in the first Dense layer, which is not good practice. Use relu instead.
  3. Your output Dense has only 1 node, but you are working with a dataset that has 10 unique classes, so the output must be Dense(10). Understand that even though y_train holds the class labels 0-9, a neural network cannot predict those integer values directly with a softmax or sigmoid activation. What you predict instead is a probability for each of the 10 classes.
  4. You are using the wrong activation in the last layer for multi-class classification. Use softmax.
  5. You are using the wrong loss function. For multi-class classification, use categorical_crossentropy. Since your output is a 10-dimensional probability distribution but your y_train holds a single label value per sample, you can use sparse_categorical_crossentropy instead, which is the same thing but handles label-encoded y.
  6. Try a better optimizer, such as adam, to avoid getting stuck in local minima.
  7. It is better to use a CNN for image data, since plain Dense layers cannot capture the spatial features that make up an image. But since the images are small (28,28) and this is a toy example, it is fine here.
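To make the point about the loss function concrete, here is a small NumPy sketch (the logits and label are made up for illustration, not taken from your model) showing that sparse_categorical_crossentropy on a label-encoded target computes the same value as categorical_crossentropy on the one-hot version of that target:

```python
import numpy as np

# Made-up logits for one image over the 10 Fashion-MNIST classes
logits = np.array([2.0, 0.5, 0.1, -0.3, 0.8, 0.0, 0.2, -1.0, 0.4, 0.3])

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

probs = softmax(logits)              # 10-dim probability distribution
label = 4                            # label-encoded target, like y_train

# sparse_categorical_crossentropy: index with the integer label
sparse_ce = -np.log(probs[label])

# categorical_crossentropy: dot with the one-hot target
one_hot = np.eye(10)[label]
cat_ce = -np.sum(one_hot * np.log(probs))

print(np.isclose(sparse_ce, cat_ce))  # the two losses agree
```

This is why switching to sparse_categorical_crossentropy lets you keep y_train exactly as loaded, with no to_categorical step.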

Refer to this table to see what to use. But first, make sure you know what problem you are trying to solve.

In your case, you want to do multi-class single-label classification, but you are doing multi-class multi-label classification, with the wrong loss and the wrong output-layer activation.
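One plausible reading of why the score sits at almost exactly 0.1 (a sketch with random numbers standing in for the network's output, assuming the accuracy metric falls back to thresholded binary accuracy with a 1-node output): a single sigmoid unit only emits values in (0, 1), so after thresholding the prediction is always 0 or 1, and it can only ever match the roughly 20% of labels that are 0 or 1, about half of those by chance:

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=100000)   # labels 0-9, like y_train

# A single sigmoid unit can only output values in (0, 1)
sigmoid_out = rng.random(100000)
y_pred = (sigmoid_out > 0.5).astype(int)    # thresholded: always 0 or 1

# Only the ~10% of samples labelled 0 and the ~10% labelled 1 can ever
# be matched, about half of each by chance -> accuracy near 0.1
accuracy = (y_pred == y_true).mean()
print(round(accuracy, 2))
```

With Dense(10) and softmax, every class becomes reachable and the metric becomes meaningful again.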

from tensorflow.keras.datasets import fashion_mnist 
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

#Load data
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

#Normalize
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

#Reshape
x_train = x_train.reshape(60000,784)
x_train = x_train / 255.0
x_test = x_test.reshape(10000,784)
x_test = x_test / 255.0

print('Data shapes->',[i.shape for i in [x_train, y_train, x_test, y_test]])

#Construct computation graph
model = Sequential()
model.add(Dense(100, activation="relu", input_shape=(784,)))
model.add(Dense(10, activation="softmax"))

#Compile with loss as cross_entropy and optimizer as adam
model.compile(optimizer='adam', loss="sparse_categorical_crossentropy", metrics=["accuracy"])

#Fit model
model.fit(x_train, y_train, epochs=10, batch_size=1000)
Output:

Data shapes-> [(60000, 784), (60000,), (10000, 784), (10000,)]
Epoch 1/10
60/60 [==============================] - 0s 5ms/step - loss: 0.8832 - accuracy: 0.7118
Epoch 2/10
60/60 [==============================] - 0s 6ms/step - loss: 0.5125 - accuracy: 0.8281
Epoch 3/10
60/60 [==============================] - 0s 6ms/step - loss: 0.4585 - accuracy: 0.8425
Epoch 4/10
60/60 [==============================] - 0s 6ms/step - loss: 0.4238 - accuracy: 0.8547
Epoch 5/10
60/60 [==============================] - 0s 7ms/step - loss: 0.4038 - accuracy: 0.8608
Epoch 6/10
60/60 [==============================] - 0s 6ms/step - loss: 0.3886 - accuracy: 0.8656
Epoch 7/10
60/60 [==============================] - 0s 6ms/step - loss: 0.3788 - accuracy: 0.8689
Epoch 8/10
60/60 [==============================] - 0s 6ms/step - loss: 0.3669 - accuracy: 0.8725
Epoch 9/10
60/60 [==============================] - 0s 6ms/step - loss: 0.3560 - accuracy: 0.8753
Epoch 10/10
60/60 [==============================] - 0s 6ms/step - loss: 0.3451 - accuracy: 0.8794

For your reference, I am also adding code that uses Convolutional layers, with categorical_crossentropy as the loss and the functional API instead of Sequential. Read the comments inline with the code for more clarity. This should help you pick up some good practices when working with Keras.

from tensorflow.keras.datasets import fashion_mnist 
from tensorflow.keras import layers, Model, utils

#Load data
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

#Normalize
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

#Reshape
x_train = x_train.reshape(60000,28,28,1)
x_train = x_train / 255.0
x_test = x_test.reshape(10000,28,28,1)
x_test = x_test / 255.0

#Set y to onehot instead of label encoded
y_train = utils.to_categorical(y_train)
y_test = utils.to_categorical(y_test)

#print([i.shape for i in [x_train, y_train, x_test, y_test]])

#Construct computation graph
inp = layers.Input((28,28,1))
x = layers.Conv2D(32, (3,3), activation='relu', padding='same')(inp)
x = layers.MaxPooling2D((2,2))(x)
x = layers.Conv2D(32, (3,3), activation='relu', padding='same')(x)
x = layers.MaxPooling2D((2,2))(x)
x = layers.Flatten()(x)
out = layers.Dense(10, activation='softmax')(x)

#Define model
model = Model(inp, out)

#Compile with loss as cross_entropy and optimizer as adam
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=["accuracy"])

#Fit model
model.fit(x_train, y_train, epochs=10, batch_size=1000)
utils.plot_model(model, show_layer_names=False, show_shapes=True)
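Once either model is trained, model.predict returns a (batch, 10) array of class probabilities; take the argmax over the last axis to get back label-encoded predictions. A small NumPy sketch with made-up probabilities standing in for real model output:

```python
import numpy as np

# Made-up softmax outputs for 3 images (each row sums to 1),
# standing in for model.predict(x_test[:3])
probs = np.array([
    [0.05, 0.70, 0.05, 0.02, 0.02, 0.02, 0.05, 0.03, 0.03, 0.03],
    [0.01, 0.01, 0.01, 0.90, 0.01, 0.01, 0.01, 0.02, 0.01, 0.01],
    [0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.15, 0.05],
])

pred_labels = probs.argmax(axis=1)   # back to label-encoded classes
print(pred_labels)                   # [1 3 8]

# Accuracy against label-encoded ground truth
y_true = np.array([1, 3, 0])
print((pred_labels == y_true).mean())
```

For the second model, where y_test was converted with to_categorical, apply argmax to y_test as well before comparing.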