Keras fit not training model weights

I'm trying to fit a simple histogram model with custom weights and no inputs. It should fit the histogram of data generated as follows:

import numpy as np

train_data = [max(0, int(np.round(np.random.randn()*2 + 5))) for i in range(1000)]

The model is defined as:
import tensorflow as tf

d = 15

class hist_model(tf.keras.Model):
    def __init__(self):
        super(hist_model, self).__init__()
        # a single trainable length-d vector; the model ignores its input
        self._theta = self.add_weight(shape=[1, d], initializer='zero', trainable=True)

    def call(self, x):
        return self._theta

The problem I'm having is that training with model.fit doesn't work: the model weights don't change at all during training. I tried:

model = hist_model()
model.compile(optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2),
                loss="sparse_categorical_crossentropy")
history = model.fit(train_data, train_data, verbose=2, batch_size=1, epochs=3)
model.summary()

which returns:

Epoch 1/3
1000/1000 - 1s - loss: 2.7080
Epoch 2/3
1000/1000 - 1s - loss: 2.7080
Epoch 3/3
1000/1000 - 1s - loss: 2.7080
Model: "hist_model_17"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Total params: 15
Trainable params: 15
Non-trainable params: 0
_________________________________________________________________

I tried writing a custom training loop for the same model, and it works fine. Here is the custom training code:

optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
for epoch in range(3):
    running_loss = 0
    for data in train_data:
        # record the forward pass, then step SGD on the accumulated gradient
        with tf.GradientTape() as tape:
            loss_value = loss_fn(data, model(data))
        running_loss += loss_value.numpy()
        grad = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grad, model.trainable_weights))
    print(f'Epoch {epoch} loss: {running_loss / len(train_data)}')

I still don't understand why the fit method doesn't work. What am I missing? Thanks!

The difference between the two approaches comes down to the loss function. Try running:

model.compile(optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2),
                loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

This is because the from_logits argument defaults to False, which means your model's output is expected to already encode a probability distribution. An all-zero output is not a valid distribution: presumably Keras clips it to a tiny epsilon before taking the log, and that clipping has zero gradient, so the weights never move. Notice how the loss now decreases with from_logits=True:
import numpy as np
import tensorflow as tf

d = 15
class hist_model(tf.keras.Model):
    def __init__(self):
        super(hist_model,self).__init__()
        self._theta = self.add_weight(shape=[1,d],initializer='zero',trainable=True)       
    
    def call(self,x):
        return self._theta

train_data = [max(0,int(np.round(np.random.randn()*2+5))) for i in range(15)]

model = hist_model()
model.compile(optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2),
                loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

history = model.fit(train_data, train_data, verbose=2, batch_size=1, epochs=10)

Epoch 1/10
15/15 - 0s - loss: 2.7021 - 247ms/epoch - 16ms/step
Epoch 2/10
15/15 - 0s - loss: 2.6812 - 14ms/epoch - 915us/step
Epoch 3/10
15/15 - 0s - loss: 2.6607 - 15ms/epoch - 1ms/step
Epoch 4/10
15/15 - 0s - loss: 2.6406 - 14ms/epoch - 955us/step
Epoch 5/10
15/15 - 0s - loss: 2.6209 - 19ms/epoch - 1ms/step
Epoch 6/10
15/15 - 0s - loss: 2.6017 - 18ms/epoch - 1ms/step
Epoch 7/10
15/15 - 0s - loss: 2.5829 - 15ms/epoch - 999us/step
Epoch 8/10
15/15 - 0s - loss: 2.5645 - 15ms/epoch - 1ms/step
Epoch 9/10
15/15 - 0s - loss: 2.5464 - 27ms/epoch - 2ms/step
Epoch 10/10
15/15 - 0s - loss: 2.5288 - 20ms/epoch - 1ms/step
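To see why the default silently fails even though the reported loss looks reasonable, here is a minimal sketch (assuming, as an implementation detail of the TF 2.x backend, that probability inputs are clipped to a small epsilon before the log is taken) comparing the gradients at the all-zero initialization:

import tensorflow as tf

y_true = tf.constant([5])               # an arbitrary class label
theta = tf.Variable(tf.zeros([1, 15]))  # the model's initial output

for from_logits in (False, True):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=from_logits)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_true, theta)
    grad = tape.gradient(loss, theta)
    # With from_logits=False, the zero "probabilities" are clipped to epsilon,
    # and the clip has zero gradient there, so SGD never moves the weights.
    print(f'from_logits={from_logits}: loss={loss.numpy():.4f}, '
          f'sum|grad|={tf.reduce_sum(tf.abs(grad)).numpy():.4f}')

Both variants start at ln(15) ≈ 2.708, which matches the constant 2.7080 in the question, but only the from_logits=True version produces a non-zero gradient.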

I think the reduction method used by the loss could also have an effect. Check the docs for more details.
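For example, here is a small sketch of how the reduction setting changes the reported loss (and with it the gradient scale); SUM and SUM_OVER_BATCH_SIZE come from tf.keras.losses.Reduction:

import tensorflow as tf

y_true = tf.constant([5, 3])
logits = tf.zeros([2, 15])  # two samples, uniform logits

# The default reduction averages the per-sample losses; SUM adds them instead,
# which scales the gradients by the batch size, much like a larger learning rate.
for reduction in (tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE,
                  tf.keras.losses.Reduction.SUM):
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=reduction)
    print(reduction, loss_fn(y_true, logits).numpy())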