keras: how to block convolution layer weights
Here I have a GoogleNet model for Keras. Is there any way to block individual layers of the network from changing? I want to keep the first two layers of the pre-trained model from being modified.
By 'block the changing of the individual layers' I assume you mean that you don't want to train those layers, i.e. you don't want to modify the loaded weights (possibly learned in an earlier training run).
If so, you can pass trainable=False to the layer, and its parameters will be excluded from the training update rule.
Example:
from keras.models import Sequential
from keras.layers import Dense, Activation

# Baseline: all layers trainable
model = Sequential([
    Dense(32, input_dim=100),
    Dense(output_dim=10),
    Activation('sigmoid'),
])
model.summary()

# Same architecture, but the first Dense layer is frozen
model2 = Sequential([
    Dense(32, input_dim=100, trainable=False),
    Dense(output_dim=10),
    Activation('sigmoid'),
])
model2.summary()
In the model summary of the second model you can see that the frozen layer's parameters are counted as non-trainable.
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
dense_1 (Dense)                  (None, 32)            3232        dense_input_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 10)            330         dense_1[0][0]
____________________________________________________________________________________________________
activation_1 (Activation)        (None, 10)            0           dense_2[0][0]
====================================================================================================
Total params: 3,562
Trainable params: 3,562
Non-trainable params: 0
____________________________________________________________________________________________________
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
dense_3 (Dense)                  (None, 32)            3232        dense_input_2[0][0]
____________________________________________________________________________________________________
dense_4 (Dense)                  (None, 10)            330         dense_3[0][0]
____________________________________________________________________________________________________
activation_2 (Activation)        (None, 10)            0           dense_4[0][0]
====================================================================================================
Total params: 3,562
Trainable params: 330
Non-trainable params: 3,232
____________________________________________________________________________________________________
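Since the question is about a pre-trained GoogleNet rather than a freshly built Sequential model, note that the same flag can also be flipped on the layers of an already-loaded model. A minimal sketch; create_googlenet() and the weights filename are hypothetical placeholders for however you load your model:

from keras.optimizers import SGD

model = create_googlenet()                   # hypothetical loader for your GoogleNet
model.load_weights('googlenet_weights.h5')   # assumed weights file

# Freeze the first two layers so their weights stay fixed during training
for layer in model.layers[:2]:
    layer.trainable = False

# trainable is read at compile time, so compile (or re-compile) afterwards
model.compile(optimizer=SGD(lr=0.001), loss='categorical_crossentropy')

After compiling, model.summary() should report those layers' parameters under non-trainable params, just like model2 above.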