Extracting Features from a SavedModel via Keras 2.4.3 and Tensorflow 2.2
I want to extract features from the last dense layer of my CNN model. However, all the Google research I've done has left me quite conflicted: TensorFlow offers many different approaches, and I'm struggling to find one that actually works for me.
I have successfully trained a model on CIFAR10. I saved the model to a directory and have a saved_model.pb file. I have visualized the model with TensorBoard, but I'm not entirely sure of the name of my last layer; the visualization looks a bit confusing.
How can I go about extracting these features? I want to use them for a t-SNE analysis.
I am trying to use gfile to load the pb graph, but I'm not sure whether this is the right approach. Thank you.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from tensorflow.python.platform import gfile
pb_graph_file = '../data/processed/saved_models/saved_model.pb'
f = gfile.GFile(pb_graph_file, 'rb')
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())  # read and parse the serialized protobuf
f.close()
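Since saved_model.pb is part of a full SavedModel directory rather than a standalone frozen GraphDef, I wonder whether loading the whole directory with tf.keras.models.load_model would be the better route. A minimal sketch of that idea, assuming the SavedModel directory is ../data/processed/saved_models:
import tensorflow as tf

saved_model_dir = '../data/processed/saved_models'  # assumed: the directory containing saved_model.pb
model = tf.keras.models.load_model(saved_model_dir)  # rebuilds the Keras model from the SavedModel
model.summary()  # lists layer names, which should help identify the last layer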
My Keras Sequential model looks like this:
"""
This is the CNN model's architecture
"""
weight_decay = 1e-4
model = Sequential()
model.add(Conv2D(32, (3, 3), activation = 'relu', kernel_initializer = 'he_normal', kernel_regularizer = l2(weight_decay), padding = 'same', input_shape = (32, 32, 3)))
model.add(BatchNormalization())
model.add(Conv2D(32, (3, 3), activation = 'relu', kernel_initializer = 'he_normal', kernel_regularizer = l2(weight_decay), padding = 'same'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation = 'relu', kernel_initializer = 'he_normal', kernel_regularizer = l2(weight_decay), padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), activation = 'relu', kernel_initializer = 'he_normal', kernel_regularizer = l2(weight_decay), padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.3))
model.add(Conv2D(128, (3, 3), activation = 'relu', kernel_initializer = 'he_normal', kernel_regularizer = l2(weight_decay), padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(128, (3, 3), activation = 'relu', kernel_initializer = 'he_normal', kernel_regularizer = l2(weight_decay), padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.4))
# model.add(Conv2D(256, (3, 3), activation = 'relu', kernel_initializer = 'he_uniform', kernel_regularizer = l2(weight_decay), padding='same'))
# model.add(Conv2D(256, (3, 3), activation = 'relu', kernel_initializer = 'he_uniform', kernel_regularizer = l2(weight_decay), padding='same'))
# model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
# model.add(Dense(128, acti vation='relu', kernel_initializer = 'he_normal', kernel_regularizer = l2(weight_decay)))
# model.add(BatchNormalization())
# model.add(Dropout(0.5))
# output layer
model.add(Dense(10, activation = 'softmax'))
# optimize and compile model
opt = Adam(learning_rate = 1e-3)
model.compile(optimizer = opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])
return model
First, get the name of the desired layer using model.summary().
Then use that layer's name in place of desired_layer in the code below:
from keras.models import Model
extractor = Model(inputs=model.inputs, outputs=model.get_layer(desired_layer).output)
features = extractor.predict(x)
Here, x is the data you want to extract features from.
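Since the features are meant for a t-SNE analysis, they can then be handed straight to scikit-learn. A minimal sketch, assuming scikit-learn is installed and features is the array returned by extractor.predict(x) above:
from sklearn.manifold import TSNE

# reduce the high-dimensional features to 2-D for plotting
embedded = TSNE(n_components=2).fit_transform(features)
print(embedded.shape)  # (num_samples, 2)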