How to get the output shape of a layer in Keras?
I have the following code in Keras (basically I am modifying this code for my own use), but I get this error:
'ValueError: Error when checking target: expected conv3d_3 to have 5 dimensions, but got array with shape (10, 4096)'
Code:
from keras.models import Sequential
from keras.layers.convolutional import Conv3D
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import pylab as plt
from keras import layers

# We create a layer which take as input movies of shape
# (n_frames, width, height, channels) and returns a movie
# of identical shape.

model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(None, 64, 64, 1),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
                 activation='sigmoid',
                 padding='same', data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')
The data I am feeding in has the shape [1, 10, 64, 64, 1]. So I would like to know where I went wrong, and how to see the output_shape of each layer.
You can get the output shape of a layer with layer.output_shape:
for layer in model.layers:
    print(layer.output_shape)
which gives you:
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 40)
(None, None, 64, 64, 1)
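The last line is the model's output shape, so the target you pass to fit() must also be 5-D: (samples, time, height, width, channels). Your error says the target array has shape (10, 4096), and since 4096 == 64 * 64 it looks like 10 flattened 64x64 frames. A minimal sketch of how such an array could be reshaped to match, assuming a NumPy array named y with exactly that shape (the name and the zeros array below are placeholders for illustration only):

import numpy as np

y = np.zeros((10, 4096))           # placeholder standing in for your actual target array
y = y.reshape(1, 10, 64, 64, 1)    # (samples, time, height, width, channels)
print(y.shape)                     # (1, 10, 64, 64, 1) -- now 5-D, matching the model output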
Alternatively, you can pretty-print the whole model with model.summary():
model.summary()
which gives you the parameter count and output shape of every layer, plus the overall model structure, in a nicely formatted table:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D) (None, None, 64, 64, 40) 59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 64, 64, 40) 160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D) (None, None, 64, 64, 40) 115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 64, 64, 40) 160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D) (None, None, 64, 64, 40) 115360
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 64, 64, 40) 160
_________________________________________________________________
conv_lst_m2d_4 (ConvLSTM2D) (None, None, 64, 64, 40) 115360
_________________________________________________________________
batch_normalization_4 (Batch (None, None, 64, 64, 40) 160
_________________________________________________________________
conv3d_1 (Conv3D) (None, None, 64, 64, 1) 1081
=================================================================
Total params: 407,001
Trainable params: 406,681
Non-trainable params: 320
_________________________________________________________________
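If you only need a single shape rather than the whole table, the model object exposes the final output shape directly, and individual layers can be picked out by index (a small sketch using the model built above):

print(model.output_shape)             # (None, None, 64, 64, 1) -- the Conv3D output
print(model.layers[0].output_shape)   # (None, None, 64, 64, 40) -- the first ConvLSTM2D
print(model.layers[-1].output_shape)  # same as model.output_shape here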
If you only want to access the information for a specific layer, you can pass the name argument when constructing that layer and then fetch it like this:
...
model.add(ConvLSTM2D(..., name='conv3d_0'))
...
model.get_layer('conv3d_0')
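For example, to read just that layer's output shape (continuing the snippet above; 'conv3d_0' is simply whatever name you chose):

print(model.get_layer('conv3d_0').output_shape)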
Edit: FYI, the shape printed this way will always be the same as layer.output_shape, so please don't actually use a Lambda or custom layer just for this. But you can use a Lambda layer to echo the shape of the tensor passing through it:
...
from keras.layers import Lambda

def print_tensor_shape(x):
    print(x.shape)    # shape of the tensor flowing through this point of the model
    return x

model.add(Lambda(print_tensor_shape))
...
Or write a custom layer and print the shape of the tensor in its call():
class echo_layer(Layer):
    ...
    def call(self, x):
        print(x.shape)
        return x
    ...

model.add(echo_layer())
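Filled out into something runnable, a minimal sketch assuming the same standalone-Keras imports as the question (the class name EchoLayer is arbitrary):

from keras.layers import Layer

class EchoLayer(Layer):
    """Identity layer that prints the shape of whatever passes through it."""

    def call(self, x):
        print(x.shape)                    # symbolic shape, printed when the graph is built
        return x

    def compute_output_shape(self, input_shape):
        return input_shape                # the tensor passes through unchanged

model.add(EchoLayer())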