Data Augmentation Layer in Keras Sequential Model
I'm trying to add data augmentation as a layer to the model, but I'm running into what I believe is a shape problem. I also tried specifying the input shape in the augmentation layer. When I take the `data_augmentation` layer out of the model, it works fine.
```python
preprocessing.RandomFlip('horizontal', input_shape=(224, 224, 3))
```

```python
from tensorflow import keras
from tensorflow.keras.layers import (Activation, Conv2D, Dense, Flatten,
                                     MaxPool2D)
from tensorflow.keras.layers.experimental import preprocessing

data_augmentation_layer = keras.Sequential([
    preprocessing.RandomFlip('horizontal'),
    preprocessing.RandomRotation(0.2),
    preprocessing.RandomZoom(0.2),
    preprocessing.RandomWidth(0.2),
    preprocessing.RandomHeight(0.2),
    preprocessing.RandomContrast(0.2)
], name='data_augmentation')

model = keras.Sequential([
    data_augmentation_layer,
    Conv2D(filters=32,
           kernel_size=1,
           strides=1,
           input_shape=(224, 224, 3)),
    Activation(activation='relu'),
    MaxPool2D(),
    Conv2D(filters=32,
           kernel_size=1,
           strides=1),
    Activation(activation='relu'),
    MaxPool2D(),
    Flatten(),
    Dense(1, activation='sigmoid')
])
```
```
The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None)

Call arguments received:
  • inputs=tf.Tensor(shape=(None, 224, 224, 3), dtype=float32)
  • training=True
  • mask=None
```
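The `None` that the error complains about can be seen directly by tracing a symbolic input through one of the variable-size augmentation layers. A minimal sketch (assuming TensorFlow ≥ 2.9, where these layers live directly under `tf.keras.layers`):

```python
import tensorflow as tf

# RandomHeight stretches/squeezes the height at run time, so the
# static (graph-build-time) height can no longer be known.
inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = tf.keras.layers.RandomHeight(0.2)(inputs)

print(outputs.shape)  # the height dimension is now None
```

It is this `None` that then propagates through `Flatten` into the `Dense` layer's input shape.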
The layers `RandomWidth` and `RandomHeight` cause this error because they produce `None` dimensions: see the comment here:
> [...] RandomHeight will lead to a None shape on the height dimension, as not all outputs from the layer will be the same height (by design). That is ok for things like the Conv2D layer, which can accept variable shaped image input (with None shapes on some dimensions).
>
> This will not work for then calling into a Flatten followed by a Dense, because the flattened batches will also be of variable size (because of the variable height), and the Dense layer needs a fixed shape for the last dimension. You could probably pad output of flatten before the dense, but if you want this architecture, you may just want to avoid image augmentation layer that lead to a variable output shape.
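If you do want to keep the `Flatten` + `Dense` head, one common workaround (an assumption on my part, not part of the quoted advice) is to append a `tf.keras.layers.Resizing` layer after the variable-size augmentations, which restores a fully defined spatial shape:

```python
import tensorflow as tf

# Sketch of a workaround: resize back to a fixed 224x224 after the
# width/height augmentations so downstream Flatten/Dense layers see
# a fully defined shape.
data_augmentation_layer = tf.keras.Sequential([
    tf.keras.layers.RandomWidth(0.2),
    tf.keras.layers.RandomHeight(0.2),
    tf.keras.layers.Resizing(224, 224),  # fixed spatial shape again
], name='data_augmentation')

inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = data_augmentation_layer(inputs)
print(outputs.shape)  # all spatial dims defined again
```

Note that resizing partially undoes the width/height augmentation (the image content is still distorted, but the output size is fixed again).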
So, instead of a `Flatten` layer, you can use a `GlobalMaxPool2D` layer, which does not need to know the other dimensions beforehand:
```python
import tensorflow as tf

data_augmentation_layer = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal',
                               input_shape=(224, 224, 3)),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomWidth(0.2),
    tf.keras.layers.RandomHeight(0.2),
    tf.keras.layers.RandomContrast(0.2)
], name='data_augmentation')

model = tf.keras.Sequential([
    data_augmentation_layer,
    tf.keras.layers.Conv2D(filters=32,
                           kernel_size=1,
                           strides=1),
    tf.keras.layers.Activation(activation='relu'),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(filters=32,
                           kernel_size=1,
                           strides=1),
    tf.keras.layers.Activation(activation='relu'),
    tf.keras.layers.GlobalMaxPool2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

print(model.summary())
```
```
Model: "sequential_4"
_________________________________________________________________
 Layer (type)                                Output Shape          Param #
=================================================================
 data_augmentation (Sequential)              (None, None, None, 3)       0
 conv2d_8 (Conv2D)                           (None, None, None, 32)    128
 activation_8 (Activation)                   (None, None, None, 32)      0
 max_pooling2d_6 (MaxPooling2D)              (None, None, None, 32)      0
 conv2d_9 (Conv2D)                           (None, None, None, 32)   1056
 activation_9 (Activation)                   (None, None, None, 32)      0
 global_max_pooling2d_1 (GlobalMaxPooling2D) (None, 32)                  0
 dense_4 (Dense)                             (None, 1)                  33
=================================================================
Total params: 1,217
Trainable params: 1,217
Non-trainable params: 0
_________________________________________________________________
None
```
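Why this works: `GlobalMaxPool2D` reduces over whatever the spatial dimensions happen to be at run time, so its output shape depends only on the channel count. A quick sketch (the sizes below are arbitrary stand-ins for what `RandomWidth`/`RandomHeight` might produce):

```python
import tensorflow as tf

gmp = tf.keras.layers.GlobalMaxPool2D()

# Two batches with different spatial sizes both collapse to
# (batch, channels), so the Dense layer downstream always sees
# a fixed last dimension.
a = gmp(tf.random.uniform((2, 200, 230, 32)))
b = gmp(tf.random.uniform((2, 260, 190, 32)))

print(a.shape, b.shape)  # (2, 32) (2, 32)
```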