Autoencoder in Keras' Example

In the Keras documentation there is a DAE (Denoising AutoEncoder) example. Here is the link: https://keras.io/examples/mnist_denoising_autoencoder/

As we all know, an autoencoder consists of an encoder network and a decoder network, and the output of the encoder is the input of the decoder. But when I checked the code over and over, I found that the input of the decoder (the latent vector) in the example is created as its own Input layer, just like the encoder's input. This confused me.

Here is the relevant code snippet:

from tensorflow.keras import backend as K
from tensorflow.keras.layers import (Activation, Conv2D, Conv2DTranspose,
                                     Dense, Flatten, Input, Reshape)
from tensorflow.keras.models import Model

# Parameters as defined earlier in the linked example (MNIST images)
input_shape = (28, 28, 1)
kernel_size = 3
latent_dim = 16
layer_filters = [32, 64]

# Build the Autoencoder Model
# First build the Encoder Model
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
# Stack of Conv2D blocks
# Notes:
# 1) Use Batch Normalization before ReLU on deep networks
# 2) Use MaxPooling2D as alternative to strides>1
# - faster but not as good as strides>1
for filters in layer_filters:
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               strides=2,
               activation='relu',
               padding='same')(x)

# Shape info needed to build Decoder Model
shape = K.int_shape(x)

# Generate the latent vector
x = Flatten()(x)
latent = Dense(latent_dim, name='latent_vector')(x)

# Instantiate Encoder Model
encoder = Model(inputs, latent, name='encoder')
encoder.summary()

# Build the Decoder Model
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
x = Dense(shape[1] * shape[2] * shape[3])(latent_inputs)
x = Reshape((shape[1], shape[2], shape[3]))(x)
# Stack of Transposed Conv2D blocks
# Notes:
# 1) Use Batch Normalization before ReLU on deep networks
# 2) Use UpSampling2D as alternative to strides>1
# - faster but not as good as strides>1
for filters in layer_filters[::-1]:
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        strides=2,
                        activation='relu',
                        padding='same')(x)

x = Conv2DTranspose(filters=1,
                    kernel_size=kernel_size,
                    padding='same')(x)

outputs = Activation('sigmoid', name='decoder_output')(x)

# Instantiate Decoder Model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()

Note that the decoder uses latent_inputs as its input, but latent_inputs comes from an Input layer, not from the encoder's latent output.

Can anyone tell me why it is like this? Or is it a mistake in the documentation? Thanks a lot.

You are confused by the naming convention: the Input(...) passed to each Model(...) versus the actual input that is fed to the decoder.

In this code, two separate Model(...) instances are created, one for the encoder and one for the decoder. When you create the final autoencoder model, you need to feed the output of the encoder into the input of the decoder.

As you stated, the "decoder uses latent_inputs as its input, but latent_inputs comes from Input"; this Input is the input of the decoder model only, not of the autoencoder model.

encoder = Model(inputs, latent, name='encoder') creates the encoder model, and decoder = Model(latent_inputs, outputs, name='decoder') creates the decoder model, whose latent_inputs placeholder will later be fed with the output of the encoder model.

The final autoencoder model is then created with:

autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder')

Here, the input to the encoder model comes from inputs, and the output of the decoder model is the final output of your autoencoder. To produce that output, inputs is first fed to encoder(...), and the output of the encoder is then fed to the decoder as decoder(encoder(...)).
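
The key point is that a Keras Model can itself be called on a tensor, exactly like a layer, and doing so reuses the model's layers and weights. Below is a minimal sketch with a toy fully-connected autoencoder (all names here are hypothetical, not from the example) showing how the decoder's own Input(...) acts only as a placeholder until the two sub-models are wired together:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Toy encoder: 784 -> 16
enc_in = Input(shape=(784,), name='toy_encoder_input')
enc_out = Dense(16, activation='relu')(enc_in)
toy_encoder = Model(enc_in, enc_out, name='toy_encoder')

# Toy decoder: 16 -> 784; its Input(...) is only a placeholder
# that is later fed with whatever tensor the model is called on
dec_in = Input(shape=(16,), name='toy_decoder_input')
dec_out = Dense(784, activation='sigmoid')(dec_in)
toy_decoder = Model(dec_in, dec_out, name='toy_decoder')

# Calling a Model like a layer connects the two graphs: the
# encoder's output tensor replaces the decoder's placeholder input
toy_autoencoder = Model(enc_in, toy_decoder(toy_encoder(enc_in)),
                        name='toy_autoencoder')
toy_autoencoder.summary()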

For simplicity, you can also create the model as a single graph, like this:

# Build the Autoencoder Model
# Encoder
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
for filters in layer_filters:
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               strides=2,
               activation='relu',
               padding='same')(x)
shape = K.int_shape(x)
x = Flatten()(x)
latent = Dense(latent_dim, name='latent_vector')(x)

# Decoder

x = Dense(shape[1] * shape[2] * shape[3])(latent)
x = Reshape((shape[1], shape[2], shape[3]))(x)

for filters in layer_filters[::-1]:
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        strides=2,
                        activation='relu',
                        padding='same')(x)

x = Conv2DTranspose(filters=1,
                    kernel_size=kernel_size,
                    padding='same')(x)

outputs = Activation('sigmoid', name='decoder_output')(x)


autoencoder = Model(inputs, outputs, name='autoencoder')
autoencoder.summary()
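
Whichever way you build it, training follows the denoising setup from the linked example: the corrupted images are the input and the clean images are the target. A minimal sketch, assuming x_train_noisy and x_train have been prepared as in the example:

# Assumed to exist from the example's preprocessing:
# x_train       - clean MNIST images, shape (N, 28, 28, 1)
# x_train_noisy - the same images with Gaussian noise added
autoencoder.compile(loss='mse', optimizer='adam')
autoencoder.fit(x_train_noisy, x_train,
                epochs=10,
                batch_size=128)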