How to use a Keras merge layer for an autoencoder with two outputs

Suppose I have two inputs, X and Y, and I want to design a joint autoencoder that reconstructs X' and Y'.

As shown in the figure, X is the audio input and Y is the video input. This deep architecture is appealing because it has two inputs and two outputs, and the two branches share some layers in the middle. My question is how to write this autoencoder in Keras. Assume that, apart from the shared layers in the middle, every layer is fully connected.

My code is as follows:

 from keras.layers import Input, Dense
 from keras.models import Model
 import numpy as np

 X = np.random.random((1000, 100))
 y = np.random.random((1000, 300))  # X and y can have different sizes

 # the X autoencoder layer 

 Xinput = Input(shape=(100,))

 encoded = Dense(50, activation='relu')(Xinput)
 encoded = Dense(20, activation='relu')(encoded)
 encoded = Dense(15, activation='relu')(encoded)

 decoded = Dense(20, activation='relu')(encoded)
 decoded = Dense(50, activation='relu')(decoded)
 decoded = Dense(100, activation='relu')(decoded)



 # the Y autoencoder layer 
 Yinput = Input(shape=(300,))

 encoded = Dense(120, activation='relu')(Yinput)
 encoded = Dense(50, activation='relu')(encoded)
 encoded = Dense(15, activation='relu')(encoded)

 decoded = Dense(50, activation='relu')(encoded)
 decoded = Dense(120, activation='relu')(decoded)
 decoded = Dense(300, activation='relu')(decoded)

I simply have 15 nodes in the middle shared by X and Y. My question is: how do I train this joint autoencoder with the loss function \|X - X'\|^2 + \|Y - Y'\|^2?

Thanks

Just to clarify: you want two input layers and two output layers with shared layers in between, all in one model, right?

I think this should give you an idea:

from keras.layers import Input, Dense, Concatenate
from keras.models import Model
import numpy as np

X = np.random.random((1000, 100))
y = np.random.random((1000, 300))  # X and y can have different sizes

# the X encoder layers
Xinput = Input(shape=(100,))

encoded_x = Dense(50, activation='relu')(Xinput)
encoded_x = Dense(20, activation='relu')(encoded_x)

# the Y encoder layers
Yinput = Input(shape=(300,))

encoded_y = Dense(120, activation='relu')(Yinput)
encoded_y = Dense(50, activation='relu')(encoded_y)

# concatenate encoding layers
c_encoded = Concatenate(name="concat", axis=1)([encoded_x, encoded_y])
encoded = Dense(15, activation='relu')(c_encoded)

decoded_x = Dense(20, activation='relu')(encoded)
decoded_x = Dense(50, activation='relu')(decoded_x)
decoded_x = Dense(100, activation='relu')(decoded_x)

out_x = decoded_x  # or stack any further output layers here

decoded_y = Dense(50, activation='relu')(encoded)
decoded_y = Dense(120, activation='relu')(decoded_y)
decoded_y = Dense(300, activation='relu')(decoded_y)

out_y = decoded_y  # or stack any further output layers here

# Now you have two inputs and two outputs with a shared layer
model = Model([Xinput, Yinput], [out_x, out_y])
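
To actually train this sketch you compile it with one loss per output; a minimal example, assuming out_x and out_y are the final reconstruction layers and reusing the X and y arrays above (epochs and batch size are chosen arbitrarily):

model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1., 1.])
# for an autoencoder the targets are the inputs themselves
model.fit([X, y], [X, y], epochs=10, batch_size=32)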

As your code stands, you have two separate models. You have to merge the inputs of the two subnetworks first; after that you can simply use the output of the shared representation layer twice, once for each of the two following subnetworks:

from keras.layers import Input, Dense, Concatenate
from keras.models import Model

# the two separate encoders
Xinput = Input(shape=(100,))
Yinput = Input(shape=(300,))

Xencoded = Dense(50, activation='relu')(Xinput)
Xencoded = Dense(20, activation='relu')(Xencoded)

Yencoded = Dense(120, activation='relu')(Yinput)
Yencoded = Dense(50, activation='relu')(Yencoded)

# merge both encodings into the shared 15-node representation
shared_input = Concatenate()([Xencoded, Yencoded])
shared_output = Dense(15, activation='relu')(shared_input)

# the two decoders, both reading from the shared representation
Xdecoded = Dense(20, activation='relu')(shared_output)
Xdecoded = Dense(50, activation='relu')(Xdecoded)
Xdecoded = Dense(100, activation='relu')(Xdecoded)

Ydecoded = Dense(50, activation='relu')(shared_output)
Ydecoded = Dense(120, activation='relu')(Ydecoded)
Ydecoded = Dense(300, activation='relu')(Ydecoded)

Now you have two separate outputs, so you need two separate loss functions; they are simply added together when you compile the model. With 'mse' for both outputs and equal loss_weights, this is exactly the \|X - X'\|^2 + \|Y - Y'\|^2 objective from the question (up to Keras averaging over elements rather than summing):

model = Model([Xinput, Yinput], [Xdecoded, Ydecoded])
model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1., 1.])
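
If you want the two reconstruction errors to contribute unequally to the total loss, loss_weights is the knob to turn; for example, a purely hypothetical weighting that counts the video (Y) error twice as much as the audio (X) error:

model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1., 2.])  # hypothetical weights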

Then you can simply train the model with:

model.fit([X, y], [X, y])  # for an autoencoder the targets are the inputs themselves
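
After training, the reconstructions X' and Y' come straight out of the same model; a minimal usage sketch with the X and y arrays defined earlier:

X_rec, Y_rec = model.predict([X, y])  # one array per output: the reconstructed X and Y
print(X_rec.shape, Y_rec.shape)       # (1000, 100) (1000, 300)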