How to get the compressed representation generated by the autoencoder?
I am building a deep multimodal autoencoder that takes two inputs and produces two outputs (the reconstructed inputs). The two inputs have shapes (1000, 50) and (1000, 60) respectively, the model has 3 hidden layers, and it is meant to concatenate the two latent layers of input1 and input2.
Here is the full code of the model:
# X has shape (1000, 50), X1 has shape (1000, 60)
import keras
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# encoder branch for X
input_X = Input(shape=X[0].shape)
dense_X = Dense(40, activation='relu')(input_X)
dense1_X = Dense(20, activation='relu')(dense_X)
latent_X = Dense(2, activation='relu')(dense1_X)

# encoder branch for X1
input_X1 = Input(shape=X1[0].shape)
dense_X1 = Dense(40, activation='relu')(input_X1)
dense1_X1 = Dense(20, activation='relu')(dense_X1)
latent_X1 = Dense(2, activation='relu')(dense1_X1)

# concatenate the two 2-dimensional latent layers into a 4-dimensional code
Concat_X_X1 = concatenate([latent_X, latent_X1])

# decoder branch for X
decoding_X = Dense(20, activation='relu')(Concat_X_X1)
decoding1_X = Dense(40, activation='relu')(decoding_X)
output_X = Dense(X[0].shape[0], activation='sigmoid')(decoding1_X)

# decoder branch for X1
decoding_X1 = Dense(20, activation='relu')(Concat_X_X1)
decoding1_X1 = Dense(40, activation='relu')(decoding_X1)
output_X1 = Dense(X1[0].shape[0], activation='sigmoid')(decoding1_X1)

multi_modal_autoencoder = Model([input_X, input_X1], [output_X, output_X1], name='multi_modal_autoencoder')

# encoder sub-model mapping the two inputs to the concatenated latent code (saved here, before training)
encoder = Model([input_X, input_X1], Concat_X_X1)
encoder.save('encoder.h5')

multi_modal_autoencoder.compile(optimizer=keras.optimizers.Adam(lr=0.001), loss='mse')
model = multi_modal_autoencoder.fit([X, X1], [X, X1], epochs=70, batch_size=150)
I want to get the latent representation from the encoder as a numpy array of shape (1000, 4), and then use it as the input to another model. Hopefully someone with this knowledge can help me achieve that. To this end, I tried the following, as suggested:
import h5py

file = h5py.File('encoder.h5', 'r')
keys = list(file.keys())            # returns 'model_weights' as the only key
value = file.get('model_weights')   # <HDF5 group "/model_weights" (9 members)>

The 9 members are ['concatenate_1', 'dense_1', 'dense_2', 'dense_3', 'dense_4', 'dense_5', 'dense_6', 'input_1', 'input_2'], and file['/model_weights/concatenate_1'] returns <HDF5 group "/model_weights/concatenate_1" (0 members)>. I then tried:

value = file['/model_weights/concatenate_1'][:]
But it returns an error:
AttributeError Traceback (most recent call last)
<ipython-input-18-7bc6cbac9468> in <module>
----> 1 value = file['/model_weights/concatenate_1'][:]
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
~\Anaconda3\envs\tensorflow\lib\site-packages\h5py\_hl\group.py in __getitem__(self, name)
260 raise ValueError("Invalid HDF5 object reference")
261 else:
--> 262 oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
263
264 otype = h5i.get_type(oid)
~\Anaconda3\envs\tensorflow\lib\site-packages\h5py\_hl\base.py in _e(self, name, lcpl)
135 else:
136 try:
--> 137 name = name.encode('ascii')
138 coding = h5t.CSET_ASCII
139 except UnicodeEncodeError:
AttributeError: 'slice' object has no attribute 'encode'
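For reference, the failure happens because file['/model_weights/concatenate_1'] is an HDF5 group rather than a dataset, so the [:] slice is passed to Group.__getitem__, which expects a string key; the group is also empty because a Concatenate layer has no weights of its own. More generally, the saved file stores the encoder's weights, not the compressed representation of the data. A minimal sketch of inspecting the file with h5py (the dense_1/.../kernel:0 path is an assumption about the usual Keras HDF5 layout and may differ between versions):

import h5py

with h5py.File('encoder.h5', 'r') as f:
    # print every group and dataset path stored under model_weights
    f['model_weights'].visit(print)
    # the actual weight arrays are datasets nested under each layer's group,
    # e.g. (path is an assumption based on the usual Keras HDF5 layout):
    # kernel = f['model_weights/dense_1/dense_1/kernel:0'][:]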
I assume X[0].shape[0] and X1[0].shape[0] are equal, since it is a dense layer, so it should be 4000. You have managed to get to the training stage, but, to put it better, the value returned by Model.fit is a History object holding the losses achieved during training. The object you have named model is therefore not actually a model.
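In code, the distinction looks like this (a minimal sketch reusing the fit call from the question; history.history['loss'] is the per-epoch loss record that Keras keeps during training):

# fit() returns a History object that records the training losses, not a model
history = multi_modal_autoencoder.fit([X, X1], [X, X1], epochs=70, batch_size=150)
print(history.history['loss'])   # training loss for each of the 70 epochs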
To predict values with this trained model, you need to call Model.predict(), which in your case would look like this:
multi_modal_autoencoder.predict([D1,D2])
Model.predict() returns numpy arrays of predictions, two arrays in your case, and you may need to reshape them after retrieving the predictions for your inputs. You can then use this output as the input to the next network.
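A minimal sketch of both calls, assuming X and X1 are the arrays from the question: predict on the autoencoder returns the two reconstructions, while predict on the encoder sub-model defined in the question returns the (1000, 4) compressed representation that can be fed to the next model. Since the encoder shares its layers with the autoencoder, the in-memory encoder uses the trained weights after fit (the encoder.h5 file was saved before training).

# reconstructions of the two inputs, shapes (1000, 50) and (1000, 60)
reconstructed_X, reconstructed_X1 = multi_modal_autoencoder.predict([X, X1])

# compressed representation, shape (1000, 4); the encoder shares its
# layers with multi_modal_autoencoder, so it uses the trained weights
latent = encoder.predict([X, X1])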
I strongly recommend that you read the docs.