Tensorflow implementation of Pytorch code: adding convolutional layers
I would like to implement this PyTorch code in TensorFlow, but I'm new to it and am looking for some assistance/resources.
The PyTorch code combines two convolutions in the forward pass:
import torch
import torch.nn as nn

class PytorchLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super(PytorchLayer, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # Pointwise (kernel_size=1) convolutions over (batch, channels, length) input.
        self.layer1 = nn.Conv1d(in_features, out_features, 1)
        self.layer2 = nn.Conv1d(in_features, out_features, 1, bias=False)

    def forward(self, x):
        # The second convolution acts on the input centred along the length axis (dim=2).
        return self.layer1(x) + self.layer2(x - x.mean(dim=2, keepdim=True))
How can I do this in TensorFlow?
I know I can do a 1D convolution like this:
tf.keras.layers.Conv1D(in_features, kernel_size = 1, strides=1)
I also understand that I can create a feed-forward network like this:
tf.keras.Sequential([tf.keras.layers.Conv1D(in_features, kernel_size = 1, strides=1)])
However, how do I implement this line from the PyTorch code in TensorFlow, which combines the two convolutions:
self.layer1(x) + self.layer2(x - x.mean(dim=2, keepdim=True))
Sorry for the amateur question. I've searched for a while but couldn't find a post similar to mine.
You may find the Keras tutorials informative for this task. Using the Keras functional model API, it could look something like this:
import tensorflow as tf

out_features = 5  # Arbitrary for the example

layer1 = tf.keras.layers.Conv1D(
    out_features, kernel_size=1, strides=1, name='Conv1')
layer2 = tf.keras.layers.Conv1D(
    out_features, kernel_size=1, strides=1, use_bias=False, name='Conv2')
subtract = tf.keras.layers.Subtract(name='SubtractMean')
# Note: Keras Conv1D expects channels-last input (batch, length, channels),
# whereas nn.Conv1d is channels-first, so the PyTorch mean over dim=2 (the
# length axis) corresponds to axis=1 here.
mean = tf.keras.layers.Lambda(
    lambda t: tf.reduce_mean(t, axis=1, keepdims=True), name='Mean')

# Connect the layers in a model.
x = tf.keras.Input(shape=(5, 5))
average_x = mean(x)
normalized_x = subtract([x, average_x])
y = tf.keras.layers.Add(name='AddConvolutions')([layer1(x), layer2(normalized_x)])
m = tf.keras.Model(inputs=x, outputs=y)
m.summary()
>>> Model: "model"
>>> __________________________________________________________________________________________________
>>>  Layer (type)                   Output Shape         Param #     Connected to
>>> ==================================================================================================
>>>  input_1 (InputLayer)           [(None, 5, 5)]       0           []
>>>
>>>  Mean (Lambda)                  (None, 1, 5)         0           ['input_1[0][0]']
>>>
>>>  SubtractMean (Subtract)        (None, 5, 5)         0           ['input_1[0][0]',
>>>                                                                   'Mean[0][0]']
>>>
>>>  Conv1 (Conv1D)                 (None, 5, 5)         30          ['input_1[0][0]']
>>>
>>>  Conv2 (Conv1D)                 (None, 5, 5)         25          ['SubtractMean[0][0]']
>>>
>>>  AddConvolutions (Add)          (None, 5, 5)         0           ['Conv1[0][0]',
>>>                                                                   'Conv2[0][0]']
>>>
>>> ==================================================================================================
>>> Total params: 55
>>> Trainable params: 55
>>> Non-trainable params: 0
>>> __________________________________________________________________________________________________
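If you would rather keep the PyTorch-style structure of a single module, subclassing `tf.keras.layers.Layer` is another option. Here is a minimal sketch (the class name `CombinedConv` and the example shapes are my own, just for illustration). Keep in mind that Keras `Conv1D` expects channels-last input `(batch, length, channels)`, so the PyTorch mean over `dim=2` (the length axis) becomes a mean over `axis=1` here:

```python
import numpy as np
import tensorflow as tf

class CombinedConv(tf.keras.layers.Layer):
    """Mirrors the PyTorch module: conv(x) + conv_no_bias(x - mean_over_length(x))."""

    def __init__(self, out_features, **kwargs):
        super().__init__(**kwargs)
        self.layer1 = tf.keras.layers.Conv1D(out_features, kernel_size=1, strides=1)
        self.layer2 = tf.keras.layers.Conv1D(
            out_features, kernel_size=1, strides=1, use_bias=False)

    def call(self, x):
        # Centre along the length axis (axis=1 in channels-last layout,
        # equivalent to dim=2 in PyTorch's channels-first layout).
        centered = x - tf.reduce_mean(x, axis=1, keepdims=True)
        return self.layer1(x) + self.layer2(centered)

layer = CombinedConv(out_features=5)
x = np.random.rand(2, 7, 5).astype(np.float32)  # (batch, length, in_features)
y = layer(x)
print(y.shape)  # (2, 7, 5): kernel_size=1 preserves the length axis
```

Because the kernel size is 1, each convolution is just a per-position linear map, which also makes the layer easy to verify against a hand-computed matrix product.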