Convolution1D to Convolution2D
Summarize the problem
I have a raw signal from a sensor that is 76000 data points long, and I want to process it with a CNN. For this, I figured I could use a Lambda layer to form the short-time Fourier transform of the raw signal, e.g.
x = Lambda(lambda v: tf.abs(tf.signal.stft(v, frame_length=frame_length, frame_step=frame_step)))(x)
This works perfectly. But I want to go a step further and pre-process the raw data beforehand, hoping that a Convolution1D layer will act as a filter that lets some frequencies pass and blocks others.
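To make that "filter" intuition concrete (not part of the original question): a 1D convolution kernel is exactly an FIR filter, and its frequency response is the DFT magnitude of the kernel. A tiny NumPy sketch with a hand-picked moving-average kernel, standing in for one learned Conv1D kernel:

```python
import numpy as np

# A moving-average kernel of length 100: a simple low-pass FIR filter.
# A trained Conv1D kernel can likewise learn to pass or block bands.
kernel = np.ones(100) / 100.0

# Frequency response = magnitude of the (zero-padded) DFT of the kernel.
response = np.abs(np.fft.rfft(kernel, n=4096))

print(round(response[0], 3))      # 1.0  -> passes DC
print(bool(response[-1] < 0.05))  # True -> attenuates near Nyquist
```

A Conv1D layer with 10 filters learns 10 such kernels, so its output channels can be seen as 10 differently filtered versions of the raw signal.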
What I have tried
I do have two separate examples up and running (a Conv1D example that processes the raw data, and a Conv2D example that processes the STFT "image"). But I would like to combine them.
Conv1D, where the input is: input = Input(shape = (76000,))
x = Lambda(lambda v: tf.expand_dims(v, -1))(input)
x = layers.Conv1D(filters=10, kernel_size=100, activation='relu')(x)
x = Flatten()(x)
output = Model(input, x)
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 76000)] 0
_________________________________________________________________
lambda_2 (Lambda) (None, 76000, 1) 0
_________________________________________________________________
conv1d (Conv1D) (None, 75901, 10) 1010
________________________________________________________________
Conv2D with the same input
x = Lambda(lambda v: tf.expand_dims(tf.abs(tf.signal.stft(v, frame_length=frame_length, frame_step=frame_step)), -1))(input)
x = BatchNormalization()(x)
Model: "model_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 76000)] 0
_________________________________________________________________
lambda_8 (Lambda) (None, 751, 513, 1) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 751, 513, 1) 4
_________________________________________________________________
. . .
. . .
flatten_4 (Flatten) (None, 1360) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 1360) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 1361
I am looking for a way to chain the "conv1d" layer into the start of the "lambda_8" layer. If I simply put them together, I get:
x = Lambda(lambda v: tf.expand_dims(v, -1))(input)
x = layers.Conv1D(filters=10, kernel_size=100, activation='relu')(x)
#x = Flatten()(x)
x = Lambda(lambda v: tf.expand_dims(tf.abs(tf.signal.stft(v, frame_length=frame_length, frame_step=frame_step)), -1))(x)
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 76000)] 0
_________________________________________________________________
lambda_17 (Lambda) (None, 76000, 1) 0
_________________________________________________________________
conv1d_6 (Conv1D) (None, 75901, 10) 1010
_________________________________________________________________
lambda_18 (Lambda) (None, 75901, 0, 513, 1) 0 <-- Wrong
=================================================================
This is not what I am looking for. It should look more like (None, 751, 513, 10, 1).
So far I have not been able to find a suitable solution.
Can somebody help me?
Thanks in advance!
From the documentation, it seems that stft accepts only (..., length) inputs, not (..., length, channels).
So, the first suggestion is: move the channels to another dimension first, to keep the length at the last index and make the function work.
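This shape rule is easy to verify. A quick sketch; frame_length=1000 and frame_step=100 are assumptions chosen only to reproduce the (751, 513) shape from the summaries above (the default fft_length rounds 1000 up to the next power of two, 1024, giving 513 frequency bins):

```python
import tensorflow as tf

# Assumed values, consistent with the (751, 513) STFT shape shown above.
frame_length, frame_step = 1000, 100

# stft operates on the LAST axis, so (batch, length) works directly:
x = tf.zeros([2, 76000])
s = tf.signal.stft(x, frame_length=frame_length, frame_step=frame_step)
print(s.shape)   # (2, 751, 513)

# With the channels moved BEFORE the length axis, stft maps each channel:
xc = tf.zeros([2, 10, 76000])
sc = tf.signal.stft(xc, frame_length=frame_length, frame_step=frame_step)
print(sc.shape)  # (2, 10, 751, 513)
```

This is why a Permute before the stft (and another one after, to restore channels-last) makes the combined model work.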
Now, of course, you need matching lengths; you can't match 76000 with 75901. So the second suggestion is to use padding='same' in the 1D convolutions to keep the lengths equal.
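A minimal sketch of the length mismatch and how padding='same' removes it:

```python
import tensorflow as tf

x = tf.zeros([1, 76000, 1])

# Default padding='valid' shortens the sequence: 76000 - 100 + 1 = 75901.
valid = tf.keras.layers.Conv1D(10, 100)(x)
# padding='same' zero-pads so the output length equals the input length.
same = tf.keras.layers.Conv1D(10, 100, padding='same')(x)

print(valid.shape)  # (1, 75901, 10)
print(same.shape)   # (1, 76000, 10)
```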
Finally, since the result of the stft will already have 10 channels, you don't need to expand dims in the last lambda.
Summing up:
The 1D part:
inputs = Input((76000,)) #(batch, 76000)
c1Out = Lambda(lambda x: K.expand_dims(x, axis=-1))(inputs) #(batch, 76000, 1)
c1Out = Conv1D(10, 100, activation = 'relu', padding='same')(c1Out) #(batch, 76000, 10)
#permute for putting length last, apply stft, put the channels back to their position
c1Stft = Permute((2,1))(c1Out) #(batch, 10, 76000)
c1Stft = Lambda(lambda v: tf.abs(tf.signal.stft(v,
frame_length=frame_length,
frame_step=frame_step)
)
)(c1Stft) #(batch, 10, probably 751, probably 513)
c1Stft = Permute((2,3,1))(c1Stft) #(batch, 751, 513, 10)
The 2D part; your code seems ok:
c2Out = Lambda(lambda v: tf.expand_dims(tf.abs(tf.signal.stft(v,
frame_length=frame_length,
frame_step=frame_step)
),
-1))(inputs) #(batch, 751, 513, 1)
Now everything has compatible dimensions:
#maybe
#c2Out = Conv2D(10, ..., padding='same')(c2Out)
joined = Concatenate()([c1Stft, c2Out]) #(batch, 751, 513, 11) #maybe (batch, 751, 513, 20)
further = BatchNormalization()(joined)
further = Conv2D(...)(further)
Warning: I don't know whether they made stft differentiable; the Conv1D part will only work if the gradients are defined.
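One quick way to check is to ask tf.GradientTape for a gradient through tf.abs(tf.signal.stft(...)); getting a tensor back (rather than None) means gradients flow and the Conv1D part can train. A sketch, with a shorter signal and assumed frame parameters:

```python
import tensorflow as tf

# Short signal as a trainable variable; frame parameters are assumptions.
x = tf.Variable(tf.random.normal([1, 4000]))

with tf.GradientTape() as tape:
    # Scalar loss built from the STFT magnitude, as in the model above.
    y = tf.reduce_sum(tf.abs(tf.signal.stft(x, frame_length=1000, frame_step=100)))

g = tape.gradient(y, x)
print(g is not None)  # True means gradients are defined through stft + abs
```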