Tensorflow error with ConvLSTMCell: Dimensions of inputs should match
I am trying to feed the ConvLSTMCell its input arguments according to the Tensorflow documentation, but I still get this error:
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [10,64,64,1] vs. shape[1] = [1,64,64,16]
[[Node: rnn/while/rnn/Encoder_1/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](rnn/while/TensorArrayReadV3, rnn/while/Switch_4:1, rnn/while/rnn/Encoder_1/split/split_dim)]]
My code is:
import numpy as np
import tensorflow as tf
from tensorflow.contrib.rnn import ConvLSTMCell

num_channels = 1
img_size = 64
filter_size1 = 5
num_filters1 = 16

# If time_major == True, this must be a Tensor of shape: [max_time, batch_size, ...], or a nested tuple of such elements.
x = tf.placeholder(tf.float32, shape=[None, 1, img_size, img_size, num_channels], name='x')

InputShape = [img_size, img_size, 1]
encoder_1_KernelShape = [filter_size1, filter_size1]

# create a ConvLSTMCell
rnn_cell = ConvLSTMCell(2, InputShape, num_filters1, encoder_1_KernelShape,
                        use_bias=True, forget_bias=1.0, name='Encoder_1')

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# defining initial state
# initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
initial_state = rnn_cell.zero_state(1, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
encoder_1_outputs, encoder_1_state = tf.nn.dynamic_rnn(rnn_cell, x,
                                                       initial_state=initial_state,
                                                       dtype=tf.float32)

for i in range(2):
    x_train = data_3[0:10, i, :, :]
    x_train = x_train.flatten()
    x_train = x_train.reshape([10, 1, img_size, img_size, 1])
    x_train = np.float32(x_train)
    feed_dict_train = {x: x_train}
Try this:
import numpy as np
import tensorflow as tf
from tensorflow.contrib.rnn import ConvLSTMCell

num_channels = 1
img_size = 64
filter_size1 = 5
num_filters1 = 16

x = tf.placeholder(tf.float32, shape=[None, None, img_size, img_size, num_channels],
                   name='x')

InputShape = [img_size, img_size, num_channels]
encoder_1_KernelShape = [filter_size1, filter_size1]

rnn_cell = ConvLSTMCell(2, InputShape, num_filters1, encoder_1_KernelShape,
                        use_bias=True, forget_bias=1.0, name='Encoder_1')

# The initial state's batch size must match the batch that is actually fed (10 below).
initial_state = rnn_cell.zero_state(10, dtype=tf.float32)

encoder_1_outputs, encoder_1_state = tf.nn.dynamic_rnn(rnn_cell, x,
                                                       initial_state=initial_state,
                                                       dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x_train = np.zeros([10, 1, img_size, img_size, num_channels], dtype=np.float32)
    sess.run(encoder_1_outputs, feed_dict={x: x_train})
Note that the first dimension of x is batch_size (10 in the example) and the second dimension is sequence_num (1 here). This is exactly the mismatch shown in the error message: the input batch had 10 samples, while the initial state created with zero_state(1) had a batch size of 1, so passing the real batch size to zero_state resolves it.
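
If you do not want to hard-code the batch size, a common TF 1.x pattern is to read it from the input tensor with tf.shape(x)[0] and hand that scalar to zero_state, so the initial state always matches whatever batch you feed. The sketch below only illustrates that idea; it assumes ConvLSTMCell is imported from tf.contrib.rnn and that your TF version accepts a scalar tensor as the batch_size argument of zero_state, and it is not part of the original code above:

import numpy as np
import tensorflow as tf
from tensorflow.contrib.rnn import ConvLSTMCell

num_channels = 1
img_size = 64
filter_size1 = 5
num_filters1 = 16

x = tf.placeholder(tf.float32, shape=[None, None, img_size, img_size, num_channels], name='x')

rnn_cell = ConvLSTMCell(2, [img_size, img_size, num_channels], num_filters1,
                        [filter_size1, filter_size1],
                        use_bias=True, forget_bias=1.0, name='Encoder_1')

# Derive the batch size from the tensor that is actually fed, so the
# initial state always has the same first dimension as the input.
batch_size = tf.shape(x)[0]
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

encoder_1_outputs, encoder_1_state = tf.nn.dynamic_rnn(rnn_cell, x,
                                                       initial_state=initial_state,
                                                       dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # The same graph now accepts different batch sizes without being rebuilt.
    for batch in (10, 4):
        x_train = np.zeros([batch, 1, img_size, img_size, num_channels], dtype=np.float32)
        out = sess.run(encoder_1_outputs, feed_dict={x: x_train})
        print(out.shape)  # (batch, 1, 64, 64, 16)

Omitting initial_state entirely and only passing dtype=tf.float32 to tf.nn.dynamic_rnn should have the same effect, since dynamic_rnn then builds a zero state using the batch size of the input it receives.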