tensorflow, image segmentation convnet InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600
I am trying to segment images from the BRATS challenge, using a U-Net assembled from a combination of these two repositories:
https://github.com/zsdonghao/u-net-brain-tumor
https://github.com/jakeret/tf_unet
When I try to output the prediction statistics, I get a shape-mismatch error:
InvalidArgumentError: Input to reshape is a tensor with 28800000
values, but the requested shape has 57600 [[Node: Reshape_2 =
Reshape[T=DT_FLOAT, Tshape=DT_INT32,
_device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Cast_0_0, Reshape_2/shape)]]
I am working with 240x240 image slices and batch_verification_size = 500.
The debug prints show:
- this is shape test_x: (500, 240, 240, 1)
- this is shape test_y: (500, 240, 240, 1)
- this is shape test x: (500, 240, 240, 1)
- this is shape test y: (500, 240, 240, 1)
- this is shape batch x: (500, 240, 240, 1)
- this is shape batch y: (500, 240, 240, 1)
- this is shape prediction: (500, 240, 240, 1)
- this is cost : Tensor("add_88:0", shape=(), dtype=float32)
- this is cost : Tensor("Mean_2:0", shape=(), dtype=float32)
- this is shape prediction: (?, ?, ?, 1)
- this is shape batch x: (500, 240, 240, 1)
- this is shape batch y: (500, 240, 240, 1)
240 x 240 x 500 = 28800000
so the input size makes sense to me, but I don't understand why a shape of 57600 is being requested.
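For reference, a quick check of the arithmetic (plain Python, just to make the numbers concrete):

values_fed       = 500 * 240 * 240   # 28800000, the size of the tensor I feed in
values_requested = 240 * 240         # 57600, the shape the reshape asks for
print(values_fed // values_requested)  # 500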
The error seems to come from the output_minibatch_stats function:
summary_str, loss, acc, predictions = sess.run([self.summary_op,
                                                self.net.cost,
                                                self.net.accuracy,
                                                self.net.predicter],
                                               feed_dict={self.net.x: batch_x,
                                                          self.net.y: batch_y,
                                                          self.net.keep_prob: 1.})
So something goes wrong inside that sess.run call. Below is the code around where the error occurs. Does anyone know what is going on?
def store_prediction(self, sess, batch_x, batch_y, name):
    print('track 1')
    prediction = sess.run(self.net.predicter, feed_dict={self.net.x: batch_x,
                                                         self.net.y: batch_y,
                                                         self.net.keep_prob: 1.})
    print('track 2')
    pred_shape = prediction.shape
    loss = sess.run(self.net.cost, feed_dict={self.net.x: batch_x,
                                              self.net.y: batch_y,
                                              self.net.keep_prob: 1.})
    print('track 3')
    logging.info("Verification error= {:.1f}%, loss= {:.4f}".format(error_rate(prediction,
                                                                               util.crop_to_shape(batch_y,
                                                                                                  prediction.shape)),
                                                                    loss))
    print('track 4')
    print('this is shape batch x: ' + str(batch_x.shape))
    print('this is shape batch y: ' + str(batch_y.shape))
    print('this is shape prediction: ' + str(prediction.shape))
    #img = util.combine_img_prediction(batch_x, batch_y, prediction)
    print('track 5')
    #util.save_image(img, "%s/%s.jpg"%(self.prediction_path, name))
    return pred_shape
def output_epoch_stats(self, epoch, total_loss, training_iters, lr):
    logging.info("Epoch {:}, Average loss: {:.4f}, learning rate: {:.4f}".format(epoch, (total_loss / training_iters), lr))
def output_minibatch_stats(self, sess, summary_writer, step, batch_x, batch_y):
    print('this is shape cost : ' + str(self.net.cost.shape))
    print('this is cost : ' + str(self.net.cost))
    print('this is acc : ' + str(self.net.accuracy.shape))
    print('this is cost : ' + str(self.net.accuracy))
    print('this is shape prediction: ' + str(self.net.predicter.shape))
    print('this is shape batch x: ' + str(batch_x.shape))
    print('this is shape batch y: ' + str(batch_y.shape))
    # Calculate batch loss and accuracy
    summary_str, loss, acc, predictions = sess.run([self.summary_op,
                                                    self.net.cost,
                                                    self.net.accuracy,
                                                    self.net.predicter],
                                                   feed_dict={self.net.x: batch_x,
                                                              self.net.y: batch_y,
                                                              self.net.keep_prob: 1.})
    print('track 6')
    summary_writer.add_summary(summary_str, step)
    print('track 7')
    summary_writer.flush()
    logging.info("Iter {:}, Minibatch Loss= {:.4f}, Training Accuracy= {:.4f}, Minibatch error= {:.1f}%".format(step,
                                                                                                                loss,
                                                                                                                acc,
                                                                                                                error_rate(predictions, batch_y)))
    print('track 8')
You set the batch size to 1 in your TensorFlow pipeline during training, but you are feeding a batch of 500 at test time. That is why the network only expects a tensor of 57600 values (a single 240x240 slice).
You can either set the training batch size to 500, or feed the test data with a batch size of 1.
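A minimal sketch of the second option: run the verification data through the graph in chunks that match the batch size it was built with. The names train_batch_size, test_x, test_y, trainer, summary_writer and step are assumptions here, not code from the repositories:

# Feed the 500 verification slices in chunks of the training batch size,
# instead of all 500 at once.
train_batch_size = 1                     # batch size the graph was built/trained with
num_slices = test_x.shape[0]             # 500 slices, each (240, 240, 1)

for start in range(0, num_slices, train_batch_size):
    end = start + train_batch_size
    batch_x = test_x[start:end]          # shape (train_batch_size, 240, 240, 1)
    batch_y = test_y[start:end]
    # Each call now feeds 240*240*train_batch_size values, so the internal
    # reshape no longer receives 28800000 values when it expects 57600.
    trainer.output_minibatch_stats(sess, summary_writer, step, batch_x, batch_y)

Alternatively, if the mismatch comes from a tf.reshape with a hard-coded batch dimension somewhere in the cost/accuracy code, replacing that dimension with -1 (for example tf.reshape(t, [-1, 240 * 240])) lets the same graph accept any batch size; which fix is cleaner depends on how the two repositories build the graph.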