TensorFlow mixes up images and labels when making a batch

I have been stuck on this problem for a few weeks now. I want to create a batch of images from a list of image filenames. I insert the filename list into a queue and use a reader to get the file. The reader then returns the filename and the read image file.

My problem is that when I make a batch out of the decoded jpgs and the labels from the reader, tf.train.shuffle_batch() mixes up the images and the filenames, so the labels are now in the wrong order relative to the image files. Is there something I am doing wrong with the queue/shuffle_batch? How can I fix it so that the batch yields the correct labels for the correct files?

Thanks a lot!

import tensorflow as tf
from tensorflow.python.framework import ops


def preprocess_image_tensor(image_tf):
  image = tf.image.convert_image_dtype(image_tf, dtype=tf.float32)
  image = tf.image.resize_image_with_crop_or_pad(image, 300, 300)
  image = tf.image.per_image_standardization(image)
  return image

# original image names and labels
image_paths = ["image_0.jpg", "image_1.jpg", "image_2.jpg", "image_3.jpg", "image_4.jpg", "image_5.jpg", "image_6.jpg", "image_7.jpg", "image_8.jpg"]

labels = [0, 1, 2, 3, 4, 5, 6, 7, 8]

# converting arrays to tensors
image_paths_tf = ops.convert_to_tensor(image_paths, dtype=tf.string, name="image_paths_tf")
labels_tf = ops.convert_to_tensor(labels, dtype=tf.int32, name="labels_tf")

# getting tensor slices
image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=False)

# getting image tensors from jpeg and performing preprocessing
image_buffer_tf = tf.read_file(image_path_tf, name="image_buffer")
image_tf = tf.image.decode_jpeg(image_buffer_tf, channels=3, name="image")
image_tf = preprocess_image_tensor(image_tf)

# creating a batch of images and labels
batch_size = 5
num_threads = 4
images_batch_tf, labels_batch_tf = tf.train.batch([image_tf, label_tf], batch_size=batch_size, num_threads=num_threads)

# running testing session to check order of images and labels 
init = tf.global_variables_initializer()
with tf.Session() as sess:
  sess.run(init)

  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(coord=coord)

  print image_path_tf.eval()
  print label_tf.eval()

  coord.request_stop()
  coord.join(threads)

Based on your code I'm not sure how your labels are encoded/extracted from the jpeg images. I used to encode everything in the same file, but have since found a much more elegant solution. Assuming you can get a list of filenames, image_paths, and a numpy array of labels, labels, you can bind them together and operate on individual examples with tf.train.slice_input_producer, then batch them together using tf.train.batch.

import tensorflow as tf
from tensorflow.python.framework import ops

shuffle = True
batch_size = 128
num_threads = 8

def get_data():
    """
    Return image_paths, labels such that label[i] corresponds to image_paths[i].

    image_paths: list of strings
    labels: list/np array of labels
    """
    raise NotImplementedError()

def preprocess_image_tensor(image_tf):
    """Preprocess a single image."""
    image = tf.image.convert_image_dtype(image_tf, dtype=tf.float32)
    image = tf.image.resize_image_with_crop_or_pad(image, 300, 300)
    image = tf.image.per_image_standardization(image)
    return image

image_paths, labels = get_data()

image_paths_tf = ops.convert_to_tensor(image_paths, dtype=tf.string, name='image_paths')
labels_tf = ops.convert_to_tensor(labels, dtype=tf.int32, name='labels')
image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=shuffle)

# preprocess single image paths
image_buffer_tf = tf.read_file(image_path_tf, name='image_buffer')
image_tf = tf.image.decode_jpeg(image_buffer_tf, channels=3, name='image')
image_tf = preprocess_image_tensor(image_tf)

# batch the results
image_batch_tf, labels_batch_tf = tf.train.batch([image_tf, label_tf], batch_size=batch_size, num_threads=num_threads)

Wait a minute... isn't your tf usage a little strange?

You are essentially running the graph twice by calling .eval() twice; each call dequeues a fresh example from the queue, so the path and the label you print come from different slices:

  print image_path_tf.eval()
  print label_tf.eval()

And since you only ask for image_path_tf and label_tf, anything below this line is not even run:

image_path_tf, label_tf = tf.train.slice_input_producer([image_paths_tf, labels_tf], shuffle=False)

Maybe try this instead?

image_paths, labels = sess.run([images_batch_tf, labels_batch_tf])
print(image_paths)
print(labels)
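The root cause can be illustrated without TensorFlow at all: each separate fetch dequeues a fresh element, while a single combined fetch takes both values from the same element. A minimal sketch, using a plain queue.Queue as a stand-in for the producer's internal queue (the filenames and labels here are just the illustrative ones from the question):

```python
from queue import Queue

# A queue of (path, label) pairs, standing in for the slice_input_producer queue.
q = Queue()
for pair in [("image_0.jpg", 0), ("image_1.jpg", 1), ("image_2.jpg", 2)]:
    q.put(pair)

# Two separate fetches (like two separate .eval() calls) each dequeue a
# fresh pair, so the path and the label come from different examples.
path, _ = q.get()      # ("image_0.jpg", 0)
_, label = q.get()     # ("image_1.jpg", 1)
print(path, label)     # image_0.jpg 1  -- mismatched

# One combined fetch (like sess.run([images_batch_tf, labels_batch_tf]))
# takes the path and the label from the same pair.
path, label = q.get()
print(path, label)     # image_2.jpg 2  -- matched
```

This is why fetching both tensors in a single sess.run keeps the image/label pairing intact, while evaluating them one at a time desynchronizes them.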