How to display custom images in TensorBoard using Keras?

I'm working on a segmentation problem in Keras, and I would like to display the segmentation results at the end of every training epoch.

I want to do something similar to that approach, but using Keras. I know that Keras has the TensorBoard callback, but it seems limited for this purpose.

I know this would break the Keras backend abstraction, but I'm interested in using the TensorFlow backend anyway.

Is it possible to accomplish this with Keras + TensorFlow?

So, the following solution works well for me:

import keras
import skimage.util
import tensorflow as tf
from skimage import data

def make_image(tensor):
    """
    Convert an numpy representation image to Image protobuf.
    Copied from https://github.com/lanpa/tensorboard-pytorch/
    """
    from PIL import Image
    height, width, channel = tensor.shape
    image = Image.fromarray(tensor)
    import io
    output = io.BytesIO()
    image.save(output, format='PNG')
    image_string = output.getvalue()
    output.close()
    return tf.Summary.Image(height=height,
                            width=width,
                            colorspace=channel,
                            encoded_image_string=image_string)

class TensorBoardImage(keras.callbacks.Callback):
    def __init__(self, tag):
        super().__init__() 
        self.tag = tag

    def on_epoch_end(self, epoch, logs={}):
        # Load image
        img = data.astronaut()
        # Do something to the image
        img = (255 * skimage.util.random_noise(img)).astype('uint8')

        image = make_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        return

tbi_callback = TensorBoardImage('Image Example')

Just pass the callback to fit or fit_generator.

Note that you can also do some operations with the model inside the callback. For example, you could run the model on some images to check its performance.
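
For concreteness, here is a minimal usage sketch; the tiny model and random data below are placeholders of my own, not part of the original answer:

import numpy as np
import keras

# Placeholder model and data, only to show how the callback is wired in.
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer='sgd', loss='mse')

x_train = np.random.rand(32, 8).astype('float32')
y_train = np.random.rand(32, 1).astype('float32')

# The TensorBoardImage callback defined above writes an image summary each epoch.
model.fit(x_train, y_train,
          epochs=2,
          callbacks=[TensorBoardImage('Image Example')])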

Along the same lines, you may want to try tf-matplotlib. Here is a scatter plot:

import tensorflow as tf
import numpy as np

import tfmpl

@tfmpl.figure_tensor
def draw_scatter(scaled, colors): 
    '''Draw scatter plots. One for each color.'''  
    figs = tfmpl.create_figures(len(colors), figsize=(4,4))
    for idx, f in enumerate(figs):
        ax = f.add_subplot(111)
        ax.axis('off')
        ax.scatter(scaled[:, 0], scaled[:, 1], c=colors[idx])
        f.tight_layout()

    return figs

with tf.Session(graph=tf.Graph()) as sess:

    # A point cloud that can be scaled by the user
    points = tf.constant(
        np.random.normal(loc=0.0, scale=1.0, size=(100, 2)).astype(np.float32)
    )
    scale = tf.placeholder(tf.float32)        
    scaled = points*scale

    # Note, `scaled` above is a tensor. Its being passed `draw_scatter` below. 
    # However, when `draw_scatter` is invoked, the tensor will be evaluated and a
    # numpy array representing its content is provided.   
    image_tensor = draw_scatter(scaled, ['r', 'g'])
    image_summary = tf.summary.image('scatter', image_tensor)      
    all_summaries = tf.summary.merge_all() 

    writer = tf.summary.FileWriter('log', sess.graph)
    summary = sess.run(all_summaries, feed_dict={scale: 2.})
    writer.add_summary(summary, global_step=0)

which, when executed, produces the following plot inside TensorBoard:

Note that tf-matplotlib takes care of evaluating any tensor inputs, avoids pyplot threading issues, and supports blitting for runtime-critical plotting.

I believe I found a better way to log such custom images to TensorBoard using tf-matplotlib. Here is how...

class TensorBoardDTW(tf.keras.callbacks.TensorBoard):
    def __init__(self, **kwargs):
        super(TensorBoardDTW, self).__init__(**kwargs)
        self.dtw_image_summary = None

    def _make_histogram_ops(self, model):
        super(TensorBoardDTW, self)._make_histogram_ops(model)
        tf.summary.image('dtw-cost', create_dtw_image(model.output))

One only needs to override the _make_histogram_ops method of the TensorBoard callback class to add the custom summaries. In my case, create_dtw_image is a function that creates an image using tf-matplotlib.
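
The answer does not show create_dtw_image itself. As a rough sketch of what it could look like (my assumption, not the author's code), it might render the model output with tf-matplotlib, assuming that output is a batch of 2-D cost matrices:

import tfmpl

@tfmpl.figure_tensor
def create_dtw_image(cost_tensor):
    '''Draw the DTW cost matrix of the first sample in the batch as a heatmap.'''
    # By the time this runs, tf-matplotlib has evaluated cost_tensor to a numpy array.
    figs = tfmpl.create_figures(1, figsize=(4, 4))
    ax = figs[0].add_subplot(111)
    ax.imshow(cost_tensor[0], cmap='viridis')
    ax.set_title('dtw-cost')
    figs[0].tight_layout()
    return figs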

Regards.

Below is an example of how to draw landmarks on an image:

import numpy as np
import tensorflow as tf
import keras

class CustomCallback(keras.callbacks.Callback):
    def __init__(self, model, generator):
        super().__init__()
        self.generator = generator
        self.model = model

    def tf_summary_image(self, tensor):
        import io
        from PIL import Image

        tensor = tensor.astype(np.uint8)

        height, width, channel = tensor.shape
        image = Image.fromarray(tensor)
        output = io.BytesIO()
        image.save(output, format='PNG')
        image_string = output.getvalue()
        output.close()
        return tf.Summary.Image(height=height,
                                width=width,
                                colorspace=channel,
                                encoded_image_string=image_string)

    def on_epoch_end(self, epoch, logs={}):
        frames_arr, landmarks = next(self.generator)

        # Take just 1st sample from batch
        frames_arr = frames_arr[0:1,...]

        y_pred = self.model.predict(frames_arr)

        # Get last frame for which we have done predictions
        img = frames_arr[0,-1,:,:]

        img = img * 255
        img = img[:, :, ::-1]
        img = np.copy(img)

        landmarks_gt = landmarks[-1].reshape(-1,2)
        landmarks_pred = y_pred.reshape(-1,2)

        img = draw_landmarks(img, landmarks_gt, (0,255,0))
        img = draw_landmarks(img, landmarks_pred, (0,0,255))

        image = self.tf_summary_image(img)
        summary = tf.Summary(value=[tf.Summary.Value(tag='landmarks', image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()
        return
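
The draw_landmarks helper is not included in the answer; a plausible OpenCV-based version (my guess at its intent, not the author's code) could be:

import cv2
import numpy as np

def draw_landmarks(img, landmarks, color, radius=2):
    '''Hypothetical helper: draw landmark (x, y) points onto a uint8 image as filled circles.'''
    img = img.astype(np.uint8).copy()
    for x, y in landmarks:
        cv2.circle(img, (int(round(x)), int(round(y))), radius, color, thickness=-1)
    return img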

Based on the answers above and my own searching, I provide the following code to accomplish the following things using TensorBoard in Keras:


  • Problem setup: predicting the disparity map in binocular stereo matching;
  • The model takes the left input image x and the ground-truth disparity map gt;
  • Display the input x and the ground truth 'gt' at some iteration;
  • Display the output y of your model at some iteration.

  1. First, you have to write your own callback by subclassing Callback. Note that a callback has access to its associated model through the class property self.model. Also note: you have to feed the input to the model with feed_dict if you want to get and display the output of your model.

    from keras.callbacks import Callback
    import numpy as np
    from keras import backend as K
    import tensorflow as tf
    import cv2
    
    # make the 1-channel input image or disparity map look good within this color map.
    # This function is not necessary for the TensorBoard problem shown above; it is just a helper used in my own research project.
    def colormap_jet(img):
        return cv2.cvtColor(cv2.applyColorMap(np.uint8(img), 2), cv2.COLOR_BGR2RGB)
    
    class customModelCheckpoint(Callback):
        def __init__(self, log_dir='./logs/tmp/', feed_inputs_display=None):
              super(customModelCheckpoint, self).__init__()
              self.seen = 0
              self.feed_inputs_display = feed_inputs_display
              self.writer = tf.summary.FileWriter(log_dir)
    
        # this function will return the feeding data for TensorBoard visualization;
        # arguments:
        #  * feed_inputs_display : [(input_yourModelNeed, disparity_gt, left_image), ...], i.e., a list of tuples of
        #    Numpy arrays: what your model needs as input and what you want to display using TensorBoard.
        #    Note: you have to feed the input to the model with feed_dict if you want to get and display the output of your model.
        def custom_set_feed_input_to_display(self, feed_inputs_display):
              self.feed_inputs_display = feed_inputs_display
    
        # copied from the above answers;
        def make_image(self, numpy_img):
              from PIL import Image
              height, width, channel = numpy_img.shape
              image = Image.fromarray(numpy_img)
              import io
              output = io.BytesIO()
              image.save(output, format='PNG')
              image_string = output.getvalue()
              output.close()
              return tf.Summary.Image(height=height, width=width, colorspace= channel, encoded_image_string=image_string)
    
    
        # A callback has access to its associated model through the class property self.model.
        def on_batch_end(self, batch, logs = None):
              logs = logs or {}
              self.seen += 1
              if self.seen % 200 == 0: # every 200 iterations or batches, plot the custom images using TensorBoard;
                  summary_str = []
                  for i in range(len(self.feed_inputs_display)):
                      feature, disp_gt, imgl = self.feed_inputs_display[i]
                      disp_pred = np.squeeze(K.get_session().run(self.model.output, feed_dict = {self.model.input : feature}), axis = 0)
                      #disp_pred = np.squeeze(self.model.predict_on_batch(feature), axis = 0)
                      summary_str.append(tf.Summary.Value(tag= 'plot/img0/{}'.format(i), image= self.make_image( colormap_jet(imgl)))) # function colormap_jet(), defined above;
                      summary_str.append(tf.Summary.Value(tag= 'plot/disp_gt/{}'.format(i), image= self.make_image( colormap_jet(disp_gt))))
                      summary_str.append(tf.Summary.Value(tag= 'plot/disp/{}'.format(i), image= self.make_image( colormap_jet(disp_pred))))
    
                  self.writer.add_summary(tf.Summary(value = summary_str), global_step =self.seen)
    
  2. Next, pass this callback object to fit_generator() for your model, such as:

    feed_inputs_4_display = some_function_you_wrote()
    callback_mc = customModelCheckpoint(log_dir=log_save_path, feed_inputs_display=feed_inputs_4_display)
    # or 
    callback_mc.custom_set_feed_input_to_display(feed_inputs_4_display)
    yourModel.fit_generator(..., callbacks=[callback_mc])
    ...
  3. Now you can run the code, then go to the TensorBoard host to see the custom image display. For example, this is what I got using the code above:


    Done! Enjoy!

I was trying to display matplotlib plots in TensorBoard (useful for plotting statistics, heatmaps, etc.). It can be used for the general case as well.

import tensorflow as tf
import keras
from keras import backend as K
import tfmpl

class AttentionLogger(keras.callbacks.Callback):
        def __init__(self, val_data, logsdir):
            super(AttentionLogger, self).__init__()
            self.logsdir = logsdir  # where the event files will be written 
            self.validation_data = val_data # validation data generator
            self.writer = tf.summary.FileWriter(self.logsdir)  # creating the summary writer

        @tfmpl.figure_tensor
        def attention_matplotlib(self, gen_images): 
            '''
            Creates a matplotlib figure and writes it to tensorboard using tf-matplotlib
            gen_images: The image tensor of shape (batchsize,width,height,channels) you want to write to tensorboard
            '''  
            r, c = 5,5  # want to write 25 images as a 5x5 matplotlib subplot in TBD (tensorboard)
            figs = tfmpl.create_figures(1, figsize=(15,15))
            cnt = 0
            for idx, f in enumerate(figs):
                for i in range(r):
                    for j in range(c):    
                        ax = f.add_subplot(r,c,cnt+1)
                        ax.set_yticklabels([])
                        ax.set_xticklabels([])
                        ax.imshow(gen_images[cnt])  # writes the image at index cnt to the 5x5 grid
                        cnt+=1
                f.tight_layout()
            return figs

        def on_train_begin(self, logs=None):  # when the training begins (run only once)
                image_summary = [] # creating a list of summaries needed (can be scalar, images, histograms etc)
                for index in range(len(self.model.output)):  # self.model is accessible within callback
                    img_sum = tf.summary.image('img{}'.format(index), self.attention_matplotlib(self.model.output[index]))                    
                    image_summary.append(img_sum)
                self.total_summary = tf.summary.merge(image_summary)

        def on_epoch_end(self, epoch, logs = None):   # at the end of each epoch run this
            logs = logs or {} 
            x,y = next(self.validation_data)  # get data from the generator
            # get the backend session and run the merged summary with the appropriate feed_dict
            sess_run_summary = K.get_session().run(self.total_summary, feed_dict = {self.model.input: x['encoder_input']})
            self.writer.add_summary(sess_run_summary, global_step =epoch)  #finally write the summary!

Then you have to pass it as an argument to fit/fit_generator:

#val_generator is the validation data generator
callback_image = AttentionLogger(logsdir='./tensorboard', val_data=val_generator)
... # define the model and generators

# autoencoder is the model; note how the callback is supplied to fit_generator
autoencoder.fit_generator(generator=train_generator,
                    validation_data=val_generator,
                    callbacks=[callback_image])

This is the output in my case, where I display attention maps (as heatmaps) in TensorBoard.

import pickle

import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.callbacks import Callback, TensorBoard

class customModelCheckpoint(Callback):
    def __init__(self, log_dir='../logs/', feed_inputs_display=None):
        super(customModelCheckpoint, self).__init__()
        self.seen = 0
        self.feed_inputs_display = feed_inputs_display
        self.writer = tf.summary.FileWriter(log_dir)

    def custom_set_feed_input_to_display(self, feed_inputs_display):
        self.feed_inputs_display = feed_inputs_display

    # A callback has access to its associated model through the class property self.model.
    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.seen += 1
        if self.seen % 8 == 0:  # every 8 iterations or batches, plot the custom images using TensorBoard;
            summary_str = []
            feature = self.feed_inputs_display[0][0]
            disp_gt = self.feed_inputs_display[0][1]
            disp_pred = self.model.predict_on_batch(feature)

            summary_str.append(tf.summary.image('disp_input/{}'.format(self.seen), feature, max_outputs=4))
            summary_str.append(tf.summary.image('disp_gt/{}'.format(self.seen), disp_gt, max_outputs=4))
            summary_str.append(tf.summary.image('disp_pred/{}'.format(self.seen), disp_pred, max_outputs=4))

            summary_st = tf.summary.merge(summary_str)
            summary_s = K.get_session().run(summary_st)
            self.writer.add_summary(summary_s, global_step=self.seen)
            self.writer.flush()

Then you can call your custom callback and write the images to TensorBoard:

callback_mc = customModelCheckpoint(log_dir='../logs/',  feed_inputs_display=[(a, b)])
callback_tb = TensorBoard(log_dir='../logs/', histogram_freq=0, write_graph=True, write_images=True)
callback = []
def data_gen(fr1, fr2):
    while True:
        hdr_arr = []
        ldr_arr = []
        for i in range(args['batch_size']):
            try:
                ldr = pickle.load(fr2)
                hdr = pickle.load(fr1)
            except EOFError:
                fr1 = open(args['data_h_hdr'], 'rb')
                fr2 = open(args['data_h_ldr'], 'rb')
            hdr_arr.append(hdr)
            ldr_arr.append(ldr)
        hdr_h = np.array(hdr_arr)
        ldr_h = np.array(ldr_arr)
        gen = aug.flow(hdr_h, ldr_h, batch_size=args['batch_size'])
        out = gen.next()
        a = out[0]
        b = out[1]
        callback_mc.custom_set_feed_input_to_display(feed_inputs_display=[(a, b)])
        yield [a, b]

callback.append(callback_tb)
callback.append(callback_mc)
H = model.fit_generator(data_gen(fr1, fr2), steps_per_epoch=100,   epochs=args['epoch'], callbacks=callback)


The existing answers here and elsewhere were an excellent starting point, but I found they needed some tweaking to work with TensorFlow 2.x and keras flow_from_directory*. Here is what I came up with.

My purpose was to verify the data augmentation process, so the images I write to TensorBoard are the augmented training data. That isn't quite what the OP wanted; they would have to change on_batch_end to on_epoch_end and access the model's outputs (something I haven't looked into, but I'm sure it is possible; a rough sketch of that adaptation follows).
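
As a hedged sketch of that adaptation (the class name and the sample_inputs argument are mine, not from any answer here), an epoch-end logger for model predictions in TF 2.x could look roughly like this:

import tensorflow as tf

class PredictionImageLogger(tf.keras.callbacks.Callback):
    '''Hypothetical epoch-end variant: log the model's outputs (e.g. predicted
    segmentation maps) for a fixed batch of inputs to TensorBoard.'''

    def __init__(self, logdir, sample_inputs):
        super().__init__()
        self.sample_inputs = sample_inputs  # a small, fixed batch of model inputs
        self.file_writer = tf.summary.create_file_writer(logdir)

    def on_epoch_end(self, epoch, logs=None):
        preds = self.model.predict(self.sample_inputs)
        # rescale to [0, 1] so tf.summary.image renders the maps sensibly
        preds = (preds - preds.min()) / (preds.max() - preds.min() + 1e-8)
        with self.file_writer.as_default():
            tf.summary.image('predictions', preds, step=epoch, max_outputs=4)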

You will then be able to scroll through the epochs by dragging the orange slider in TensorBoard, showing a different augmented copy of each image that has been written. Be careful with large datasets trained over many epochs: since this routine saves a copy of every 1000th image each epoch, you can end up with a very large tfevents file.

The callback, saved as tensorflow_image_callback.py:

import tensorflow as tf
import math

class TensorBoardImage(tf.keras.callbacks.Callback):

    def __init__(self, logdir, train, validation=None):
        super(TensorBoardImage, self).__init__()
        self.logdir = logdir
        self.train = train
        self.validation = validation
        self.file_writer = tf.summary.create_file_writer(logdir)

    def on_batch_end(self, batch, logs):
        images_or_labels = 0 #0=images, 1=labels
        imgs = self.train[batch][images_or_labels]

        #calculate epoch
        n_batches_per_epoch = self.train.samples / self.train.batch_size
        epoch = math.floor(self.train.total_batches_seen / n_batches_per_epoch)

        #since the training data is shuffled each epoch, we need to use the index_array to find something which uniquely 
        #identifies the image and is constant throughout training
        first_index_in_batch = batch * self.train.batch_size
        last_index_in_batch = first_index_in_batch + self.train.batch_size
        last_index_in_batch = min(last_index_in_batch, len(self.train.index_array))
        img_indices = self.train.index_array[first_index_in_batch : last_index_in_batch]

        #convert float to uint8, shift range to 0-255
        imgs -= tf.reduce_min(imgs)
        imgs *= 255 / tf.reduce_max(imgs)
        imgs = tf.cast(imgs, tf.uint8)

        with self.file_writer.as_default():
            for ix,img in enumerate(imgs):
                img_tensor = tf.expand_dims(img, 0) #tf.summary needs a 4D tensor
                #only post 1 out of every 1000 images to tensorboard
                if (img_indices[ix] % 1000) == 0:
                    #instead of img_filename, I could just use str(img_indices[ix]) as a unique identifier
                    #but this way makes it easier to find the unaugmented image
                    img_filename = self.train.filenames[img_indices[ix]]
                    tf.summary.image(img_filename, img_tensor, step=epoch)

Integrate it with your training like this:

from tensorflow import keras

import tensorflow_image_callback

train_augmentation = keras.preprocessing.image.ImageDataGenerator(rotation_range=20,
                                                                    shear_range=10,
                                                                    zoom_range=0.2,
                                                                    width_shift_range=0.2,
                                                                    height_shift_range=0.2,
                                                                    brightness_range=[0.8, 1.2],
                                                                    horizontal_flip=False,
                                                                    vertical_flip=False
                                                                    )
train_data_generator = train_augmentation.flow_from_directory(directory='/some/path/train/',
                                                                class_mode='categorical',
                                                                batch_size=batch_size,
                                                                shuffle=True
                                                                )

valid_augmentation = keras.preprocessing.image.ImageDataGenerator()
valid_data_generator = valid_augmentation.flow_from_directory(directory='/some/path/valid/',
                                                                class_mode='categorical',
                                                                batch_size=batch_size,
                                                                shuffle=False
                                                                )
tensorboard_log_dir = '/some/path'
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=tensorboard_log_dir, update_freq='batch')
tensorboard_image_callback = tensorflow_image_callback.TensorBoardImage(logdir=tensorboard_log_dir, train=train_data_generator, validation=valid_data_generator)

model.fit(x=train_data_generator,
        epochs=n_epochs,
        validation_data=valid_data_generator, 
        validation_freq=1,
        callbacks=[
                    tensorboard_callback,
                    tensorboard_image_callback
                    ])

*I later realized that flow_from_directory has a save_to_dir option, which would have been sufficient for my purposes. Simply adding that option would be much simpler, but using a callback like this has the added features of displaying the images in TensorBoard, where multiple versions of the same image can be compared, and it allows the number of saved images to be customized. save_to_dir saves a copy of every single augmented image, which quickly adds up to a lot of space.
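
For reference, a minimal sketch of that save_to_dir option; the paths, prefix, and augmentation settings below are placeholders:

from tensorflow import keras

augmenter = keras.preprocessing.image.ImageDataGenerator(rotation_range=20)
preview_generator = augmenter.flow_from_directory(
    directory='/some/path/train/',
    class_mode='categorical',
    batch_size=32,
    shuffle=True,
    save_to_dir='/some/path/augmented/',  # directory must already exist
    save_prefix='aug',
    save_format='png')
next(preview_generator)  # drawing a batch writes the augmented copies to disk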