tensorflow version of Torch nn.DepthConcat

Torch has a function, nn.DepthConcat, which is similar to nn.Concat except that it zero-pads all the non-channel dimensions to the same size before concatenating. I have been trying to get this working in TensorFlow with no luck. It seems to work if I know the sizes of all the tensors at graph construction time:

    # Note: TensorFlow 0.x API (tf.pack was renamed tf.stack, and
    # tf.concat's argument order was reversed, in TF 1.0).
    def depthconcat(inputs):
        concat_dim = 3
        shapes = []
        for input_ in inputs:
            shapes.append(input_.get_shape())
        shape_tensor = tf.pack(shapes)
        max_dims = tf.reduce_max(shape_tensor, 0)

        padded_inputs = []
        for input_ in inputs:
            paddings = max_dims - input_.get_shape()
            padded_inputs.append(tf.pad(input_, paddings))
        return tf.concat(concat_dim, padded_inputs)

However, if the shapes are determined at run time, I get the following error:

    Tensors in list passed to 'values' of 'Pack' Op have types [<NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>] that don't all match.

It seems it is able to convert a TensorShape object into a tensor if it is fully defined at graph construction time. Any suggestions? Thanks.

Edit: Changing from input_.get_shape() to tf.shape(input_) solved the problem of the shape being undefined at graph creation time. Now I get ValueError: Shape (4,) must have rank 2.
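That ValueError comes from tf.pad: its paddings argument must be an [n, 2] matrix with one (before, after) pair per dimension, not a flat length-4 vector. NumPy's np.pad uses the same per-dimension pair convention, so the required shape can be sanity-checked without TensorFlow (a sketch, not the original TF code):

```python
import numpy as np

# tf.pad expects `paddings` as an [n, 2] matrix: one (before, after)
# pair per input dimension. np.pad follows the same convention.
x = np.zeros((1, 3, 3, 2))                   # NHWC tensor, 3x3 spatial
paddings = [(0, 0), (1, 1), (1, 1), (0, 0)]  # rank-2: one pair per dim
y = np.pad(x, paddings)                      # zero padding by default
print(y.shape)  # (1, 5, 5, 2)
```

Passing a flat vector like [0, 2, 2, 0] is what triggers the rank error: there is no way to tell how much padding goes before versus after each dimension.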

I hope this helps anyone else trying to build an inception module with varying output sizes.

    def depthconcat(inputs, name=None):
        concat_dim = 3
        shapes = []
        for input_ in inputs:
            # Dynamic (run-time) shape of the first three dims (N, H, W).
            shapes.append(tf.to_float(tf.shape(input_)[:3]))
        shape_tensor = tf.pack(shapes)
        max_dims = tf.reduce_max(shape_tensor, 0)

        padded_inputs = []
        for idx, input_ in enumerate(inputs):
            # Split the size difference evenly between the two sides.
            mean_diff = (max_dims - shapes[idx]) / 2.0
            pad_low = tf.floor(mean_diff)
            pad_high = tf.ceil(mean_diff)
            # Build the [n, 2] paddings matrix that tf.pad requires.
            paddings = tf.to_int32(tf.pack([pad_low, pad_high], axis=1))
            # Append a [0, 0] row so the channel dim is left unpadded.
            paddings = tf.pad(paddings, paddings=[[0, 1], [0, 0]])
            padded_inputs.append(tf.pad(input_, paddings))

        return tf.concat(concat_dim, padded_inputs, name=name)
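The same centering/padding arithmetic can be verified in plain NumPy, with no session needed. This is a sketch of the logic above, assuming 4-D NHWC inputs; depthconcat_np is a hypothetical helper name, not part of any library:

```python
import numpy as np

def depthconcat_np(inputs):
    # Max size over the first three dims (N, H, W) across all inputs.
    shapes = np.array([x.shape[:3] for x in inputs], dtype=float)
    max_dims = shapes.max(axis=0)
    padded = []
    for x, shape in zip(inputs, shapes):
        # Split the size difference evenly; odd remainders go to the high side.
        diff = (max_dims - shape) / 2.0
        pad_low = np.floor(diff).astype(int)
        pad_high = np.ceil(diff).astype(int)
        # One (before, after) pair per dim; channel dim left unpadded.
        paddings = list(zip(pad_low, pad_high)) + [(0, 0)]
        padded.append(np.pad(x, paddings))
    return np.concatenate(padded, axis=3)

a = np.ones((1, 5, 5, 2))
b = np.ones((1, 3, 3, 4))
out = depthconcat_np([a, b])
print(out.shape)  # (1, 5, 5, 6): b is zero-padded to 5x5, channels stack
```

The smaller input ends up centered in the larger spatial extent, with its zero padding at the borders, which matches the behavior of Torch's nn.DepthConcat.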