How to vectorize the sum? tensor[i,:,:,:] + tensor[i]

I want to vectorize the following code:

def style_noise(self, y, style):
    n = torch.randn(y.shape)
    for i in range(n.shape[0]):
        n[i] = (n[i] - n.mean(dim=(1, 2, 3))[i]) * style.std(dim=(1, 2, 3))[i] \
               / n.std(dim=(1, 2, 3))[i] + style.mean(dim=(1, 2, 3))[i]
    noise = Variable(n, requires_grad=False).to(y.device)
    return noise

I haven't found a good way to do this.

y and style are 4D tensors, e.g. style.shape = y.shape = [64, 3, 128, 128]

I want to return a noise tensor with noise.shape = [64, 3, 128, 128].

If the question is unclear, please let me know in the comments.

Your use case is exactly why the .mean and .std methods have a keepdim argument. You can use it to enable broadcasting semantics and vectorize things for you:

def style_noise(self, y, style):
    n = torch.randn(y.shape)
    n_mean = n.mean(dim=(1, 2, 3), keepdim=True)
    n_std = n.std(dim=(1, 2, 3), keepdim=True)
    style_mean = style.mean(dim=(1, 2, 3), keepdim=True)
    style_std = style.std(dim=(1, 2, 3), keepdim=True)
    n = (n - n_mean) * style_std / n_std + style_mean
    noise = Variable(n, requires_grad=False).to(y.device)
    return noise
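As a quick sanity check (not part of the original answer), the keepdim version can be compared against the loop from the question; the shapes below are shrunk from [64, 3, 128, 128] so the check runs fast:

```python
import torch

torch.manual_seed(0)
style = torch.randn(4, 3, 8, 8)
n = torch.randn(4, 3, 8, 8)

# Loop version from the question
loop = n.clone()
for i in range(n.shape[0]):
    loop[i] = (n[i] - n.mean(dim=(1, 2, 3))[i]) * style.std(dim=(1, 2, 3))[i] \
              / n.std(dim=(1, 2, 3))[i] + style.mean(dim=(1, 2, 3))[i]

# Vectorized version: keepdim=True keeps singleton dims, so the per-sample
# stats have shape [4, 1, 1, 1] and broadcast against [4, 3, 8, 8]
vec = (n - n.mean(dim=(1, 2, 3), keepdim=True)) \
      * style.std(dim=(1, 2, 3), keepdim=True) \
      / n.std(dim=(1, 2, 3), keepdim=True) \
      + style.mean(dim=(1, 2, 3), keepdim=True)

print(torch.allclose(loop, vec, atol=1e-5))  # True
```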

To compute the mean and standard deviation over the whole tensor, don't pass any arguments:

m = t.mean(); print(m) # mean over the whole tensor
s = t.std(); print(s)  # std over the whole tensor

Then, if your shape is 2,2,2, create tensors for broadcasting the subtraction and division:

ss = torch.empty(2,2,2).fill_(s)
print(ss)

mm = torch.empty(2,2,2).fill_(m)
print(mm)
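A small check (my addition, not from the original answer): the filled tensors behave the same as the raw scalars, because a 0-dim tensor already broadcasts, so the explicit fill_ step is optional:

```python
import torch

torch.manual_seed(0)
t = torch.randn(2, 2, 2)
m = t.mean()  # 0-dim tensor (scalar)
s = t.std()

# Filled tensors, as in the answer above
mm = torch.empty(2, 2, 2).fill_(m)
ss = torch.empty(2, 2, 2).fill_(s)

# Both forms normalize the whole tensor identically
a = (t - mm) / ss
b = (t - m) / s
print(torch.allclose(a, b))  # True
```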

Currently, keepdim does not work properly unless you also set dim:

m = t.mean(); print(m) # for the whole tensor
s = t.std(); print(s)  # for the whole tensor

m = t.mean(dim=0); print(m) # 0 means column-wise mean
s = t.std(dim=0); print(s)  # 0 means column-wise std

m = t.mean(dim=1); print(m) # 1 means row-wise mean
s = t.std(dim=1); print(s)  # 1 means row-wise std

m = t.mean(keepdim=True); print(m) # will not work without dim
s = t.std(keepdim=True); print(s)  # will not work without dim

If you set dim as a tuple, the reduction is returned over those axes instead of over the whole tensor.
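For example (a small sketch I added to illustrate the tuple-dim case), reducing a 2,2,2 tensor over dims (1, 2) leaves one statistic per slice along dim 0:

```python
import torch

t = torch.arange(8, dtype=torch.float32).reshape(2, 2, 2)

# Reduce over dims 1 and 2: one mean per slice along dim 0
print(t.mean(dim=(1, 2)))                      # tensor([1.5000, 5.5000])
print(t.mean(dim=(1, 2)).shape)                # torch.Size([2])

# With keepdim=True the reduced dims stay as size-1 axes, ready to broadcast
print(t.mean(dim=(1, 2), keepdim=True).shape)  # torch.Size([2, 1, 1])
```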