How to add weight normalisation to PyTorch's pretrained VGG16?

I want to add weight normalisation to PyTorch's pretrained VGG-16. One possible solution I can think of is the following:

from torch.nn.utils import weight_norm as wn
import torch.nn as nn
import torchvision.models as models

class ResnetEncoder(nn.Module):
    def __init__(self):
        super(ResnetEncoder, self).__init__()
        ...
        self.encoder = models.vgg16(pretrained=True).features
        ...
    def forward(self, input_image):
        self.features = []
        x = (input_image - self.mean) / self.std  # self.mean / self.std presumably set in the elided __init__ code
        
        self.features.append(self.encoder(x))
        ...

        return self.features

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.encoder = ResnetEncoder()  # this is basically VGG16
        self.decoder = DepthDecoder(self.encoder.num_ch_enc)  # DepthDecoder defined elsewhere
        for k, m in self.encoder.encoder._modules.items():
            if isinstance(m, nn.Conv2d):
                m = wn(m)

    def forward(self, x):
        return self.decoder(self.encoder(x))

vgg_backbone_model = Net()
vgg_backbone_model.train()
...

But I am not sure whether this is the correct way to add weight normalisation to a pretrained VGG16.

You should use nn.Module.modules (or named_modules) rather than accessing the private _modules attribute.
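
To illustrate the difference (a minimal sketch with made-up layer sizes): _modules only holds a module's direct children and is a private attribute, while modules() / named_modules() recurse through the whole submodule tree, which is what you want for a nested model like Net.encoder.encoder.

import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Sequential(nn.Conv2d(8, 8, 3)))

# _modules: only the direct children ('0' and '1'); the nested conv is not visited
print(list(net._modules.keys()))  # ['0', '1']

# named_modules(): recurses into every submodule, including the nested conv
print([name for name, m in net.named_modules() if isinstance(m, nn.Conv2d)])  # ['0', '1.0']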

Doing m = wn(m) does not update the layer's parameters on the model; it only rebinds the local variable m. Instead, you should overwrite the layer on the nn.Module itself, for example with setattr on the layer's parent module (named_modules yields dotted names such as encoder.encoder.0, so the parent has to be resolved first):

for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        # resolve the parent module so nested layers (e.g. inside the encoder's Sequential) are replaced in place
        parent_name, _, child_name = name.rpartition('.')
        parent = model.get_submodule(parent_name) if parent_name else model
        setattr(parent, child_name, weight_norm(module))
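
As a quick sanity check that the reparametrisation actually took hold, here is a minimal, self-contained sketch. It applies weight_norm to the flat VGG16 feature extractor rather than the full Net from the question (that simplification is mine), so iterating named_children is enough. Each wrapped conv layer should then expose weight_g / weight_v parameters instead of a plain weight parameter, and remove_weight_norm undoes the wrapping:

import torch
import torch.nn as nn
import torchvision.models as models
from torch.nn.utils import weight_norm, remove_weight_norm

# standalone example on the flat VGG16 feature extractor, not the full Net
features = models.vgg16(pretrained=True).features
for name, child in features.named_children():
    if isinstance(child, nn.Conv2d):
        setattr(features, name, weight_norm(child))

conv0 = features[0]
print(hasattr(conv0, 'weight_g'), hasattr(conv0, 'weight_v'))  # True True

# a forward pass still works, and the reparametrisation can be removed again
_ = features(torch.randn(1, 3, 224, 224))
remove_weight_norm(conv0)
print(hasattr(conv0, 'weight_g'))  # False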