Changing a custom ResNet-18 architecture subtly and still using it in pre-trained mode

Can I change a custom ResNet-18 architecture and still use it with pretrained=True? I am making a subtle change to the architecture of a custom resnet18, and when I run it I get the error shown below. This is how the custom resnet18 is called:

model = Resnet_18.resnet18(pretrained=True, embedding_size=args.dim_embed)

The new change in the custom resnet18:

self.layer_attend1 = nn.Sequential(nn.Conv2d(layers[0], layers[0], stride=2, padding=1, kernel_size=3),  # strided 3x3 convolution
                                   nn.AdaptiveAvgPool2d(1),   # global average pooling to 1x1
                                   nn.Softmax(1))             # softmax over the channel dimension

I am loading the checkpoint with:

checkpoint = torch.load(args.resume, encoding='latin1')
args.start_epoch = checkpoint['epoch']
best_acc = checkpoint['best_prec1']
tnet.load_state_dict(checkpoint['state_dict'])  # this is the call that raises the error below

The output of running the model is:

/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  warnings.warn("The use of the transforms.Scale transform is deprecated, " +
=> loading checkpoint 'runs/nondisjoint_l2norm/model_best.pth.tar'
Traceback (most recent call last):
  File "main.py", line 352, in <module>
    main()    
  File "main.py", line 145, in main
    tnet.load_state_dict(checkpoint['state_dict'])
  File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tripletnet:
        Missing key(s) in state_dict: "embeddingnet.embeddingnet.layer_attend1.0.weight", "embeddingnet.embeddingnet.layer_attend1.0.bias".

So, how can I make small architectural changes without having to retrain the model from scratch every time?

P.S.: Cross-posted here: https://discuss.pytorch.org/t/can-i-change-a-custom-resnet-18-architecture-subtly-and-still-use-it-in-pre-trained-true-mode/130783 Thanks a lot to Rodrigo Berriel for teaching me about https://meta.stackexchange.com/a/141824/913043

If you really want to do this, you should build the model and then call load_state_dict with the argument strict=False (https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict).
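
For example, a minimal sketch of that loading step, reusing tnet and the checkpoint from the question (the variable names come from the question; everything else is an assumption):

checkpoint = torch.load(args.resume, encoding='latin1')

# strict=False skips keys that do not match between the model and the
# checkpoint, so the newly added layer_attend1 parameters simply keep
# their freshly constructed values instead of raising a RuntimeError.
missing, unexpected = tnet.load_state_dict(checkpoint['state_dict'], strict=False)

# load_state_dict returns the mismatches, so you can check that only the
# new layers show up as missing and nothing important was dropped.
print('missing keys:', missing)
print('unexpected keys:', unexpected)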

Keep in mind that A) you should initialize any new layers you added explicitly, because they will not be initialized by the state dict, and B) the model might not work out of the box because of those uninitialized weights, but it should still train faster than a randomly initialized model.
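
As a sketch of point A, one common choice (Kaiming initialization, not something the original answer prescribes) applied to the new attention block, assuming model is the modified resnet18 instance from the question:

import torch.nn as nn

# Explicitly (re)initialize the layers the checkpoint knows nothing about.
for m in model.layer_attend1.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
        if m.bias is not None:
            nn.init.zeros_(m.bias)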