What is the correct way to update an input variable during training?

I have an input

import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Normal

inp = torch.tensor([1.0])

and a neural network

class Model_updater(nn.Module):
    def __init__(self):
        super(Model_updater, self).__init__()
        self.fc1 = nn.Linear(1, 2)
        self.fc2 = nn.Linear(2, 3)
        self.fc3 = nn.Linear(3, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net_updater = Model_updater()

opt_updater = optim.Adam(net_updater.parameters())

I'm trying to update my input using the output of the neural network:

inp = torch.tensor([1.0])
epochs = 3

for i in range(epochs):
    opt_updater.zero_grad()

    inp_copy = inp.detach().clone()

    mu, sigma = net_updater(inp_copy)
    dist1 = Normal(mu, torch.abs(sigma))
    a = dist1.rsample()

    inp += a

    loss = torch.tensor(5.0) - inp

    loss.backward(retain_graph=True)
    opt_updater.step()

But I get the error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 2]], which is output 0 of TBackward, is at version 2; expected version 1

I also tried changing the loss calculation to

loss = torch.tensor(5.0) - inp_copy

but got the error

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I also tried without retain_graph=True, but then I got

RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.

This really doesn't make sense to me, because I don't see where I'm calling backward() twice.

Most likely, this is what you want:

inp1 = inp + a        # create a separate variable for the updated value
inp.data = inp1.data  # update the value without touching the graph

loss = torch.tensor(5.0) - inp1  # use the updated value, which has a gradient
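
For context: inp += a is an in-place update that splices inp into the current autograd graph. On the next iteration, loss = torch.tensor(5.0) - inp therefore backpropagates through the previous iteration's graph as well. Without retain_graph=True that old graph has already been freed (the "backward through the graph a second time" error); with it, backward() walks over tensors that opt_updater.step() has since modified in place (the version-counter error on the [3, 2] tensor, a transposed weight saved for the linear layer's backward). Assigning through inp.data carries the value over while keeping inp out of the graph. Below is a minimal sketch of the whole loop with this fix applied; it reuses net_updater and opt_updater from above, epochs=3 is just the value from the question, and the print line is only there to watch the values evolve:

inp = torch.tensor([1.0])
epochs = 3

for i in range(epochs):
    opt_updater.zero_grad()

    mu, sigma = net_updater(inp.detach().clone())
    dist1 = Normal(mu, torch.abs(sigma))
    a = dist1.rsample()   # reparameterized sample, so gradients flow back to mu and sigma

    inp1 = inp + a        # separate variable for the updated value
    inp.data = inp1.data  # carry the value into inp without recording history

    loss = torch.tensor(5.0) - inp1  # loss depends on inp1, which has a grad_fn

    loss.backward()       # each iteration now builds its own graph: no retain_graph needed
    opt_updater.step()

    print(i, inp.item(), loss.item())  # illustrative only

Since the network input is a detached clone and inp itself never records history, every iteration constructs a fresh graph from scratch, which is why the retain_graph=True workaround (and the errors it caused) disappears.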