I think the computational graph is reconstructed on every iteration, but PyTorch still tells me "Trying to backward through the graph a second time". Why?

An image describing my question:

From my point of view, in every iteration the computational graph is constructed at the first arrow and then used and freed at the second arrow during the backward pass. So why does it tell me:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
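
For reference, this error normally fires when backward() is called a second time through the same, already-freed graph. A minimal sketch, independent of my code below, that reproduces it:

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x * x        # the multiplication saves its operands for the backward pass
y.backward()     # computes the gradient and frees the saved tensors
y.backward()     # RuntimeError: Trying to backward through the graph a second time

Passing retain_graph=True to the first backward() call would make the second one legal, at the cost of keeping the saved tensors alive.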

Here is my code:

import os
import torch
from torch import nn

def train(num_epoch=10, len_vocab=1, num_hidden=256, embedding_dim=8):
    data = get_data()
    model = MyRNN(len_vocab,num_hidden,embedding_dim)
    if os.path.exists('QingBinLi'):
        model.load_state_dict(torch.load('QingBinLi'))
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-5)
    loss_for_draw = []
    for epoch in range(num_epoch+1):

        h = torch.randn(1,1,num_hidden)
        loss_average = []
        for i in range(data.shape[-2]):
            optimizer.zero_grad()
            # I think the computational graph is constructed here
            pre,h = model(data[:,:,i,:] ,h)
            pre = pre.unsqueeze(0).unsqueeze(0)
            loss = criterion(pre, data[:,:,i+1,:])
            loss_average.append(loss)
            # I think each backward pass deletes the computational graph.
            loss.backward()
            nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)
            optimizer.step()
            print(f"finish {i+1} times")

        loss_for_draw.append(sum(loss_average)/len(loss_average))
        torch.save(model.state_dict(), 'QingBinLi')
        print(f'now epoch:{epoch}, loss = {loss_for_draw[-1]}')


    return loss_for_draw

class MyRNN(nn.Module):
    def __init__(self, len_vocab, num_hidden=256, embedding_dim=8):
        super(MyRNN, self).__init__()
        self.rnn = nn.RNN(embedding_dim, num_hidden)
        self.num_directions=1
        self.output_model = nn.Linear(num_hidden, embedding_dim)


    def forward(self, x, h):
        y, h = self.rnn(x, h)

        output = self.output_model(y.reshape((-1)))

        return output, h

So, if I'm right, it shouldn't be telling me "Trying to backward through the graph a second time"...

So, where am I wrong?

The variables h and data carry gradient history: the hidden state returned by the model still references the graph of the previous iteration, so the next backward() tries to traverse a graph that has already been freed. We therefore have to add two lines:

h = h.detach()
data = data.detach()
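
To show where they go, here is a sketch of the fixed inner loop (same variable names as above). Detaching h right after the model call cuts the history carried into the next iteration, while the current backward() still sees the graph built in this iteration:

h = torch.randn(1, 1, num_hidden)
data = data.detach()                    # the input should not carry gradient history
for i in range(data.shape[-2] - 1):     # stop one step early, since i+1 is indexed below
    optimizer.zero_grad()
    pre, h = model(data[:, :, i, :], h)
    h = h.detach()                      # cut the graph; the next iteration starts fresh
    pre = pre.unsqueeze(0).unsqueeze(0)
    loss = criterion(pre, data[:, :, i + 1, :])
    loss.backward()                     # traverses only this iteration's graph
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)
    optimizer.step()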