Debugging neural network dropout problem for the probability not lying inside [0,1]

I tried to set a dropout rate for my neural network (NN) with torch, but I ended up with a strange error. How can I solve it?

The idea is that I wrote the NN inside a function so that it is easy to call. The function is below (I personally think the problem is inside the class of the NN, but in order to have a working example I am putting everything here):

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_neural_network(data_train_X, data_train_Y, batch_size, learning_rate, graph = True, dropout = 0.0 ):
  input_size = len(data_train_X.columns)
  hidden_size = 200
  num_classes = 4
  num_epochs = 120
  batch_size = batch_size
  learning_rate = learning_rate

  # The class of the NN
  class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p = dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)

    def forward(self, x, p = dropout):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = nn.Dropout(out, p) # drop
        out = self.fc3(out)
        return out

  # Prepare data
  X_train = torch.from_numpy(data_train_X.values).float()
  Y_train = torch.from_numpy(data_train_Y.values).float()

  # Loading data
  train = torch.utils.data.TensorDataset(X_train, Y_train)
  train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size)

  net = NeuralNet(input_size, hidden_size, num_classes)

  # Loss
  criterion = nn.CrossEntropyLoss()

  # Optimiser
  optimiser = torch.optim.SGD(net.parameters(), lr=learning_rate)

  # Proper training
  total_step = len(train_loader)
  loss_values = []

  for epoch in range(num_epochs+1):
    net.train()

    train_loss = 0.0

    for i, (predictors, results) in enumerate(train_loader, 0):
      # Forward pass
      outputs = net(predictors)
      results = results.long()
      results = results.squeeze_()
      loss = criterion(outputs, results)

      # Backward and optimise
      optimiser.zero_grad()
      loss.backward()
      optimiser.step()

      # Update loss
      train_loss += loss.item()

    loss_values.append(train_loss / total_step)  # mean loss per batch for this epoch
  print('Finished Training')

  return net

When I call the function:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)

I get the following error:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/dropout.py in __init__(self, p, inplace)
      8     def __init__(self, p=0.5, inplace=False):
      9         super(_DropoutNd, self).__init__()
---> 10         if p < 0 or p > 1:
     11             raise ValueError("dropout probability has to be between 0 and 1, "
     12                              "but got {}".format(p))

RuntimeError: bool value of Tensor with more than one value is ambiguous

Do you have any idea why this error occurs?

Before introducing the dropout everything was working fine. Bonus points for you if you know how to implement a bias in my network, for example on the hidden layer; I could not find any example online.

The error comes from this line in forward: out = nn.Dropout(out, p). nn.Dropout is a module class, not a function; its constructor signature is Dropout(p=0.5, inplace=False), so you are passing the activation tensor out as the probability p (and your dropout rate as inplace). The check "p < 0 or p > 1" is then evaluated on a whole tensor, which is exactly what raises "bool value of Tensor with more than one value is ambiguous". Build the dropout module once in __init__ and call it in forward. Change the architecture to this:

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.dropout = nn.Dropout(p=p)  # built once, reused on every forward pass

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = self.dropout(F.relu(self.fc2(out)))  # drop hidden activations, not the logits
        out = self.fc3(out)
        return out
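
If you prefer the functional form, a minimal alternative sketch (assuming you also store the rate with self.p = p in __init__) replaces the module call with F.dropout. Note the training=self.training flag, which nn.Dropout otherwise handles for you, so that dropout is switched off under net.eval():

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        # functional dropout: only active while the model is in training mode
        out = F.dropout(out, p=self.p, training=self.training)
        return self.fc3(out)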

Let me know if it works.
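
As for the bonus question: nn.Linear layers carry a bias term by default (bias=True), so your hidden layers already have one. A small sketch of how to inspect or re-initialise it (the constant 0.01 is just an arbitrary example value):

    net = NeuralNet(input_size, hidden_size, num_classes)
    print(net.fc2.bias)                    # bias vector of the hidden layer
    nn.init.constant_(net.fc2.bias, 0.01)  # re-initialise it to a constant
    # to remove the bias entirely, construct the layer with nn.Linear(..., bias=False)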