ValueError: Expected input batch_size (24) to match target batch_size (8)

There are many links addressing this problem, and I have read several related Stack Overflow answers, but I can't figure it out. My image size is torch.Size([8, 3, 16, 16]). My architecture is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # fully connected layers: 16*16 -> 768 -> 64 -> 10
        self.fc1 = nn.Linear(16 * 16, 768)
        self.fc2 = nn.Linear(768, 64)
        self.fc3 = nn.Linear(64, 10)
        self.dropout = nn.Dropout(p=.5)

    def forward(self, x):
        # flatten image input
        x = x.view(-1, 16 * 16)
        # add hidden layer, with relu activation function
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = F.log_softmax(self.fc3(x), dim=1)
        
        return x

# instantiate the model
model = Net()

# specify loss function
criterion = nn.NLLLoss()

# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=.003)

# number of epochs to train the model
n_epochs = 30  # suggest training between 20-50 epochs

model.train() # prep model for training

for epoch in range(n_epochs):
    # monitor training loss
    train_loss = 0.0
    
    ###################
    # train the model #
    ###################
    for data, target in trainloader:
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item()*data.size(0)
        
    # print training statistics 
    # calculate average loss over an epoch
    train_loss = train_loss/len(trainloader.dataset)

    print('Epoch: {} \tTraining Loss: {:.6f}'.format(
        epoch+1, 
        train_loss
        ))

I am getting the value error:

ValueError: Expected input batch_size (24) to match target batch_size (8).

How do I fix it? My batch size is 8 and my input image size is (16*16). I have a 10-class classification problem here.

Your input images have 3 channels, so the input feature size is 16*16*3, not 16*16. At the moment you are treating each channel as a separate instance: after the x.view(-1, 16*16) flattening, the classifier receives an input of shape (24, 16*16). Naturally the batch sizes don't match, because it should be 8, not 8*3 = 24.
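A minimal sketch to see the shapes involved (assuming a random tensor of the posted size):

import torch

x = torch.randn(8, 3, 16, 16)       # same shape as your batch
print(x.view(-1, 16 * 16).shape)    # torch.Size([24, 256]) -- channels become separate rows
print(x.view(x.size(0), -1).shape)  # torch.Size([8, 768])  -- one row per image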

You can:

  • Switch to a CNN so the model can handle the multi-channel input (here, 3 channels); a sketch follows the list.
  • Use self.fc1 with 16*16*3 input features and flatten per image rather than per channel (see the fix after this list).
  • Or, if the input is RGB, convert it to a single-channel grayscale image.
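For the fully connected fix, a minimal sketch of the changed model (keeping the hidden sizes from your post; only fc1 and the flattening change):

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 3 channels * 16 * 16 = 768 input features
        self.fc1 = nn.Linear(16 * 16 * 3, 768)
        self.fc2 = nn.Linear(768, 64)
        self.fc3 = nn.Linear(64, 10)
        self.dropout = nn.Dropout(p=.5)

    def forward(self, x):
        # flatten per image, not per channel: (8, 3, 16, 16) -> (8, 768)
        x = x.view(x.size(0), -1)
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = F.log_softmax(self.fc3(x), dim=1)
        return x

If you prefer the CNN route, something along these lines would also keep the batch dimension intact (the layer sizes here are just an illustration, not taken from your post):

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.dropout = nn.Dropout(p=.5)
        self.fc = nn.Linear(32 * 4 * 4, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # (8, 3, 16, 16) -> (8, 16, 8, 8)
        x = self.pool(F.relu(self.conv2(x)))  # -> (8, 32, 4, 4)
        x = x.view(x.size(0), -1)             # -> (8, 512)
        x = self.dropout(x)
        return F.log_softmax(self.fc(x), dim=1)

Either way, the output shape becomes (8, 10), which matches the target batch size, and the rest of your training loop can stay as it is.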