RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
I am using a PyTorch UNet model. I feed images into the model as input, and I feed the labels in as the input image masks, transforming the dataset accordingly. I took the UNet model from elsewhere, and I am using cross-entropy loss as the loss function, but I get this dimension-out-of-range error:
RuntimeError
Traceback (most recent call last)
<ipython-input-358-fa0ef49a43ae> in <module>()
16 for epoch in range(0, num_epochs):
17 # train for one epoch
---> 18 curr_loss = train(train_loader, model, criterion, epoch, num_epochs)
19
20 # store best loss and save a model checkpoint
<ipython-input-356-1bd6c6c281fb> in train(train_loader, model, criterion, epoch, num_epochs)
16 # measure loss
17 print (outputs.size(),labels.size())
---> 18 loss = criterion(outputs, labels)
19 losses.update(loss.data[0], images.size(0))
20
/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
323 for hook in self._forward_pre_hooks.values():
324 hook(self, input)
--> 325 result = self.forward(*input, **kwargs)
326 for hook in self._forward_hooks.values():
327 hook_result = hook(self, input, result)
<ipython-input-355-db66abcdb074> in forward(self, logits, targets)
9 probs_flat = probs.view(-1)
10 targets_flat = targets.view(-1)
---> 11 return self.crossEntropy_loss(probs_flat, targets_flat)
/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
323 for hook in self._forward_pre_hooks.values():
324 hook(self, input)
--> 325 result = self.forward(*input, **kwargs)
326 for hook in self._forward_hooks.values():
327 hook_result = hook(self, input, result)
/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
599 _assert_no_grad(target)
600 return F.cross_entropy(input, target, self.weight, self.size_average,
--> 601 self.ignore_index, self.reduce)
602
603
/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce)
1138 >>> loss.backward()
1139 """
-> 1140 return nll_loss(log_softmax(input, 1), target, weight, size_average, ignore_index, reduce)
1141
1142
/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel)
784 if dim is None:
785 dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
--> 786 return torch._C._nn.log_softmax(input, dim)
787
788
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
Part of my code looks like this:
class crossEntropy(nn.Module):
    def __init__(self, weight = None, size_average = True):
        super(crossEntropy, self).__init__()
        self.crossEntropy_loss = nn.CrossEntropyLoss(weight, size_average)

    def forward(self, logits, targets):
        probs = F.sigmoid(logits)
        probs_flat = probs.view(-1)
        targets_flat = targets.view(-1)
        return self.crossEntropy_loss(probs_flat, targets_flat)
class UNet(nn.Module):
    def __init__(self, imsize):
        super(UNet, self).__init__()
        self.imsize = imsize

        self.activation = F.relu

        self.pool1 = nn.MaxPool2d(2)
        self.pool2 = nn.MaxPool2d(2)
        self.pool3 = nn.MaxPool2d(2)
        self.pool4 = nn.MaxPool2d(2)

        self.conv_block1_64 = UNetConvBlock(4, 64)
        self.conv_block64_128 = UNetConvBlock(64, 128)
        self.conv_block128_256 = UNetConvBlock(128, 256)
        self.conv_block256_512 = UNetConvBlock(256, 512)
        self.conv_block512_1024 = UNetConvBlock(512, 1024)

        self.up_block1024_512 = UNetUpBlock(1024, 512)
        self.up_block512_256 = UNetUpBlock(512, 256)
        self.up_block256_128 = UNetUpBlock(256, 128)
        self.up_block128_64 = UNetUpBlock(128, 64)

        self.last = nn.Conv2d(64, 2, 1)

    def forward(self, x):
        block1 = self.conv_block1_64(x)
        pool1 = self.pool1(block1)
        block2 = self.conv_block64_128(pool1)
        pool2 = self.pool2(block2)
        block3 = self.conv_block128_256(pool2)
        pool3 = self.pool3(block3)
        block4 = self.conv_block256_512(pool3)
        pool4 = self.pool4(block4)
        block5 = self.conv_block512_1024(pool4)
        up1 = self.up_block1024_512(block5, block4)
        up2 = self.up_block512_256(up1, block3)
        up3 = self.up_block256_128(up2, block2)
        up4 = self.up_block128_64(up3, block1)
        return F.log_softmax(self.last(up4))
Based on your code:
probs_flat = probs.view(-1)
targets_flat = targets.view(-1)
return self.crossEntropy_loss(probs_flat, targets_flat)
you are giving nn.CrossEntropyLoss two 1-D tensors, but according to the documentation it expects:
Input: (N,C) where C = number of classes
Target: (N) where each value is 0 <= targets[i] <= C-1
Output: scalar. If reduce is False, then (N) instead.
I believe this is the cause of the problem you are encountering.
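For a segmentation task like this, a minimal sketch of one possible fix (assuming the network's 2-class logits have shape (N, 2, H, W) and the mask holds integer class indices with shape (N, H, W); the shape values below are hypothetical) is to drop the sigmoid and the flattening and pass the tensors to nn.CrossEntropyLoss directly, since recent PyTorch versions accept spatial inputs of shape (N, C, d1, ..., dk):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

N, C, H, W = 4, 2, 64, 64              # hypothetical batch size and image size
logits = torch.randn(N, C, H, W)       # raw scores from the last conv layer (no sigmoid)
mask = torch.randint(0, C, (N, H, W))  # each pixel holds a class index in [0, C-1]

loss = criterion(logits, mask)         # (N, C, H, W) logits with an (N, H, W) target
print(loss)                            # scalar loss

Flattening also works, as long as the logits keep their class dimension: reshape the input to (N*H*W, C) and the target to (N*H*W,), rather than flattening both to 1-D.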
The problem is that you are passing the wrong arguments to torch.nn.CrossEntropyLoss in your classification problem.
Specifically, in this line
---> 18 loss = criterion(outputs, labels)
the argument labels is not what CrossEntropyLoss expects. labels should be a 1-D array whose length is the batch size, matching outputs in your code. The value of each element should be the target class ID, starting from 0.
Here is an example.
Suppose your batch size is B=2 and each data instance can be assigned one of K=3 classes.
Further, suppose that the last layer of the neural network outputs the following raw logits (the values before softmax) for each of the two instances in the batch. The logits and the true label for each data instance are shown below.
Logits (before softmax)
Class 0 Class 1 Class 2 True class
------- ------- ------- ----------
Instance 0: 0.5 1.5 0.1 1
Instance 1: 2.2 1.3 1.7 2
Then, to call CrossEntropyLoss correctly, you need two variables:
input of shape (B, K) containing the logit values
target of shape (B,) containing the indices of the true classes
Here is how to use CrossEntropyLoss correctly with the values above. I am using torch.__version__ 1.9.0.
import torch
yhat = torch.Tensor([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])
print(yhat)
# tensor([[0.5000, 1.5000, 0.1000],
# [2.2000, 1.3000, 1.7000]])
y = torch.Tensor([1, 2]).to(torch.long)
print(y)
# tensor([1, 2])
loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)
# tensor(0.8393)
My guess is that the error you originally received,
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
probably occurred because you are trying to compute the cross-entropy loss for a single data instance whose target is one-hot encoded. Your data probably looks like this:
Logits (before softmax)
Class 0 Class 1 Class 2 True class 0 True class 1 True class 2
------- ------- ------- ------------ ------------ ------------
Instance 0: 0.5 1.5 0.1 0 1 0
Here is code that represents the data above:
import torch
yhat = torch.Tensor([0.5, 1.5, 0.1])
print(yhat)
# tensor([0.5000, 1.5000, 0.1000])
y = torch.Tensor([0, 1, 0]).to(torch.long)
print(y)
# tensor([0, 1, 0])
loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)
At this point, I get the following error:
---> 10 cel = loss(input=yhat, target=y)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
In my opinion, that error message is hard to understand and not actionable.
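A minimal sketch of one way to repair that last example (assuming the one-hot row really encodes a single true class) is to give the logits a batch dimension and collapse the one-hot vector to a class index with argmax:

import torch

yhat = torch.Tensor([[0.5, 1.5, 0.1]])  # add a batch dimension: shape (1, 3)
y_onehot = torch.Tensor([0, 1, 0])
y = y_onehot.argmax().unsqueeze(0)      # one-hot -> class index: tensor([1])

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)                              # tensor(0.4790) for these values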
See also a similar question, but in TensorFlow:
I ran into the same problem, and since this thread doesn't provide any clear answer, I'll post my solution despite the age of the post.
In the forward() method, you also need to return x.
It needs to look like this:
return F.log_softmax(self.last(up4)), x