Error Utilizing Pytorch Transforms and Custom Dataset

This question mainly concerns the return value of PyTorch's Dataset.__getitem__, which I have seen implemented as both a tuple and a dict in source code.
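To illustrate the two styles I have seen (a purely illustrative sketch, not taken from either tutorial):

import torch
from torch.utils.data import Dataset

class TupleStyle(Dataset):
    """Returns (image, label) tuples, as torchvision's ImageFolder does."""
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return torch.zeros(3, 8, 8), idx  # dummy image, dummy label

class DictStyle(Dataset):
    """Returns {'image': ..., 'label': ...} dicts, as the data loading tutorial does."""
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {'image': torch.zeros(3, 8, 8), 'label': idx}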

I have been following this tutorial for creating a dataset class within my code, which in turn follows this tutorial on transfer learning. It has the following dataset definition:

class FaceLandmarksDataset(Dataset):
"""Face Landmarks dataset."""

    def __init__(self, csv_file, root_dir, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.landmarks_frame = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.landmarks_frame)

    def __getitem__(self, idx):
        img_name = os.path.join(self.root_dir,
                                self.landmarks_frame.iloc[idx, 0])
        image = io.imread(img_name)
        landmarks = self.landmarks_frame.iloc[idx, 1:].as_matrix()
        landmarks = landmarks.astype('float').reshape(-1, 2)
        sample = {'image': image, 'landmarks': landmarks}

        if self.transform:
            sample = self.transform(sample)

        return sample

As you can see, the __getitem__ return value is a dictionary with two entries. In the transfer learning tutorial, the following calls are made to transform the dataset:

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = 'hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                             shuffle=True, num_workers=4)
              for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes

use_gpu = torch.cuda.is_available()

inputs, classes = next(iter(dataloaders['train']))

The last line of code attempts to run the transforms on a sample from the custom dataset, and it makes my code error out with:

'dict' object has no attribute 'size'

But if the tutorial's dataset is implemented correctly, shouldn't it run through the transforms without a problem? My own hybrid implementation is below:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
from torch.utils.data import *
from skimage import io, transform
plt.ion()


class NumsDataset(Dataset):
    """Face Landmarks dataset."""

    def __init__(self, root_dir, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.docs = []
        for file in os.listdir(root_dir):
            #print(file)
            if file.endswith(".txt"):
                path = os.path.join(root_dir, file)
                with open(path, 'r') as f:
                    self.docs.append((file, list(f.read())))  # tuple of (file name, image values)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self): #returns number of images
        i = 0
        for j in self.docs:
            i += len(j[1])
        return i

    def len2(self): #returns number of batches
        return len(self.docs)

    def __getitem__(self, idx):
        idx1 = idx // self.len2()
        idx2 = idx % self.len2()
        imglabel = self.docs[idx1][0] #label with filename for batch error calculation later
        imgdir = os.path.join(self.root_dir, self.docs[idx1][0].strip(".txt"))
        img = None
        l = idx2

        for file in os.listdir(imgdir):
            file = os.path.join(imgdir, file)
            if(l == 0):
                img = io.imread(file)
            l -= 1
        sample = (img , imglabel)
        sample ={'image': img, 'label': imglabel}
        if self.transform:
            sample = self.transform(sample)

        return sample




data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}
data_dir = "images"
image_datasets = {x: NumsDataset(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=5) 
              for x in ['train', 'val']}

dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = ["one", "two", "four"]

use_gpu = torch.cuda.is_available()
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))

Directory structure:

images
     /train
        /file1
            *.jpg
        /file2...
            *.jpg
        file1.txt
        file2.txt...
     /val
        /file1
            *.jpg
        /file2...
            *.jpg
        file1.txt
        file2.txt...

Is the format of the sample I return incorrect?

The specific way the data loading tutorial works with a custom dataset is by using custom transforms. A transform has to be designed to fit the dataset. So either the dataset must output samples that are compatible with the library's transform functions, or transforms must be defined for the particular sample format. Choosing the latter, among other things, led to fully functional code.
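For reference, the custom transforms in the data loading tutorial are callables that take the whole sample dict and return a new dict, so the dict format never reaches code that expects a bare image. A simplified sketch of that idea (the real tutorial versions also handle non-square output sizes and random crops):

import torch
from skimage import transform

class Rescale:
    """Rescale the image in a sample dict to a square output size (sketch)."""
    def __init__(self, output_size):
        self.output_size = output_size

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']
        h, w = image.shape[:2]
        new_h = new_w = self.output_size
        image = transform.resize(image, (new_h, new_w))
        landmarks = landmarks * [new_w / w, new_h / h]  # scale coordinates with the image
        return {'image': image, 'landmarks': landmarks}

class ToTensor:
    """Convert the ndarrays in a sample dict to torch tensors (sketch)."""
    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']
        image = image.transpose((2, 0, 1))  # numpy HWC -> torch CHW
        return {'image': torch.from_numpy(image),
                'landmarks': torch.from_numpy(landmarks)}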

The following problem comes up when you pass a dict rather than an image to the transforms. The custom transforms mentioned in the example can handle it, but the default transforms cannot; only an image can be passed to them. That solves half of the problem:

'dict' object has no attribute 'size'
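To make the failure concrete, here is a minimal illustration (not code from either tutorial). RandomResizedCrop is the first transform in the pipeline and immediately asks its input for its size, which a dict does not have; older torchvision versions raise the AttributeError above, newer ones raise a TypeError, but the cause is the same:

from PIL import Image
from torchvision import transforms

pipeline = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])

img = Image.new('RGB', (256, 256))   # stand-in image
tensor = pipeline(img)               # works: a 3 x 224 x 224 tensor
sample = {'image': img, 'label': 'one'}
pipeline(sample)                     # fails: the dict has no .size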

The remaining problem lies in the example's image-handling code, so I had to dig into transforms.py inside torchvision; unlike the skimage used in the example, it works with PIL images, so I replaced the code with PIL and it works fine.

site-packages/torchvision/transforms/transforms.py

Original code:

def __getitem__(self, idx):
    if torch.is_tensor(idx):
        idx = idx.tolist()
    img_name = os.path.join(self.root_dir, self.anb_frame.iloc[idx, 0])
    image = io.imread(img_name)
    labels = self.anb_frame.iloc[idx, 1:]
    labels = np.array([labels])
    sample = {'image': image, 'labels': labels}
    if self.transform:
        image = self.transform(image)
    return sample

Modified:

def __getitem__(self, idx):
    if torch.is_tensor(idx):
        idx = idx.tolist()
    img_name = os.path.join(self.root_dir, self.anb_frame.iloc[idx, 0])
    image = Image.open(img_name)  # PIL image (from PIL import Image), which torchvision's transforms expect
    if self.transform:
        image = self.transform(image)
    labels = self.anb_frame.iloc[idx, 1:]
    labels = np.array([labels])
    sample = {'image': image, 'labels': labels}
    return sample
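Once __getitem__ returns tensors inside a dict, the DataLoader's default collation batches each key separately, so the training loop indexes into the batch instead of unpacking a tuple. A self-contained sketch (the toy dataset below is hypothetical, standing in for the modified dataset above):

import torch
from torch.utils.data import Dataset, DataLoader

class DictSamples(Dataset):
    """Toy stand-in returning dict samples of (image tensor, numeric label)."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {'image': torch.zeros(3, 224, 224),
                'labels': torch.tensor([float(idx)])}

loader = DataLoader(DictSamples(), batch_size=4, shuffle=True)
for batch in loader:
    images = batch['image']   # shape [4, 3, 224, 224]; each key is batched separately
    labels = batch['labels']  # shape [4, 1]
    # ... forward pass, loss computation, etc.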