Getting very poor accuracy on stanford_dogs dataset

I am trying to train a model on the stanford_dogs dataset to classify the 120 dog breeds, but my code is doing something strange.

I downloaded the image data from http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar

and then ran the following code to split each breed's folder into train and test folders:

import os

dataset_dict = {}
source_path = 'C:/Users/visha/Downloads/stanford_dogs/dataset'
dir_root = os.getcwd()
dataset_folders = [x for x in os.listdir(os.path.join(dir_root, source_path))
                   if os.path.isdir(os.path.join(dir_root, source_path, x))]
for category in dataset_folders:
    dataset_dict[category] = {
        'source_path': os.path.join(dir_root, source_path, category),
        'train_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/train',
                                    folder_type='train',
                                    data_class=category),
        'validation_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/validation',
                                         folder_type='validation',
                                         data_class=category)}

for key in dataset_dict:
    print("Splitting Category {} ...".format(key))
    split_data(source_path=dataset_dict[key]['source_path'],
               train_path=dataset_dict[key]['train_path'],
               validation_path=dataset_dict[key]['validation_path'],
               split_size=0.7)
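The helpers `create_folder` and `split_data` are not shown in the question; a minimal sketch of what they presumably do (my assumption, using only the standard library) would be:

```python
import os
import random
import shutil

def create_folder(new_path, folder_type, data_class):
    """Create (if needed) and return the per-class folder <new_path>/<data_class>."""
    folder = os.path.join(new_path, data_class)
    os.makedirs(folder, exist_ok=True)
    return folder

def split_data(source_path, train_path, validation_path, split_size):
    """Randomly copy a split_size fraction of the non-empty files into
    train_path and the remainder into validation_path."""
    files = [f for f in os.listdir(source_path)
             if os.path.getsize(os.path.join(source_path, f)) > 0]
    random.shuffle(files)
    cut = int(len(files) * split_size)
    for f in files[:cut]:
        shutil.copyfile(os.path.join(source_path, f), os.path.join(train_path, f))
    for f in files[cut:]:
        shutil.copyfile(os.path.join(source_path, f), os.path.join(validation_path, f))
```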

After some image augmentation I feed the images through the network, with a sigmoid activation in the final layer and categorical_crossentropy loss.

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop



model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(120, activation='softmax')
])

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

TRAINING_DIR = 'C:/Users/visha/Downloads/stanford_dogs/train'
train_datagen = ImageDataGenerator(rescale=1./255,rotation_range=40,width_shift_range=0.2,height_shift_range=0.2,
                                    shear_range=0.2,zoom_range=0.2,horizontal_flip=True,fill_mode='nearest')
      

train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
                                                    batch_size=10,
                                                    class_mode='categorical',
                                                    target_size=(150, 150))

VALIDATION_DIR = 'C:/Users/visha/Downloads/stanford_dogs/validation'
validation_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,width_shift_range=0.2, height_shift_range=0.2,
                                        shear_range=0.2,zoom_range=0.2,horizontal_flip=True,fill_mode='nearest')
      

validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              batch_size=10,
                                                              class_mode='categorical',
                                                              target_size=(150, 150))

history = model.fit(train_generator,
                    epochs=10,
                    verbose=1,
                    validation_data=validation_generator)

But the code is not working as expected. After 10 epochs the val_accuracy is something like 4.756.

You should not do any image augmentation on the validation data, only rescaling. Also set shuffle=False in the validation flow_from_directory. Note that the Stanford Dogs dataset is very difficult; to reach a reasonable accuracy you will need a far more complex model. I suggest you look into transfer learning with a MobileNet model. The code below shows how to do that.

from tensorflow.keras.layers import BatchNormalization, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adamax

base_model = tf.keras.applications.mobilenet.MobileNet(include_top=False,
                  input_shape=(150, 150, 3), pooling='max',
                  weights='imagenet', dropout=.4)
x = base_model.output
x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(rate=.3, seed=123)(x)
output = Dense(120, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
model.compile(Adamax(learning_rate=.001), loss='categorical_crossentropy',
              metrics=['accuracy'])
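The code above trains every layer of MobileNet. As a variation (my suggestion, not part of the answer above), you can freeze the backbone so that at first only the new head trains, which usually converges faster on a small dataset. A minimal sketch, using weights=None purely to keep the example self-contained (use weights='imagenet' for real transfer learning):

```python
import tensorflow as tf

# Sketch: freeze the MobileNet backbone and train only a new classification head.
base_model = tf.keras.applications.mobilenet.MobileNet(
    include_top=False, input_shape=(150, 150, 3), pooling='max', weights=None)
base_model.trainable = False  # backbone weights stay fixed during training

x = tf.keras.layers.Dense(120, activation='softmax')(base_model.output)
model = tf.keras.Model(inputs=base_model.input, outputs=x)
model.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

After the head has converged you can set `base_model.trainable = True`, recompile with a lower learning rate, and fine-tune the whole network.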

I forgot to mention that MobileNet was trained on images with pixel values in the range -1 to +1, so in the ImageDataGenerator include the code

preprocessing_function=tf.keras.applications.mobilenet.preprocess_input

This scales the pixels for you, so you do not need the code

rescale=1./255

Alternatively you can apply the scaling yourself. Note that the rescale argument only multiplies, so it cannot add the -1 offset on its own; the equivalent transform is

preprocessing_function=lambda x: x / 127.5 - 1

which rescales the values to between -1 and +1.
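To see why 127.5 is the right constant: the plain-NumPy transform below (equivalent to what MobileNet's preprocess_input does to the pixel range) maps 0 to -1, 127.5 to 0, and 255 to +1.

```python
import numpy as np

def mobilenet_scale(x):
    # Maps pixel values from [0, 255] to [-1, +1], the range MobileNet expects.
    return x / 127.5 - 1.0

print(mobilenet_scale(np.array([0.0, 127.5, 255.0])))  # [-1.  0.  1.]
```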