model.save_weights and model.load_weights not working as expected

I'm new to machine learning and working through the fast.ai course. We are studying VGG16, but I'm having trouble saving my model and would like to know what I'm doing wrong. When I start my model from scratch and train it to tell cats from dogs, I get:

from __future__ import division, print_function
from vgg16 import Vgg16
import os, json
from glob import glob
import numpy as np
from matplotlib import pyplot as plt
import utils; reload(utils)
from utils import plots


np.set_printoptions(precision=4, linewidth=100)
batch_size=64

path = "dogscats/sample"
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'/train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'/valid', batch_size=batch_size*2)
vgg.finetune(batches)
no_of_epochs = 4
latest_weights_filename = None
for epoch in range(no_of_epochs):
    print ("Running epoch: %d" % epoch)
    vgg.fit(batches, val_batches, nb_epoch=1)
    latest_weights_filename = ('ft%d.h5' % epoch)
    vgg.model.save_weights(path+latest_weights_filename)
print ("Completed %s fit operations" % no_of_epochs)

Found 160 images belonging to 2 classes.
Found 40 images belonging to 2 classes.
Running epoch: 0
Epoch 1/1
160/160 [==============================] - 4s - loss: 1.8980 - acc: 0.6125 - val_loss: 0.5442 - val_acc: 0.8500
Running epoch: 1
Epoch 1/1
160/160 [==============================] - 4s - loss: 0.7194 - acc: 0.8563 - val_loss: 0.2167 - val_acc: 0.9500
Running epoch: 2
Epoch 1/1
160/160 [==============================] - 4s - loss: 0.1809 - acc: 0.9313 - val_loss: 0.1604 - val_acc: 0.9750
Running epoch: 3
Epoch 1/1
160/160 [==============================] - 4s - loss: 0.2733 - acc: 0.9375 - val_loss: 0.1684 - val_acc: 0.9750
Completed 4 fit operations

But now when I load one of those weight files, the model starts from scratch! For example, I'd expect the model below to have a val_acc of 0.9750. Am I misunderstanding something, or doing something wrong? Why is the val_acc of the loaded model so low?

vgg = Vgg16()
vgg.model.load_weights(path+'ft3.h5')
batches = vgg.get_batches(path+'/train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'/valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)

Found 160 images belonging to 2 classes.
Found 40 images belonging to 2 classes.
Epoch 1/1
160/160 [==============================] - 6s - loss: 1.3110 - acc: 0.6562 - val_loss: 0.5961 - val_acc: 0.8250

The problem is the finetune function. If you dig into its definition:

def finetune(self, batches):
    model = self.model
    model.pop()
    for layer in model.layers: layer.trainable=False
    model.add(Dense(batches.nb_class, activation='softmax'))
    self.compile()

...you can see that it calls pop: the model's last layer is removed, which throws away the trained weights you just loaded for that layer. A fresh final layer is then added with random initial weights, and training for it starts over. That is why the accuracy drops.
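The effect can be illustrated with a minimal numpy sketch (a toy stand-in for the Keras model, not the real API): saving and restoring the weights round-trips faithfully, but re-running a finetune-style step afterwards replaces the restored last layer with fresh random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Sequential model: a list of per-layer weight arrays.
trained_layers = [rng.normal(size=(3, 4)), rng.normal(size=(4, 2))]

# save_weights: snapshot every layer's weights.
saved = [w.copy() for w in trained_layers]

# load_weights restores them faithfully...
restored = [w.copy() for w in saved]
assert all(np.array_equal(a, b) for a, b in zip(restored, trained_layers))

# ...but a finetune-style step then pops the last layer and adds a
# freshly initialised head, discarding the restored weights there.
restored.pop()
restored.append(rng.normal(size=(4, 2)))

head_unchanged = np.array_equal(restored[-1], trained_layers[-1])
print(head_unchanged)  # prints False: the trained head is gone
```

In practice, then, the order of the calls matters: rebuild the model and call finetune first, and only call load_weights afterwards, so the restored final layer is not thrown away.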