Tensorflow's weights overwrite themselves every epoch

Question: how can I save a separate weights file for each epoch? For example tf_weights_epoch_1.hd5, tf_weights_epoch_2.hd5, ...

I'm using TensorFlow 2.0 with this callback:

checkpoint_path = "./weights/tf_weights_.hd5"

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1,
                                                 save_freq='epoch')

Shouldn't this be easy to implement? Maybe by somehow appending to the checkpoint_path variable?

Something like checkpoint_path = "./weights/tf_weights_{}.hd5".format(cur_epoch_number), plus another callback that increments this counter by 1 at the end of each epoch? But it seems like there should be something built in, since save_freq='epoch' already saves every epoch (it just overwrites the file instead of creating a new one).
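The manual counter idea above can be sketched in pure Python so it runs without TensorFlow; in real Keras code you would subclass tf.keras.callbacks.Callback and call self.model.save_weights(path) inside on_epoch_end (the class name PerEpochCheckpoint and the stubbed-out save are illustrative assumptions, not Keras API):

```python
# Sketch of the counter approach: a callback whose on_epoch_end hook builds
# a fresh filename for every epoch. In Keras this would subclass
# tf.keras.callbacks.Callback and call self.model.save_weights(path);
# here the actual save is stubbed out so the sketch runs standalone.
class PerEpochCheckpoint:
    def __init__(self, path_template):
        self.path_template = path_template

    def on_epoch_end(self, epoch, logs=None):
        # Keras passes a 0-based epoch index, so add 1 for the filename.
        path = self.path_template.format(epoch + 1)
        print("would save weights to", path)
        return path

cb = PerEpochCheckpoint("./weights/tf_weights_{}.hd5")
for epoch in range(3):
    cb.on_epoch_end(epoch)
```

This works, but as the edit below notes, ModelCheckpoint already supports per-epoch filenames natively, so the custom callback is unnecessary.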

Edit: I found a way:

If filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5, then the model checkpoints will be saved with the epoch number and the validation loss in the filename.

Hopefully this works.

From the docs, https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint:

filepath: string, path to save the model file. filepath can contain named formatting options, which will be filled with the value of epoch and keys in logs (passed in on_epoch_end). For example: if filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5, then the model checkpoints will be saved with the epoch number and the validation loss in the filename.
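Concretely, ModelCheckpoint treats filepath as a str.format template and substitutes the epoch number (and any keys from logs, e.g. val_loss) at the end of each epoch, so a template containing {epoch:02d} produces one file per epoch instead of overwriting a single file. A minimal sketch of the template behavior, using the weights path from the question (the formatted filenames below are just str.format output, not an actual training run):

```python
# Filepath template with a named formatting option. ModelCheckpoint fills
# {epoch:02d} at the end of every epoch via str.format, so each epoch
# writes to its own file rather than overwriting tf_weights_.hd5.
checkpoint_path = "./weights/tf_weights_{epoch:02d}.hd5"

# What the callback resolves the template to for epochs 1 and 2:
print(checkpoint_path.format(epoch=1))  # ./weights/tf_weights_01.hd5
print(checkpoint_path.format(epoch=2))  # ./weights/tf_weights_02.hd5
```

Passing this checkpoint_path to the tf.keras.callbacks.ModelCheckpoint call from the question (keeping save_weights_only=True and save_freq='epoch') then yields a distinct weights file per epoch.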