Unable to load model checkpoint to continue training: Unsuccessful TensorSliceReader constructor: Failed to find any matching files
I am trying to load a saved model to continue training, but I keep getting the following error:
NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./drive/My Drive/DLSRL/Model/
[[Node: save/RestoreV2_81 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_81/tensor_names, save/RestoreV2_81/shape_and_slices)]]
[[Node: save/RestoreV2_3/_189 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_396_save/RestoreV2_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
Caused by op 'save/RestoreV2_81', defined at:
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
My folder structure: the Model folder contains checkpoint, 03-27-09-15_epoch_29.ckpt.data-00000-of-00001, 03-27-09-15_epoch_29.ckpt.index, and 03-27-09-15_epoch_29.ckpt.meta.
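These three files together form a single checkpoint whose restore prefix is 03-27-09-15_epoch_29.ckpt; no file with exactly that name exists on disk. As a sanity check (a minimal sketch, assuming TensorFlow 1.x and that my Drive folder really is mounted at this path), I can list the directory and parse the checkpoint bookkeeping file:

import os
import tensorflow as tf

ckpt_dir = './drive/My Drive/DLSRL/Model/'
print(os.listdir(ckpt_dir))                      # the three .ckpt.* files plus 'checkpoint'
state = tf.train.get_checkpoint_state(ckpt_dir)  # parses the 'checkpoint' file, or returns None
print(state.model_checkpoint_path if state else 'no checkpoint state found')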
Here is the code:
saver = tf.train.import_meta_graph('./drive/My Drive/DLSRL/Model/03-27-09-15_epoch_39.ckpt.meta')
g = tf.get_default_graph()
with g.as_default():
    model = Model(config, embeddings, label_dict.size(), g)
    sess = tf.Session(graph=g, config=tf.ConfigProto(allow_soft_placement=True,
                                                     log_device_placement=False))
    saver.restore(sess, './drive/My Drive/DLSRL/Model/')
    # sess.run(tf.global_variables_initializer())
    ckpt_saver = tf.train.Saver(max_to_keep=config.max_epochs)
    for epoch in range(39, config.max_epochs):
        # save checkpoint from which to load model
        path = runs_dir / "{}_epoch_{}.ckpt".format(time_of_init, epoch)
        ckpt_saver.save(sess, str(path))
        print('Saved checkpoint.')
        evaluate(dev_data, model, sess, epoch, global_step)
        x1, x2, y = shuffle_stack_pad(train_data, config.train_batch_size)
        epoch_start = time.time()
        for x1_b, x2_b, y_b in get_batches(x1, x2, y, config.train_batch_size):
            feed_dict = make_feed_dict(x1_b, x2_b, y_b, model, config.keep_prob)
            if epoch_step % LOSS_INTERVAL == 0:
                # tensorboard
                run_options = tf.RunOptions(trace_level=tf.RunOptions.NO_TRACE)
                scalar_summaries = sess.run(model.scalar_summaries,
                                            feed_dict=feed_dict,
                                            options=run_options)
                model.train_writer.add_summary(scalar_summaries, global_step)
                # print info
                print("step {:>6} epoch {:>3}: loss={:1.3f}, epoch sec={:3.0f}, total hrs={:.1f}".format(
                    epoch_step,
                    epoch,
                    epoch_loss_sum / max(epoch_step, 1),
                    (time.time() - epoch_start),
                    (time.time() - global_start) / 3600))
            loss, _ = sess.run([model.nonzero_mean_loss, model.update], feed_dict=feed_dict)
            epoch_loss_sum += loss
            epoch_step += 1
            global_step += 1
        epoch_step = 0
        epoch_loss_sum = 0.0
Can you suggest a fix?
You haven't specified which checkpoint to restore. Change it to:
saver.restore(sess, tf.train.latest_checkpoint('./drive/My Drive/DLSRL/Model/'))
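tf.train.latest_checkpoint reads the checkpoint bookkeeping file in that directory and returns the newest save prefix (here presumably .../03-27-09-15_epoch_29.ckpt), which is the form Saver.restore expects; passing the bare directory is why the TensorSliceReader finds no matching files. Note also that your import_meta_graph path names epoch_39 while the folder only shows epoch_29 files. A minimal restore sketch, assuming TensorFlow 1.x and that the epoch_29 checkpoint is the one you want to resume from:

import tensorflow as tf

ckpt_dir = './drive/My Drive/DLSRL/Model/'
latest = tf.train.latest_checkpoint(ckpt_dir)         # e.g. '.../03-27-09-15_epoch_29.ckpt'
saver = tf.train.import_meta_graph(latest + '.meta')  # rebuild the graph from the meta file that actually exists
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver.restore(sess, latest)                           # load the variable values saved under that prefix
# ...continue training with sess from here

Keep sess.run(tf.global_variables_initializer()) commented out after the restore; running it would overwrite the loaded weights with fresh initial values.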