Batch Normalization in a Custom Estimator in TensorFlow
I am referring to this note in the documentation for tf.layers.batch_normalization:
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op. For example:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)
How do I implement this in a custom estimator? For example, see this example on TensorFlow's website: The complete abalone model_fn
I think you can just pass it in: the train_op you are referring to is the train_op argument of EstimatorSpec.
There is an example at the bottom of this issue: https://github.com/tensorflow/tensorflow/issues/16455
if mode == tf.estimator.ModeKeys.TRAIN:
    lr = 0.001
    optimizer = tf.train.RMSPropOptimizer(learning_rate=lr, decay=0.9)
    # The batch-norm moving-average updates live in the UPDATE_OPS collection;
    # making them a control dependency forces them to run on every train step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode,
                                      loss=loss,
                                      train_op=train_op)
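To see where the batch-norm layer itself fits, here is a minimal sketch of a complete model_fn, assuming a hypothetical numeric feature 'x' and arbitrary layer sizes (none of these come from the abalone example). The two essential pieces are passing training=is_training to tf.layers.batch_normalization, so the layer uses batch statistics during training and the moving averages at eval/predict time, and wrapping minimize in the control_dependencies block so those moving averages actually get updated.

import tensorflow as tf

def model_fn(features, labels, mode):
    is_training = (mode == tf.estimator.ModeKeys.TRAIN)

    # Hypothetical network: feature key 'x' and layer sizes are placeholders.
    net = tf.feature_column.input_layer(
        features, [tf.feature_column.numeric_column('x', shape=[10])])
    net = tf.layers.dense(net, 64)
    # training=is_training switches between batch statistics (training)
    # and the accumulated moving averages (eval/predict).
    net = tf.layers.batch_normalization(net, training=is_training)
    net = tf.nn.relu(net)
    logits = tf.layers.dense(net, 1)

    predictions = tf.squeeze(logits, axis=-1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    loss = tf.losses.mean_squared_error(labels, predictions)

    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001, decay=0.9)
        # Run the moving_mean/moving_variance updates before each train step.
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(update_ops):
            train_op = optimizer.minimize(
                loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss,
                                          train_op=train_op)

    return tf.estimator.EstimatorSpec(mode=mode, loss=loss)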