How does Adagrad work in Keras? What does self.weights mean in a Keras Optimizer?
For example, the Keras implementation of Adagrad is:
class Adagrad(Optimizer):
    """Adagrad optimizer.

    It is recommended to leave the parameters of this optimizer
    at their default values.

    # Arguments
        lr: float >= 0. Learning rate.
        epsilon: float >= 0.
        decay: float >= 0. Learning rate decay over each update.

    # References
        - [Adaptive Subgradient Methods for Online Learning and Stochastic Optimization](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)
    """

    def __init__(self, lr=0.01, epsilon=1e-8, decay=0., **kwargs):
        super(Adagrad, self).__init__(**kwargs)
        self.lr = K.variable(lr)
        self.epsilon = epsilon
        self.decay = K.variable(decay)
        self.initial_decay = decay
        self.iterations = K.variable(0.)

    def get_updates(self, params, constraints, loss):
        grads = self.get_gradients(loss, params)
        shapes = [K.get_variable_shape(p) for p in params]
        accumulators = [K.zeros(shape) for shape in shapes]
        self.weights = accumulators
        self.updates = []

        lr = self.lr
        if self.initial_decay > 0:
            lr *= (1. / (1. + self.decay * self.iterations))
            self.updates.append(K.update_add(self.iterations, 1))

        for p, g, a in zip(params, grads, accumulators):
            new_a = a + K.square(g)  # update accumulator
            self.updates.append(K.update(a, new_a))
            new_p = p - lr * g / (K.sqrt(new_a) + self.epsilon)

            # apply constraints
            if p in constraints:
                c = constraints[p]
                new_p = c(new_p)
            self.updates.append(K.update(p, new_p))
        return self.updates
The function get_updates() seems to perform only one step of the update. But shouldn't the accumulator store historical information? Why is it initialized to zeros at every step? How can it act as an accumulator over the whole course of training?
Also, what does this line do?
self.weights = accumulators
It seems self.weights is never referenced again.
You are right: for all optimizers in Keras, get_updates() implements the tensor logic for one step of updates. It is called only once per model.fit(), from _make_train_function() (here), which creates the training tensor function by passing the returned update rule as updates= (here). That update rule is then executed iteration after iteration to update the model parameters and the other state tensors. Because get_updates() runs only once, the accumulators are created as zeros only once; the K.update ops it returns keep adding K.square(g) into those same backend variables on every subsequent step, so the history does accumulate.
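To see why the zero initialization is not a problem, it helps to separate graph construction (done once) from graph execution (done every batch). Below is a minimal NumPy sketch of the same pattern, not the Keras API; make_adagrad_step is a made-up name for illustration only:

import numpy as np

def make_adagrad_step(param, lr=0.01, epsilon=1e-8):
    # Created once, like K.zeros(shape) inside get_updates().
    accumulator = np.zeros_like(param)

    def step(grad):
        # Executed on every iteration, like the ops collected in self.updates.
        accumulator[...] += grad ** 2                                # K.update(a, new_a)
        param[...] -= lr * grad / (np.sqrt(accumulator) + epsilon)   # K.update(p, new_p)
        return param

    return step

w = np.array([1.0, -2.0])
adagrad_step = make_adagrad_step(w)          # "graph construction": runs once
for g in (np.array([0.5, 0.1]), np.array([0.3, -0.2])):
    adagrad_step(g)                          # "training step": the accumulator state persists

The closure plays the role of the compiled training function: the zeros are created exactly once, and every call to step mutates the same array.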
self.weights is the optimizer's internal state. It is not used for training; it just holds the optimizer's state (a list of pointers to the param/accumulator tensors), so that when model.save is called it is saved via get_weights() (here), and it is loaded back via set_weights() (here) when model.load is called.
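If you want to see that state explicitly, here is a hedged sketch, assuming model is a Keras model compiled with this Adagrad optimizer and trained for at least one batch so the accumulator variables exist:

state = model.optimizer.get_weights()   # the values of the tensors listed in self.weights
# ... later, after building and compiling an identical model ...
model.optimizer.set_weights(state)      # restores the Adagrad accumulators

This is essentially the round trip that model.save / model.load performs for the optimizer state.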