Average metrics given by on_epoch_end seem wrong with tf.keras
I am playing around with tf.keras and writing some basic custom callbacks, more precisely like the one given here.
The loss metric given by the on_epoch_end callback method is supposed to be the average loss over all minibatches, but what I get is the last loss recorded, i.e. the loss of the last minibatch.
If you look at the Tensorflow site, in the section on the usage of the logs dict, and redo the computation by hand, you will see that in the example the loss received by on_epoch_end is the average loss over all the batches of the epoch.
I tried without my custom callback and it changed nothing. Although the core code of the BaseLogger callback says you should get the average loss at the end of the epoch, that is not what I get: I still see the loss of the last minibatch.
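For reference, the averaging that BaseLogger is supposed to perform can be sketched roughly like this (a minimal standalone sketch, not the actual tf.keras source; the class and method names are illustrative):

```python
# Sketch of BaseLogger-style metric averaging (illustrative, not the real
# tf.keras implementation): accumulate a size-weighted total per metric on
# each batch, then divide by the number of samples seen at epoch end.
class AveragingLogger:
    def on_epoch_begin(self):
        self.seen = 0          # samples seen so far this epoch
        self.totals = {}       # metric name -> size-weighted running total

    def on_batch_end(self, logs):
        batch_size = logs.get('size', 1)
        self.seen += batch_size
        for k, v in logs.items():
            if k == 'size':
                continue
            self.totals[k] = self.totals.get(k, 0.0) + v * batch_size

    def on_epoch_end(self):
        # the epoch-level value is the sample-weighted mean over all batches
        return {k: total / self.seen for k, total in self.totals.items()}

logger = AveragingLogger()
logger.on_epoch_begin()
logger.on_batch_end({'loss': 1.0, 'size': 3})
logger.on_batch_end({'loss': 3.0, 'size': 3})
print(logger.on_epoch_end())   # {'loss': 2.0}
```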
Here is the code I wrote:
# import libs
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dense
import random

print(tf.__version__)
print(keras.__version__)

RANDOM_SEED = 42
random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)

# dummy dataset
t_x = tf.random.uniform([30, 4], dtype=tf.float32)
t_y = tf.range(30)
ds_x = tf.data.Dataset.from_tensor_slices(t_x)
ds_y = tf.data.Dataset.from_tensor_slices(t_y)
ds = tf.data.Dataset.zip((ds_x, ds_y))
ds = ds.batch(3)

# Custom callback
class LossCallback(tf.keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs):
        print(f'Batch {batch}, loss is {logs["loss"]:.2f}.\n')
    def on_epoch_end(self, epoch, logs):
        print(f'Avg loss on {epoch} is {logs["loss"]:.2f} \n')

cb = LossCallback()

# create dummy model
from tensorflow.keras import Model
input = Input(shape=(4,))
x = Dense(32)(input)
model = Model(input, x)
model.compile(loss='mean_absolute_error',
              optimizer=tf.keras.optimizers.SGD())

history = model.fit(ds,
                    epochs=1,
                    verbose=0,
                    callbacks=[cb])
Here are the results I get.
2.2.0
2.3.0-tf
Batch 0, loss is 1.03.
Batch 1, loss is 2.48.
Batch 2, loss is 3.95.
Batch 3, loss is 5.44.
Batch 4, loss is 6.93.
Batch 5, loss is 8.43.
Batch 6, loss is 9.93.
Batch 7, loss is 11.43.
Batch 8, loss is 12.93.
Batch 9, loss is 14.43.
Avg loss on 0 is 14.43
Getting rid of my custom callback and rerunning
history = model.fit(ds,
                    epochs=1)
changes nothing, since I still get the same losses.
I am running this on Google Colab.
Do you know why I don't get the average loss at the end of the epoch? Where am I going wrong?
I think you are getting the average loss at the end after all. What seems wrong is the per-batch loss: instead of reporting the loss of the current batch, the on_train_batch_end callback prints the running average. This appears to be a TF issue: https://github.com/tensorflow/tensorflow/issues/39448
I ran a little test, and Tensorflow 2.0.0 seems to show the expected behavior. Take a look at this colab, which runs your code with TF 2.0.0.
With TF 2.0.0, the output of the code is:
Batch 0, loss is 1.03.
1/Unknown - 0s 438ms/step - loss: 1.0256
Batch 1, loss is 3.93.
2/Unknown - 0s 222ms/step - loss: 2.4791
Batch 2, loss is 6.90.
3/Unknown - 0s 149ms/step - loss: 3.9526
Batch 3, loss is 9.90.
4/Unknown - 0s 113ms/step - loss: 5.4391
Batch 4, loss is 12.90.
5/Unknown - 0s 91ms/step - loss: 6.9314
Batch 5, loss is 15.92.
6/Unknown - 0s 77ms/step - loss: 8.4295
Batch 6, loss is 18.91.
7/Unknown - 0s 66ms/step - loss: 9.9273
Batch 7, loss is 21.94.
8/Unknown - 0s 58ms/step - loss: 11.4284
Batch 8, loss is 24.93.
9/Unknown - 0s 52ms/step - loss: 12.9280
Batch 9, loss is 27.90.
10/Unknown - 0s 47ms/step - loss: 14.4251
Avg loss on 0 is 14.43
10/10 [==============================] - 0s 49ms/step - loss: 14.4251
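Since each value logged in on_train_batch_end under TF 2.2 is a running average, the true per-batch losses can still be recovered from consecutive averages: if avg[b] is the mean over batches 0..b, then batch_loss[b] = (b + 1) * avg[b] - b * avg[b - 1]. A quick check against the running averages printed above (assuming equal-sized batches, as in this dataset of 10 batches of 3):

```python
# Running averages as printed by on_train_batch_end in the question (TF 2.2.0).
running_avgs = [1.0256, 2.4791, 3.9526, 5.4391, 6.9314,
                8.4295, 9.9273, 11.4284, 12.9280, 14.4251]

# Invert the running mean: batch_loss[b] = (b + 1) * avg[b] - b * avg[b - 1].
per_batch = [(b + 1) * avg - b * prev
             for b, (prev, avg) in enumerate(zip([0.0] + running_avgs,
                                                 running_avgs))]

print([round(x, 2) for x in per_batch])
# matches the per-batch losses TF 2.0.0 prints above: 1.03, 3.93, 6.90, ...
```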