Understanding code from official tensorflow page
I'm confused by the code on this page.

Question 1)

The code block below shows output from that page. Before this step, I don't see any code that trains on the data with the `model.fit` function. So what does the code below do? Are they showing predictions made with random weights?
```python
model.predict(train_features[:10])
```
```
array([[0.6296253 ],
       [0.82509124],
       [0.75135857],
       [0.73724824],
       [0.82174015],
       [0.33519754],
       [0.6719973 ],
       [0.30910844],
       [0.6378555 ],
       [0.8381703 ]], dtype=float32)
```
```python
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
```
```
array([[0.00124893],
       [0.00185736],
       [0.00164955],
       [0.00123761],
       [0.00137692],
       [0.00182851],
       [0.00170887],
       [0.00239349],
       [0.0024704 ],
       [0.00517672]], dtype=float32)
```
```python
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
```
Loss: 0.0157
```
Question 2)

Continuing with the code below: what is `initial_weights`? Are those random values?
```python
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
```
Question 3)

Then they say

> Before moving on, confirm quickly that the careful bias initialization actually helped. Train the model for 20 epochs, with and without this careful initialization, and compare the losses:

but I'm not sure how they assign the initial bias. I understand that a bias of 0 is assigned for `zero_bias_history`. But how is the bias assigned for `careful_bias_history`? Shouldn't it have a bias equal to `initial_bias`? How does `careful_bias_history` get its bias value? I think `careful_bias_history` should be created from the model built with `model = make_model(output_bias=initial_bias)`.
### Confirm that the bias fix helps

Before moving on, confirm quickly that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
```python
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=20,
    validation_data=(val_features, val_labels),
    verbose=0)

print(type(model))
#model.load_weights()
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=20,
    validation_data=(val_features, val_labels),
    verbose=0)
```
Answer 1: Yes, those predictions come from the model after it has been compiled but before it has been trained.
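As a rough pure-NumPy sketch of why an untrained sigmoid model still "predicts" (hypothetical weights and shapes, not the tutorial's actual model): with small random weights and a zero bias, the outputs scatter around 0.5, much like the first `array` above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an untrained Dense(1, activation='sigmoid') layer:
# small random weights, zero bias (the Keras default for the bias).
fan_in = 16
W = rng.normal(0.0, 0.3, size=(fan_in, 1))
b = np.zeros(1)

x = rng.normal(size=(10, fan_in))            # 10 fake input rows
preds = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # sigmoid of the logits

# With a zero bias, untrained outputs hover around 0.5
# rather than the dataset's base rate.
print(preds.ravel())
```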
Answer 2: Yes, they are random weights; in a `Dense` layer, for example, the kernel is initialized with `glorot_uniform` (see `tf.keras.layers.Dense`).
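For reference, Glorot (Xavier) uniform draws samples from `U(-limit, limit)` with `limit = sqrt(6 / (fan_in + fan_out))`. A quick NumPy check of that bound (the fan sizes here are an assumption, chosen to match a small final `Dense(1)` layer):

```python
import numpy as np

rng = np.random.default_rng(42)

# Glorot/Xavier uniform: samples lie in [-limit, limit],
# where limit = sqrt(6 / (fan_in + fan_out)).
fan_in, fan_out = 16, 1
limit = np.sqrt(6.0 / (fan_in + fan_out))
W = rng.uniform(-limit, limit, size=(fan_in, fan_out))

print(float(limit))               # ~0.594 for these fan sizes
print(bool(np.abs(W).max() <= limit))
```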
Answer 3: The model saved above has a bias initialized with `np.log([pos/neg])`, as mentioned here. So for `zero_bias_history` they reset the bias to zero with `model.layers[-1].bias.assign([0.0])`, while for `careful_bias_history` they simply load the saved weights, in which the bias is already initialized.
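To see why that careful bias helps, here is the arithmetic behind `initial_bias = np.log([pos/neg])`, using the class counts from the tutorial's credit-card-fraud data (492 positives vs. 284315 negatives; substitute your own counts). The sigmoid of that bias equals the base rate, so the expected initial binary cross-entropy drops from `ln 2 ≈ 0.693` (zero bias, every prediction 0.5) to roughly the label entropy, in line with the `Loss: 0.0157` printed above.

```python
import numpy as np

# Class counts from the tutorial's dataset (adjust for your data).
pos, neg = 492, 284315
total = pos + neg

initial_bias = np.log([pos / neg])         # what make_model(output_bias=...) receives
p0 = 1.0 / (1.0 + np.exp(-initial_bias))   # sigmoid(bias) equals the base rate
print(float(p0[0]), pos / total)           # both ~0.0017

base_rate = pos / total
# Zero bias: every prediction is 0.5 -> loss = ln 2.
loss_zero = -np.log(0.5)
# Careful bias: every prediction is the base rate -> loss = label entropy.
loss_careful = -(base_rate * np.log(base_rate)
                 + (1 - base_rate) * np.log(1 - base_rate))
print(float(loss_zero), float(loss_careful))
```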