Single updates using tf.GradientTape with multiple outputs

I have defined the following model, which has two distinct outputs:

import tensorflow as tf
from tensorflow import keras

input_layer = keras.layers.Input(shape = (1, 20), name = "input_features")

# Shared layers
hidden_1 = keras.layers.Dense(32,
                              activation = "relu",
                              name = "LSTM_shared_l"
                              )(input_layer)

# Additional layers
hidden_2 = keras.layers.Dense(32, 
                              activation = "selu",
                              name = "Forecasting_extra_layer_1"
                              )(input_layer)

hidden_3 = keras.layers.Dense(32, 
                              activation = "selu",
                              name = "Forecasting_extra_layer_2"
                              )(hidden_2)


# Output layers
f_output = keras.layers.Dense(1, 
                              name = "F_output")(hidden_1)

rl_output = keras.layers.Dense(32, 
                               name = "RL_output")(hidden_3)

model = keras.Model(inputs = [input_layer], outputs = [f_output, rl_output])


model.summary()
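
For orientation, calling this model returns a list of two tensors, one per output head. A quick shape check (the batch size of 8 is arbitrary, chosen only for illustration):

# Quick shape check (illustrative only; batch size 8 is arbitrary)
dummy_batch = tf.random.normal((8, 1, 20))
f_pred, rl_pred = model(dummy_batch)
print(f_pred.shape)   # (8, 1, 1)  -> F_output
print(rl_pred.shape)  # (8, 1, 32) -> RL_output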

I want to train it with GradientTape, performing a single update per iteration; with only one output, I would use the following code:

with tf.GradientTape() as tape:
    predictions = model(inputs)
    pred_values = tf.reduce_sum(predictions, axis=1, keepdims=True)
    loss = tf.reduce_mean(loss_fn(target_pred, pred_values))

grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
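
For completeness, the undefined names in that snippet (loss_fn, optimizer, inputs, target_pred) could be stood up as follows; the specific loss, optimizer, and shapes are placeholder choices, not part of the question:

# Hypothetical stand-ins for the undefined names above
loss_fn = keras.losses.MeanSquaredError()              # any per-element loss works
optimizer = keras.optimizers.Adam(learning_rate=1e-3)  # any optimizer works
inputs = tf.random.normal((8, 1, 20))                  # dummy input batch
target_pred = tf.random.normal((8, 1, 1))              # dummy target batch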

How can I extend this to the multi-output scenario?

There are several strategies; the simplest is to compute a loss for each output inside the same GradientTape context and add the results together:

with tf.GradientTape() as tape:
    predictions_1, predictions_2 = model(inputs)

    predictions_1 = ...
    predictions_2 = ...  # any desired post-processing

    loss = (tf.reduce_mean(loss_fn(target_1, predictions_1))
            + tf.reduce_mean(loss_fn(target_2, predictions_2)))

Note that the forward pass and the loss computation must happen inside the tape context so the gradients are recorded. Then you can safely descend the gradient:

grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
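
If the two objectives sit on different scales, a common refinement (not part of the answer above) is to weight the per-output losses before summing. A minimal sketch, assuming the weights w1 and w2 are tuned by hand:

# Hypothetical per-output loss weights to balance the two objectives
w1, w2 = 1.0, 0.5

with tf.GradientTape() as tape:
    predictions_1, predictions_2 = model(inputs)
    loss_1 = tf.reduce_mean(loss_fn(target_1, predictions_1))
    loss_2 = tf.reduce_mean(loss_fn(target_2, predictions_2))
    loss = w1 * loss_1 + w2 * loss_2  # weighted sum of the two losses

grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))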