TF Keras - ValueError: No gradients provided for any variable

I'm working through a simple TF tutorial, but when I try to start training the model I get this error:

ValueError: No gradients provided for any variable: (['dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0'],). Provided `grads_and_vars` is ((None, <tf.Variable 'dense/kernel:0' shape=(10, 10) dtype=float32>), (None, <tf.Variable 'dense/bias:0' shape=(10,) dtype=float32>), (None, <tf.Variable 'dense_1/kernel:0' shape=(10, 1) dtype=float32>), (None, <tf.Variable 'dense_1/bias:0' shape=(1,) dtype=float32>)).

Any ideas on what might be going wrong? I've tried changing the model and the arguments to the compile method, but none of it seems to work. I've done some research on this error, but couldn't find anything resembling this particular exercise.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Normalization, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import MeanAbsoluteError
import pandas as pd
import numpy as np
import seaborn as sns


def get_dataset() -> pd.DataFrame:
    """ Get dataset from the web. """
    url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
    cols = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'Model Year', 'Origin']
    return pd.read_csv(
        url,
        names=cols,
        na_values='?',
        comment='\t',
        sep=' ',
        skipinitialspace=True
    )


def plot_data_distribution(df: pd.DataFrame):
    return sns.pairplot(df[['MPG', 'Cylinders', 'Displacement', 'Weight']], diag_kind='kde')


def get_normalization_layer(data: np.array) -> Normalization:
    """ Build a normalization layer and adapt it to the features in the dataset. """
    layer = Normalization(axis=-1, input_shape=[10])
    layer.adapt(data)
    return layer


def build_neural_network(normalization_layer: Normalization) -> Sequential:
    """ Build a simple neural network. """
    model = Sequential([
        normalization_layer,
        Dense(10, activation='relu'),
        Dropout(0.2),
        Dense(1, activation='relu')
    ])
    model.compile(
        optimizer=Adam(learning_rate=0.1),
        loss=MeanAbsoluteError(),
        metrics=['accuracy']
    )
    return model


def main():
    """ Run script. """
    # Clean raw data:
    df = get_dataset()
    avg_hp_by_cylinder = df.groupby(['Cylinders']).Horsepower.mean()
    avg_hp_by_cylinder.name = 'avg_hp_by_cylinder'
    df = df.join(avg_hp_by_cylinder, on='Cylinders')
    df.loc[df.Horsepower.isna(), 'Horsepower'] = df.loc[df.Horsepower.isna(), 'avg_hp_by_cylinder']
    df.Origin = df.Origin.map({1: "USA", 2: "Europe", 3: "Japan"})
    df = pd.get_dummies(df, columns=['Origin'], prefix='', prefix_sep='')

    # Split data into Train/Test sets:
    train_df = df.sample(frac=0.8, random_state=69)
    test_df = df.drop(train_df.index)

    # Separate labels from features:
    train_labels = train_df.pop('MPG')
    test_labels = test_df.pop('MPG')

    # Convert dataframes into arrays:
    train_labels = train_labels.values
    test_labels = test_labels.values
    train_df = train_df.values
    test_df = test_df.values

    # Build model and start training:
    EPOCHS = 10
    normalization_layer = get_normalization_layer(train_df)
    model = build_neural_network(normalization_layer)
    training_history = model.fit(x=train_df, y=train_labels, epochs=EPOCHS)

    return {}


if __name__ == "__main__":
    pd.set_option('expand_frame_repr', False)
    main()

The error is caused by using a class from the metrics module as the loss function. You should do this instead:

from tensorflow.keras import losses

model.compile(
    optimizer=...,
    loss=losses.MeanAbsoluteError(),
    metrics=...
)
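
If you prefer, the built-in string identifier 'mae' maps to the same loss, so an equivalent sketch (keeping the Adam optimizer from your original compile call) would be:

model.compile(
    optimizer=Adam(learning_rate=0.1),
    loss='mae'
)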

Also, this appears to be a regression problem, and if so, accuracy is not a suitable metric for regression. In addition, the last layer's activation is set to relu, where it should probably be linear. You might consider the following as a better approach:

model = Sequential([
    ...
    Dropout(0.2),
    Dense(1)
])

from tensorflow import keras

model.compile(
    optimizer='adam',
    loss='mse',
    metrics=[keras.metrics.MeanAbsoluteError()]
)
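
Putting both points together, here is a minimal sketch of how your build_neural_network could look with these changes (a proper loss instead of a metric object, a linear output layer, and MAE reported as a metric rather than accuracy); everything else in your script is assumed unchanged:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Normalization, Dense, Dropout
from tensorflow.keras import metrics


def build_neural_network(normalization_layer: Normalization) -> Sequential:
    """ Build a simple regression network with a proper loss, metric and output layer. """
    model = Sequential([
        normalization_layer,
        Dense(10, activation='relu'),
        Dropout(0.2),
        Dense(1)  # linear output for a regression target
    ])
    model.compile(
        optimizer='adam',
        loss='mse',                            # a loss function, not a tf.keras.metrics object
        metrics=[metrics.MeanAbsoluteError()]  # report MAE; accuracy is meaningless for regression
    )
    return model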