Shapes are incompatible at the last records of tf.data.Dataset.from_tensor_slices

I have implemented a seq2seq translation model in TensorFlow 2.0, but during training I get the following error:

ValueError: Shapes (2056, 10, 10000) and (1776, 10, 10000) are incompatible

My dataset has 10000 records. The shapes match from the first record up to record 8224 (four full batches of 2056), but for the last 1776 records I get the error above, simply because my batch_size is larger than the number of records that remain.
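A minimal sketch with toy data (hypothetical, not from my model) reproduces the batching behavior: 10000 = 4 × 2056 + 1776, so tf.data emits four full batches followed by one smaller final batch.

import tensorflow as tf

ds = tf.data.Dataset.range(10000).batch(2056)
for batch in ds:
    print(batch.shape)  # (2056,) four times, then (1776,) for the final partial batch

Here is my code: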

max_seq_len_output = 10
n_words = 10000
batch_size = 2056

model = Model_translation(
    batch_size=batch_size,
    embed_size=embed_size,
    total_words=n_words,
    dropout_rate=dropout_rate,
    num_classes=n_words,
    embedding_matrix=embedding_matrix,
)
dataset_train = tf.data.Dataset.from_tensor_slices((encoder_input,decoder_input,decoder_output))
dataset_train = dataset_train.shuffle(buffer_size = 1024).batch(batch_size)


loss_object = tf.keras.losses.CategoricalCrossentropy()#used in backprop
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

train_loss = tf.keras.metrics.Mean(name='train_loss')#mean of the losses per observation
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')


##### no @tf.function here 
def training(X_1,X_2,y):
    # build the one-hot encoding per batch; creating it once outside the loop would exhaust RAM
    y_numpy = y.numpy()
    Y = np.zeros((batch_size,max_seq_len_output,n_words),dtype='float32')
    for i, d in enumerate(y_numpy):
        for t, word in enumerate(d):
            if word != 0:
                Y[i, t, word] = 1

    Y = tf.convert_to_tensor(Y)
    #predictions
    # trainable variables (created by tf.Variable or tf.compat.v1.get_variable, where
    # trainable=True is the default in both cases) are watched automatically by the tape
    with tf.GradientTape() as tape:
        predictions = model(X_1,X_2)
        loss = loss_object(Y,predictions)

    gradients = tape.gradient(loss,model.trainable_variables)
    optimizer.apply_gradients(zip(gradients,model.trainable_variables))
    train_loss(loss) 
    train_accuracy(Y,predictions)
    del Y
    del y_numpy


EPOCHS = 70

for epoch in range(EPOCHS):
    for X_1,X_2,y in dataset_train:
        training(X_1,X_2,y)
    template = 'Epoch {}, Loss: {}, Accuracy: {}'
    print(template.format(epoch+1,train_loss.result(),train_accuracy.result()*100))
    # Reset the metrics for the next epoch
    train_loss.reset_states()
    train_accuracy.reset_states() 

How can I fix this?

The shapes clash because Y is always allocated with the fixed batch_size (2056 rows), while the model's predictions follow the actual batch, which holds only 1776 rows in the final, partial batch. One solution is to drop the remainder when batching:

dataset_train = dataset_train.shuffle(buffer_size = 1024).batch(batch_size, drop_remainder=True)
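With drop_remainder=True every batch has exactly 2056 records, so the target and prediction shapes always agree. Up to batch_size - 1 records are then skipped per epoch, though since shuffle reshuffles on each iteration by default, a different remainder is dropped every epoch.

Alternatively, if you do not want to discard the final 1776 records, you can size the one-hot target by the actual batch instead of the fixed batch_size. A minimal sketch of the change, reusing the names from the question (the error message shows the model already emits (1776, 10, 10000) predictions for the partial batch, so only the allocation of Y needs to differ):

def training(X_1,X_2,y):
    y_numpy = y.numpy()
    current_batch_size = y_numpy.shape[0]  # 2056 for full batches, 1776 for the last one
    Y = np.zeros((current_batch_size,max_seq_len_output,n_words),dtype='float32')
    for i, d in enumerate(y_numpy):
        for t, word in enumerate(d):
            if word != 0:
                Y[i, t, word] = 1
    Y = tf.convert_to_tensor(Y)
    # ...the GradientTape block, optimizer step, and metric updates stay unchanged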