InvalidArgumentError: Negative dimension size caused by subtracting 3 from 1 '{{node conv2d_28/Conv2D}}

import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dropout, Dense, MaxPool2D, Conv2D, BatchNormalization, Flatten, Activation
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.utils import to_categorical
import os
import time
import matplotlib.pyplot as plt
import seaborn
import pickle

The file "icml_face_data.csv" contains the training, validation, and test data for the facial expressions.

df = pd.read_csv("icml_face_data.csv")

def prepare_data(data):
  """
  This function separates array and label(target)
  :param data: data( it can be train,test,val)
  :return: image_array and labels(target)
  """
  image_array = np.zeros(shape=(len(data),48,48))
  image_label = np.array(data["emotion"])
  for i, row in enumerate(data.index):
    image = np.fromstring(data.loc[row, " pixels"], dtype=int, sep=" ")
    image = np.reshape(image, (48, 48))
    image_array[i] = image
  return image_array, image_label

training_data, training_label = prepare_data(df[df[" Usage"]=="Training"])
validation_data, validation_label = prepare_data(df[df[" Usage"]=="PublicTest"])
test_data, test_label = prepare_data(df[df[" Usage"]=="PrivateTest"])

train_data = training_data.reshape((training_data.shape[0],48,48,1))
train_data = train_data.astype("float32")/255

valid_data = validation_data.reshape((validation_data.shape[0],48,48,1))
valid_data = valid_data.astype("float32")/255

test_data = test_data.reshape((test_data.shape[0],48,48,1))
test_data = test_data.astype("float32")/255

training_label = to_categorical(training_label)
validation_label = to_categorical(validation_label)
test_label = to_categorical(test_label)

I am training convolutional models with different combinations of dense layers, conv layers, and layer sizes. When I train the combinations dense_layers = [1,2,3], layer_sizes = [32,64,128], conv_layers = [1,2,3], everything works fine with no errors.

It also works fine when I try dense_layers = [1], layer_sizes = [32], conv_layers = [3,4].

But when I use the combination dense_layers = [1], layer_sizes = [32], conv_layers = [5], it raises an error:

dense_layers = [1]
layer_sizes=[32]
conv_layers = [5]

for dense_layer in dense_layers:
  for layer_size in layer_sizes:
    for conv_layer in conv_layers:

      NAME = f"{conv_layer}-conv-{layer_size}-layer-{dense_layer}-Dense-{int(time.time())}"
      tensorboard = TensorBoard(log_dir=f"logs/{NAME}")

      model = Sequential()
      model.add(Conv2D(layer_size, (3,3),activation="relu",input_shape=(48,48,1)))
      model.add(MaxPool2D((2,2)))
      model.add(Dropout(0.2))

      for _ in range(conv_layer-1):
        model.add(Conv2D(layer_size, (3,3),activation="relu"))
        model.add(MaxPool2D((2,2)))
        model.add(Dropout(0.2))

      model.add(Flatten())
      for _ in range(dense_layer):
        model.add(Dense(layer_size, activation="relu"))
        model.add(Dropout(0.2))

      model.add(Dense(7, activation="softmax"))

      model.compile(loss="categorical_crossentropy", optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=["accuracy"])

      model.fit(train_data, training_label,
                        validation_data=(valid_data,validation_label),
                        epochs=20,
                        batch_size=32,
                        callbacks=[tensorboard])

Error:

---------------------------------------------------------------------------

InvalidArgumentError                      Traceback (most recent call last)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1879   try:
-> 1880     c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
   1881   except errors.InvalidArgumentError as e:

InvalidArgumentError: Negative dimension size caused by subtracting 3 from 1 for '{{node conv2d_28/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, conv2d_28/Conv2D/ReadVariableOp)' with input shapes: [?,1,1,32], [3,3,32,32].


During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)

17 frames

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1881   except errors.InvalidArgumentError as e:
   1882     # Convert to ValueError for backwards compatibility.
-> 1883     raise ValueError(str(e))
   1884 
   1885   return c_op

ValueError: Negative dimension size caused by subtracting 3 from 1 for '{{node conv2d_28/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](Placeholder, conv2d_28/Conv2D/ReadVariableOp)' with input shapes: [?,1,1,32], [3,3,32,32].

Why does this code raise the error only for that combination? I am using Google Colab (runtime type = "GPU"). I tried restarting the runtime and running all cells again, but it still raises the error only for that combination. I don't know why this happens. Please help.

Your problem is most likely that, because of the strides in your convolution and max pooling layers, the feature maps get smaller with every additional layer. Your original input has shape (48, 48, 1), so if you apply a conv layer to it (in your case with no stride, but with padding="valid", which is the default), the output has shape (46, 46, 32). The same happens with the max pooling layers, only the effect is stronger, because when you do not specify a stride, TensorFlow assumes by default that the stride equals the pool size. That means that for an input of shape (48, 48, 1), the output of a max pooling layer is only (24, 24, 1).

To sum up: with every additional layer you shrink the feature maps, and at some point they become smaller than the layer's kernel size, which causes the error.
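To make this concrete, here is a minimal sketch (not from the original post) that traces the spatial size of the feature maps through the five Conv2D + MaxPool2D blocks built in the loop above, using the output-size formulas for padding="valid":

# Conv2D with a 3x3 kernel, stride 1, padding="valid": out = in - 3 + 1
# MaxPool2D(2x2) with the default stride (= pool size): out = (in - 2) // 2 + 1
size = 48
for block in range(1, 6):
    size = size - 3 + 1
    if size < 1:
        print(f"block {block}: Conv2D output would be {size}x{size} -> InvalidArgumentError")
        break
    size = (size - 2) // 2 + 1
    print(f"block {block}: feature map is {size}x{size}")

Running it shows the sizes 23, 10, 4, and 1 after the first four blocks; the fifth Conv2D would have to subtract 3 from a 1x1 input, which is exactly the negative dimension the error complains about.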

I assume you want the image shape to always stay the same. If that is the case, you should change your code in the following way:

  1. Add padding = "same" to the max pooling and convolution layers.
  2. Add strides = (1, 1) to the max pooling layers.

For most convolutional networks it actually makes sense for the feature maps to get smaller and smaller as they pass through the network, so you should consider applying my suggestion only to some of the layers; a sketch of one shape-preserving block follows.
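As a rough illustration (a hypothetical variant of one conv block from your loop, not a drop-in replacement), this is what such a shape-preserving block would look like:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout

# padding="same" keeps the Conv2D output at 48x48, and strides=(1, 1)
# together with padding="same" keeps the MaxPool2D output at 48x48 too,
# so stacking five such blocks no longer shrinks the feature maps.
model = Sequential()
model.add(Conv2D(32, (3, 3), activation="relu", padding="same", input_shape=(48, 48, 1)))
model.add(MaxPool2D((2, 2), strides=(1, 1), padding="same"))
model.add(Dropout(0.2))
model.summary()  # every layer reports an output shape of (None, 48, 48, 32)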