InvalidArgumentError: required broadcastable shapes at loc(unknown)
Background
I am completely new to Python and machine learning. I just tried to set up a UNet based on code I found on the Internet, and I want to adapt it, bit by bit, to the case I am working on. When trying to .fit the UNet to the training data, I get the following error:
InvalidArgumentError: required broadcastable shapes at loc(unknown)
[[node Equal (defined at <ipython-input-68-f1422c6f17bb>:1) ]] [Op:__inference_train_function_3847]
Searching for the error yields plenty of results, but most of them concern different errors.
What does this mean, and, more importantly, how can I fix it?
Code producing the error
The context of the error is the following:
I want to segment images and label the different classes.
I set up the directories "trn", "tst" and "val" for training, test and validation data. The dir_dat() function applies os.path.join() to get the full path to the respective data set. Each of the three folders has subdirectories for each class, labeled with an integer, and each of those contains some .tif images for the corresponding class.
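For reference, a minimal sketch of what such a helper could look like (the base path dir_root is an assumption for illustration; the question only states that dir_dat() uses os.path.join()):
import os
dir_root = "C:/project/data"    # assumed base directory holding "trn", "tst" and "val"
def dir_dat(subdir):
    # build the full path to the requested data set folder
    return os.path.join(dir_root, subdir)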
I defined the following image data generators (the training data is sparse, hence the augmentation):
classes = np.array([ 0, 2, 4, 6, 8, 11, 16, 21, 29, 30, 38, 39, 51])
bs = 15 # batch size
augGen = ks.preprocessing.image.ImageDataGenerator(rotation_range = 365,
width_shift_range = 0.05,
height_shift_range = 0.05,
horizontal_flip = True,
vertical_flip = True,
fill_mode = "nearest") \
.flow_from_directory(directory = dir_dat("trn"),
classes = [str(x) for x in classes.tolist()],
class_mode = "categorical",
batch_size = bs, seed = 42)
tst_batches = ks.preprocessing.image.ImageDataGenerator() \
.flow_from_directory(directory = dir_dat("tst"),
classes = [str(x) for x in classes.tolist()],
class_mode = "categorical",
batch_size = bs, shuffle = False)
val_batches = ks.preprocessing.image.ImageDataGenerator() \
.flow_from_directory(directory = dir_dat("val"),
classes = [str(x) for x in classes.tolist()],
class_mode = "categorical",
batch_size = bs)
Next, I built the UNet based on this example. I changed a few parameters to adapt the UNet to this situation (multiple classes), namely the activation in the last layer and the loss function:
layer_in = ks.layers.Input(shape = (imgr, imgc, imgdim))
# convert pixel integer values to float
inVals = ks.layers.Lambda(lambda x: x / 255)(layer_in)
# Contraction path
c1 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(inVals)
c1 = ks.layers.Dropout(0.1)(c1)
c1 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c1)
p1 = ks.layers.MaxPooling2D((2, 2))(c1)
c2 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(p1)
c2 = ks.layers.Dropout(0.1)(c2)
c2 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c2)
p2 = ks.layers.MaxPooling2D((2, 2))(c2)
c3 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(p2)
c3 = ks.layers.Dropout(0.2)(c3)
c3 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c3)
p3 = ks.layers.MaxPooling2D((2, 2))(c3)
c4 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(p3)
c4 = ks.layers.Dropout(0.2)(c4)
c4 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c4)
p4 = ks.layers.MaxPooling2D(pool_size = (2, 2))(c4)
c5 = ks.layers.Conv2D(256, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(p4)
c5 = ks.layers.Dropout(0.3)(c5)
c5 = ks.layers.Conv2D(256, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c5)
# Expansive path
u6 = ks.layers.Conv2DTranspose(128, (2, 2), strides = (2, 2), padding = "same")(c5)
u6 = ks.layers.concatenate([u6, c4])
c6 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(u6)
c6 = ks.layers.Dropout(0.2)(c6)
c6 = ks.layers.Conv2D(128, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c6)
u7 = ks.layers.Conv2DTranspose(64, (2, 2), strides = (2, 2), padding = "same")(c6)
u7 = ks.layers.concatenate([u7, c3])
c7 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(u7)
c7 = ks.layers.Dropout(0.2)(c7)
c7 = ks.layers.Conv2D(64, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c7)
u8 = ks.layers.Conv2DTranspose(32, (2, 2), strides = (2, 2), padding = "same")(c7)
u8 = ks.layers.concatenate([u8, c2])
c8 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(u8)
c8 = ks.layers.Dropout(0.1)(c8)
c8 = ks.layers.Conv2D(32, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c8)
u9 = ks.layers.Conv2DTranspose(16, (2, 2), strides = (2, 2), padding = "same")(c8)
u9 = ks.layers.concatenate([u9, c1], axis = 3)
c9 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(u9)
c9 = ks.layers.Dropout(0.1)(c9)
c9 = ks.layers.Conv2D(16, (3, 3), activation = "relu",
kernel_initializer = "he_normal", padding = "same")(c9)
out = ks.layers.Conv2D(1, (1, 1), activation = "softmax")(c9)
model = ks.Model(inputs = layer_in, outputs = out)
model.compile(optimizer = "adam", loss = "sparse_categorical_crossentropy", metrics = ["accuracy"])
model.summary()
Finally, I defined the callbacks and ran the training, which produced the error:
cllbs = [
ks.callbacks.EarlyStopping(patience = 4),
ks.callbacks.ModelCheckpoint(dir_out("Checkpoint.h5"), save_best_only = True),
ks.callbacks.TensorBoard(log_dir = './logs'),# log events for TensorBoard
]
model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)
Full console output
This is the complete output from running the last line (in case it helps to solve the problem):
trained = model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)
Epoch 1/5
Traceback (most recent call last):
File "<ipython-input-68-f1422c6f17bb>", line 1, in <module>
trained = model.fit(augGen, epochs = 5, validation_data = val_batches, callbacks = cllbs)
File "c:\users\manuel\python\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1183, in fit
tmp_logs = self.train_function(iterator)
File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\def_function.py", line 889, in __call__
result = self._call(*args, **kwds)
File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\def_function.py", line 950, in _call
return self._stateless_fn(*args, **kwds)
File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 3023, in __call__
return graph_function._call_flat(
File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 1960, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call
outputs = execute.execute(
File "c:\users\manuel\python\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
InvalidArgumentError: required broadcastable shapes at loc(unknown)
[[node Equal (defined at <ipython-input-68-f1422c6f17bb>:1) ]] [Op:__inference_train_function_3847]
Function call stack:
train_function
Try checking whether the inputs to the ks.layers.concatenate layers have the same dimensions. For ks.layers.concatenate([u7, c3]), for example, check that the u7 and c3 tensors have the same shape, except along the axis passed to ks.layers.concatenate. The default is axis = -1, i.e. the last dimension. To illustrate: if you call ks.layers.concatenate([u7, c3], axis = 0), then the sizes of all axes except the first must match exactly, e.g. u7.shape = [3, 4, 5] and c3.shape = [6, 4, 5].
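A quick way to verify this is to print the symbolic shapes just before the concatenation (a minimal check using the tensor names from the model above):
print(u7.shape, c3.shape)    # every dimension except the concatenation axis (-1 by default) must match
u7 = ks.layers.concatenate([u7, c3])    # fails if the non-channel dimensions differ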
I found several problems here. The model is meant for semantic segmentation with multiple classes (which is why I changed the output layer activation to "softmax" and set the "sparse_categorical_crossentropy" loss). Accordingly, class_mode in the ImageDataGenerators must be set to None and classes must not be provided. Instead, the manually classified images (the masks) have to be fed in as y. I guess a beginner just makes lots of beginner mistakes.
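A minimal sketch of that kind of setup, assuming the masks live in a folder next to the images (the directory names "trn_img" and "trn_msk" are assumptions; the essential points are class_mode = None and pairing images with masks through a shared seed):
# each directory is expected to contain one subfolder with the .tif files,
# since flow_from_directory scans subdirectories
img_gen = ks.preprocessing.image.ImageDataGenerator(horizontal_flip = True) \
    .flow_from_directory(directory = dir_dat("trn_img"),
                         class_mode = None,        # yield only the images
                         batch_size = bs, seed = 42)
msk_gen = ks.preprocessing.image.ImageDataGenerator(horizontal_flip = True) \
    .flow_from_directory(directory = dir_dat("trn_msk"),
                         class_mode = None,        # yield only the masks
                         color_mode = "grayscale",
                         batch_size = bs, seed = 42)    # same seed keeps image/mask pairs aligned
trn_batches = zip(img_gen, msk_gen)                # yields (image_batch, mask_batch) tuples
model.fit(trn_batches, steps_per_epoch = len(img_gen), epochs = 5)
Note that with sparse_categorical_crossentropy the mask pixels are expected to hold integer class indices from 0 to n_classes - 1, so raw label values such as 0, 2, 4, ..., 51 would still need to be remapped.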
I ran into this problem when the number of class labels did not match the number of filters in the output layer.
For example, if there are 10 class labels and we define the output layer as:
output = tf.keras.layers.Conv2D(5, (1, 1), activation = "softmax")(c9)
then we get this error, because the number of class labels (10) does not equal the output shape (5).
Make sure the number of class labels matches the number of filters in the output layer.
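A sketch of the corrected layer for that 10-class example (keeping the answer's tf.keras spelling):
output = tf.keras.layers.Conv2D(10, (1, 1), activation = "softmax")(c9)    # 10 filters for 10 class labels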
I ran into the same problem because the n_classes I used in the model (for the output layer) differed from the actual number of classes in the labels/masks array. I see you have a similar issue here: you have 13 classes, but your output layer only produces 1. The best approach is to avoid hardcoding the number of classes and instead pass a variable (such as n_classes) that you declare before building the model, e.g. n_classes = y_Train.shape[-1] or n_classes = len(np.unique(y_Train)).
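Applied to the question's code, a minimal sketch could look like this (the variable name n_classes and deriving it from the classes array are illustrative assumptions):
n_classes = len(np.unique(classes))    # 13 distinct labels in the question's array
out = ks.layers.Conv2D(n_classes, (1, 1), activation = "softmax")(c9)
model = ks.Model(inputs = layer_in, outputs = out)
model.compile(optimizer = "adam",
              loss = "sparse_categorical_crossentropy",
              metrics = ["accuracy"])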
Just add a Flatten() layer before the fully connected layer.
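For completeness, a minimal sketch of that suggestion in a classification-style head (it does not directly apply to the UNet above, whose output stays spatial; it is shown only to illustrate the suggestion):
x = ks.layers.Flatten()(c9)                                # collapse the spatial dimensions
out = ks.layers.Dense(n_classes, activation = "softmax")(x)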