Why does my Keras/TensorFlow model refuse to fit (even though params appear correct)?
Using this model:
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, UpSampling2D, concatenate)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def unet(input_shape=(256, 256, 256)):
    inputs = Input(input_shape)

    # Encoder block 1: two conv-BN-ReLU stacks, then 2x2 max-pooling
    conv1 = Conv2D(64, (3, 3), padding='same')(inputs)
    conv1 = BatchNormalization()(conv1)
    conv1 = Activation('relu')(conv1)
    conv1 = Conv2D(64, (3, 3), padding='same')(conv1)
    conv1 = BatchNormalization()(conv1)
    conv1 = Activation('relu')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv1)

    # Encoder block 2
    conv2 = Conv2D(128, (3, 3), padding='same')(pool1)
    conv2 = BatchNormalization()(conv2)
    conv2 = Activation('relu')(conv2)
    conv2 = Conv2D(128, (3, 3), padding='same')(conv2)
    conv2 = BatchNormalization()(conv2)
    conv2 = Activation('relu')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv2)

    # Encoder block 3
    conv3 = Conv2D(256, (3, 3), padding='same')(pool2)
    conv3 = BatchNormalization()(conv3)
    conv3 = Activation('relu')(conv3)
    conv3 = Conv2D(256, (3, 3), padding='same')(conv3)
    conv3 = BatchNormalization()(conv3)
    conv3 = Activation('relu')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv3)

    # Encoder block 4
    conv4 = Conv2D(512, (3, 3), padding='same')(pool3)
    conv4 = BatchNormalization()(conv4)
    conv4 = Activation('relu')(conv4)
    conv4 = Conv2D(512, (3, 3), padding='same')(conv4)
    conv4 = BatchNormalization()(conv4)
    conv4 = Activation('relu')(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv4)

    # Bottleneck
    bottleneck = Conv2D(1024, (3, 3), padding='same')(pool4)
    bottleneck = BatchNormalization()(bottleneck)
    bottleneck = Activation('relu')(bottleneck)
    bottleneck = Conv2D(1024, (3, 3), padding='same')(bottleneck)
    bottleneck = BatchNormalization()(bottleneck)
    bottleneck = Activation('relu')(bottleneck)

    # Decoder block 4: upsample, concatenate the skip connection, three conv-BN-ReLU stacks
    back4 = UpSampling2D((2, 2))(bottleneck)
    back4 = concatenate([conv4, back4], axis=3)
    back4 = Conv2D(512, (3, 3), padding='same')(back4)
    back4 = BatchNormalization()(back4)
    back4 = Activation('relu')(back4)
    back4 = Conv2D(512, (3, 3), padding='same')(back4)
    back4 = BatchNormalization()(back4)
    back4 = Activation('relu')(back4)
    back4 = Conv2D(512, (3, 3), padding='same')(back4)
    back4 = BatchNormalization()(back4)
    back4 = Activation('relu')(back4)

    # Decoder block 3
    back3 = UpSampling2D((2, 2))(back4)
    back3 = concatenate([conv3, back3], axis=3)
    back3 = Conv2D(256, (3, 3), padding='same')(back3)
    back3 = BatchNormalization()(back3)
    back3 = Activation('relu')(back3)
    back3 = Conv2D(256, (3, 3), padding='same')(back3)
    back3 = BatchNormalization()(back3)
    back3 = Activation('relu')(back3)
    back3 = Conv2D(256, (3, 3), padding='same')(back3)
    back3 = BatchNormalization()(back3)
    back3 = Activation('relu')(back3)

    # Decoder block 2
    back2 = UpSampling2D((2, 2))(back3)
    back2 = concatenate([conv2, back2], axis=3)
    back2 = Conv2D(128, (3, 3), padding='same')(back2)
    back2 = BatchNormalization()(back2)
    back2 = Activation('relu')(back2)
    back2 = Conv2D(128, (3, 3), padding='same')(back2)
    back2 = BatchNormalization()(back2)
    back2 = Activation('relu')(back2)
    back2 = Conv2D(128, (3, 3), padding='same')(back2)
    back2 = BatchNormalization()(back2)
    back2 = Activation('relu')(back2)

    # Decoder block 1
    back1 = UpSampling2D((2, 2))(back2)
    back1 = concatenate([conv1, back1], axis=3)
    back1 = Conv2D(64, (3, 3), padding='same')(back1)
    back1 = BatchNormalization()(back1)
    back1 = Activation('relu')(back1)
    back1 = Conv2D(64, (3, 3), padding='same')(back1)
    back1 = BatchNormalization()(back1)
    back1 = Activation('relu')(back1)
    back1 = Conv2D(64, (3, 3), padding='same')(back1)
    back1 = BatchNormalization()(back1)
    back1 = Activation('relu')(back1)

    # Per-pixel binary prediction: one sigmoid channel per pixel
    outputs = Conv2D(1, (1, 1), activation='sigmoid')(back1)

    model = Model(inputs=[inputs], outputs=[outputs])
    model.summary()
    model.compile(optimizer=Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model
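For reference, instantiating the model confirms the shapes it works with (TensorFlow 2.x; the comments show what these print for input_shape=(256, 256, 256)):

model = unet()
print(model.input_shape)    # (None, 256, 256, 256)
print(model.output_shape)   # (None, 256, 256, 1)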
I am passing the following arguments to the Model.fit() method:
X_train (a Python list containing 4 numpy.ndarray objects, each of shape (256, 256, 256))
y_train (a numpy.ndarray of shape (4, 5))
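For concreteness, data with exactly these shapes can be mocked as follows (random placeholder arrays standing in for my real data):

import numpy as np

# Hypothetical stand-ins shaped like the real training data
X_train = [np.random.rand(256, 256, 256).astype(np.float32) for _ in range(4)]
y_train = np.random.randint(0, 2, size=(4, 5)).astype(np.float32)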
When these are passed to Model.fit(X_train, y_train, steps_per_epoch=10, epochs=100), the following error is raised:
ValueError: Data cardinality is ambiguous:
x sizes: 256, 256, 256, 256
y sizes: 4
Please provide data which shares the same first dimension.
I assumed the arguments to Model.fit() were correct, reading the x sizes as the elements inside the list and the y size as how many elements the list contains.
I have already tried converting X_train to a numpy.ndarray, but Model.fit() would not accept it as the first argument either, and I don't think reshaping y_train to shape (20,) is right, since the y size would still refer to how many numpy.ndarray objects the X_train list holds.
Is there any other way to get Model.fit(X_train, y_train, steps_per_epoch=10, epochs=100) to run without raising the ValueError?
Update:
After resolving the error above, I am getting another one:
ValueError: logits and labels must have the same shape ((1, 256, 256, 1) vs (1, 5))
Somehow I doubt that reshaping alone will help here, so how can I 'match' the labels?
Thanks.
Your model has only one input; Keras interprets a plain Python list of arrays as one array per model input, which is why your four (256, 256, 256) arrays are reported as four x sizes of 256. Convert your list to a single np.ndarray:
input = np.asarray(input)
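Applied to the variables from the question, that is (a minimal sketch; np.asarray stacks the four equally-shaped arrays along a new leading batch axis):

import numpy as np

X_train = np.asarray(X_train)   # list of 4 arrays -> single array
print(X_train.shape)            # (4, 256, 256, 256): a batch of 4 samples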
Your model expects labels of shape (4, 256, 256, 1). That does not match your labels of shape (4, 5).
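If the task is binary segmentation, which is what the final 1-channel sigmoid layer produces, the labels need to be per-pixel masks. A minimal sketch, assuming placeholder masks (replace them with your real ones):

import numpy as np

# Hypothetical all-zero masks; real masks must have shape (4, 256, 256, 1)
y_train = np.zeros((4, 256, 256, 1), dtype=np.float32)

model = unet()
model.fit(X_train, y_train, batch_size=1, epochs=100)

If your targets really are 5-dimensional class vectors, then a segmentation map is the wrong output head for the task, and the network would have to end in something like GlobalAveragePooling2D followed by Dense(5) instead of the (256, 256, 1) sigmoid map.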