new shape and old shape must have the same number of elements
For learning purposes I am using TensorFlow.js, and I get an error while trying to understand batching with the fit method and a batched dataset (10 x 10).
I have a set of 600x600x3 images to classify (2 outputs, 1 or 0).
Here is my training loop:
const batches = await loadDataset()
for (let i = 0; i < batches.length; i++) {
  const batch = batches[i]
  const xs = batch.xs.reshape([batch.size, 600, 600, 3])
  const ys = tf.oneHot(batch.ys, 2)
  console.log({
    xs: xs.shape,
    ys: ys.shape,
  })
  // { xs: [ 10, 600, 600, 3 ], ys: [ 10, 2 ] }
  const history = await model.fit(
    xs, ys,
    {
      batchSize: batch.size,
      epochs: 1
    }) // <----- The code throws here
  const loss = history.history.loss[0]
  const accuracy = history.history.acc[0]
  console.log({ loss, accuracy })
}
Here is how I define the dataset:
const chunks = chunk(examples, BATCH_SIZE)
const batches = chunks.map(
  batch => {
    const ys = tf.tensor1d(batch.map(e => e.y), 'int32')
    const xs = batch
      .map(e => imageToInput(e.x, 3))
      .reduce((p, c) => p ? p.concat(c) : c)
    return { size: batch.length, xs, ys }
  }
)
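For reference, imageToInput is not shown in the question. A minimal hypothetical version that would be consistent with the concat and reshape above returns one rank-3 tensor per image, for example:

const tf = require('@tensorflow/tfjs')

// Hypothetical helper (not part of the original question): turns one image's flat
// pixel array into a [600, 600, channels] tensor. Concatenating N of these along
// axis 0 (the default for Tensor.concat) yields [N * 600, 600, channels], which
// holds exactly N * 600 * 600 * channels elements and therefore reshapes cleanly
// into [N, 600, 600, channels].
const imageToInput = (pixels, channels) =>
  tf.tensor3d(pixels, [600, 600, channels], 'float32')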
And here is the model:
const model = tf.sequential()
model.add(tf.layers.conv2d({
  inputShape: [600, 600, 3],
  kernelSize: 60,
  filters: 50,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.conv2d({
  kernelSize: 5,
  filters: 100,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 2,
  kernelInitializer: 'VarianceScaling',
  activation: 'softmax'
}))
I get an error on the first iteration of the for loop, coming from .fit, and it looks like this:
Error: new shape and old shape must have the same number of elements.
at Object.assert (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/util.js:36:15)
at reshape_ (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/array_ops.js:271:10)
at Object.reshape (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/operation.js:23:29)
at Tensor.reshape (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tensor.js:273:26)
at Object.derB [as $b] (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/binary_ops.js:32:24)
at _loop_1 (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tape.js:90:47)
at Object.backpropagateGradients (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tape.js:108:9)
at /Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:334:20
at /Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:91:22
at Engine.scopedRun (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:101:23)
I don't know what to make of it, and I couldn't find documentation or help about that specific error. Any idea?
The problem with the model lies in the way the convolution and maxPooling layers are applied together.

The first layer performs a convolution with kernelSize 60, strides of [20, 20] and 50 filters. The output of this layer will have the approximate shape [600 / 20, 600 / 20, 50] = [30, 30, 50].

Max pooling is then applied with strides of [20, 20]. The output of that layer will also have the approximate shape [30 / 20, 30 / 20, 50] = [1, 1, 50].

From this step on, the model can no longer perform a convolution with kernelSize 5: the kernel shape [5, 5] is bigger than the input shape [1, 1], which is what causes the error to be thrown. The only convolution the model could still perform is one with a kernel of size 1, and that convolution would obviously output its input without any transformation.

The same rule applies to the last maxPooling layer, whose poolSize cannot be anything other than 1, otherwise an error will be thrown.
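To make the arithmetic concrete, here is a small sketch (not from the original answer) using the output-size formula for 'valid' padding, which is the default for both tf.layers.conv2d and tf.layers.maxPooling2d:

// Spatial output size of a convolution or pooling layer with 'valid' padding:
// floor((inputSize - windowSize) / stride) + 1
const outSize = (inputSize, windowSize, stride) =>
  Math.floor((inputSize - windowSize) / stride) + 1

const afterConv1 = outSize(600, 60, 20)        // 28 (close to the 30 estimated above)
const afterPool1 = outSize(afterConv1, 20, 20) // 1
// The next conv2d asks for a 5x5 kernel, but only a 1x1 feature map is left,
// so the layer cannot be applied and the error is thrown.
console.log({ afterConv1, afterPool1 })        // { afterConv1: 28, afterPool1: 1 }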
Here is a snippet:
const model = tf.sequential()
model.add(tf.layers.conv2d({
  inputShape: [600, 600, 3],
  kernelSize: 60,
  filters: 50,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.conv2d({
  kernelSize: 1,
  filters: 100,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: 1,
  strides: [20, 20]
}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 2,
  kernelInitializer: 'VarianceScaling',
  activation: 'softmax'
}))
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
model.fit(tf.ones([10, 600, 600, 3]), tf.ones([10, 2]), {batchSize: 4});
model.predict(tf.ones([1, 600, 600, 3])).print()
<html>
<head>
<!-- Load TensorFlow.js -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.13.0"> </script>
</head>
<body>
</body>
</html>
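As a quick follow-up (not part of the original answer): in recent versions of TensorFlow.js you can print every layer's output shape with model.summary() before calling fit, which makes this kind of shrinking-feature-map problem visible right away:

// Prints a Keras-style table, one row per layer; the batch dimension shows as null.
// For the model above, the spatial size drops to 1x1 right after the first pooling layer.
model.summary()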