Using the SPP Layer in caffe results in Check failed: pad_w_ < kernel_w_ (1 vs. 1)
OK, I previously had a question about using the SPP layer in caffe.
This question is a follow-up to that earlier one.
When I use the SPP layer, I get the error output below.
It seems the feature map has become too small by the time it reaches the SPP layer?
The images I use are small: the width is between 10 and 20 pixels and the height between 30 and 35 pixels.
I0719 12:18:22.553256 2114932736 net.cpp:406] spatial_pyramid_pooling <- conv2
I0719 12:18:22.553261 2114932736 net.cpp:380] spatial_pyramid_pooling -> pool2
F0719 12:18:22.553505 2114932736 pooling_layer.cpp:74] Check failed: pad_w_ < kernel_w_ (1 vs. 1)
*** Check failure stack trace: ***
@ 0x106afcb6e google::LogMessage::Fail()
@ 0x106afbfbe google::LogMessage::SendToLog()
@ 0x106afc53a google::LogMessage::Flush()
@ 0x106aff86b google::LogMessageFatal::~LogMessageFatal()
@ 0x106afce55 google::LogMessageFatal::~LogMessageFatal()
@ 0x1068dc659 caffe::PoolingLayer<>::LayerSetUp()
@ 0x1068ffd98 caffe::SPPLayer<>::LayerSetUp()
@ 0x10691123f caffe::Net<>::Init()
@ 0x10690fefe caffe::Net<>::Net()
@ 0x106927ef8 caffe::Solver<>::InitTrainNet()
@ 0x106927325 caffe::Solver<>::Init()
@ 0x106926f95 caffe::Solver<>::Solver()
@ 0x106935b46 caffe::SGDSolver<>::SGDSolver()
@ 0x10693ae52 caffe::Creator_SGDSolver<>()
@ 0x1067e78f3 train()
@ 0x1067ea22a main
@ 0x7fff9a3ad5ad start
@ 0x5 (unknown)
I was right, my images were too small.
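To see why that check fires, here is a minimal sketch of how the SPP layer derives the pooling kernel and padding for one pyramid level (this follows what Caffe's spp_layer.cpp appears to do; the helper function name is mine). Once the feature map has shrunk to a single pixel in width, level 1 of the pyramid yields kernel_w = 1 and pad_w = 1, which is exactly the failed check (1 vs. 1) in the log above:

import math

def spp_pooling_param(size, pyramid_level):
    # Sketch of the per-level kernel/pad computation (assumption based on
    # Caffe's SPPLayer::GetPoolingParam; check your version for the exact code).
    num_bins = 2 ** pyramid_level
    kernel = int(math.ceil(size / float(num_bins)))   # bin size, rounded up
    remainder = kernel * num_bins - size              # how far the bins overshoot the input
    pad = (remainder + 1) // 2                        # overshoot absorbed as padding
    return kernel, pad

print(spp_pooling_param(1, 1))   # (1, 1) -> pad_w == kernel_w, the CHECK fails
print(spp_pooling_param(14, 1))  # (7, 0) -> fine once the map is wide enough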
I changed my net and it worked. I removed one conv layer and replaced the ordinary pool layer with the SPP layer. I also had to set my test batch size to 1. The accuracy is very high, but my F1 score dropped. I don't know whether that is related to the small test batch size I had to use.
Net:
name: "TessDigitMean"
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_SPP/784/caffe/train_lmdb"
    batch_size: 1 #64
    backend: LMDB
  }
}
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_SPP/784/caffe/test_lmdb"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    pad_w: 2
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "spatial_pyramid_pooling"
  type: "SPP"
  bottom: "conv1"
  top: "pool2"
  spp_param {
    pyramid_height: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
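As for the F1 drop: one way to rule the metric computation in or out is to gather per-image predictions with pycaffe (batch size 1, as above) and compute accuracy and F1 outside of Caffe, for example with scikit-learn. A rough sketch, assuming the net above is saved as train_val.prototxt and NUM_TEST_IMAGES is the size of the test LMDB (both names are mine):

import caffe
from sklearn.metrics import accuracy_score, f1_score

NUM_TEST_IMAGES = 1000  # assumption: set this to the number of images in your test LMDB

net = caffe.Net('train_val.prototxt', caffe.TEST)  # assumption: your prototxt filename

y_true, y_pred = [], []
for _ in range(NUM_TEST_IMAGES):
    net.forward()                                   # pulls the next image from the TEST Data layer
    y_true.append(int(net.blobs['label'].data[0]))  # ground-truth digit
    y_pred.append(int(net.blobs['ip2'].data[0].argmax()))  # predicted digit

print('accuracy:', accuracy_score(y_true, y_pred))
print('macro F1:', f1_score(y_true, y_pred, average='macro'))

If these numbers match what the Accuracy layer reports, the test batch size of 1 is probably not the cause of the F1 drop.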