OpenCV - Matching a video frame with an image
I'm trying to compare the keypoints of a video frame against an image that I uploaded. Here is my code. The img parameter is the uploaded image; pixels_color holds the frame's pixels, and height and width also belong to the frame.
int Image::match(ofImage img){
    cv::SurfFeatureDetector detector(400);          // SURF detector with Hessian threshold 400
    vector<cv::KeyPoint> keypoints1, keypoints2;

    // Wrap the frame pixels and the uploaded image pixels as cv::Mat headers (no copy)
    cv::Mat img1(height, width, CV_8UC3, pixels_color);
    cv::Mat img2(img.getHeight(), img.getWidth(), CV_8UC3, img.getPixels());

    detector.detect(img1, keypoints1);
    detector.detect(img2, keypoints2);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors1, descriptors2;
    extractor.compute(img1, keypoints1, descriptors1);
    extractor.compute(img2, keypoints2, descriptors2);

    cv::BruteForceMatcher<cv::L2<float> > matcher;  // "> >" needed on pre-C++11 compilers
    vector<cv::DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);
    return matches.size();
}
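One aside I'm not completely sure about: the cv::Mat constructors above only wrap the existing pixel buffers without copying them, and openFrameworks stores pixels in RGB order while OpenCV's default is BGR. For SURF this should mostly wash out in the grayscale conversion, but since handling of 3-channel input differs between OpenCV 2.x versions, an explicit conversion like the following is a safer variant (just a sketch, not what I originally ran):
// Hedged sketch: convert to grayscale before detection; CV_RGB2GRAY is used
// because the openFrameworks buffers are RGB-ordered, not BGR (needs opencv2/imgproc).
cv::Mat gray1, gray2;
cv::cvtColor(img1, gray1, CV_RGB2GRAY);
cv::cvtColor(img2, gray2, CV_RGB2GRAY);
detector.detect(gray1, keypoints1);
detector.detect(gray2, keypoints2);
extractor.compute(gray1, keypoints1, descriptors1);
extractor.compute(gray2, keypoints2, descriptors2);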
The problem is that the return value is always the size of keypoints1. I don't know why; unless keypoints2 has size 0, the return is always the size of keypoints1. No matter whether the ofImage img has anything to do with the frame or not, it always returns the size of keypoints1 — for example, it returns the same count even for a completely unrelated image.
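As far as I can tell, the reason is that cv::DescriptorMatcher::match() returns exactly one best match for every query descriptor, so as long as descriptors2 is not empty the result size always equals the number of frame descriptors, however poor those matches are. One common workaround is to filter the raw matches by distance before counting them. Sketched below as a drop-in replacement for the last two lines of the function above; the 2 * min_dist / 0.02 heuristic follows the classic OpenCV matching tutorial and is only illustrative:
// Hedged sketch: keep only matches whose descriptor distance is small relative
// to the best one. Requires <algorithm> and <limits>.
double min_dist = std::numeric_limits<double>::max();
for(size_t i = 0; i < matches.size(); ++i)
    min_dist = std::min(min_dist, (double)matches[i].distance);

vector<cv::DMatch> good_matches;
for(size_t i = 0; i < matches.size(); ++i)
    if(matches[i].distance <= std::max(2 * min_dist, 0.02))
        good_matches.push_back(matches[i]);

return good_matches.size();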
I still didn't understand what was going on, so I switched to a different approach: FlannBasedMatcher with a nearest-neighbour ratio test, which let me achieve what I wanted:
int Image::match(ofImage img){
    const float nearest_neighbor_distance_ratio = 0.7f;  // ratio-test threshold

    cv::SurfFeatureDetector detector(400);
    vector<cv::KeyPoint> keypoints1, keypoints2;
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors1, descriptors2;

    // Wrap the frame pixels and the uploaded image pixels as cv::Mat headers (no copy)
    cv::Mat img1(height, width, CV_8UC3, pixels_color);
    cv::Mat img2(img.getHeight(), img.getWidth(), CV_8UC3, img.getPixels());

    detector.detect(img1, keypoints1);
    detector.detect(img2, keypoints2);
    extractor.compute(img1, keypoints1, descriptors1);
    extractor.compute(img2, keypoints2, descriptors2);

    // For each frame descriptor, find its two nearest neighbours in the uploaded image
    cv::FlannBasedMatcher matcher;
    vector<vector<cv::DMatch> > matches;
    matcher.knnMatch(descriptors1, descriptors2, matches, 2);

    // Keep a match only when the best neighbour is clearly closer than the second best
    vector<cv::DMatch> good_matches;
    good_matches.reserve(matches.size());
    for(size_t i = 0; i < matches.size(); ++i)
    {
        if(matches[i].size() < 2)
            continue;
        const cv::DMatch &m1 = matches[i][0];
        const cv::DMatch &m2 = matches[i][1];
        if(m1.distance <= nearest_neighbor_distance_ratio * m2.distance)
            good_matches.push_back(m1);
    }
    return good_matches.size();
}
This version is a bit longer, but it seems to work.
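For reference, here is a rough sketch of how it could be called; the variable names, the file name, and the threshold of 8 surviving matches are placeholders I made up for illustration, not values from my actual project:
// Hypothetical caller: the frame is considered to match the uploaded image
// once "enough" ratio-test survivors are found. The threshold is arbitrary
// and would normally be tuned per application.
ofImage uploaded;
uploaded.loadImage("target.png");          // hypothetical file
int good = frameImage.match(uploaded);     // frameImage is an Image built from the current frame
bool frameMatchesImage = (good >= 8);
A stricter check could additionally fit a homography (e.g. cv::findHomography with RANSAC) over the good matches, but simply counting them is often enough to separate related from unrelated frames.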