An Introduction to the SURF Algorithm in OpenCV

SURF stands for Speeded-Up Robust Features. Before looking at the OpenCV demo, here is a brief introduction to the two concepts it relies on: feature points and descriptors.

  • Feature points and descriptors

  Feature points fall into two classes: narrow-sense and broad-sense. The position of a narrow-sense feature point carries conventional meaning by itself, e.g., a corner or an intersection. A broad-sense feature point, by contrast, is defined over a region: its position has no intrinsic feature meaning and merely marks the location of a region that satisfies some feature condition; it can be any fixed relative position within that region. Such a feature need not be physically meaningful — it only has to satisfy some mathematical description, so it can be abstract. In essence, a broad-sense feature point can be viewed as an abstract feature region whose attributes are those of the region; calling it a "point" simply abstracts it into a location.

  A feature point is both a positional marker and a statement that its local neighborhood exhibits a certain pattern. In fact, a feature point is the positional marker of a local region with certain characteristics; calling it a point abstracts it into a location, so that the correspondence between the same physical point in two images can be established for image matching. During matching, the local features of the neighborhood centered on each feature point are compared. In other words, before matching, each feature point (narrow- or broad-sense) must first be given a feature description, commonly called a descriptor.

  A good feature point needs a good description to represent it, and this directly determines the accuracy of image matching. In feature-based image stitching and registration, feature points and descriptors are therefore equally important.

For more background, see: http://blog.sina.com.cn/s/blog_4b146a9c0100rb18.html

  • The SURF demo in OpenCV
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
// Note: in OpenCV 2.4+ SURF moved to the nonfree module, so you also need:
// #include "opencv2/nonfree/features2d.hpp"

using namespace cv;

void readme();

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }

  Mat img_object = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
  Mat img_scene = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );

  if( !img_object.data || !img_scene.data )
  { std::cout << " --(!) Error reading images " << std::endl; return -1; }

  //-- Step 1: Detect the keypoints using the SURF detector
  int minHessian = 400;

  SurfFeatureDetector detector( minHessian );

  std::vector<KeyPoint> keypoints_object, keypoints_scene;

  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );

  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;

  Mat descriptors_object, descriptors_scene;

  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );

  //-- Step 3: Match descriptor vectors using the FLANN matcher
  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between matches
  for( int i = 0; i < descriptors_object.rows; i++ )
  { double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );

  //-- Keep only "good" matches (i.e. whose distance is less than 3*min_dist)
  std::vector< DMatch > good_matches;

  for( int i = 0; i < descriptors_object.rows; i++ )
  { if( matches[i].distance < 3*min_dist )
     { good_matches.push_back( matches[i] ); }
  }

  Mat img_matches;
  drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;

  for( size_t i = 0; i < good_matches.size(); i++ )
  {
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
  }

  Mat H = findHomography( obj, scene, CV_RANSAC );

  //-- Get the corners from image_1 ( the object to be "detected" )
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = Point2f( 0, 0 ); obj_corners[1] = Point2f( img_object.cols, 0 );
  obj_corners[2] = Point2f( img_object.cols, img_object.rows ); obj_corners[3] = Point2f( 0, img_object.rows );
  std::vector<Point2f> scene_corners(4);

  perspectiveTransform( obj_corners, scene_corners, H );

  //-- Draw lines between the corners (the mapped object in the scene - image_2 )
  line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0 ), scene_corners[1] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
  line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0 ), scene_corners[2] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
  line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0 ), scene_corners[3] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
  line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0 ), scene_corners[0] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );

  //-- Show detected matches
  imshow( "Good Matches & Object detection", img_matches );

  waitKey(0);
  return 0;
}

/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }

With this basic understanding of feature points and descriptors, the code above should be easier to follow.

Code source: http://www.opencv.org.cn/opencvdoc/2.3.2/html/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography

  • How the SURF algorithm is actually implemented

Some material collected from around the web:

  1. SURF algorithm principles, with a brief introduction (1)

  http://blog.csdn.net/andkobe/article/details/5778739

  2. SURF algorithm principles, with a brief introduction (2)

  http://wuzizhang.blog.163.com/blog/static/78001208201138102648854/

  3. Feature point detection study notes, part 2 (the SURF algorithm)

  http://www.cnblogs.com/tornadomeet/archive/2012/08/17/2644903.html

  • Miscellaneous
// DMatch constructor
DMatch(int queryIdx, int trainIdx, float distance)

Which feature point indices queryIdx and trainIdx refer to is determined by the argument order of the match call, for example:

// called in this order
match(descriptor_for_keypoints1, descriptor_for_keypoints2, matches)

Here queryIdx and trainIdx index into keypoints1 and keypoints2, respectively.

 

 2013-11-05

posted @ 2013-11-05 16:53  StevenMeng