How OpenCV's ORB algorithm ensures scale invariance
The original ORB paper does not describe how to build a scale space, but OpenCV's implementation constructs one internally, with 8 pyramid levels by default.
That aside, have you noticed the following problem? Suppose a point a on level 1 of image A's pyramid and a point b on level 3 of image B's pyramid are the same matched point. How should they be matched?
Or, put differently, how should the match be drawn? The two levels are at different scales, so how does inlier filtering work across them?
First, here is OpenCV's internal KeyPoint class:
```cpp
class CV_EXPORTS_W_SIMPLE KeyPoint
{
public:
    //! the default constructor
    CV_WRAP KeyPoint() : pt(0,0), size(0), angle(-1), response(0), octave(0),
                         class_id(-1) {}
    //! the full constructor
    KeyPoint(Point2f _pt, float _size, float _angle=-1, float _response=0,
             int _octave=0, int _class_id=-1)
        : pt(_pt), size(_size), angle(_angle), response(_response),
          octave(_octave), class_id(_class_id) {}
    //! another form of the full constructor
    CV_WRAP KeyPoint(float x, float y, float _size, float _angle=-1,
                     float _response=0, int _octave=0, int _class_id=-1)
        : pt(x, y), size(_size), angle(_angle), response(_response),
          octave(_octave), class_id(_class_id) {}

    size_t hash() const;

    //! converts vector of keypoints to vector of points
    static void convert(const vector<KeyPoint>& keypoints,
                        CV_OUT vector<Point2f>& points2f,
                        const vector<int>& keypointIndexes=vector<int>());
    //! converts vector of points to the vector of keypoints, where each
    //! keypoint is assigned the same size and the same orientation
    static void convert(const vector<Point2f>& points2f,
                        CV_OUT vector<KeyPoint>& keypoints,
                        float size=1, float response=1,
                        int octave=0, int class_id=-1);

    //! computes overlap for pair of keypoints;
    //! overlap is a ratio between area of keypoint regions intersection and
    //! area of keypoint regions union (now keypoint region is circle)
    static float overlap(const KeyPoint& kp1, const KeyPoint& kp2);

    CV_PROP_RW Point2f pt;     //!< coordinates of the keypoints
    CV_PROP_RW float size;     //!< diameter of the meaningful keypoint neighborhood
    CV_PROP_RW float angle;    //!< computed orientation of the keypoint (-1 if not applicable);
                               //!< it's in [0,360) degrees and measured relative to
                               //!< image coordinate system, ie in clockwise.
    CV_PROP_RW float response; //!< the response by which the most strong keypoints have been selected.
                               //!< Can be used for the further sorting or subsampling
    CV_PROP_RW int octave;     //!< octave (pyramid layer) from which the keypoint has been extracted
    CV_PROP_RW int class_id;   //!< object class (if the keypoints need to be clustered by an object they belong to)
};
```
As you can see, the KeyPoint class records each feature point's angle, position, and the pyramid level (octave) it was extracted from.
But notice that neither drawMatches() nor drawKeypoints() takes the keypoint's pyramid level as an input.
For example, drawKeypoints() contains Point center( cvRound(p.pt.x * draw_multiplier), cvRound(p.pt.y * draw_multiplier) ), which converts the KeyPoint directly into a Point.
So how are coordinates from different levels aligned?
Here comes the key part!!!
The reason is that OpenCV internally rescales points from the different pyramid levels: each point's coordinates are multiplied by its level's downsampling factor, so pt is always stored in original-image coordinates. The octave field of KeyPoint merely records which level the point was extracted from.
The ORB source contains a getScale function:
```cpp
static inline float getScale(int level, int firstLevel, double scaleFactor)
{
    return (float)std::pow(scaleFactor, (double)(level - firstLevel));
}
```
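To see concretely what this returns, here is a standalone copy of the same function (only the standard includes added), which is simply scaleFactor raised to the power (level − firstLevel): with the default scaleFactor of 1.2, level 2 corresponds to a factor of 1.2² = 1.44.

```cpp
#include <cassert>
#include <cmath>

// Standalone copy of OpenCV's internal getScale helper, as quoted above:
// the downsampling factor of a pyramid level relative to firstLevel.
static inline float getScale(int level, int firstLevel, double scaleFactor)
{
    return (float)std::pow(scaleFactor, (double)(level - firstLevel));
}
```

So a point detected on level 2 lives in an image that is 1/1.44 the size of the original.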
First the scale pyramid is built; the size of each level is computed as:
```cpp
float scale = 1/getScale(level, firstLevel, scaleFactor);
Size sz(cvRound(image.cols*scale), cvRound(image.rows*scale));
Size wholeSize(sz.width + border*2, sz.height + border*2);
Mat temp(wholeSize, image.type()), masktemp;
imagePyramid[level] = temp(Rect(border, border, sz.width, sz.height));
```
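As a sketch of how the level sizes shrink, the size computation above can be reproduced without OpenCV (levelSize is a name introduced here for illustration, and std::lround stands in for cvRound, which rounds half to even): a 640×480 image gives 533×400 on level 1 and 444×333 on level 2 with scaleFactor 1.2.

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Same formula as OpenCV's getScale, quoted earlier.
static inline float getScale(int level, int firstLevel, double scaleFactor)
{
    return (float)std::pow(scaleFactor, (double)(level - firstLevel));
}

// Hypothetical helper mirroring the per-level size computation above:
// returns {width, height} of the pyramid image at the given level.
static std::pair<int, int> levelSize(int cols, int rows, int level,
                                     int firstLevel, double scaleFactor)
{
    float scale = 1.f / getScale(level, firstLevel, scaleFactor);
    return { (int)std::lround(cols * scale), (int)std::lround(rows * scale) };
}
```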
Then the keypoints detected on each level are rescaled, which maps points from every level back onto the original image:
```cpp
if (level != firstLevel)
{
    float scale = getScale(level, firstLevel, scaleFactor);
    for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
         keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
        keypoint->pt *= scale;
}
// And add the keypoints to the output
_keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
```
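The effect of this loop can be sketched without OpenCV at all (Point2f and KeyPoint below are minimal stand-ins for the cv:: types, and rescaleToOriginal is a name introduced here): a point detected at (100, 50) on level 2 with scaleFactor 1.2 ends up stored at (144, 72) in original-image coordinates. That is why drawKeypoints(), drawMatches(), and the inlier-filtering stage can treat all keypoints uniformly, regardless of which level they came from.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal stand-ins for cv::Point2f / cv::KeyPoint, just enough to
// mirror the rescaling loop quoted above.
struct Point2f { float x, y; };
struct KeyPoint { Point2f pt; int octave; };

// Same formula as OpenCV's getScale, quoted earlier.
static inline float getScale(int level, int firstLevel, double scaleFactor)
{
    return (float)std::pow(scaleFactor, (double)(level - firstLevel));
}

// Multiply every keypoint detected at `level` by that level's scale,
// so pt ends up in original-image coordinates.
static void rescaleToOriginal(std::vector<KeyPoint>& keypoints, int level,
                              int firstLevel, double scaleFactor)
{
    if (level == firstLevel)
        return;  // firstLevel is already at full resolution
    float scale = getScale(level, firstLevel, scaleFactor);
    for (KeyPoint& kp : keypoints)
    {
        kp.pt.x *= scale;
        kp.pt.y *= scale;
    }
}
```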