3D Pose: On solvePnP and cvPOSIT

Reposted from: https://blog.csdn.net/abc20002929/article/details/8520063

Why this post:

Scenario: given a set of 3D object points and the corresponding 2D image points, compute the object's pose (i.e., the rotation and translation). While reading through the OpenCV API I noticed that both of these functions output a rotation and a translation, so here is a short analysis for the record.

Official descriptions:

solvePnP (http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp)
Finds an object pose from 3D-2D point correspondences.
bool solvePnP(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int flags=ITERATIVE)
cvPOSIT (http://www.opencv.org.cn/index.php/Cv照相机定标和三维重建#POSIT)
Runs the POSIT algorithm.
void cvPOSIT(CvPOSITObject* posit_object, CvPoint2D32f* image_points, double focal_length, CvTermCriteria criteria, CvMatr32f rotation_matrix, CvVect32f translation_vector);

Understanding:

Similarities:
1. Both take a set of 3D points and the corresponding 2D points as input; for cvPOSIT the 3D points are wrapped inside the posit_object structure.
2. Both output a rotation and a translation (cvPOSIT returns a 3x3 rotation matrix directly, while solvePnP returns a rotation vector rvec).
Difference: solvePnP also takes the camera intrinsics (camera matrix and distortion coefficients), whereas cvPOSIT only gets a focal length. A minimal cvPOSIT sketch follows.
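To make the cvPOSIT side concrete, here is a minimal sketch of the legacy C-API call path. All point values and the focal length below are invented for illustration; POSIT expects at least four non-coplanar model points, image coordinates measured relative to the principal point, and the focal length in pixels:

#include <opencv/cv.h>  // legacy C API; the header location varies across OpenCV versions
#include <vector>

void positSketch()
{
    // Hypothetical 3D model points (object frame, first point at the origin).
    std::vector<CvPoint3D32f> modelPts = { {0,0,0}, {10,0,0}, {0,10,0}, {0,0,10} };
    // Corresponding 2D image points, relative to the image center.
    std::vector<CvPoint2D32f> imgPts = { {5,5}, {50,10}, {10,50}, {8,8} };

    // The 3D points live inside the POSIT object, as noted above.
    CvPOSITObject* positObj = cvCreatePOSITObject(modelPts.data(), (int)modelPts.size());

    float rotation[9];              // 3x3 rotation matrix, row-major
    float translation[3];           // translation vector
    double focalLengthPx = 760.0;   // assumed focal length, in pixels
    CvTermCriteria crit = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1e-5);

    cvPOSIT(positObj, imgPts.data(), focalLengthPx, crit, rotation, translation);

    cvReleasePOSITObject(&positObj);
}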
solvePnP source:

void cv::solvePnP( InputArray _opoints, InputArray _ipoints,
                 InputArray _cameraMatrix, InputArray _distCoeffs,
                 OutputArray _rvec, OutputArray _tvec, bool useExtrinsicGuess )
{
   Mat opoints = _opoints.getMat(), ipoints = _ipoints.getMat();
   int npoints = std::max(opoints.checkVector(3, CV_32F), opoints.checkVector(3, CV_64F));
   CV_Assert( npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) );
   _rvec.create(3, 1, CV_64F);
   _tvec.create(3, 1, CV_64F);
   Mat cameraMatrix = _cameraMatrix.getMat(), distCoeffs = _distCoeffs.getMat();
   CvMat c_objectPoints = opoints, c_imagePoints = ipoints;
   CvMat c_cameraMatrix = cameraMatrix, c_distCoeffs = distCoeffs;
   CvMat c_rvec = _rvec.getMat(), c_tvec = _tvec.getMat();
   cvFindExtrinsicCameraParams2(&c_objectPoints, &c_imagePoints, &c_cameraMatrix,
                             c_distCoeffs.rows*c_distCoeffs.cols ? &c_distCoeffs : 0,
                             &c_rvec, &c_tvec, useExtrinsicGuess );

}

Conclusion: apart from the pile of type checks and conversions up front, solvePnP just calls cvFindExtrinsicCameraParams2, which solves for the unknown extrinsics from the known intrinsics; that gives an exact solution. cvPOSIT instead approximates the perspective projection model with an affine (scaled orthographic) projection and iterates toward an estimate; when the depth variation across the object is large relative to the object-to-camera distance, the iteration may fail to converge.
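To spell out that approximation (notation mine, not from the original post): under full perspective a camera-frame point $(X, Y, Z)$ projects using its own depth $Z$, while POSIT's scaled orthographic model replaces every point's depth with one shared reference depth $Z_0$, roughly the camera-to-object distance:

$$u = f\,\frac{X}{Z},\quad v = f\,\frac{Y}{Z} \;\;\text{(perspective)};\qquad u \approx f\,\frac{X}{Z_0},\quad v \approx f\,\frac{Y}{Z_0} \;\;\text{(POSIT's affine model)}$$

The two agree only while $|Z - Z_0| \ll Z_0$, which is exactly why POSIT degrades when the object's depth extent is comparable to its distance from the camera.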

3D coordinates -> 2D coordinates: the millimeter-to-pixel conversion lives mainly in fx, fy, cx and cy, which express physical lengths (mm) in units of pixels. How many millimeters one pixel corresponds to is set by the size of a single photosite (one physical pixel) on the sensor.
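A small sketch of that relationship; the focal length, pixel pitch, and test point below are invented numbers, used only to show the units cancelling:

#include <cstdio>

int main()
{
    // Hypothetical sensor/lens parameters (illustrative values only).
    double f_mm = 4.0;      // lens focal length, mm
    double s_mm = 0.0014;   // photosite pitch, mm (1.4 um)
    double fx = f_mm / s_mm;        // focal length expressed in pixels
    double fy = fx;                 // assume square pixels
    double cx = 320.0, cy = 240.0;  // principal point of a 640x480 sensor

    // A point in the camera frame, in mm.
    double X = 30.0, Y = -20.0, Z = 500.0;

    // Pinhole projection: the millimeters cancel, leaving pixels.
    double u = fx * X / Z + cx;
    double v = fy * Y / Z + cy;
    printf("u = %.1f px, v = %.1f px (fx = %.0f px)\n", u, v, fx);
    return 0;
}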

The rvec returned by solvePnP is a rotation vector; it can be turned into a rotation matrix with Rodrigues, and from there into Euler angles if needed:

static Vec3f rotationMatrixToEulerAngles(Mat &R)
{
    float sy = sqrt(R.at<double>(0,0) * R.at<double>(0,0) + R.at<double>(1,0) * R.at<double>(1,0));
 
    bool singular = sy < 1e-6; // if sy is near zero the matrix is singular (gimbal lock)
 
    float x, y, z;
    if (!singular)
    {
        x = atan2(R.at<double>(2,1), R.at<double>(2,2));
        y = atan2(-R.at<double>(2,0), sy);
        z = atan2(R.at<double>(1,0), R.at<double>(0,0));
    }
    else
    {
        x = atan2(-R.at<double>(1,2), R.at<double>(1,1));
        y = atan2(-R.at<double>(2,0), sy);
        z = 0;
    }
    return Vec3f(x, y, z) * 180 / 3.14159; // radians -> degrees
}
 
void test()
{
    ...
    Mat Rv, R, T;
    // Mat() as distCoeffs assumes an undistorted image
    cv::solvePnP(objPts, imgPts, cameraMatrix, Mat(), Rv, T);
    // rotation vector -> rotation matrix
    Rodrigues(Rv, R);
    // rotation matrix -> Euler angles (degrees)
    Vec3f angles = rotationMatrixToEulerAngles(R);
    ...
}
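To close the loop on test(): an easy consistency check is to reproject objPts with the recovered pose via cv::projectPoints and measure the mean pixel error against imgPts. A sketch, reusing the same (assumed) variable names:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Mean reprojection error, in pixels, of a recovered pose.
static double reprojectionError(const std::vector<cv::Point3f>& objPts,
                                const std::vector<cv::Point2f>& imgPts,
                                const cv::Mat& cameraMatrix,
                                const cv::Mat& Rv, const cv::Mat& T)
{
    std::vector<cv::Point2f> reproj;
    // Same camera model as the solvePnP call above (no distortion).
    cv::projectPoints(objPts, Rv, T, cameraMatrix, cv::Mat(), reproj);

    double err = 0;
    for (size_t i = 0; i < reproj.size(); i++)
    {
        cv::Point2f d = reproj[i] - imgPts[i];
        err += std::sqrt(double(d.x) * d.x + double(d.y) * d.y);
    }
    return err / reproj.size();
}

A mean error of a few pixels or less usually means the correspondences and intrinsics are consistent; a large error points at mismatched points or wrong intrinsics.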
posted @ 2018-12-19 14:37 木易修