The Principle of Converting a Depth Image to a Laser Scan
Depth-image-to-laser-scan conversion is implemented in the ROS package depthimage_to_laserscan; this post walks through its computation. Converting a point cloud to laser data follows the same idea: only the points within a given height range are projected onto the scan plane.
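As a rough illustration of that point-cloud variant, the sketch below keeps only points whose height falls inside a band around the sensor and bins them into a scan array by bearing angle. It is a minimal sketch, not code from depthimage_to_laserscan; the Point struct and all parameter names are made up for this example, and the camera-style convention (x right, y down, z forward) is assumed.

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point { float x, y, z; };  // hypothetical point type in the camera frame

// Project points whose height lies in [y_min, y_max] into a laser-style range array.
std::vector<float> cloudToScan(const std::vector<Point>& cloud,
                               float y_min, float y_max,          // height band to keep
                               float angle_min, float angle_max,  // angular limits [alpha, beta]
                               int num_beams)                     // N
{
  std::vector<float> laser(num_beams, std::numeric_limits<float>::infinity());
  const float increment = (angle_max - angle_min) / num_beams;
  for (const Point& p : cloud) {
    if (p.y < y_min || p.y > y_max) continue;        // outside the height band
    float theta = std::atan2(p.x, p.z);              // bearing angle in the scan plane
    int n = static_cast<int>((theta - angle_min) / increment);
    if (n < 0 || n >= num_beams) continue;           // outside the angular range
    float r = std::sqrt(p.x * p.x + p.z * p.z);      // range in the scan plane
    laser[n] = std::min(laser[n], r);                // keep the closest obstacle per beam
  }
  return laser;
}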
1. Principle of converting a depth image to a laser scan
The principle is illustrated in Figure (1). For any given depth image point \(m(u,v,z)\), the conversion to a laser measurement takes the following steps (a code sketch combining these steps is given after Figure (1)):
a. Transform the depth image point \(m(u,v,z)\) into the point \(M(x,y,z)\) in the depth camera coordinate frame; for the detailed derivation, see "The principle of converting a depth image to a point cloud";
b. Compute the angle \(AOC\) between line \(AO\) and line \(CO\):
\(\theta = \arctan(x/z)\)
c. Map the angle \(AOC\) into the corresponding laser bin. Given the scan's minimum and maximum angles \([\alpha,\beta]\), and with the scan divided into \(N\) beams, the laser data can be stored in an array \(laser[N]\). The index \(n\) of the bin that point \(M\) falls into is computed as:
\(n=(\theta - \alpha)/((\beta - \alpha)/N) =N(\theta - \alpha)/(\beta - \alpha)\)
The value of \(laser[n]\) is the distance \(r\) from the point \(C\), the projection of \(M\) onto the xOz (scan) plane, to the camera optical center \(O\), i.e.
\(laser[n] = r=OC=\sqrt{z^{2}+x^{2}}\)
Figure (1)
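Putting steps a to c together, the sketch below converts a single depth pixel into a beam index and range using a pinhole camera model. The function name and the parameters fx, cx, alpha, beta, N are illustrative and not taken from the package API.

#include <cmath>

// Convert one depth pixel (column u, depth z in meters) into a beam index n and range r.
// fx, cx: horizontal focal length and principal point of the depth camera (pinhole model).
// alpha, beta: angular limits of the scan; N: number of beams.
void pixelToBeam(int u, double z, double fx, double cx,
                 double alpha, double beta, int N,
                 int& n, double& r)
{
  double x = (u - cx) * z / fx;          // step a: back-project to the camera frame (x component)
  double theta = std::atan2(x, z);       // step b: angle AOC = atan(x / z)
  n = static_cast<int>(N * (theta - alpha) / (beta - alpha)); // step c: beam index
  r = std::sqrt(x * x + z * z);          // step c: range |OC|
}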
2. Code
Below is the implementation with explanatory comments added (see also the original source code):
/**
 * Converts the depth image to a laserscan using the DepthTraits to assist.
 *
 * This uses a method to inverse project each pixel into a LaserScan angular increment. This method first projects the pixel
 * forward into Cartesian coordinates, then calculates the range and angle for this point. When multiple points correspond to
 * a specific angular measurement, then the shortest range is used.
 *
 * @param depth_msg The UInt16 or Float32 encoded depth message.
 * @param cam_model The image_geometry camera model for this image.
 * @param scan_msg The output LaserScan.
 * @param scan_height The number of vertical pixels to feed into each angular_measurement.
 */
template<typename T>
void convert(const sensor_msgs::ImageConstPtr& depth_msg, const image_geometry::PinholeCameraModel& cam_model,
             const sensor_msgs::LaserScanPtr& scan_msg, const int& scan_height) const {
  // Use correct principal point from calibration
  float center_x = cam_model.cx(); // principal point x
  float center_y = cam_model.cy(); // principal point y

  // Combine unit conversion (if necessary) with scaling by focal length for computing (X,Y)
  double unit_scaling = depthimage_to_laserscan::DepthTraits<T>::toMeters( T(1) );
  float constant_x = unit_scaling / cam_model.fx();
  float constant_y = unit_scaling / cam_model.fy();

  const T* depth_row = reinterpret_cast<const T*>(&depth_msg->data[0]);
  int row_step = depth_msg->step / sizeof(T);

  int offset = (int)(cam_model.cy() - scan_height/2);
  depth_row += offset*row_step; // Offset to center of image

  for (int v = offset; v < offset + scan_height_; v++, depth_row += row_step) {
    for (int u = 0; u < (int)depth_msg->width; u++) // Loop over each pixel in row
    {
      T depth = depth_row[u];

      double r = depth; // Assign to pass through NaNs and Infs
      // Angle AOC: Atan2(x, z). The depth value appears in both numerator and denominator, so it divides out.
      double th = -atan2((double)(u - center_x) * constant_x, unit_scaling);
      int index = (th - scan_msg->angle_min) / scan_msg->angle_increment; // index of the corresponding laser beam

      if (depthimage_to_laserscan::DepthTraits<T>::valid(depth)) { // Not NaN or Inf
        // Calculate in XYZ
        double x = (u - center_x) * depth * constant_x;
        double z = depthimage_to_laserscan::DepthTraits<T>::toMeters(depth);

        // Calculate actual distance
        r = sqrt(pow(x, 2.0) + pow(z, 2.0));
      }

      // Determine if this point should be used (is the range within the preset valid range, and is it closer?).
      if (use_point(r, scan_msg->ranges[index], scan_msg->range_min, scan_msg->range_max)) {
        scan_msg->ranges[index] = r;
      }
    }
  }
}

image_geometry::PinholeCameraModel cam_model_; ///< image_geometry helper class for managing sensor_msgs/CameraInfo messages.
float scan_time_;              ///< Stores the time between scans.
float range_min_;              ///< Stores the current minimum range to use.
float range_max_;              ///< Stores the current maximum range to use.
int scan_height_;              ///< Number of pixel rows to use when producing a laserscan from an area.
std::string output_frame_id_;  ///< Output frame_id for each laserscan. This is likely NOT the camera's frame_id.
};
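The convert() function above relies on a helper use_point() that is not shown here; its job is to decide whether the new range should overwrite what is already stored in that beam. Below is a simplified sketch of that decision, assuming the same keep-the-closest behaviour as the package (the original also contains extra NaN/Inf bookkeeping, so refer to the source for the exact rules):

#include <cmath>

// Simplified: accept the new range if it is finite, within [range_min, range_max],
// and closer than (or replacing an invalid) previously stored value.
bool use_point_sketch(float new_value, float old_value, float range_min, float range_max)
{
  if (!std::isfinite(new_value)) return false;                      // reject NaN/Inf (the original keeps +-Inf in some cases)
  if (new_value < range_min || new_value > range_max) return false; // outside the sensor's valid range
  if (!std::isfinite(old_value)) return true;                       // anything finite beats an empty slot
  return new_value < old_value;                                     // keep the shortest range per beam
}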
Copyright notice: this is an original article by the author and may not be reproduced without the author's permission.