Running ORB-SLAM with a Laptop Webcam
Environment: Ubuntu 14.04 + ROS Indigo + ORB-SLAM2 (ThinkPad T460s)
1. Installing ORB-SLAM
Pangolin
Pangolin has a few dependencies; install them following the prompts from its build system.
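On Ubuntu 14.04 this typically means at least a compiler, CMake, and GLEW; a minimal sketch (the exact package list depends on which Pangolin features you enable):

sudo apt-get install build-essential cmake libglew-dev libgl1-mesa-dev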
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake ..
make -j
OpenCV
Versions 2.4.8 and 2.4.11 both work; 3.2 has not been tested but should work as well.
Note that OpenCV compatibility problems are common, and the header file paths have also changed between versions.
It is therefore better to build OpenCV from source. You can keep several commonly used versions compiled on the machine: to remove one, run sudo make uninstall in its build directory, and to install one, run sudo make install in its build directory. This makes switching between versions fairly quick.
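For reference, a minimal sketch of building one OpenCV version from source (the version and paths here are just examples):

cd ~/src
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout 2.4.11
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j4
sudo make install        # install this version system-wide
# later, from this same build directory:
# sudo make uninstall    # remove it before installing a different version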
Eigen
sudo apt-get install libeigen3-dev
Eigen is a header-only library, installed by default under /usr/include/eigen3/. Because Eigen's location is a frequent source of trouble and CMakeLists.txt may fail to find it, ORB-SLAM ships a FindEigen3.cmake file to help locate Eigen3; you can reuse this file in your own projects to find Eigen as well (see the sketch below).
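If CMake still cannot find Eigen, one option (assuming the standard FindEigen3.cmake, which reads the EIGEN3_INCLUDE_DIR cache variable) is to pass the include directory explicitly when configuring:

cmake .. -DEIGEN3_INCLUDE_DIR=/usr/include/eigen3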
DBoW2 and g2o
Both libraries are provided in ORB-SLAM's Thirdparty directory; once the ORB-SLAM source code is downloaded, the provided build script takes care of them.
Install ORB-SLAM inside the ROS workspace catkin_ws. If you do not understand how ROS works, go through the Beginner Level Tutorials on the ROS website first.
cd catkin_ws/src
git clone https://github.com/raulmur/ORB_SLAM2.git
Run the build.sh script in the ORB-SLAM directory:
cd ORB_SLAM2
./build.sh
# build.sh
echo "Configuring and building Thirdparty/DBoW2 ..."
cd Thirdparty/DBoW2
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j
cd ../../g2o

echo "Configuring and building Thirdparty/g2o ..."
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j
cd ../../../

echo "Uncompress vocabulary ..."
cd Vocabulary
tar -xf ORBvoc.txt.tar.gz
cd ..

echo "Configuring and building ORB_SLAM2 ..."
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j
This builds DBoW2, g2o, and ORB-SLAM, and uncompresses the DBoW2 vocabulary file. ORB-SLAM also has to load this 100+ MB file at startup, which takes quite a while.
2. Laptop Camera Driver Installation and Camera Calibration
1. Use Bosch's "usb_cam": A ROS Driver for V4L USB Cameras
cd catkin_ws/src
git clone https://github.com/bosch-ros-pkg/usb_cam.git
cd ../
catkin_make
Download a black-and-white calibration checkerboard, print it, and stick it onto a flat board.
2. Build the ROS camera calibration package
rosdep install camera_calibration
rosmake camera_calibration
3. Start usb_cam to grab images from the laptop camera
# optional: sudo apt-get install ros-indigo-usb-cam   (only if the usb_cam driver is not already installed)
roslaunch usb_cam usb_cam-test.launch
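To verify that the camera stream is really being published (topic name /usb_cam/image_raw, as used throughout this post), a few standard ROS commands can be used:

rostopic list                    # /usb_cam/image_raw should appear in the list
rostopic hz /usb_cam/image_raw   # should report roughly the camera frame rate
rosrun image_view image_view image:=/usb_cam/image_raw   # optional: display the stream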
4. Start the calibration program
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 image:=/usb_cam/image_raw camera:=/usb_cam
Here --size 8x6 is the number of interior corners of the checkerboard and --square 0.025 is the edge length of one square in meters; adjust both to match your printed pattern. Once the calibration window appears, move the checkerboard along x (left/right), y (up/down), size (closer/farther), and skew (tilt) until the x, y, size, and skew progress bars all turn green.
Then press the CALIBRATE button and wait a while for the calibration to finish.
When it is done, press Commit; the terminal will print the path of the resulting calibration yaml file. Open it, rewrite it following the format of TUM1.yaml, name it mycam.yaml, and copy it to /home/shang/catkin_ws/src/ORB_SLAM2/Examples/Monocular/.
The only values you need to add yourself are the image dimensions, Camera.width and Camera.height.
My T460s camera calibration results and ORB-SLAM parameters are:
%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 626.3131886043523
Camera.fy: 624.0872390416225
Camera.cx: 280.8331825622062
Camera.cy: 234.9590765749035

Camera.k1: 0.1226796723026339
Camera.k2: -0.1753096021786491
Camera.p1: 0.003319071389844154
Camera.p2: -0.01267716347709299
Camera.k3: 0

Camera.width: 640
Camera.height: 480

# Camera frames per second
Camera.fps: 30.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
3. Running ORB-SLAM with the Laptop Camera
At this point all the preparation is done.
1. Add the ORB-SLAM ROS package path to the environment variable
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/shang/catkin_ws/src/ORB_SLAM2/Examples/ROS   # change /home/shang/catkin_ws to your catkin workspace
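To avoid re-exporting this in every new terminal, the same line can be appended to ~/.bashrc (again, adjust the path to your own workspace):

echo 'export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/shang/catkin_ws/src/ORB_SLAM2/Examples/ROS' >> ~/.bashrc
source ~/.bashrc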
2. Build the ORB-SLAM ROS nodes
cd src/ORB_SLAM2/Examples/ROS/ORB_SLAM2
mkdir build
cd build
cmake .. -DROS_BUILD_TYPE=Release
make -j
3. This step is the most important one!
The topic that the ORB ROS node subscribes to and the topic that usb_cam publishes have different names!
There are two ways to deal with this. The first is more work but helps in understanding how ROS operates; the second is very simple: change the subscribed topic in the ORB_SLAM code and recompile.
Method 1:
Write a custom ROS package so that the ORB-SLAM ROS node receives the image topic published by the laptop camera.
The problem is that the ORB-SLAM ROS node subscribes to /camera/image_raw, while the laptop camera stream is published on /usb_cam/image_raw. This can be checked with rostopic list -v and rosnode list, as shown below.
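For example, with the camera driver running, the following commands list the topics and show who publishes and subscribes to each (topic names as above):

rostopic list -v                   # all topics with their publishers and subscribers
rostopic info /usb_cam/image_raw   # publishers/subscribers of the camera topic
rosnode list                       # the currently running nodes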
We therefore write a small ROS node that bridges the two topics, and put it in a new ROS package:
cd catkin_ws/src
catkin_create_pkg orb_image_transport image_transport cv_bridge
cd ..
catkin_make
cd src/orb_image_transport
gedit orb_image_converter.cpp
orb_image_converter.cpp republishes the laptop camera images on the topic that ORB-SLAM subscribes to:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>  // headers for OpenCV's image processing and GUI modules
#include <opencv2/highgui/highgui.hpp>

static const std::string OPENCV_WINDOW = "Image window";  // name of the image display window

class ImageConverter
{
  ros::NodeHandle nh_;                  // node handle
  image_transport::ImageTransport it_;  // used to create the image publisher and subscriber
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;

public:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to the input video feed and publish the output video feed
    image_sub_ = it_.subscribe("/usb_cam/image_raw", 1, &ImageConverter::imageCb, this);
    //image_pub_ = it_.advertise("/image_converter/output_video", 1);
    image_pub_ = it_.advertise("/camera/image_raw", 1);

    cv::namedWindow(OPENCV_WINDOW);  // OpenCV HighGUI call to create a display window on start-up
  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW);  // destroy the display window on shutdown
  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }

    cv::imshow(OPENCV_WINDOW, cv_ptr->image);
    cv::waitKey(3);

    // Republish the (unmodified) video stream on /camera/image_raw
    image_pub_.publish(cv_ptr->toImageMsg());
  }
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_converter");
  ImageConverter ic;
  ros::spin();
  return 0;
}
Then add the following at the end of CMakeLists.txt:
add_executable(orb_image_converter orb_image_converter.cpp)
target_link_libraries(orb_image_converter ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
After catkin_make, everything is in place.
Note that no custom message types are used here, so no other changes to package.xml or CMakeLists.txt are needed. (If ${OpenCV_LIBRARIES} turns out to be empty at link time, adding find_package(OpenCV REQUIRED) near the top of CMakeLists.txt fixes that.)
Finally, run everything and ORB-SLAM works on the laptop camera:
# run each command in its own terminal; change /home/shang to your own directory
roslaunch usb_cam usb_cam-test.launch
rosrun orb_image_transport orb_image_converter
rosrun ORB_SLAM2 Mono /home/shang/catkin_ws/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/shang/catkin_ws/src/ORB_SLAM2/Examples/Monocular/mycam.yaml
Alternatively, a single script can start all the nodes:
demo.sh
gnome-terminal -x bash -c "rosrun orb_image_transport orb_image_converter; exec $SHELL"
gnome-terminal -x bash -c "rosrun ORB_SLAM2 Mono /home/shang/catkin_ws/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/shang/catkin_ws/src/ORB_SLAM2/Examples/Monocular/mycam.yaml; exec $SHELL"
roslaunch usb_cam usb_cam-test.launch
Make the script executable with chmod +x demo.sh, then simply running ./demo.sh starts everything.
Method 2:
It later turned out that the approach above is clumsier than necessary. With Bosch's usb_cam ROS camera driver installed, the camera images are already published on /usb_cam/image_raw, so it is enough to change the topic that ORB subscribes to from /camera/image_raw to /usb_cam/image_raw directly in the ORB code. The change goes in ros_mono.cc under the ROS directory (then rebuild the ROS node); the stereo, RGB-D, and AR demo nodes can be modified the same way.
With that, only the following two commands are needed:
roslaunch usb_cam usb_cam-test.launch
rosrun ORB_SLAM2 Mono /home/shang/catkin_ws/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/shang/catkin_ws/src/ORB_SLAM2/Examples/ROS/ORB_SLAM2/mycam.yaml
References:
1. http://www.jianshu.com/p/c3e8c88edb64
2. http://www.cnblogs.com/li-yao7758258/p/5912663.html