SLAM Resource Compilation
SLAM video courses and simple PPT tutorials; books; SLAM papers (surveys, common methods, visual SLAM, laser SLAM, loop closure); OpenSLAM
SLAM datasets; SLAM researcher groups; SLAM researchers
===============Courses=============
== Robotics / mobile robotics video courses (international) ==
Autonome Intelligente Systeme
CS 287: Advanced Robotics, Fall 2012, University of California at Berkeley, Dept. of Electrical Engineering & Computer Sciences
Introduction to Mobile Robotics - SS 2012
SLAM video tutorials (not for commercial use). Link: http://pan.baidu.com/s/1o6Oku4y  Password: sd4c
ETH Zurich robotics course: http://www.asl.ethz.ch/education/master/mobile_robotics
== Machine learning videos ==
Stanford open course: Machine Learning (Andrew Ng). Link: http://pan.baidu.com/s/1pJSzxpT  Password: 68eu
=========Books============
Probabilistic Robotics. Link: http://pan.baidu.com/s/1o6MOiJw  Password: iqcf
Multiple View Geometry in Computer Vision Second Edition
Robotics, Vision and Control
The book works through nearly all of robotics in MATLAB, and every chapter has corresponding code.
Peter Corke of Queensland University of Technology (Australia) is a leading figure in robot vision, and his book Robotics, Vision and Control is a classic textbook in the field.
It comes with companion MATLAB toolboxes, one for robotics and one for vision.
The source code is open and free to download: http://petercorke.com/Toolbox_software.html
=======Papers=========================
SLAM basic methods:
Filtering framework (a minimal EKF-SLAM sketch follows below):
Kalman filtering: EKF, UKF, EIF, etc.
Particle filtering: PF, RBPF, FastSLAM 1.0/2.0, MCL
Graph optimization framework:
Graph-SLAM; tool: g2o
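To make the filtering framework above concrete, here is a minimal sketch of one EKF-SLAM prediction/update cycle for a planar robot observing point landmarks with a range-bearing sensor. The state layout, motion model, and noise handling are illustrative assumptions for this sketch, not taken from any particular paper listed here.
```python
import numpy as np

# Assumed state layout: mu = [x, y, theta, l1x, l1y, l2x, l2y, ...]
# (robot pose followed by landmark positions).

def ekf_predict(mu, Sigma, v, w, dt, R_pose):
    """Prediction step: propagate the robot pose with a simple velocity motion model."""
    x, y, th = mu[0], mu[1], mu[2]
    mu = mu.copy()
    mu[0] = x + v * np.cos(th) * dt
    mu[1] = y + v * np.sin(th) * dt
    mu[2] = th + w * dt
    # Jacobian of the motion model; landmarks are static, so only the pose block changes.
    G = np.eye(len(mu))
    G[0, 2] = -v * np.sin(th) * dt
    G[1, 2] =  v * np.cos(th) * dt
    Sigma = G @ Sigma @ G.T
    Sigma[:3, :3] += R_pose          # motion noise added on the pose block only
    return mu, Sigma

def ekf_update(mu, Sigma, z, j, Q):
    """Update step with one range-bearing measurement z = [r, phi] of landmark j."""
    lx, ly = mu[3 + 2 * j], mu[4 + 2 * j]
    dx, dy = lx - mu[0], ly - mu[1]
    q = dx * dx + dy * dy
    sq = np.sqrt(q)
    z_hat = np.array([sq, np.arctan2(dy, dx) - mu[2]])
    # Measurement Jacobian: nonzero only in the pose columns and landmark j's columns.
    H = np.zeros((2, len(mu)))
    H[0, 0:3] = [-dx / sq, -dy / sq, 0.0]
    H[1, 0:3] = [ dy / q,  -dx / q, -1.0]
    H[0, 3 + 2 * j: 5 + 2 * j] = [ dx / sq, dy / sq]
    H[1, 3 + 2 * j: 5 + 2 * j] = [-dy / q,  dx / q]
    S = H @ Sigma @ H.T + Q
    K = Sigma @ H.T @ np.linalg.inv(S)
    nu = z - z_hat
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))   # wrap the bearing residual
    mu = mu + K @ nu
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma
```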
Open-source algorithms:
Laser: gmapping, karto (graph-based), scan matching (see the ICP sketch after this list)
Visual: MonoSLAM (SceneLib, Davison, C++), EKF mono-SLAM (inverse-depth measurement model, MATLAB)
PTAM, SVO, ORB-SLAM
RGBD-SLAM v2; 高博's "一起做SLAM" tutorial series (Kinect; a must-read introduction to visual SLAM)
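As a companion to the scan-matching entry above, here is a toy point-to-point ICP loop for aligning two 2D laser scans. It is a simplified sketch (brute-force nearest neighbours, closed-form SVD alignment), not the matcher actually used by gmapping or karto.
```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Align 2D scan `src` to `dst` (both arrays of shape (N, 2)) with point-to-point ICP.
    Returns the accumulated 3x3 homogeneous transform and the transformed source points."""
    T = np.eye(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour association (fine for small toy scans).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form rigid alignment (Kabsch) between the matched point sets.
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:         # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        step = np.eye(3)
        step[:2, :2], step[:2, 2] = R, t
        T = step @ T                     # accumulate the incremental transform
    return T, cur
```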
Loop closure detection
Open-source code collection:
OpenSLAM: https://www.openslam.org/
//======
Papers I downloaded in the past (I have read part of them), shared freely, not for commercial use. Link: http://pan.baidu.com/s/1ntW7mch  Password: c6j1
If you are just starting out, read the survey and introductory papers first; then, depending on what you plan to work on, browse the papers in that direction; finally, focus on resolving the questions that come up in your literature survey.
============Datasets======
The Robotics Data Set Repository (Radish for short) provides a collection of standard robotics data sets. Contained herein you will find:
- Logs of odometry, laser and sonar data taken from real robots.
- Logs of all sorts of sensor data taken from simulated robots.
- Environment maps generated by robots.
- Environment maps generated by hand (i.e., re-touched floor-plans).
-----Reposted from:
- SLAM benchmarking: http://kaspar.informatik.uni-freiburg.de/~slamEvaluation/datasets.php
- KITTI SLAM dataset: http://www.cvlibs.net/datasets/kitti/eval_odometry.php. Includes monocular vision, stereo vision, Velodyne, and POS trajectories.
- OpenSLAM: https://www.openslam.org/links.html
- CMU Visual Localization Data Set: Dataset collected using the Navlab 11 equipped with IMU, GPS, Lidars and cameras.
- NYU RGB-D Dataset: Indoor dataset captured with a Microsoft Kinect that provides semantic labels.
- TUM RGB-D Dataset: Indoor dataset captured with Microsoft Kinect and high-accuracy motion capturing.
- New College Dataset: 30 GB of data for 6 D.O.F. navigation and mapping (metric or topological) using vision and/or laser.
- The Rawseeds Project: Indoor and outdoor datasets with GPS, odometry, stereo, omnicam and laser measurements for visual, laser-based, omnidirectional, sonar and multi-sensor SLAM evaluation.
- Victoria Park Sequence: Widely used sequence for evaluating laser-based SLAM. Trees serve as landmarks; detection code is included.
- Malaga Dataset 2009 and Malaga Dataset 2013: Datasets with GPS, cameras and 3D laser information, recorded in the city of Malaga, Spain.
- Ford Campus Vision and Lidar Dataset: Dataset collected by a Ford F-250 pickup, equipped with IMU, Velodyne and Ladybug.
------Reposted from:
1. TUM dataset
Anyone who uses it knows this one: an RGB-D dataset with many sequences, which comes with ground-truth trajectories and scripts for measuring error (written in Python, along with some useful helper functions; a simplified error computation is sketched below).
Some sequences are very easy (the xyz and 360 series), but others are quite hard (the various SLAM scenes).
Because its target scenario is robot rescue (although that is hard to tell), the scenes are fairly empty, and much of the time the Kinect depth only reaches the floor, so the demands on the reliability of the visual algorithm are fairly high.
URL: http://vision.in.tum.de/data/datasets/rgbd-dataset
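For reference, the TUM benchmark's Python scripts compute, among other things, the absolute trajectory error (ATE). Below is a simplified re-implementation of that idea (rigid alignment followed by RMSE over matched positions); it assumes the two trajectories are already associated by timestamp, which the official scripts handle for you.
```python
import numpy as np

def align_rigid(est, gt):
    """Least-squares rotation + translation mapping estimated positions onto ground truth
    (Horn/Umeyama-style closed-form solution, without scale)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    U, _, Vt = np.linalg.svd((gt - mu_g).T @ (est - mu_e))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep a proper rotation
    R = U @ D @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est_xyz, gt_xyz):
    """Absolute trajectory error (RMSE) after rigid alignment.
    Both inputs are (N, 3) arrays of positions already associated by timestamp."""
    R, t = align_rigid(est_xyz, gt_xyz)
    aligned = est_xyz @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))
```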
2. MRPT
Forum member SLAM_xian has already posted the address: see that thread.
Contains data from multiple sensors, including stereo, laser, and so on.
MRPT itself is a development library for robotics (which I still have not used); give it a try if you are interested.
3. KITTI
Forum member zhengshunkai has posted the address: see that thread.
A famous outdoor dataset: stereo, with ground truth. The scenes are large and so is the data volume (which I cannot afford under my bandwidth cap...). If you work on outdoor SLAM, be sure to try this dataset; even if you do not, the reviewers will make you.
4. Oxford datasets
Contains several FabMap-related datasets for validating loop-closure detection algorithms; outdoor scenes. Ground-truth loop closures are provided (reportedly hand-labelled, which takes real patience); see the evaluation sketch below.
URL: http://www.robots.ox.ac.uk/~mobile/wikisite/pmwiki/pmwiki.php?n=Main.Datasets#userconsent#
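Since the dataset provides hand-labelled ground-truth loop closures, evaluating a loop-closure detector usually reduces to precision and recall against those labels. A minimal sketch follows; representing closures as sets of (query frame, matched frame) index pairs is an assumption made here for illustration, not the dataset's actual file format.
```python
def loop_closure_precision_recall(detected, ground_truth):
    """Score detected loop closures against ground truth.
    Both arguments are sets of (query_frame, matched_frame) index pairs (assumed format)."""
    true_positives = len(detected & ground_truth)
    precision = true_positives / len(detected) if detected else 1.0
    recall = true_positives / len(ground_truth) if ground_truth else 1.0
    return precision, recall

# Example: two of three detections coincide with labelled closures.
detected = {(120, 10), (240, 35), (300, 90)}
labelled = {(120, 10), (240, 35), (410, 60)}
print(loop_closure_precision_recall(detected, labelled))   # roughly (0.67, 0.67)
```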
5. ICL-NUIM dataset
(Yet another one) from Imperial College: an RGB-D dataset, indoor-oriented, providing ground truth and odometry.
URL: http://www.doc.ic.ac.uk/%7Eahanda/VaFRIC/iclnuim.html
6. NYUv2 dataset
An RGB-D dataset with semantic labels, originally intended for recognition but also usable for SLAM. Its distinguishing feature is a training set (1400+ hand-labelled images, apparently done by hired annotators) plus a large number of video sequences.
URL: http://cs.nyu.edu/silberman/datasets/nyu_depth_v2.html (access seems to be broken; not sure whether it will be fixed)
7. KOS 3D scan dataset
A laser-scan dataset.
URL: http://kos.informatik.uni-osnabrueck.de/3Dscans/
--------------
=============Researchers===============
SLAM researchers group: 254787961
Ronald Parr
- LSPI: Fast and efficient reinforcement learning with linear value function approximation for MDPs and multi-agent systems.
- DP-SLAM: Fast, accurate, truly simultaneous localization and mapping without landmarks.
- Textured Occupancy Grids: Monocular Localization without Features. We provide some 3D data sets using a variety of sensors.
Andrew Davison
Dr. Thomas Whelan: his research focuses on real-time dense visual SLAM and, on a broader scale, general robotic perception.