Slam

openvslam
https://github.com/xdspacelab/openvslam

Neural
https://github.com/magicleap/Atlas (End-to-End 3D Scene Reconstruction from Posed Images)

https://github.com/MIT-SPARK/Kimera-VIO

https://github.com/martinruenz/maskfusion

https://github.com/martinruenz/co-fusion

Breezyslam
https://github.com/simondlevy/BreezySLAM, via Chris Fotache at PyTorch. See the Arduino code and Simon D. Levy's TOF lidar code.

Leap SLAM using Kinect
http://diydrones.com/profiles/blogs/leap-is-kinect-with-200-resolution-slam

github
University of Zurich https://github.com/uzh-rpg/dslam_open, linked from the University of Zurich quadcopter page

cvprtum
https://www.youtube.com/watch?v=LZChzEcLNzI Semi-Dense Visual Odometry for a Monocular Camera (ICCV '13). http://vision.in.tum.de/research/vslam/lsdslam?redirect=2 LSD-SLAM is a novel, direct monocular SLAM technique: instead of using keypoints, it operates directly on image intensities for both tracking and mapping. The camera is tracked using direct image alignment, while geometry is estimated in the form of semi-dense depth maps, obtained by filtering over many pixelwise stereo comparisons. A Sim(3) pose graph of keyframes is then built, which allows building scale-drift-corrected, large-scale maps including loop closures. LSD-SLAM runs in real time on a CPU, and even on a modern smartphone.
 * https://github.com/tum-vision/lsd_slam code release of LSD-SLAM
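The "filtering over many pixelwise stereo comparisons" above means each semi-dense pixel carries an inverse-depth estimate with a variance, and every new stereo match is fused in as a one-dimensional Kalman-style update. A minimal sketch of that per-pixel fusion step (plain Python; names and values are illustrative, not taken from the LSD-SLAM code):

```python
def fuse_inverse_depth(mu, var, mu_obs, var_obs):
    """Fuse the current per-pixel inverse-depth estimate (mu, var)
    with a new stereo observation (mu_obs, var_obs).

    This is the standard 1-D Gaussian product (Kalman) update:
    low-variance observations pull the estimate harder, and the
    fused variance is always smaller than either input variance.
    """
    var_new = (var * var_obs) / (var + var_obs)
    mu_new = (var_obs * mu + var * mu_obs) / (var + var_obs)
    return mu_new, var_new

# Example: start from a rough prior and refine it with two
# hypothetical stereo observations (inverse depth, in 1/m).
mu, var = 0.5, 1.0
mu, var = fuse_inverse_depth(mu, var, 0.4, 0.2)
mu, var = fuse_inverse_depth(mu, var, 0.45, 0.2)
print(mu, var)  # estimate moves toward the observations, variance shrinks
```

The same update run over thousands of pixels per keyframe, with observation variance derived from the photometric error of each stereo comparison, is what produces the semi-dense depth maps.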

add
The latest issue of IEEE Trans. on Robotics is all about SLAM. There are some interesting new algorithms, and articles on all the old ones. This stuff actually works now, with nothing more than a camera as input.

Willow Garage is implementing some SLAM algorithms and open-sourcing the code by contributing it back into OpenCV.

http://www.willowgarage.com

http://openslam.org/

dimm
EmbeddedPc

Links
Lidar