Depth perception

OAK
https://www.kickstarter.com/projects/opencv/opencv-ai-kit/description — embedded object detection (person), depth perception, and object tracking. OAK AI training lets you embed the super-power of spatial AI plus accelerated computer-vision functions into your product.

hoangthien94 GitHub
https://github.com/hoangthien94/vision_to_mavros and https://diydrones.com/profiles/blogs/autonomous-flying-using-the-realsense-t265-tracking-camera-in-gps — Since the beginning of this project, Thien has delivered a series of labs that serve not only as milestones for the project but also as a step-by-step guide for anyone who wishes to learn how to use computer vision for autonomous robots. The labs include

https://www.intelrealsense.com/tracking-camera-t265/

https://discuss.ardupilot.org/t/gsoc-2019-integration-of-ardupilot-and-vio-tracking-camera-for-gps-less-localization-and-navigation/42394
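At its core, the vision_to_mavros bridge re-expresses the T265 pose in the frame ArduPilot expects before sending it over MAVLink. A minimal sketch of that conversion, assuming a forward-facing camera: the T265 reports position with X right, Y up, Z backward, while ArduPilot wants NED (X forward, Y right, Z down). The function name below is hypothetical, not from the repo.

```python
def t265_to_ned(x, y, z):
    """Map a T265 position vector (X right, Y up, Z backward) into
    NED body coordinates (X forward, Y right, Z down) for a
    forward-facing camera. Illustrative only; the real bridge also
    transforms orientation and handles other mounting angles."""
    # forward = -Z_cam, right = X_cam, down = -Y_cam
    return (-z, x, -y)

# A point 3 m behind the camera origin maps to 3 m "north" of the body.
print(t265_to_ned(1.0, 2.0, 3.0))
```

The real scripts also rotate the quaternion and support configurable camera orientations; this only shows the position mapping for the default case.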

Jetson
https://diydrones.com/profiles/blogs/installing-the-intel-realsense-d435-depth-camera-on-a-jetson-tx2

https://mikeisted.wordpress.com/2018/04/09/intel-realsense-d435-on-jetson-tx2/ Here’s a quick technical post for anyone attempting to harness the capabilities of a Realsense D435 camera on a Jetson TX2. For me, this is about getting usable depth perception on a UAV, but it has proved more problematic than I originally anticipated.

This post aims to provide some simple instructions that now work for me, but took a long time to figure out!

The Problem: As I write, the Intel librealsense2 library does not support ARM architectures. This causes a fatal compile error when the file librealsense/src/image.cpp is compiled, as it queries the system architecture.

Solution: Modify image.cpp as in my GitHub gist here; this bypasses the architecture check.

https://github.com/IntelRealSense/librealsense

https://www.youtube.com/watch?v=5tY2A-_VBi8

Links
https://taulidar.com/ — a depth-perception camera that works in the dark.

Lidar

Nvidia Jetson