Depth Map Improvements for Drones
At the recent 12th International Symposium on Multimedia Applications and Processing (MMAP’19), the best paper award was presented to Sergey Dorodnicov, Daniel Pohl and Markus Achtelik of Intel® for their paper Depth Map Improvements for Stereo-based Depth Cameras on Drones. Drones increasingly rely on sensing devices such as depth cameras to perform tasks like collision avoidance. With more drones in the sky, serving applications from commercial to consumer, it is crucial that they operate safely across a wide range of conditions.
While many kinds of sensors can be used on drones, stereo depth cameras are a popular choice: they work well outdoors at relatively low power, and they are not disrupted by bright sunlight or by other sensors operating nearby. For drones to avoid accidents, injuries and crashes, obstacle detection and collision avoidance must work reliably, avoiding both false positives, where an obstacle is detected that does not exist, and false negatives, where a real obstacle goes undetected.
In this paper, Daniel and Sergey explore a variety of methods for tuning stereo depth camera imaging specifically for drone use cases. These include calibrating the depth camera, adjusting a range of device settings, and post-processing the depth images to significantly increase the accuracy and confidence of the reported depth values. By raising confidence and accepting only high-confidence measurements, false positives can be avoided while real obstacles are still reliably detected.
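The paper itself details the exact calibration steps and parameter values used. As a rough illustration of the same idea, the sketch below uses the RealSense SDK's Python bindings (pyrealsense2) to stream depth, select a confidence-oriented visual preset, and apply a few of the SDK's standard post-processing filters. The specific preset, resolution, distance limits and filter choices here are assumptions for illustration, not the authors' tuned settings.

```python
# Minimal sketch: favor high-confidence depth and post-process the depth map.
# Assumes a D400-series stereo depth camera and the pyrealsense2 package;
# the preset, resolution and filter parameters below are illustrative only.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
profile = pipeline.start(config)

# The "High Accuracy" preset biases the camera toward reporting only depth
# values it is confident about, which helps suppress false positives.
depth_sensor = profile.get_device().first_depth_sensor()
if depth_sensor.supports(rs.option.visual_preset):
    depth_sensor.set_option(rs.option.visual_preset,
                            int(rs.rs400_visual_preset.high_accuracy))

# Standard SDK post-processing filters applied to each depth frame.
decimation = rs.decimation_filter()        # downsample to reduce noise
spatial = rs.spatial_filter()              # edge-preserving spatial smoothing
temporal = rs.temporal_filter()            # smooth values across frames
threshold = rs.threshold_filter(0.3, 8.0)  # keep depth between 0.3 m and 8 m

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        filtered = decimation.process(depth)
        filtered = spatial.process(filtered)
        filtered = temporal.process(filtered)
        filtered = threshold.process(filtered)
        # `filtered` would then feed the drone's obstacle-avoidance logic.
finally:
    pipeline.stop()
```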
For more, please read the paper here.