Multi-camera configurations with the Intel® RealSense™ LiDAR Camera L515
There are many situations in which you may wish to use more than one depth camera at once. A great example might be something like 3D scanning your dog. It’s challenging to get a dog to sit entirely still even for a single photo, and a full-body scan with a single camera would be impossible. By positioning multiple cameras around the dog, one snap can provide the data necessary for a full 3D scan.
We have previously shared methods for multiple camera configurations with the D400 series stereo depth cameras; when using the LiDAR Camera L515, however, new considerations are necessary. Stereo cameras inherently do not interfere with other stereo cameras nearby – ambient visual information, including IR noise, is useful to a stereo camera, since it calculates depth from the disparity between the images received by its two separate sensors.
For a LiDAR-based camera, depth information is calculated by sweeping laser light across the scene and timing how long it takes for the light to reflect off an object and return to the camera. Because of this, if the cameras are aimed at overlapping areas of an object, which is usually ideal for a full-body scan, they can interfere with each other.
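The timing principle above can be sketched with a few lines of arithmetic. This is only an illustration of the round-trip time-of-flight relationship, not the L515's actual firmware pipeline:

```python
# Round-trip time-of-flight: the laser pulse travels out to the surface
# and back, so the one-way distance is half the total path length.
C = 299_792_458  # speed of light in m/s

def tof_depth(round_trip_seconds: float) -> float:
    """Distance (m) to a surface given the measured round-trip time."""
    return C * round_trip_seconds / 2.0

# A round trip of roughly 6.67 nanoseconds corresponds to a surface
# about one meter away -- which shows why two cameras sweeping the same
# surface can corrupt each other's timing measurements.
print(round(tof_depth(6.671e-9), 3))
```

If a second camera's laser hits the same spot at the same moment, the returning light cannot be attributed to one emitter, which is the interference this whitepaper addresses.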
In this whitepaper, we share how to properly set up hardware sync to use the L515 in multiple camera configurations without interference.
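As a rough illustration of what a multi-camera setup looks like in code, the sketch below enumerates connected RealSense devices with the pyrealsense2 bindings and sets the inter-camera sync option where the sensor supports it. The specific sync-mode values are an assumption here; consult the whitepaper for the values and wiring your firmware expects. This is a hardware configuration fragment and requires connected devices to do anything:

```python
import pyrealsense2 as rs

ctx = rs.context()
for device in ctx.query_devices():
    name = device.get_info(rs.camera_info.name)
    depth_sensor = device.first_depth_sensor()
    # Not every sensor/firmware exposes inter-camera sync; check first.
    if depth_sensor.supports(rs.option.inter_cam_sync_mode):
        # Placeholder value -- the correct master/slave mode for your
        # setup is described in the whitepaper, not assumed here.
        depth_sensor.set_option(rs.option.inter_cam_sync_mode, 1)
        print(f"Enabled hardware sync on {name}")
    else:
        print(f"{name} does not expose inter-camera sync")
```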