RGB-D data fusion by color photometric stereo
We show how to use color photometric stereo to improve the relief provided by an RGB-D sensor. The sensor is equipped with three colored LEDs, so that the RGB image allows the finest details of the relief to be recovered through photometric stereo. This fine estimate of the relief is fused with the depth map provided by the sensor, using a new differential and variational approach to photometric stereo adapted to anisotropic point light sources such as LEDs. This approach estimates depth directly and robustly, without prior estimation of the normals and the albedo.
This paper, which unifies and extends two conference papers presented at the RFIA 2016 conference in Clermont-Ferrand (Durix et al., 2016; Quéau et al., 2016a), aims at improving the accuracy of the depth map provided by an RGB-D sensor. To this end, we suggest a simple, yet very effective, modification of the sensor, which consists in equipping it with three colored LEDs. The color photometric stereo technique can then be applied to the RGB data, which provides a new estimate of shape. By appropriately fusing the depth map from the sensor with that estimate obtained by photometric stereo, the low-frequency bias in photometric stereo and the high-frequency one in the depth sensor are simultaneously eliminated.
We first describe how to model the luminous flux emitted by an LED. Considering such a light source as a nearby pointwise source, it is shown that the parameters of this model (position, orientation, and relative intensity) are easily calibrated using a standard pinhole camera and cheap additional material. The location of each LED is estimated by capturing images of multiple specular spheres (e.g., billiard balls), identifying the specularities and intersecting the light rays responsible for these specularities. Given the locations of the sources, it is then easy to estimate their other parameters, by using images of a white planar Lambertian checkerboard and inverting Lambert's law.
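The ray-intersection step above can be sketched as a small least-squares problem: each specularity defines a reflected ray (origin on the sphere, known direction), and the LED position is the point closest to all such rays. The helper below is a hypothetical illustration of this geometric step, not the paper's implementation.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Estimate the 3D point closest (in least squares) to a set of rays.

    Each ray is given by an origin point and a direction; the returned
    point x minimizes the sum of squared orthogonal distances to all rays,
    via the normal equations sum_i M_i x = sum_i M_i p_i, where
    M_i = I - d_i d_i^T projects orthogonally to ray i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)           # unit direction
        M = np.eye(3) - np.outer(d, d)      # projector orthogonal to the ray
        A += M
        b += M @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

With noise-free rays all passing through the LED, the solve recovers its position exactly; with real detections it returns the least-squares compromise.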
Then, we recall the variational and differential approach to photometric stereo which was presented in Quéau et al. (2016b). It consists in directly estimating depth from photometric stereo images, by resorting to image ratios in order to eliminate the non-linearities and the unknown albedo. This yields a system of quasi-linear PDEs, which can be solved in an approximate manner through a variational approach. At this stage, the RGB data from the proposed modified RGB-D sensor system can already be used for photometric stereo: by simultaneously illuminating the scene through the three calibrated LEDs, which are colored, respectively, in red, green and blue, each channel of the RGB image can be viewed as a gray-level image obtained under a different illumination.
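The albedo-elimination trick behind the image ratios can be checked numerically in the simpler distant-light Lambertian setting (the paper's model handles nearby anisotropic LEDs, but the algebra is the same): from $I_i = \rho\, s_i \cdot n$, the ratio $I_1/I_2$ yields the constraint $(I_2 s_1 - I_1 s_2) \cdot n = 0$, which is linear in the normal and independent of the albedo $\rho$. A minimal sketch, with made-up light directions and normal:

```python
import numpy as np

# Two distant light directions (stand-ins for two calibrated channels)
s1 = np.array([0.2, 0.3, 0.9]);  s1 /= np.linalg.norm(s1)
s2 = np.array([-0.4, 0.1, 0.9]); s2 /= np.linalg.norm(s2)

n = np.array([0.1, -0.2, 0.97]); n /= np.linalg.norm(n)  # surface normal
rho = 0.7                                                # unknown albedo

# Lambertian intensities in the two channels
I1, I2 = rho * (s1 @ n), rho * (s2 @ n)

# Ratio-based constraint: linear in n, the albedo rho cancels out
residual = (I2 * s1 - I1 * s2) @ n
```

The residual vanishes for any value of `rho`, which is what makes the resulting PDE system albedo-free and quasi-linear in the depth.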
However, because our assumption that the LEDs are monochromatic does not perfectly hold, such a 3D-reconstruction remains biased. Fortunately, this variational approach to photometric stereo is straightforward to extend in order to include a prior on the depth map. This provides us with a natural way to fuse RGB-based 3D-reconstruction by photometric stereo with the depth map provided by the sensor. We discuss a fast solution based on least-squares, and then a more robust one based on the L1 norm. Empirical evidence that the proposed system improves the accuracy of the depth sensor is provided on several real-world datasets.
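The least-squares variant of such a fusion can be illustrated on a 1D toy problem: minimize $\sum (Dz - g)^2 + \lambda \sum (z - z_0)^2$, where $g$ is the gradient suggested by photometric stereo, $z_0$ the sensor depth acting as a prior, and $D$ a forward-difference operator. This is a hypothetical, heavily simplified stand-in for the paper's 2D variational scheme (the function name and dense solve are illustrative choices):

```python
import numpy as np

def fuse_depth(g, z0, lam=0.1):
    """Toy 1D depth fusion (least-squares variant).

    Solves the normal equations (D^T D + lam I) z = D^T g + lam z0,
    where D is the (n-1) x n forward-difference matrix: the gradient
    term injects fine detail, the prior term anchors low frequencies.
    """
    n = len(z0)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = D.T @ D + lam * np.eye(n)
    b = D.T @ g + lam * np.asarray(z0, dtype=float)
    return np.linalg.solve(A, b)
```

Replacing the quadratic penalties with L1 norms, as in the robust variant mentioned above, would require an iterative scheme (e.g. iteratively reweighted least-squares) instead of a single linear solve.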
Overall, these contributions yield the first RGB-D sensor-based system which can recover high quality depth in a single shot, by appropriately combining RGB photometric stereo and depth sensing within a variational framework.
Yvain QUÉAU, Bastien DURIX, Tom LUCAS, Jade BOUMAZA, Jean-Denis DUROU, François LAUZE
3D-reconstruction, photometric stereo, RGB-D sensors, variational methods.