[Presentation Slide] A Novel Kinect V2 Registration Method For Large-Displacement Environments Using Camera And Scene Constraints

Citation Author(s):
Yuan Gao, Sandro Esquivel, Reinhard Koch, Matthias Ziegler, Frederik Zilly, Joachim Keinert
Submitted by:
Yuan Gao
Last updated:
19 April 2018 - 11:16am
Document Type:
Presentation Slides
Document Year:
Presenter's Name:
Yuan Gao
Paper Code:



In many multi-Kinect-V2-based systems, the registration of the Kinect V2 sensors is a key step that directly affects system precision. Coarse-to-fine methods based on calibration objects are an effective way to solve the Kinect V2 registration problem; however, they may fail when the cameras are separated by large displacements. To this end, a novel Kinect V2 registration method, also built on the coarse-to-fine framework, is proposed that exploits camera and scene constraints. Specifically, in the coarse estimation stage, scene constraints are exploited using off-the-shelf feature point detectors, and camera constraints are exploited through homography and fundamental matrices. In the refinement stage, an Iterative Closest Point (ICP)-based point cloud registration method is applied. Experimental results show that the proposed method using camera and scene constraints achieves considerably higher precision than calibration-object-based registration in large-displacement environments.
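The refinement stage mentioned in the abstract is standard point-to-point ICP. As a minimal illustrative sketch (this is not the authors' implementation; the function names and the brute-force nearest-neighbour search are assumptions for clarity), one iteration alternates nearest-neighbour matching with a closed-form rigid fit via SVD:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # reflection case: flip the last singular vector
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=50, tol=1e-10):
    """Point-to-point ICP: iteratively match nearest neighbours and re-fit R, t."""
    P = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (fine for small clouds)
        d2 = ((P[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        R, t = best_fit_transform(P, dst[idx])
        P = P @ R.T + t
        # compose the incremental transform into the accumulated one
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

In the coarse-to-fine setting described above, the coarse estimate from the camera and scene constraints would supply the initial alignment, and an ICP step of this form would then refine it; real systems typically replace the brute-force matching with a k-d tree.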
