Abstract: To achieve efficient, high-precision, and inexpensive full-view 3D reconstruction, a method that fuses depth-camera data with illumination constraints is proposed. For single-frame reconstruction, RGB-D data are fused with shape from shading (SFS): illumination constraints are added to the original depth data to optimize the depth values. For the registration of adjacent frames, fast point feature histogram (FPFH) features are matched, wrong matches are filtered out with the random sample consensus (RANSAC) algorithm, and the transformation between camera poses is then obtained with the iterative closest point (ICP) algorithm. For full-view reconstruction, bundle adjustment optimizes the camera positions and poses to resolve the accumulated error that would otherwise prevent the first and last frames from overlapping completely; a complete model is then generated. Because the method integrates the illumination information of the object's surface, the generated 3D model is smoother and contains more surface detail, which improves reconstruction accuracy. The method can reconstruct objects with multiple reflectivities in a natural-light environment from a single photo, and therefore has a wider range of applications. The entire experiment can be carried out with a handheld depth camera and no turntable, which makes it easier to operate.
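For context, the illumination constraint used in SFS-based depth refinement is commonly built on the Lambertian reflectance model; the following is a generic sketch, not the paper's exact formulation (the symbols $\rho$, $\mathbf{n}$, $\mathbf{l}$, $\lambda$, and $z_d$ are illustrative assumptions):

$$
I(u,v) = \rho(u,v)\,\max\bigl(0,\ \mathbf{n}(u,v)\cdot\mathbf{l}\bigr),
\qquad
\mathbf{n}(u,v) = \frac{\bigl(-\partial_u z,\ -\partial_v z,\ 1\bigr)}{\bigl\lVert \bigl(-\partial_u z,\ -\partial_v z,\ 1\bigr) \bigr\rVert},
$$

where $I$ is the observed intensity, $\rho$ the albedo, $\mathbf{l}$ the light direction, and the normal $\mathbf{n}$ is derived from the depth map $z$. "Adding illumination constraints to the original depth data" can then be read as minimizing a combined energy of a shading term and a depth-fidelity term,

$$
E(z) = \sum_{u,v}\Bigl(I - \rho\,\max\bigl(0,\mathbf{n}(z)\cdot\mathbf{l}\bigr)\Bigr)^2 + \lambda \sum_{u,v}\bigl(z - z_d\bigr)^2,
$$

with $z_d$ the depth measured by the camera and $\lambda$ a weighting parameter.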
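The pairwise registration step (match features, reject outliers, then refine with ICP) can be illustrated with a minimal point-to-point ICP in NumPy. This is a generic sketch of the ICP refinement stage only, not the paper's implementation; it assumes a reasonable initial alignment (which FPFH + RANSAC would provide) and uses brute-force nearest neighbours for clarity:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iters=50, tol=1e-10):
    """Minimal point-to-point ICP; returns accumulated (R, t) aligning source to target."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences (fine for small clouds).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        R, t = best_rigid_transform(src, target[nn])
        src = src @ R.T + t
        # Compose the incremental transform into the running total.
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d[np.arange(len(src)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

In a full pipeline this refinement would follow FPFH feature matching and RANSAC outlier rejection (e.g. as provided by a library such as Open3D), which supply the coarse alignment that ICP then tightens.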