Depth camera 3D pose estimation using 3D CAD models
Abstract
Systems and methods for localization in large-scale scenes, such as indoor environments, are described. In particular, systems and related methods estimate the 3D pose of a depth camera by automatically aligning 3D depth images of a scene to a 3D CAD model of the scene.
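At its core, aligning depth images to a CAD model reduces, once point correspondences are in hand, to solving for the rigid transform that best maps depth points onto model points. As an illustrative sketch only (the standard Kabsch/Procrustes least-squares alignment, not necessarily the patented formulation; the function name is hypothetical):

```python
import numpy as np

def rigid_align(depth_pts, model_pts):
    """Find rotation R and translation t minimizing
    sum_i ||R @ depth_pts[i] + t - model_pts[i]||^2 (Kabsch algorithm)."""
    p_mean = depth_pts.mean(axis=0)
    q_mean = model_pts.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (depth_pts - p_mean).T @ (model_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

With noisy correspondences the same least-squares solve would sit inside the error-metric optimization recited in the claims.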
18 Claims
1. A method of real-time depth camera pose estimation comprising:
at a processor, receiving a sequence of depth map frames from a moving mobile depth camera, each depth map frame comprising a plurality of image elements, each image element being associated with a depth value related to a distance from the mobile depth camera to a surface in the scene captured by the mobile depth camera;
tracking a 3D position and orientation of the mobile depth camera using the depth map frames and a 3D CAD model of the environment, the 3D position and orientation defining a pose of the mobile depth camera, the tracking involving storing the 3D position and orientation of the mobile depth camera in a storage device;
computing, using an initial camera pose estimate, pairs of corresponding corner features between a current depth map frame and the 3D CAD model;
updating the initial camera pose estimate by optimizing an error metric applied to the computed corresponding corner feature pairs;
outputting the updated camera pose estimate; and
wherein computing pairs of corresponding corner features using the initial camera pose estimate comprises:
receiving the initial camera pose estimate, a current depth map, and 3D CAD model corners;
identifying model corners predicted to be in a field of view of the mobile depth camera;
projecting the current depth map onto the 3D CAD model using the initial camera pose estimate to generate a projected depth map;
for each identified model corner, searching a surrounding area for corresponding corner candidates in the projected depth map;
selecting candidate corresponding corners according to a distance metric;
generating four point corner features from the model and the depth map; and
outputting the four point corner features.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
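The wherein clause searches the area surrounding each visible model corner for corner candidates and keeps the best match by a distance metric. A simplified sketch of that pairing step in 3D, assuming a nearest-neighbour search with a fixed rejection radius (the radius, metric, and function name are illustrative assumptions, not the patent's definitions):

```python
import numpy as np

def match_corners(model_corners, depth_corners, R, t, radius):
    """Pair each model corner with the nearest depth-map corner, after
    mapping depth corners into the model frame with the initial pose
    estimate (R, t); matches farther than `radius` are rejected."""
    mapped = depth_corners @ R.T + t    # depth frame -> model frame
    pairs = []
    for i, corner in enumerate(model_corners):
        dists = np.linalg.norm(mapped - corner, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= radius:          # the distance-metric gate
            pairs.append((i, j))
    return pairs
```

Model corners with no candidate inside the search radius simply produce no pair, which keeps gross outliers out of the subsequent error-metric optimization.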
9. A system of real-time depth camera pose estimation comprising a persistent data store storing instructions executable by a processor to:
receive a sequence of depth map frames from a moving mobile depth camera, each depth map frame comprising a plurality of image elements, each image element having a depth value related to a distance from the mobile depth camera to a surface in the scene captured by the mobile depth camera;
track a 3D position and orientation of the mobile depth camera using the depth map frames and a 3D CAD model of the environment, the 3D position and orientation defining a pose of the mobile depth camera, the tracking involving storing the 3D position and orientation of the mobile depth camera in the persistent data store;
compute, using an initial camera pose estimate, pairs of corresponding corner features between a current depth map frame and the 3D CAD model;
update the estimate of the camera pose by optimizing an error metric applied to the computed corresponding corner feature points;
store one or more depth map frames and the estimate of the camera pose in the persistent data store;
output the estimate of the camera pose; and
wherein to compute pairs of corresponding corner features using the initial camera pose estimate, the system instructs the processor to:
receive the initial camera pose estimate, a current depth map, and 3D CAD model corners;
identify model corners predicted to be in a field of view of the mobile depth camera;
project the current depth map onto the 3D CAD model using the initial camera pose estimate to generate a projected depth map;
for each identified model corner, search a surrounding area for corresponding corner candidates in the projected depth map;
select candidate corresponding corners according to a distance metric;
generate four point corner features from the model and the depth map; and
output the four point corner features.

View Dependent Claims (10, 11, 12, 13, 14, 15, 16)
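Both independent claims begin the correspondence step by identifying model corners predicted to fall within the camera's field of view. Under a pinhole camera model that prediction is a projection-and-bounds test; the intrinsics matrix K, the camera-to-world pose convention, and the function name below are assumptions for illustration:

```python
import numpy as np

def corners_in_view(corners, R, t, K, width, height):
    """Indices of 3D model corners that project inside a width x height
    image for a pinhole camera with intrinsics K and camera-to-world
    pose (R, t), i.e. X_cam = R.T @ (X_world - t)."""
    cam = (corners - t) @ R            # world -> camera frame
    visible = []
    for i, (x, y, z) in enumerate(cam):
        if z <= 0:                     # behind the camera
            continue
        u = K[0, 0] * x / z + K[0, 2]  # pinhole projection
        v = K[1, 1] * y / z + K[1, 2]
        if 0 <= u < width and 0 <= v < height:
            visible.append(i)
    return visible
```

Culling out-of-view corners before the surrounding-area search bounds the per-frame matching cost, which matters for the real-time constraint in the claims.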
17. A non-transitory computer-readable storage medium comprising computer-executable instructions for causing a processor to compute real-time depth camera pose estimations by:
storing a 3D position and orientation of a mobile depth camera using depth map frames and a 3D CAD model of the environment, the 3D position and orientation defining a pose of the mobile depth camera;
forming an initial estimate of camera pose using depth map frames captured by the moving mobile depth camera;
computing pairs of corresponding corner features using the initial estimate;
calculating an optimal estimate of the camera pose by minimizing an error metric applied to the computed corresponding corner features;
determining that convergence is reached;
outputting the optimal estimate of camera pose; and
wherein the computing of the pairs of corresponding corner features using the initial camera pose estimate comprises:
receiving the initial camera pose estimate, a current depth map, and 3D CAD model corners;
identifying model corners predicted to be in a field of view of the mobile depth camera;
projecting the current depth map onto the 3D CAD model using the initial camera pose estimate to generate a projected depth map;
for each identified model corner, searching a surrounding area for corresponding corner candidates in the projected depth map;
selecting candidate corresponding corners according to a distance metric;
generating four point corner features from the model and the depth map; and
outputting the four point corner features.

View Dependent Claims (18)
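Claim 17 alternates correspondence computation with error-metric minimization until convergence is reached. An ICP-style sketch of that loop, combining nearest-neighbour pairing with the incremental rigid least-squares (Kabsch) update; the convergence test, tolerance, and function name are illustrative assumptions rather than the patent's specifics:

```python
import numpy as np

def refine_pose(model_pts, depth_pts, max_iters=50, tol=1e-10):
    """Iteratively re-pair points and update (R, t) so that
    R @ depth_pts[i] + t approaches the model, stopping once the
    pose change falls below `tol` (the convergence test)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(max_iters):
        moved = depth_pts @ R.T + t
        # pair each depth point with its nearest model point
        nn = np.argmin(np.linalg.norm(moved[:, None] - model_pts[None], axis=2), axis=1)
        # solve the incremental rigid update (Kabsch algorithm)
        pm, qm = moved.mean(0), model_pts[nn].mean(0)
        H = (moved - pm).T @ (model_pts[nn] - qm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = qm - dR @ pm
        R, t = dR @ R, dR @ t + dt
        if np.linalg.norm(dR - np.eye(3)) + np.linalg.norm(dt) < tol:
            break
    return R, t
```

When the initial pose estimate is close (as when tracking frame to frame), the nearest-neighbour pairing is correct from the first iteration and the loop converges in a few steps.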
Specification