Mobile Device Localization In Complex, Three-Dimensional Scenes
Abstract
The present embodiments relate to localizing a mobile device in a complex, three-dimensional scene. By way of introduction, the present embodiments described below include apparatuses and methods for using multiple, independent pose estimations to increase the accuracy of a single, resulting pose estimation. The present embodiments increase the amount of input data by windowing a single depth image, using multiple depth images from the same sensor, and/or using multiple depth images from different sensors. The resulting pose estimation uses the input data with a multi-window model, a multi-shot model, a multi-sensor model, or a combination thereof to accurately estimate the pose of the mobile device.
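As a concrete illustration of the multi-window model described in the abstract, the sketch below windows a single depth image into quadrants, produces one independent pose estimate per window, and fuses the estimates by weighted averaging. Nothing here is taken from the patent: the toy estimator, the quadrant windowing, and uniform fusion weights are all illustrative assumptions.

```python
import numpy as np

def estimate_pose_from_window(depth_window):
    """Hypothetical per-window pose estimator (a stand-in for the
    patent's single-window estimation step). Here it simply maps the
    window's mean depth to a toy 6-DoF pose [x, y, z, roll, pitch,
    yaw]; a real system would register the window against a model
    of the three-dimensional scene."""
    z = float(np.mean(depth_window))
    return np.array([0.0, 0.0, z, 0.0, 0.0, 0.0])

def fuse_poses(poses, weights=None):
    """Fuse independent pose estimates by (weighted) averaging --
    one simple instance of combining multiple estimations into a
    single, resulting pose estimation."""
    poses = np.asarray(poses, dtype=float)
    if weights is None:
        weights = np.ones(len(poses))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return weights @ poses

# Window a single depth image into four quadrants and fuse.
depth_image = np.full((64, 64), 2.0)   # synthetic planar scene at 2 m
h, w = depth_image.shape
windows = [depth_image[:h // 2, :w // 2], depth_image[:h // 2, w // 2:],
           depth_image[h // 2:, :w // 2], depth_image[h // 2:, w // 2:]]
poses = [estimate_pose_from_window(win) for win in windows]
fused = fuse_poses(poses)
print(fused)  # all windows agree here, so the fused depth stays at 2 m
```

The same fusion step applies unchanged to the multi-shot and multi-sensor cases: only the source of the per-estimate depth data changes.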
20 Claims
1. A method of estimating the pose of a mobile device in a three-dimensional scene, the method comprising:
receiving, by a processor of the mobile device, a plurality of depth data measurements, the depth data measurements indicative of a depth from the mobile device to the three-dimensional scene;

estimating, based on a first depth measurement of the plurality of depth data measurements, a first pose of the mobile device with respect to the three-dimensional scene;

estimating, based on a second depth measurement of the plurality of depth data measurements, a second pose of the mobile device with respect to the three-dimensional scene;

estimating, with a pose model based on the first pose and the second pose, a third pose of the mobile device with respect to the three-dimensional scene; and

providing, based on the third pose, an output to a user.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
12. A system for determining the pose of a mobile device in a three-dimensional scene, the system comprising:
a memory configured to store a plurality of depth data measurements, the depth data measurements indicative of a depth from the mobile device to the three-dimensional scene; and

a processor configured to:

receive the plurality of depth data measurements; and

determine a pose of the mobile device with respect to the three-dimensional scene, the pose based on fusing estimated poses for each of the plurality of depth data measurements with a dynamic model, the pose comprising a location and viewing angle of the mobile device.

Dependent claims: 13, 14, 15, 16
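Claim 12 leaves the "dynamic model" used for fusion open; one common realization is a Kalman-style filter that blends each new pose estimate with the running fused state. The sketch below assumes a per-axis, constant-position Kalman filter over a pose vector [x, y, z, yaw] (location plus a viewing angle); the state layout and noise values are illustrative assumptions, not terms from the patent.

```python
import numpy as np

class PoseFilter:
    """Minimal sketch of fusing per-measurement pose estimates with a
    dynamic model: a per-axis Kalman filter whose prediction step
    assumes the pose holds roughly constant between measurements."""

    def __init__(self, process_var=0.01, meas_var=0.25):
        self.x = None         # fused pose estimate [x, y, z, yaw]
        self.P = None         # per-axis estimate variance
        self.q = process_var  # dynamic-model (process) noise, assumed
        self.r = meas_var     # per-estimate measurement noise, assumed

    def update(self, pose_estimate):
        z = np.asarray(pose_estimate, dtype=float)
        if self.x is None:    # initialize from the first estimate
            self.x = z
            self.P = np.full(z.shape, self.r)
            return self.x
        self.P = self.P + self.q              # predict: pose ~ static
        K = self.P / (self.P + self.r)        # per-axis Kalman gain
        self.x = self.x + K * (z - self.x)    # correct with estimate
        self.P = (1.0 - K) * self.P
        return self.x

# Fuse three independent pose estimates into one pose.
pf = PoseFilter()
for est in ([1.0, 0.0, 2.1, 0.30],
            [1.2, 0.1, 1.9, 0.28],
            [0.9, -0.1, 2.0, 0.32]):
    fused = pf.update(est)
print(fused)  # fused pose lies within the spread of the estimates
```

Because each correction moves the state by a gain strictly between 0 and 1, the fused pose always stays inside the per-axis range of the estimates seen so far.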
17. A method of localizing a mobile device in a three-dimensional scene, the method comprising:
capturing, by a sensor of the mobile device, a plurality of depth image data sets of the three-dimensional scene;

generating, by a processor of the mobile device, a plurality of initial pose estimations of the mobile device with respect to the three-dimensional scene, each of the initial pose estimations based on a different depth image data set;

generating, by the processor of the mobile device, a fused pose estimation of the mobile device with respect to the three-dimensional scene, the fused pose estimation determined using a trained machine-learning model based on the initial pose estimations; and

displaying, by a display of the mobile device, an output based on the fused pose estimation.

Dependent claims: 18, 19, 20
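Claim 17's "trained machine-learning model" could take many forms. The minimal sketch below assumes the simplest possible one: a linear fusion of three initial pose estimates whose weights are fit by least squares on synthetic training data, so that less noisy estimators earn larger weights. The data, noise levels, and model choice are all illustrative assumptions, not the patent's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: each row holds three noisy initial
# estimates of one scalar pose coordinate; the target is the true
# value. A real system would use full 6-DoF poses and a richer model.
true_pose = rng.uniform(0.0, 5.0, size=(200, 1))
noise = rng.normal(0.0, [0.1, 0.3, 0.5], size=(200, 3))  # per-estimator noise
estimates = true_pose + noise

# "Train" the fusion model: least-squares weights over the estimates.
w, *_ = np.linalg.lstsq(estimates, true_pose, rcond=None)

# Inference: fuse three new initial estimates into one fused pose.
new_estimates = np.array([2.05, 1.90, 2.20])
fused = float(new_estimates @ w)
print(w.ravel(), fused)
```

With unbiased estimators, the fitted weights approximate inverse-variance weighting, so the lowest-noise estimator dominates the fused result.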
Specification