Image and point cloud based tracking and in augmented reality systems
First Claim
1. A method for reducing augmented reality perspective position error comprising:
accessing three-dimensional (3D) point cloud data describing an environment associated with a client device and a first position estimate for an image sensor of a companion device associated with the client device;
accessing a first image of the environment captured by the image sensor of the companion device, wherein the companion device is separate from the client device and associated with a different location than the first position estimate;
processing the first image to match at least a portion of a set of key points of the 3D point cloud to the first image;
determining, based on the match of the portion of the set of key points of the 3D point cloud to the first image, a position error associated with the first position estimate along with a second position estimate for the image sensor of the companion device;
generating a model of a virtual object within the 3D point cloud; and
generating a first augmented reality image comprising the virtual object in the environment using the second position estimate for the client device, the model of the virtual object within the 3D point cloud, and the match of the portion of the set of key points of the 3D point cloud to the first image.
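The determining step above recovers a corrected (second) position estimate from key points matched between the 3D point cloud and the image. As a minimal illustrative sketch (not the patented method), the idea can be shown with a toy orthographic camera, where each matched key point directly constrains the camera position and a least-squares average yields the second estimate; all coordinates and the camera model here are assumptions for illustration:

```python
# Toy sketch: correcting a coarse position estimate using key points
# matched between a 3D point cloud and an image. Assumes an orthographic
# camera, so each image observation is (X - cx, Y - cy) for camera
# position (cx, cy); real systems solve a full perspective pose problem.

def second_position_estimate(matched_points, observations):
    """Least-squares camera position (cx, cy) from matched key points.

    matched_points: (X, Y) world coordinates of matched cloud key points
    observations:   (u, v) positions of the same points in the image
    """
    n = len(matched_points)
    cx = sum(X - u for (X, _), (u, _) in zip(matched_points, observations)) / n
    cy = sum(Y - v for (_, Y), (_, v) in zip(matched_points, observations)) / n
    return cx, cy

# Ground-truth camera at (10.0, 5.0); a GPS-like first estimate is off.
true_cam = (10.0, 5.0)
first_estimate = (12.0, 4.0)
cloud = [(11.0, 7.0), (14.0, 9.0), (8.0, 3.0)]
image = [(X - true_cam[0], Y - true_cam[1]) for X, Y in cloud]

second = second_position_estimate(cloud, image)
error = (first_estimate[0] - second[0], first_estimate[1] - second[1])
print(second)  # (10.0, 5.0) -- recovered position
print(error)   # (2.0, -1.0) -- position error of the first estimate
```

The position error of the first estimate falls out as the difference between the two estimates, matching the claim's "position error associated with the first position estimate along with a second position estimate."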
Abstract
Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. 3D point cloud data describing an environment is then accessed. A first image of the environment is captured, and a portion of the image is matched to a portion of key points in the 3D point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the 3D point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
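The abstract's matching step — associating image features with key points stored in the 3D point cloud — is commonly done with nearest-neighbor descriptor matching plus a ratio test to reject ambiguous matches. The sketch below assumes that standard technique (the abstract does not specify the matcher) and uses tiny 2D descriptors purely for illustration:

```python
# Hedged sketch: matching image feature descriptors against descriptors
# stored with 3D point-cloud key points, using a Lowe-style ratio test
# (an assumed, standard technique; the source does not name the matcher).

def match_keypoints(cloud_desc, image_desc, ratio=0.8):
    """Return (cloud_index, image_index) pairs passing the ratio test."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = []
    for i, d_img in enumerate(image_desc):
        # Rank cloud descriptors by distance to this image descriptor.
        ranked = sorted(range(len(cloud_desc)),
                        key=lambda j: dist(d_img, cloud_desc[j]))
        best, runner_up = ranked[0], ranked[1]
        # Accept only if the best match is clearly better than the next.
        if dist(d_img, cloud_desc[best]) < ratio * dist(d_img, cloud_desc[runner_up]):
            matches.append((best, i))
    return matches

cloud_desc = [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5)]
image_desc = [(0.9, 0.1),    # near cloud descriptor 1 -> unambiguous match
              (0.45, 0.55)]  # near cloud descriptor 2 -> unambiguous match
print(match_keypoints(cloud_desc, image_desc))  # [(1, 0), (2, 1)]
```

Only the matched subset of cloud key points — "a portion of a set of key points" in the claim language — feeds the subsequent pose correction.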
20 Claims
1. A method for reducing augmented reality perspective position error comprising:

accessing three-dimensional (3D) point cloud data describing an environment associated with a client device and a first position estimate for an image sensor of a companion device associated with the client device;

accessing a first image of the environment captured by the image sensor of the companion device, wherein the companion device is separate from the client device and associated with a different location than the first position estimate;

processing the first image to match at least a portion of a set of key points of the 3D point cloud to the first image;

determining, based on the match of the portion of the set of key points of the 3D point cloud to the first image, a position error associated with the first position estimate along with a second position estimate for the image sensor of the companion device;

generating a model of a virtual object within the 3D point cloud; and

generating a first augmented reality image comprising the virtual object in the environment using the second position estimate for the client device, the model of the virtual object within the 3D point cloud, and the match of the portion of the set of key points of the 3D point cloud to the first image.

- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
13. A device comprising:

a memory; and

one or more processors coupled to the memory and configured to:

access three-dimensional (3D) point cloud data describing an environment associated with the device, a companion device associated with the device, and a first position estimate for an image sensor of the companion device;

access a first image of the environment captured by the image sensor of the companion device, wherein the companion device is separate from the device and associated with a different location than the first position estimate;

process the first image to match at least a portion of a set of key points of the 3D point cloud to the first image;

determine, based on the match of the portion of the set of key points of the 3D point cloud to the first image, a position error associated with the first position estimate and a second position estimate for the image sensor of the companion device;

generate a model of a virtual object within the 3D point cloud; and

generate a first augmented reality image comprising the virtual object in the environment using the second position estimate for the device, the model of the virtual object within the 3D point cloud, and the match of the portion of the set of key points of the 3D point cloud to the first image.

- View Dependent Claims (14, 15, 16, 17)
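The final claimed step — generating the augmented reality image from the corrected position — amounts to projecting the virtual object's model into the image using the second position estimate. A minimal sketch under assumed values (pinhole camera looking down +z, illustrative focal length and principal point) shows how the overlay pixel shifts when the corrected position replaces the coarse first estimate:

```python
# Hedged sketch: placing a virtual object once the second (corrected)
# position estimate is known. A simple pinhole projection maps the
# object's 3D anchor in the point cloud to a pixel in the AR image.
# Focal length, principal point, and all coordinates are illustrative.

def project(point, cam_pos, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D point for an axis-aligned camera looking down +z."""
    X = point[0] - cam_pos[0]
    Y = point[1] - cam_pos[1]
    Z = point[2] - cam_pos[2]
    return (f * X / Z + cx, f * Y / Z + cy)

object_anchor = (2.0, 1.0, 10.0)   # virtual object's anchor in the cloud
first_estimate = (0.5, 0.0, 0.0)   # coarse position -> misplaced overlay
second_estimate = (0.0, 0.0, 0.0)  # corrected position from key-point match

print(project(object_anchor, first_estimate))   # (395.0, 290.0)
print(project(object_anchor, second_estimate))  # (420.0, 290.0)
```

The 25-pixel horizontal shift between the two projections is the on-screen symptom of the "perspective position error" that the claimed method reduces.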
18. A non-transitory computer readable medium comprising instructions that, when performed by a device, cause the device to perform a method comprising:

accessing three-dimensional (3D) point cloud data describing an environment associated with the device, a companion device associated with the device, and a first position estimate for an image sensor of the companion device;

accessing a first image of the environment captured by the image sensor of the companion device, wherein the companion device is separate from the device and associated with a different location than the first position estimate;

processing the first image to match at least a portion of a set of key points of the 3D point cloud to the first image;

determining, based on the match of the portion of the set of key points of the 3D point cloud to the first image, a position error associated with the first position estimate and a second position estimate for the image sensor of the companion device;

generating a model of a virtual object within the 3D point cloud; and

generating a first augmented reality image comprising the virtual object in the environment using the second position estimate for the device, the model of the virtual object within the 3D point cloud, and the match of the portion of the set of key points of the 3D point cloud to the first image.

- View Dependent Claims (19, 20)
Specification