Image and point cloud based tracking and in augmented reality systems
Abstract
Systems and methods for image-based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. 3D point cloud data describing an environment is then accessed. A first image of the environment is captured, and a portion of the image is matched to a portion of the key points in the 3D point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the 3D point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
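The abstract's pipeline (coarse fix, point-cloud lookup, key-point match, refined estimate) can be sketched in miniature. This is an illustrative 2D toy, not the patented method: the landmark map, the `refine_position` helper, and the offset-based observations are all hypothetical stand-ins for real 3D key-point matching.

```python
import math

# Hypothetical toy landmark map: key points from the 3D point cloud,
# flattened to 2D (x, y) for illustration.
POINT_CLOUD = {
    "corner_a": (10.0, 5.0),
    "corner_b": (12.0, 9.0),
    "sign_c":   (15.0, 4.0),
}

def refine_position(first_estimate, observations, max_radius=50.0):
    """Refine a coarse (first) position estimate using matched key points.

    `observations` maps a key-point id to the offset (dx, dy) at which the
    camera observed that landmark relative to the device. The refined
    (second) estimate is the translation that best aligns the observed
    offsets with the point-cloud coordinates: the mean of
    (landmark - offset) over all matched key points.
    """
    matches = []
    for kp_id, offset in observations.items():
        if kp_id not in POINT_CLOUD:
            continue  # key point not present in the accessed cloud tile
        lx, ly = POINT_CLOUD[kp_id]
        # Only trust key points near the coarse (first) estimate.
        if math.dist(first_estimate, (lx, ly)) <= max_radius:
            matches.append((lx - offset[0], ly - offset[1]))
    if not matches:
        return first_estimate  # fall back to the coarse estimate
    xs = [p[0] for p in matches]
    ys = [p[1] for p in matches]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Coarse fix is off by ~2 m; the landmark offsets imply the true position.
coarse = (9.0, 6.5)
obs = {"corner_a": (3.0, -2.0), "corner_b": (5.0, 2.0), "sign_c": (8.0, -3.0)}
print(refine_position(coarse, obs))  # → (7.0, 7.0)
```

A production system would match image features to 3D key points and solve a full 6-DoF camera pose; the mean-translation step above only conveys the dataflow.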
15 Claims
1. A method for reducing augmented reality perspective position error comprising:
determining, using a first positioning system of a device, a first position estimate for the device, wherein the first positioning system comprises at least a first positioning hardware module coupled to a memory and at least one processor of the device;
accessing, based on the first position estimate, three-dimensional (3D) point cloud data describing an environment associated with the first position estimate;
accessing a first image of the environment captured by a companion device, wherein the companion device is separate from the device and associated with a different location than the device;
processing the first image to match at least a portion of a set of key points of the 3D point cloud to the first image;
determining, based on the match of the portion of the set of key points of the 3D point cloud to the first image, a second position estimate for the companion device;
generating a model of a virtual object within the 3D point cloud;
generating, using the second position estimate for the companion device, the model of the virtual object within the 3D point cloud, and the match of the portion of the set of key points of the 3D point cloud to the first image, a first augmented reality image comprising the virtual object in the environment;
communicating the first position estimate and the first image together as part of a first communication from the device to a cloud server computer;
wherein the processing of the image to match at least a portion of the set of key points of the 3D point cloud to the first image and the determining of the second position estimate are performed by the cloud server computer;
tracking, at the device, motion of the companion device;
receiving, at the device from the cloud server computer, the second position estimate; and
generating, at the device, using the second position estimate and the motion of the companion device from a first image capture time to a second position estimate receipt time, a third position estimate;
wherein the first augmented reality image is further generated using the third position estimate to align the virtual object within a second image of the environment, wherein the companion device comprises a device selected from the set of:
an augmented reality helmet, an augmented reality visor, augmented reality glasses, and an augmented reality glasses attachment; and
wherein the device comprises a smartphone.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
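The latency compensation recited above (a third position estimate formed from the second estimate plus motion tracked between image-capture time and estimate-receipt time) is a simple dead-reckoning correction. A minimal sketch, with a hypothetical `MotionSample` record and illustrative metric units:

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    """One tracked motion step of the companion device (illustrative units: metres)."""
    t: float   # timestamp in seconds
    dx: float
    dy: float

def third_position_estimate(second_estimate, motion_log, capture_time, receipt_time):
    """Compensate for the server round trip.

    The cloud-computed second estimate corresponds to where the companion
    device was at image-capture time. Summing the motion tracked between
    capture and receipt yields a third estimate valid when the reply arrives.
    """
    x, y = second_estimate
    for step in motion_log:
        if capture_time < step.t <= receipt_time:
            x += step.dx
            y += step.dy
    return (x, y)

log = [MotionSample(0.9, 1.0, 0.0),    # before capture: ignored
       MotionSample(1.2, 0.25, 0.5),
       MotionSample(1.6, 0.25, -0.5)]
print(third_position_estimate((7.0, 7.0), log, capture_time=1.0, receipt_time=2.0))
# → (7.5, 7.0)
```

The device can then render the virtual object against the current (second) camera image using this corrected pose rather than the stale server result.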
11. A device comprising:
at least one processor;
a memory coupled to the processor;
a wireless transceiver coupled to the memory and the processor and configured to communicate with a companion device that is separate from the device;
a first positioning module coupled to the memory and configured to determine a first position estimate based on a position of the device;
an augmented reality tracking module configured to:
access, based on the first position estimate, three-dimensional (3D) point cloud data describing an environment associated with the first position estimate;
access a first image of an environment from the companion device;
match a portion of a set of key points of the 3D point cloud data to the first image;
determine, based on the match of the portion of the set of key points of the 3D point cloud to the first image, a second position estimate for the companion device;
generate a model of a virtual object within the 3D point cloud;
generate, using the second position estimate for the companion device, the model of the virtual object within the 3D point cloud, and the match of the portion of the set of key points of the 3D point cloud to the first image, a first augmented reality image comprising the virtual object in the environment;
communicate the first position estimate and the first image together as part of a first communication from the device to a cloud server computer;
wherein the matching of a portion of the set of key points of the 3D point cloud to the first image and the determining of the second position estimate are performed by the cloud server computer;
track, at the device, motion of the companion device;
receive, at the device from the cloud server computer, the second position estimate; and
generate, at the device, using the second position estimate and the motion of the companion device from a first image capture time to a second position estimate receipt time, a third position estimate;
wherein the first augmented reality image is further generated using the third position estimate to align the virtual object within a second image of the environment, wherein the companion device comprises a device selected from the set of:
an augmented reality helmet, an augmented reality visor, augmented reality glasses, and an augmented reality glasses attachment; and
wherein the device comprises a smartphone.
View Dependent Claims (12)
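The device claim above offloads key-point matching by sending a "first communication" that bundles the first position estimate with the first image, and receives the second position estimate in reply. A minimal sketch of that request/response shape; the function names, field names, and JSON encoding are all hypothetical, and the server stub merely nudges the estimate to show the message flow:

```python
import json

def build_first_communication(first_estimate, image_bytes):
    """Package the first position estimate and the first image together."""
    return {
        "first_position_estimate": {"lat": first_estimate[0], "lon": first_estimate[1]},
        "image_hex": image_bytes.hex(),   # stand-in for a real image encoding
    }

def cloud_localize(message):
    """Server-side stub returning a second position estimate.

    A real server would match 3D point-cloud key points against the image;
    here we just offset the coarse estimate to illustrate the reply shape.
    """
    est = message["first_position_estimate"]
    return {"second_position_estimate": {"lat": est["lat"] + 0.0001,
                                         "lon": est["lon"] - 0.0001}}

request = build_first_communication((40.7128, -74.0060), b"\x89PNG...")
response = cloud_localize(json.loads(json.dumps(request)))  # simulate the round trip
print(response["second_position_estimate"])
```

Bundling position and image in one message lets the server restrict its key-point search to the point-cloud tile around the coarse estimate before matching.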
13. A non-transitory computer readable medium comprising instructions that, when performed by a device, cause the device to perform a method comprising:
determining, using a first positioning system of the device, a first position estimate for the device, wherein the first positioning system comprises at least a first positioning hardware module coupled to a memory and at least one processor of the device;
accessing, based on the first position estimate, three-dimensional (3D) point cloud data describing an environment associated with the first position estimate;
receiving a first image of the environment from a companion device separate from the device via a wireless interface;
processing the first image to match at least a portion of a set of key points of the 3D point cloud to the first image;
determining, based on the match of the portion of the set of key points of the 3D point cloud to the first image, a second position estimate for the companion device;
generating a model of a virtual object within the 3D point cloud;
generating, using the second position estimate for the companion device, the model of the virtual object within the 3D point cloud, and the match of the portion of the set of key points of the 3D point cloud to the first image, a first augmented reality image comprising the virtual object in the environment;
communicating the first position estimate and the first image together as part of a first communication from the device to a cloud server computer;
wherein the processing of the image to match at least a portion of the set of key points of the 3D point cloud to the first image and the determining of the second position estimate are performed by the cloud server computer;
tracking, at the device, motion of the companion device;
receiving, at the device from the cloud server computer, the second position estimate; and
generating, at the device, using the second position estimate and the motion of the companion device from a first image capture time to a second position estimate receipt time, a third position estimate;
wherein the first augmented reality image is further generated using the third position estimate to align the virtual object within a second image of the environment, wherein the companion device comprises a device selected from the set of:
an augmented reality helmet, an augmented reality visor, augmented reality glasses, and an augmented reality glasses attachment; and
wherein the device comprises a smartphone.
View Dependent Claims (14, 15)
Specification