Federated mobile device positioning
First Claim
1. A method comprising:
determining, using at least one sensor of a mobile device, a current location of the mobile device and an exterior orientation of a camera of the mobile device, wherein determining the exterior orientation of the camera of the mobile device comprises using the at least one sensor to estimate an attitude of the mobile device;
displaying, on a display of the mobile device, a video image of an environment of the mobile device;
determining, using the at least one sensor of the mobile device, a displacement of the mobile device relative to a real-world object represented within the video image;
overlaying on the video image, by the mobile device, at least one object from a virtual reality model of the environment, wherein a position at which the at least one object from the virtual reality model is overlaid on the video image is determined based at least in part on the current location of the mobile device, the displacement of the mobile device, and the exterior orientation of the camera of the mobile device;
in response to user input received at the mobile device, adjusting, by the mobile device, the position of the overlaid at least one object from the virtual reality model by changing coordinates of the virtual reality model relative to the video image;
in response to determining that the user input includes a lock-in input, sending, by the mobile device, an updated copy of the virtual reality model to match the current position of the overlaid at least one object to a central repository in which a final edition of the virtual model of the environment is stored, wherein the central repository receives and stores a plurality of changed coordinates of the virtual model from a plurality of mobile devices over a period of time, compares the plurality of changed coordinates to each other, and creates an updated final edition of the virtual model by aggregating the changed coordinates of the virtual reality model having similar coordinate values from among the plurality of changed coordinates of the virtual model in order to adjust the coordinates of the final edition of the virtual model; and
receiving, by the mobile device, the updated final edition of the virtual model.
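The repository-side aggregation step above (collecting changed coordinates from many devices, grouping those with similar values, and folding the consensus into the final edition) can be sketched as follows. This is a minimal illustration, not the patented implementation: the `tol` threshold, the tuple representation of a coordinate change, and the largest-cluster-mean rule are all assumptions.

```python
def aggregate_offsets(offsets, tol=0.5):
    """Group submitted coordinate changes that lie within `tol` of each
    other on every axis, then return the mean of the largest group as
    the consensus correction to apply to the final edition of the model."""
    clusters = []
    for off in offsets:
        for cluster in clusters:
            ref = cluster[0]
            if all(abs(a - b) <= tol for a, b in zip(off, ref)):
                cluster.append(off)
                break
        else:
            clusters.append([off])  # no similar group found; start a new one
    best = max(clusters, key=len)   # the most-agreed-upon adjustment
    n = len(best)
    return tuple(sum(c[i] for c in best) / n for i in range(len(best[0])))
```

A repository accumulating submissions over a period of time would call this once enough devices have reported, discarding outliers (the small clusters) automatically.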
Abstract
A user interface enables a user to calibrate the position of a three-dimensional model with the real-world environment represented by that model. Using a device's sensor suite, the device's location and orientation are determined. A video image of the device's environment is displayed on the device's display. The device overlays a representation of an object from a virtual reality model on the video image. The position of the overlaid representation is determined based on the device's location and orientation. In response to user input, the device adjusts the position of the overlaid representation relative to the video image.
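As one illustration of how a device's location and camera exterior orientation could place such an overlay, the sketch below projects a 3-D model point into pixel coordinates with a pinhole camera. The Z-Y-X (yaw, pitch, roll) attitude convention and the intrinsic parameters (`focal_px`, `center_px`) are assumptions for the example, not details taken from the abstract.

```python
import numpy as np

def rotation_from_attitude(yaw, pitch, roll):
    """World-to-camera rotation from an assumed Z-Y-X (yaw, pitch, roll) attitude."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return (Rz @ Ry @ Rx).T  # transpose: camera-to-world -> world-to-camera

def project_point(world_pt, device_pos, R, focal_px, center_px):
    """Project a 3-D model point into image pixel coordinates (pinhole model)."""
    cam = R @ (np.asarray(world_pt, float) - np.asarray(device_pos, float))
    if cam[2] <= 0:  # point is behind the camera; nothing to overlay
        return None
    u = center_px[0] + focal_px * cam[0] / cam[2]
    v = center_px[1] + focal_px * cam[1] / cam[2]
    return (u, v)
```

With the device looking straight down its own Z axis, a model point directly ahead lands at the principal point, which is exactly the behavior a user would expect before any manual calibration is applied.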
57 Citations
22 Claims
1. A method comprising:
determining, using at least one sensor of a mobile device, a current location of the mobile device and an exterior orientation of a camera of the mobile device, wherein determining the exterior orientation of the camera of the mobile device comprises using the at least one sensor to estimate an attitude of the mobile device;
displaying, on a display of the mobile device, a video image of an environment of the mobile device;
determining, using the at least one sensor of the mobile device, a displacement of the mobile device relative to a real-world object represented within the video image;
overlaying on the video image, by the mobile device, at least one object from a virtual reality model of the environment, wherein a position at which the at least one object from the virtual reality model is overlaid on the video image is determined based at least in part on the current location of the mobile device, the displacement of the mobile device, and the exterior orientation of the camera of the mobile device;
in response to user input received at the mobile device, adjusting, by the mobile device, the position of the overlaid at least one object from the virtual reality model by changing coordinates of the virtual reality model relative to the video image;
in response to determining that the user input includes a lock-in input, sending, by the mobile device, an updated copy of the virtual reality model to match the current position of the overlaid at least one object to a central repository in which a final edition of the virtual model of the environment is stored, wherein the central repository receives and stores a plurality of changed coordinates of the virtual model from a plurality of mobile devices over a period of time, compares the plurality of changed coordinates to each other, and creates an updated final edition of the virtual model by aggregating the changed coordinates of the virtual reality model having similar coordinate values from among the plurality of changed coordinates of the virtual model in order to adjust the coordinates of the final edition of the virtual model; and
receiving, by the mobile device, the updated final edition of the virtual model.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
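The adjustment step in the claim (changing the model's coordinates in response to user input) implies mapping an on-screen drag back into model space. A minimal sketch, assuming a pinhole camera and an adjustment applied at the overlaid object's current depth; the names and the fixed-depth assumption are illustrative, not from the claim:

```python
def drag_to_camera_offset(du_px, dv_px, depth_m, focal_px):
    """Convert an on-screen drag (pixels) into a camera-frame translation
    (meters) at the depth of the overlaid object, using the similar
    triangles of the pinhole model: dx / depth = du / focal."""
    dx = du_px * depth_m / focal_px
    dy = dv_px * depth_m / focal_px
    return (dx, dy, 0.0)
```

Rotating this camera-frame offset back into world coordinates with the inverse of the attitude rotation yields the change applied to the virtual reality model's coordinates; a subsequent lock-in input would then send the adjusted model to the central repository.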
9. A computer readable storage medium having instructions stored thereon, which, when executed by one or more processors, cause the one or more processors to perform operations comprising:
determining, using at least one sensor of a mobile device, a current location of the mobile device and an exterior orientation of a camera of the mobile device, wherein determining the exterior orientation of the camera of the mobile device comprises using the at least one sensor to estimate an attitude of the mobile device;
displaying, on a display of the mobile device, a video image of an environment of the mobile device;
determining, using the at least one sensor of the mobile device, a displacement of the mobile device relative to a real-world object represented within the video image;
overlaying on the video image, by the mobile device, at least one object from a virtual reality model of the environment, wherein a position at which the at least one object from the virtual reality model is overlaid on the video image is determined based at least in part on the current location of the mobile device, the displacement of the mobile device, and the exterior orientation of the camera of the mobile device;
in response to user input received at the mobile device, adjusting, by the mobile device, the position of the overlaid at least one object from the virtual reality model by changing coordinates of the virtual reality model relative to the video image;
in response to determining that the user input includes a lock-in input, sending, by the mobile device, an updated copy of the virtual reality model to match the current position of the overlaid at least one object to a central repository in which a final edition of the virtual model of the environment is stored, wherein the central repository receives and stores a plurality of changed coordinates of the virtual model from a plurality of mobile devices over a period of time, compares the plurality of changed coordinates to each other, and creates an updated final edition of the virtual model by aggregating the changed coordinates of the virtual reality model having similar coordinate values from among the plurality of changed coordinates of the virtual model in order to adjust the coordinates of the final edition of the virtual model; and
receiving, by the mobile device, the updated final edition of the virtual model.
View Dependent Claims (10, 11, 12, 13, 14, 15)
16. A mobile device comprising:
a camera;
a display;
one or more processors;
one or more sensors; and
a memory having instructions stored thereon, which, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
determining, using the one or more sensors, a current location of the mobile device and an exterior orientation of the camera, wherein determining the exterior orientation of the camera comprises using the one or more sensors to estimate an attitude of the mobile device;
displaying, on the display, a video image of an environment of the mobile device;
determining, using the one or more sensors, a displacement of the mobile device relative to a real-world object represented within the video image;
overlaying on the video image, at least one object from a virtual reality model of the environment, wherein a position at which the at least one object is overlaid on the video image is determined based at least in part on the current location of the mobile device, the displacement of the mobile device, and the exterior orientation of the camera;
in response to user input received at the mobile device, adjusting the position of the overlaid at least one object from the virtual reality model by changing coordinates of the virtual reality model relative to the video image;
in response to determining that the user input includes a lock-in input, sending, by the mobile device, an updated copy of the virtual reality model to match the current position of the overlaid at least one object to a central repository in which a final edition of the virtual model of the environment is stored, wherein the central repository receives and stores a plurality of changed coordinates of the virtual model from a plurality of mobile devices over a period of time, compares the plurality of changed coordinates to each other, and creates an updated final edition of the virtual model by aggregating the changed coordinates of the virtual reality model having similar coordinate values from among the plurality of changed coordinates of the virtual model in order to adjust the coordinates of the final edition of the virtual model; and
receiving, by the mobile device, the updated final edition of the virtual model.
View Dependent Claims (17, 18, 19, 20, 21, 22)
Specification