Registration between actual mobile device position and environmental model
Abstract
A user interface enables a user to calibrate the position of a three-dimensional model with the real-world environment represented by that model. Using a device's sensor, the device's location and orientation are determined. A video image of the device's environment is displayed on the device's display. The device overlays a representation of an object from a virtual reality model on the video image. The position of the overlaid representation is determined based on the device's location and orientation. In response to user input, the device adjusts the position of the overlaid representation relative to the video image.
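The abstract's client-side flow can be illustrated with a minimal sketch: project a model object into the camera view from the sensed device pose, then shift the overlay by the offset between a detected marker and that marker's expected position. This is an assumption-laden illustration, not the patent's implementation; the pinhole projection, the yaw-only pose, and all function names (`project`, `align_overlay`) are hypothetical.

```python
import math

def project(point, device_pos, device_yaw, focal=800.0):
    """Project a world-space point into image coordinates using a
    simple pinhole camera at the device's sensed position and yaw."""
    dx, dy, dz = (p - q for p, q in zip(point, device_pos))
    # Rotate the offset into the camera frame (rotation about the y-axis).
    cx = dx * math.cos(-device_yaw) - dz * math.sin(-device_yaw)
    cz = dx * math.sin(-device_yaw) + dz * math.cos(-device_yaw)
    return (focal * cx / cz, focal * dy / cz)

def align_overlay(obj_px, marker_detected_px, marker_expected_px):
    """Shift the overlaid object by the marker's detected-vs-expected
    pixel offset, calibrating the model overlay to the video image."""
    off_x = marker_detected_px[0] - marker_expected_px[0]
    off_y = marker_detected_px[1] - marker_expected_px[1]
    return (obj_px[0] + off_x, obj_px[1] + off_y)
```

For example, an object 10 m straight ahead of an un-rotated device projects to the image center, and a marker detected 2 px right and 3 px below its expected position shifts the overlay by the same amount.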
27 Claims
1. A method comprising:
determining, using at least one sensor of a device, a location and orientation of the device based on a direction and a speed of the movement of the device, a plurality of different signals from a plurality of sources over a time interval, and geographical locations of the plurality of sources;

displaying, on a display of the device, a video image of an environment of the device;

overlaying on the video image, by the device, at least one object from a virtual reality model of the environment, wherein a position at which the at least one object is overlaid on the video image is determined based at least in part on the location and orientation of the device determined by the at least one sensor of the device;

detecting, using at least one sensor of the device, positions of one or more markers physically located within the video image of the environment;

aligning the position of the at least one object from the virtual reality model within the video image based at least in part on the positions of the one or more markers in order to calibrate the virtual reality model environment with the video image of the environment of the device;

receiving a user lock-in input;

in response to receiving the user lock-in input, sending, by the device, an updated copy of the virtual reality model to match the current position of the overlaid at least one object to a central repository in which a final edition of the virtual model of the environment is stored, wherein the central repository receives and stores a plurality of changed coordinates of the virtual model from a plurality of mobile devices over a period of time, compares the plurality of changed coordinates to each other, and creates an updated final edition of the virtual model by aggregating the changed coordinates of the virtual reality model having similar coordinate values from among the plurality of changed coordinates of the virtual model in order to adjust the coordinates of the final edition of the virtual model;

receiving, by the device, the updated final edition of the virtual model.

Dependent claims: 2-21.
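The central-repository limitation of claim 1 (aggregating changed coordinates "having similar coordinate values" from many devices) can be sketched as a simple clustering-and-averaging step. This is an illustrative reading under assumptions: the similarity tolerance, the cluster-by-proximity grouping, and the name `aggregate_updates` are not from the claim text.

```python
def aggregate_updates(updates, tolerance=0.5):
    """Group coordinate updates whose components all lie within
    `tolerance` of a cluster's first member, then return the mean of
    the largest cluster as the adjusted final-edition coordinate."""
    clusters = []
    for u in updates:
        for cluster in clusters:
            if all(abs(a - b) <= tolerance for a, b in zip(u, cluster[0])):
                cluster.append(u)
                break
        else:
            clusters.append([u])  # no similar cluster found; start a new one
    best = max(clusters, key=len)  # the coordinates most devices agree on
    n = len(best)
    return tuple(sum(v[i] for v in best) / n for i in range(len(best[0])))
```

For example, given updates (1.0, 2.0, 3.0) and (1.1, 2.1, 3.0) from two devices and an outlier (9.0, 9.0, 9.0) from a third, the two similar updates are averaged to roughly (1.05, 2.05, 3.0) and the outlier is ignored.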
22. A device comprising:
one or more sensors;

one or more processors configured to:

determine, using at least one sensor of a device, a location and orientation of the device based on a direction and a speed of the movement of the device, a plurality of different signals from a plurality of sources over a time interval, and geographical locations of the plurality of sources;

display, on a display of the device, a video image of an environment of the device;

overlay on the video image, by the device, at least one object from a virtual reality model of the environment, wherein a position at which the at least one object is overlaid on the video image is determined based at least in part on the location and orientation of the device determined by the at least one sensor of the device;

detect, using at least one sensor of the device, positions of one or more markers physically located within the video image of the environment;

align the position of the at least one object from the virtual reality model within the video image based at least in part on the positions of the one or more markers in order to calibrate the virtual reality model environment with the video image of the environment of the device;

receive a user lock-in input;

in response to receiving the user lock-in input, send, by the device, an updated copy of the virtual reality model to match the current position of the overlaid at least one object to a central repository in which a final edition of the virtual model of the environment is stored, wherein the central repository receives and stores a plurality of changed coordinates of the virtual model from a plurality of mobile devices over a period of time, compares the plurality of changed coordinates to each other, and creates an updated final edition of the virtual model by aggregating the changed coordinates of the virtual reality model having similar coordinate values from among the plurality of changed coordinates of the virtual model in order to adjust the coordinates of the final edition of the virtual model;

receive, by the device, the updated final edition of the virtual model.

Dependent claims: 23-26.
27. A computer product comprising a non-transitory computer readable medium storing instructions that, when executed, cause one or more processors to perform a method comprising:
determining, using at least one sensor of a device, a location and orientation of the device based on a direction and a speed of the movement of the device, a plurality of different signals from a plurality of sources over a time interval, and geographical locations of the plurality of sources;

displaying, on a display of the device, a video image of an environment of the device;

overlaying on the video image, by the device, at least one object from a virtual reality model of the environment, wherein a position at which the at least one object is overlaid on the video image is determined based at least in part on the location and orientation of the device determined by the at least one sensor of the device;

detecting, using at least one sensor of the device, positions of one or more markers physically located within the video image of the environment;

aligning the position of the at least one object from the virtual reality model within the video image based at least in part on the positions of the one or more markers in order to calibrate the virtual reality model environment with the video image of the environment of the device;

receiving a user lock-in input;

in response to receiving the user lock-in input, sending, by the device, an updated copy of the virtual reality model to match the current position of the overlaid at least one object to a central repository in which a final edition of the virtual model of the environment is stored, wherein the central repository receives and stores a plurality of changed coordinates of the virtual model from a plurality of mobile devices over a period of time, compares the plurality of changed coordinates to each other, and creates an updated final edition of the virtual model by aggregating the changed coordinates of the virtual reality model having similar coordinate values from among the plurality of changed coordinates of the virtual model in order to adjust the coordinates of the final edition of the virtual model;

receiving, by the device, the updated final edition of the virtual model.
Specification