Guided navigation through geo-located panoramas
First Claim
1. A computer-implemented method for guided navigation in three-dimensional environments, comprising:
displaying, by one or more computing devices, a first three-dimensional representation of a first panoramic image within a three-dimensional environment from a viewpoint of a virtual camera;
identifying, by the one or more computing devices, one or more additional panoramic images linked to the first panoramic image based on metadata associated with the first panoramic image;
determining, by the one or more computing devices, a region of visual quality that satisfies a criterion associated with a visual representation for each of the first panoramic image and the one or more additional panoramic images within the three-dimensional environment;
generating, by the one or more computing devices, one or more navigation channels relative to a path between the first panoramic image and the one or more additional panoramic images based on each region of visual quality, wherein each navigation channel has a rendering surface that constrains movement of the virtual camera within the region of visual quality in a bounded volume of space defined by the navigation channel;
constructing, by the one or more computing devices, a navigation fillet around an intersection of the one or more navigation channels by fitting a collision sphere tangentially between different navigation channels of the one or more navigation channels, such that the navigation fillet represents a collision-free zone and is parameterized according to distances of points of tangency relative to a center of the first panoramic image; and
in response to an input event indicating a desired movement of the virtual camera to a location in a second panoramic image of the one or more additional panoramic images, repositioning, by the one or more computing devices, the virtual camera in the three-dimensional environment along the path, within the collision-free zone, from a first position associated with the first panoramic image toward a second position associated with the second panoramic image of the one or more additional panoramic images based on the input event, wherein the repositioning comprises preventing movement of the virtual camera outside the region of visual quality in the bounded volume of space defined by the navigation channel even if the location indicated by the input event is outside the navigation channel.
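The constrained repositioning in the final claim element can be illustrated with a small sketch. This is a hypothetical model, not the patent's implementation: it assumes each navigation channel is a capsule (a fixed radius around the path segment between two panorama centers) and clamps a requested camera position back inside that bounded volume, so input aimed outside the region of visual quality never moves the camera out of the channel.

```python
import math

def clamp_to_channel(p, a, b, radius):
    """Clamp a requested camera position p to the bounded volume of a
    navigation channel, modeled here as a capsule of the given radius
    around the path segment from panorama center a to panorama center b."""
    ax, ay, az = a; bx, by, bz = b; px, py, pz = p
    ab = (bx - ax, by - ay, bz - az)
    ap = (px - ax, py - ay, pz - az)
    ab2 = sum(c * c for c in ab)
    # Parameter of the closest point on the path segment, clamped to [0, 1].
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, sum(u * v for u, v in zip(ab, ap)) / ab2))
    cx, cy, cz = (ax + t * ab[0], ay + t * ab[1], az + t * ab[2])
    dx, dy, dz = (px - cx, py - cy, pz - cz)
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    if d <= radius:
        return p  # already inside the channel's region of visual quality
    s = radius / d  # pull the point back onto the channel's rendering surface
    return (cx + dx * s, cy + dy * s, cz + dz * s)
```

A request to move to (5, 3, 0) beside a channel of radius 1 along the x-axis, for example, is pulled back to (5, 1, 0) on the channel surface, while a request already inside the channel is honored unchanged.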
Abstract
A capability for guided navigation in an interactive virtual three-dimensional environment is provided. Such a capability may enhance user experience by providing the feeling of free-form navigation to a user. It may be necessary to constrain the user to certain areas of good visual quality, and subtly guide the user towards viewpoints with better rendering results without disrupting the metaphor of freeform navigation. Additionally, such a capability may enable users to “drive” down a street, follow curving roads, and turn around intersections within the interactive virtual three-dimensional environment. Further, this capability may be applicable to image-based rendering techniques in addition to any three-dimensional graphics system that incorporates navigation based on road networks and/or paths.
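The "drive down a street, follow curving roads, and turn around intersections" behavior can be approximated by redirecting user input along the best-aligned path. The following sketch is an assumption for illustration, not the patent's method; the function name and the dot-product heuristic are invented here: given a desired movement direction and the directions of the channels leaving the current position, it selects the channel most aligned with the input.

```python
import math

def choose_channel(input_dir, channel_dirs):
    """Redirect a desired movement direction onto the best-aligned
    navigation channel: return the index of the channel direction with
    the largest dot product against the normalized input direction."""
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v) if m else v
    d = norm(input_dir)
    scores = [sum(a * b for a, b in zip(d, norm(c))) for c in channel_dirs]
    return max(range(len(scores)), key=scores.__getitem__)
```

With input pointing mostly along +x at an intersection offering channels toward +y, +x, and -x, the +x channel is chosen, so slightly off-axis input still follows the road rather than leaving it.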
8 Citations
27 Claims
1. A computer-implemented method for guided navigation in three-dimensional environments, comprising the steps reproduced in full above under First Claim. View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
16. A system for guided navigation in three-dimensional environments, comprising:
a renderer module, implemented on a computing device and configured to display a first three-dimensional representation of a first panoramic image within a three-dimensional environment from a viewpoint of a virtual camera and identify one or more additional panoramic images linked to the first panoramic image based on metadata associated with the first panoramic image;
a path planner module, implemented on the computing device and configured to determine a region of visual quality that satisfies a criterion associated with a visual representation for each of the first panoramic image and the one or more additional panoramic images within the three-dimensional environment; and
a path motion module, implemented on the computing device and configured to (i) generate one or more navigation channels relative to a path between the first panoramic image and the one or more additional panoramic images based on each region of visual quality, wherein each navigation channel has a rendering surface that constrains movement of the virtual camera within the region of visual quality, (ii) construct a navigation fillet around an intersection of the one or more navigation channels by fitting a collision sphere tangentially between different navigation channels of the one or more navigation channels, such that the navigation fillet represents a collision-free zone and is parameterized according to distances of points of tangency relative to a center of the first panoramic image, and (iii) reposition the virtual camera in the three-dimensional environment along the path, within the collision-free zone, from a first position associated with the first panoramic image toward a second position associated with a second panoramic image of the one or more additional panoramic images based on user input, the path motion module further configured to, in response to an input event indicating a desired movement of the virtual camera outside a bounded volume of space defined by each navigation channel, move the virtual camera along one of the one or more navigation channels instead. View Dependent Claims (17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27)
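The navigation fillet of element (ii) can be visualized with basic fillet geometry. This is a hedged 2-D sketch under stated assumptions: the channels' inner walls are taken as straight lines meeting at the intersection corner, and the radius and angle are illustrative parameters, not values from the patent. A collision circle (a 2-D slice of the collision sphere) tangent to both walls has its center on the angle bisector, and its points of tangency lie at r/tan(theta/2) from the corner, which are the distances by which such a fillet is parameterized.

```python
import math

def fillet_tangency_distance(axis_angle_deg, fillet_radius):
    """For two navigation channel walls meeting at the given angle, fit a
    collision circle of the given radius tangent to both walls and return
    (distance from corner to circle center, distance from corner to each
    point of tangency).  2-D illustration with straight walls."""
    half = math.radians(axis_angle_deg) / 2.0
    center_dist = fillet_radius / math.sin(half)    # center lies on the bisector
    tangency_dist = fillet_radius / math.tan(half)  # tangency points on each wall
    return center_dist, tangency_dist
```

For a right-angle intersection and a unit-radius collision circle, the tangency points fall exactly one radius from the corner and the center sits at sqrt(2) along the bisector, giving the rounded "turn around intersections" corner described in the abstract.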
Specification