Method for creating 3D virtual reality from 2D images
First Claim
1. A method for creating 3D virtual reality from 2D images comprising:
obtaining a plurality of 2D images of an environment from at least one camera;
stitching together said plurality of 2D images into one or more integrated 2D images of said environment;
projecting said one or more integrated 2D images onto a spherical surface, yielding a spherical surface image;
unwrapping said spherical surface image onto an unwrapped plane image;
dividing said unwrapped plane image into a plurality of regions;
assigning depth information to points of each of said plurality of regions; and
generating stereo images for a viewer at a viewer position and orientation in a virtual reality environment using said depth information and said unwrapped plane image;
wherein said assigning depth information to the points of each of said plurality of regions comprises defining a flat or curved surface for one or more of said plurality of regions;
rotating and translating said flat or curved surface for one or more of said plurality of regions in three-dimensional space; and
obtaining said depth information from the three-dimensional space of the points on said flat or curved surface for one or more of said plurality of regions.
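The depth-assignment steps above can be sketched in code. This is a minimal illustration, not the patent's implementation: it assumes the unwrapped plane image uses an equirectangular layout, so each pixel maps to a ray on the unit sphere, and depth for a region is the distance along that ray to a flat surface that has been rotated and translated into 3D space. All function and variable names are illustrative.

```python
import numpy as np

def pixel_ray(u, v, width, height):
    """Map an unwrapped-plane pixel to a unit ray direction.
    Assumes an equirectangular layout: u spans longitude, v spans latitude."""
    lon = (u / width) * 2.0 * np.pi - np.pi           # -pi .. pi
    lat = np.pi / 2.0 - (v / height) * np.pi          # pi/2 .. -pi/2
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def region_depth(u, v, width, height, plane_point, plane_normal):
    """Depth at a pixel: distance along its ray to a flat surface
    positioned in 3D space by a point on it and its normal."""
    d = pixel_ray(u, v, width, height)
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-9:
        return np.inf                                 # ray parallel to plane
    t = np.dot(plane_normal, plane_point) / denom
    return t if t > 0 else np.inf                     # surface behind viewer

# A wall 3 units in front of the viewer (+z), facing back toward the origin.
depth = region_depth(u=512, v=256, width=1024, height=512,
                     plane_point=np.array([0.0, 0.0, 3.0]),
                     plane_normal=np.array([0.0, 0.0, -1.0]))
```

A curved surface would replace the closed-form ray-plane intersection with a numeric intersection, but the principle (depth read off the positioned surface) is the same.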
Abstract
A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Depth information may be generated by mapping regions onto flat or curved surfaces, and positioning these surfaces in 3D space. Some embodiments enable inserting, removing, or extending objects in the scene, adding or modifying depth information as needed. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Embodiments enable 3D images to be generated dynamically for the viewer in response to changes in the viewer's position and orientation in the virtual reality environment.
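The abstract's final step, projecting the composite image onto left and right eye image planes, can be sketched as a disparity shift: points closer to the viewer move farther apart between the two eye images, in proportion to an interocular baseline and inversely to depth. This is a simplified illustration with assumed baseline and focal-length values; names are illustrative, not from the patent.

```python
import numpy as np

def stereo_pair(image, depth, baseline=0.065, focal=500.0):
    """Generate left/right eye images from one image plus per-pixel depth.
    Disparity (pixels) = focal * baseline / depth; each eye is shifted
    half the disparity in opposite horizontal directions."""
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        disp = focal * baseline / depth[y]            # pixels of disparity
        half = np.round(disp / 2.0).astype(int)
        lx = np.clip(cols + half, 0, w - 1)           # left eye: scene shifts right
        rx = np.clip(cols - half, 0, w - 1)           # right eye: scene shifts left
        left[y, lx] = image[y]
        right[y, rx] = image[y]
    return left, right

# Uniform depth of 10 units: every pixel shifts by the same +/-2 columns.
img = np.arange(8, dtype=float).reshape(1, 8)
L, R = stereo_pair(img, np.full((1, 8), 10.0), baseline=0.08, focal=500.0)
```

A production renderer would instead reproject 3D points into two offset camera frustums and fill occlusion holes, but the depth-dependent horizontal shift is the core of the stereoscopic effect the abstract describes.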
419 Citations
17 Claims
1. A method for creating 3D virtual reality from 2D images comprising:
obtaining a plurality of 2D images of an environment from at least one camera;
stitching together said plurality of 2D images into one or more integrated 2D images of said environment;
projecting said one or more integrated 2D images onto a spherical surface, yielding a spherical surface image;
unwrapping said spherical surface image onto an unwrapped plane image;
dividing said unwrapped plane image into a plurality of regions;
assigning depth information to points of each of said plurality of regions; and
generating stereo images for a viewer at a viewer position and orientation in a virtual reality environment using said depth information and said unwrapped plane image;
wherein said assigning depth information to the points of each of said plurality of regions comprises defining a flat or curved surface for one or more of said plurality of regions;
rotating and translating said flat or curved surface for one or more of said plurality of regions in three-dimensional space; and
obtaining said depth information from the three-dimensional space of the points on said flat or curved surface for one or more of said plurality of regions. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
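The projecting and unwrapping steps that both independent claims share amount to a standard equirectangular mapping between a sphere and a plane. A minimal sketch of that mapping and its inverse (function names are illustrative):

```python
import math

def unwrap(lon, lat, width, height):
    """Spherical coordinates (radians) -> unwrapped-plane pixel.
    Equirectangular: x is proportional to longitude, y to latitude."""
    x = (lon + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - lat) / math.pi * height
    return x, y

def wrap(x, y, width, height):
    """Unwrapped-plane pixel -> spherical coordinates (inverse of unwrap)."""
    lon = x / width * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - y / height * math.pi
    return lon, lat

# Round trip: the centre of the plane is longitude 0, latitude 0.
x, y = unwrap(0.0, 0.0, 1024, 512)
lon, lat = wrap(x, y, 1024, 512)
```

Because the mapping is invertible, regions and depth assigned on the unwrapped plane can be projected back onto the sphere for stereo rendering, as the claims recite.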
10. A method for creating 3D virtual reality from 2D images comprising:
obtaining a plurality of 2D images of an environment from at least one camera;
stitching together said plurality of 2D images into one or more integrated 2D images of said environment;
projecting said one or more integrated 2D images onto a spherical surface, yielding a spherical surface image;
unwrapping said spherical surface image onto an unwrapped plane image;
dividing said unwrapped plane image into a plurality of regions, wherein said dividing said unwrapped plane image into a plurality of regions further comprises accepting mask region inputs to define objects in said plurality of 2D images;
accepting external depth information and applying said external depth information to said plurality of regions;
obtaining at least one mask within each of said plurality of regions;
assigning depth information to points of each of said plurality of regions;
calculating a best fit for a plane using a computer based on depth associated with each of the at least one mask;
applying depth associated with the plane having the best fit to each of said plurality of regions;
generating stereo images for a viewer at a viewer position and orientation in a virtual reality environment using said depth information and said unwrapped plane image; and
altering automatically using said computer, any combination of position, orientation, shape, depth or curve of the plane in order to fit edges or corners of the plane with another plane. - View Dependent Claims (11, 12, 13, 14, 15, 16, 17)
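Claim 10's "calculating a best fit for a plane" step can be sketched as an ordinary least-squares fit of z = a*x + b*y + c to the depth samples inside a mask. This is a minimal NumPy sketch, not the patent's actual implementation; names are illustrative.

```python
import numpy as np

def best_fit_plane(xs, ys, zs):
    """Least-squares plane z = a*x + b*y + c through masked depth samples."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return a, b, c

# Samples taken from the exact plane z = 2x - y + 5 are recovered exactly.
xs = np.array([0.0, 1.0, 0.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 1.0, 1.0, 2.0])
zs = 2.0 * xs - ys + 5.0
a, b, c = best_fit_plane(xs, ys, zs)
```

The fitted coefficients then give a depth for every pixel of the region, and adjacent planes can be nudged (the claim's automatic altering of position, orientation, shape, depth, or curve) until their edges or corners meet.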
Specification