Efficient canvas view generation from intermediate views
First Claim
1. A method comprising:
receiving, at a canvas view generation system, a set of camera views depicting a scene as captured by a plurality of cameras, each camera view associated with a camera view location from which that camera view was captured;
identifying a set of canvas view regions for a canvas view of the scene depicting a range of angles of the scene, each canvas view region in the set of regions associated with an angle in the range of angles;
generating the canvas view by, for each canvas view region in the set of regions:
determining a synthetic camera location for the canvas view region based on the angle;
generating a first mapping associating the canvas view region with a synthetic view region of a synthetic view associated with the synthetic camera location;
generating a second mapping associating regions of a plurality of camera views of the set of camera views with the synthetic view region;
combining the first mapping and the second mapping to generate a combined mapping associating the canvas view region of the canvas view with regions of one or more camera views of the set of camera views; and
applying the combined mapping to generate the canvas view for the canvas view region.
2 Assignments
0 Petitions
Abstract
A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
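The iterative per-pixel optical flow optimization with neighbor propagation that the abstract describes can be sketched as follows. This is a toy nearest-neighbor version under assumed conventions (grayscale images, a flow array of one (dy, dx) vector per pixel, intensity difference as the match cost); the function name and parameters are illustrative, not the patented algorithm.

```python
import numpy as np

def refine_flow(img_a, img_b, flow, iters=3, search=1):
    """Iteratively refine a dense optical flow field mapping img_a to img_b.

    For each pixel, test small perturbations of its flow vector plus the
    vectors of already-visited neighbors (the propagation step), and keep
    whichever minimizes the intensity difference at the flowed-to location.
    Accepted changes become candidates for neighboring pixels on the same
    and later sweeps.
    """
    h, w = img_a.shape
    offsets = [(dy, dx) for dy in (-search, 0, search)
                        for dx in (-search, 0, search)]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                # Candidates: perturbations of the current vector, plus the
                # left and upper neighbors' vectors (propagation).
                cands = [flow[y, x] + np.array(o, dtype=float) for o in offsets]
                if x > 0:
                    cands.append(flow[y, x - 1].copy())
                if y > 0:
                    cands.append(flow[y - 1, x].copy())
                best, best_err = flow[y, x], np.inf
                for v in cands:
                    ty, tx = int(round(y + v[0])), int(round(x + v[1]))
                    if 0 <= ty < h and 0 <= tx < w:
                        err = abs(float(img_a[y, x]) - float(img_b[ty, tx]))
                        if err < best_err:
                            best, best_err = v, err
                flow[y, x] = best
    return flow
```

A production implementation would use a robust match cost over patches and a smoothness term rather than single-pixel intensity, but the optimize-then-propagate structure is the same.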
20 Claims
1. A method comprising:
receiving, at a canvas view generation system, a set of camera views depicting a scene as captured by a plurality of cameras, each camera view associated with a camera view location from which that camera view was captured;
identifying a set of canvas view regions for a canvas view of the scene depicting a range of angles of the scene, each canvas view region in the set of regions associated with an angle in the range of angles;
generating the canvas view by, for each canvas view region in the set of regions:
determining a synthetic camera location for the canvas view region based on the angle;
generating a first mapping associating the canvas view region with a synthetic view region of a synthetic view associated with the synthetic camera location;
generating a second mapping associating regions of a plurality of camera views of the set of camera views with the synthetic view region;
combining the first mapping and the second mapping to generate a combined mapping associating the canvas view region of the canvas view with regions of one or more camera views of the set of camera views; and
applying the combined mapping to generate the canvas view for the canvas view region.
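The two-stage mapping in claim 1 can be viewed as composing per-pixel lookup tables, so the synthetic view never has to be rendered as an image. The sketch below assumes nearest-neighbor integer indices and array layouts of my choosing (`first_map`, `second_map` are illustrative names); a real implementation would interpolate and blend rather than index directly.

```python
import numpy as np

def compose_mappings(first_map, second_map):
    """Compose the two mappings of claim 1 into a combined mapping.

    first_map:  (H, W, 2) ints; for each canvas-region pixel, the (y, x)
                location it samples in the synthetic view.
    second_map: (Hs, Ws, 3) ints; for each synthetic-view pixel, the
                (camera_index, y, x) location it samples in the camera views.
    Returns (H, W, 3): for each canvas pixel, (camera_index, y, x).
    """
    h, w, _ = first_map.shape
    combined = np.empty((h, w, 3), dtype=first_map.dtype)
    for y in range(h):
        for x in range(w):
            sy, sx = first_map[y, x]
            combined[y, x] = second_map[sy, sx]
    return combined

def apply_mapping(combined, camera_views):
    """Fill a canvas region by sampling the camera views via the combined map."""
    h, w, _ = combined.shape
    out = np.empty((h, w), dtype=camera_views[0].dtype)
    for y in range(h):
        for x in range(w):
            cam, sy, sx = combined[y, x]
            out[y, x] = camera_views[cam][sy, sx]
    return out
```

The payoff of composing first is that only the pixels the canvas actually needs are ever fetched from the original camera views.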
2. The method of claim 1, wherein the second mapping is generated based on an optical flow vector field associating points in the set of camera views.
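The abstract mentions that a synthetic view can be generated by shifting and blending the relevant camera views according to the optical flow between them. A minimal sketch of that shift-and-blend idea, under assumptions of mine (two grayscale views, a left-to-right flow field, nearest-neighbor sampling, linear cross-fade):

```python
import numpy as np

def synthesize_view(left, right, flow_lr, t):
    """Blend two camera views into a synthetic view at fractional position t
    (0 = left camera position, 1 = right camera position).

    Each output pixel samples the left view shifted forward along a fraction
    t of its flow vector, and the right view shifted back by (1 - t), then
    cross-fades the two samples by t.
    """
    h, w = left.shape
    out = np.zeros_like(left, dtype=float)
    for y in range(h):
        for x in range(w):
            dy, dx = flow_lr[y, x]
            ly = min(max(int(round(y + t * dy)), 0), h - 1)
            lx = min(max(int(round(x + t * dx)), 0), w - 1)
            ry = min(max(int(round(y - (1 - t) * dy)), 0), h - 1)
            rx = min(max(int(round(x - (1 - t) * dx)), 0), w - 1)
            out[y, x] = (1 - t) * left[ly, lx] + t * right[ry, rx]
    return out
```

At t = 0 or t = 1 this degenerates to the original camera view, as one would expect of a view interpolator.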
3. The method of claim 2, further comprising calculating a set of optical flow vector fields based on the synthetic camera locations and the set of camera views.
4. The method of claim 1, wherein the canvas view is a 360 degree panoramic or spherical panoramic image of the scene.
5. The method of claim 1, wherein the canvas view is output in cubemap, equirectangular, or cylindrical format.
6. The method of claim 1, further comprising determining a canvas viewpoint for each canvas view region in the set of regions and wherein the synthetic camera location for a region is based on the canvas viewpoint for the region.
7. The method of claim 6, wherein determining the synthetic camera location for a region is based on a line of sight from the canvas viewpoint of the region to a zero parallax distance in the scene.
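Claim 7 places the synthetic camera on a line of sight from the canvas viewpoint out to the zero-parallax distance. One way to realize this in a planar sketch: given the viewpoint and its unit viewing direction, intersect that ray with the camera rig circle. The 2D setup, function name, and rig-centered coordinates are assumptions for illustration, not the patent's construction.

```python
import math

def synthetic_camera_position(viewpoint, direction, r_rig):
    """Forward intersection of the sight ray with the camera rig circle.

    viewpoint: (x, y) canvas viewpoint in rig-centered coordinates.
    direction: unit (x, y) viewing direction toward the zero-parallax point.
    r_rig:     radius of the circle on which cameras (real or synthetic) sit.
    Solves |p + s*u| = r_rig for the s >= 0 root and returns that point.
    """
    px, py = viewpoint
    ux, uy = direction
    b = 2 * (px * ux + py * uy)
    c = px * px + py * py - r_rig * r_rig
    disc = b * b - 4 * c  # quadratic coefficient a == 1 for a unit direction
    s = (-b + math.sqrt(disc)) / 2
    return (px + s * ux, py + s * uy)
```

The returned point is where a synthetic camera would have to sit for its view to contain the same line of sight, which is what makes the first mapping of claim 1 well-defined.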
8. The method of claim 6, wherein each canvas view region approximates the light information at the canvas viewpoint of the canvas view region.
9. The method of claim 1, wherein each camera view of the set of camera views overlaps with at least one other camera view of the set of camera views.
10. The method of claim 1, further comprising sending the canvas view to a client virtual reality device for display.
11. The method of claim 1, wherein each canvas view region is a vertical column of pixels.
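When each canvas view region is a single vertical column of pixels (claim 11), the "angle in the range of angles" of claim 1 reduces to a per-column azimuth. A sketch under one common convention for an equirectangular canvas (azimuth 0 at the left edge, increasing to 2*pi, sampled at column centers); the convention is an assumption, not stated in the claims.

```python
import math

def column_to_angle(x, width):
    """Azimuth in radians for pixel column x of an equirectangular canvas
    that is `width` columns wide, sampled at the column center."""
    return 2.0 * math.pi * (x + 0.5) / width
```

Each column's azimuth is then what drives the choice of synthetic camera location for that region.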
12. A system comprising:
a processor; and
a non-transitory computer readable storage medium comprising instructions that, when executed by the processor, cause the processor to:
receive a set of camera views depicting a scene as captured by a plurality of cameras, each camera view associated with a camera view location from which that camera view was captured;
identify a set of canvas view regions for a canvas view of the scene depicting a range of angles of the scene, each canvas view region in the set of regions associated with an angle in the range of angles; and
generate the canvas view by, for each canvas view region in the set of regions:
determine a synthetic camera location for the canvas view region based on the angle;
generate a first mapping associating the canvas view region with a synthetic view region of a synthetic view associated with the synthetic camera location;
generate a second mapping associating regions of a plurality of camera views of the set of camera views with the synthetic view region;
combine the first mapping and the second mapping to generate a combined mapping associating the canvas view region of the canvas view with regions of one or more camera views of the set of camera views; and
apply the combined mapping to generate the canvas view for the canvas view region.
13. The system of claim 12, wherein the second mapping is generated based on an optical flow vector field associating points in the set of camera views.
14. The system of claim 12, wherein the instructions, when executed by the processor, further cause the processor to calculate a set of optical flow vector fields based on the synthetic camera locations and the set of camera views.
15. The system of claim 12, wherein the canvas view is a 360 degree panoramic or spherical panoramic image of the scene.
16. The system of claim 12, wherein the instructions, when executed by the processor, further cause the processor to determine a canvas viewpoint for each canvas view region in the set of regions and wherein the synthetic camera location for a region is based on the canvas viewpoint for the region.
17. The system of claim 16, wherein determining the synthetic camera location for a region is based on a line of sight from the canvas viewpoint of the region to a zero parallax distance in the scene.
18. The system of claim 16, wherein each canvas view region approximates the light information at the canvas viewpoint of the canvas view region.
19. The system of claim 12, wherein each camera view of the set of camera views overlaps with at least one other camera view of the set of camera views.
20. The system of claim 12, further comprising sending the canvas view to a client virtual reality device for display.
Specification