Generating intermediate views using optical flow
Abstract
A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
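The abstract's iterative method — optimizing each pixel's flow vector individually and propagating improvements to neighbouring vectors — can be sketched as a PatchMatch-style refinement sweep. This is a minimal illustration under assumed details: the cost function (absolute intensity difference), the candidate set, and the names `cost` and `refine_flow` are all assumptions, not the patent's actual implementation.

```python
import numpy as np

def cost(img1, img2, y, x, v):
    """Matching cost of flow vector v = (dy, dx) at pixel (y, x):
    absolute intensity difference to the displaced pixel in img2,
    infinite if the displacement lands out of bounds."""
    h, w = img1.shape
    ty, tx = y + v[0], x + v[1]
    if not (0 <= ty < h and 0 <= tx < w):
        return np.inf
    return abs(float(img1[y, x]) - float(img2[ty, tx]))

def refine_flow(img1, img2, flow, iters=2, rng=None):
    """Individually optimize each pixel's flow vector and propagate
    changes to neighbours: each pixel keeps the best of its current
    vector, the vectors of already-visited neighbours, and a small
    random perturbation. Sweep direction alternates per iteration so
    good vectors spread in both directions."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = img1.shape
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        for y in ys:
            xs = range(w) if step == 1 else range(w - 1, -1, -1)
            for x in xs:
                candidates = [flow[y, x].copy()]
                if 0 <= y - step < h:
                    candidates.append(flow[y - step, x])   # propagate vertically
                if 0 <= x - step < w:
                    candidates.append(flow[y, x - step])   # propagate horizontally
                candidates.append(flow[y, x] + rng.integers(-1, 2, size=2))
                flow[y, x] = min(candidates, key=lambda v: cost(img1, img2, y, x, v))
    return flow
```

Because each pixel only ever keeps a candidate at least as good as its current vector, total matching cost never increases across sweeps; the random perturbation supplies new candidates, and propagation spreads a good match along rows and columns.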
34 Citations
20 Claims
1. A method comprising:
receiving a first camera view and a second camera view, each camera view representing an image captured by a camera and associated with a location from which the camera view was captured;
identifying a synthetic view location for a synthetic view located between the location associated with the first camera view and the location associated with the second camera view;
retrieving an optical flow comprising a vector displacement field, each vector of the optical flow indicating a displacement between corresponding locations in the first camera view and the second camera view;
shifting the first camera view to a first synthetic view based on the optical flow and the synthetic view location relative to the location associated with the first camera view;
shifting the second camera view to a second synthetic view based on the optical flow and the synthetic view location relative to the location associated with the second camera view; and
blending the first synthetic view and the second synthetic view to form a synthetic view.
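The shift-and-blend steps of claim 1 can be sketched in a few lines. This is a simplified nearest-neighbour version under assumed conventions — the flow is taken to map pixels of the first view toward the second, and `warp` and `synthesize` are illustrative names, not the patent's implementation.

```python
import numpy as np

def warp(view, flow, amount):
    """Backward-warp `view` by `amount` of the optical flow.

    `view` is an (H, W, C) image and `flow` an (H, W, 2) displacement
    field holding, per pixel, the (dy, dx) offset to its corresponding
    location in the other camera view. Nearest-neighbour sampling keeps
    the sketch short; a real renderer would interpolate bilinearly.
    """
    h, w = view.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(ys - amount * flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs - amount * flow[..., 1]).astype(int), 0, w - 1)
    return view[sy, sx]

def synthesize(view1, view2, flow_1to2, t):
    """Synthesize a view at fraction t of the way from view1 to view2:
    shift each camera view proportionally to the synthetic view's
    position, then blend with distance-based weights."""
    synth1 = warp(view1, flow_1to2, t)           # first view shifted by t
    synth2 = warp(view2, -flow_1to2, 1.0 - t)    # second view shifted by 1 - t
    return (1.0 - t) * synth1 + t * synth2       # nearer view weighs more
```

Note how `t` plays both roles the claims describe: it scales the shift of each camera view proportionally to the synthetic view location (claim 3) and sets the blend weights (claim 8).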
2. The method of claim 1, wherein the optical flow comprises a vector displacement field determining the magnitude and direction to shift each region of the first camera view.
3. The method of claim 2, wherein shifting the first camera view based on the optical flow comprises proportionally shifting the first camera view based on a relative distance of the first camera view's location to the synthetic view location.
4. The method of claim 3, wherein the optical flow associates corresponding pixels between the first camera view and the second camera view.
5. The method of claim 4, further comprising calculating an optical flow for the first camera view and the second camera view.
6. The method of claim 1, wherein the first and second camera views are received from an image capture system which captured the first and second camera views.
7. The method of claim 1, wherein the first and second camera views each depict one or more objects common to both camera views.
8. The method of claim 1, wherein blending the first and second synthetic views further comprises weighing the first and second camera views based on a relative distance of each of the first and second camera view's location from the synthetic view location.
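The distance-based weighting of claim 8 amounts to giving each shifted view a weight proportional to the synthetic view's distance from the *other* camera, so the nearer camera contributes more. A tiny helper illustrates the arithmetic (the name `blend_weights` is hypothetical, not from the patent):

```python
def blend_weights(d1, d2):
    """Blend weights for the first and second shifted views, where d1
    and d2 are the distances from the synthetic view location to the
    first and second camera locations. Each view's weight is the
    *other* camera's share of the total distance."""
    total = d1 + d2
    return d2 / total, d1 / total

# A synthetic view 1 unit from camera 1 and 3 units from camera 2:
# camera 1 is nearer, so its shifted view gets weight 3/4.
```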
9. The method of claim 1, wherein the synthetic view is a partial synthetic view depicting a selected region of the first and second camera views.
10. The method of claim 1, wherein the synthetic view is a synthetic view mapping describing each pixel of the synthetic view as a combination of one or more pixels in the first and second camera views.
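Claim 10's "synthetic view mapping" — describing each output pixel as a combination of source pixels rather than as resolved colours — can be pictured as storing per-pixel source coordinates plus blend weights, to be resolved later against any pair of camera views. The conventions below (nearest-neighbour indices, the names `build_mapping` and `apply_mapping`) are illustrative assumptions:

```python
import numpy as np

def build_mapping(flow, t, shape):
    """For each pixel of the synthetic view at fraction t between the
    cameras, record the source pixel drawn from each camera view
    (nearest neighbour along the flow) and the blend weights, instead
    of resolved colour values."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    y1 = np.clip(np.rint(ys - t * flow[..., 0]).astype(int), 0, h - 1)
    x1 = np.clip(np.rint(xs - t * flow[..., 1]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + (1 - t) * flow[..., 0]).astype(int), 0, h - 1)
    x2 = np.clip(np.rint(xs + (1 - t) * flow[..., 1]).astype(int), 0, w - 1)
    return (y1, x1), (y2, x2), (1.0 - t, t)

def apply_mapping(view1, view2, mapping):
    """Resolve a mapping into pixel values for a given pair of views."""
    (y1, x1), (y2, x2), (w1, w2) = mapping
    return w1 * view1[y1, x1] + w2 * view2[y2, x2]
```

Keeping the mapping separate from pixel data means the same geometry can be reapplied cheaply, e.g. to later frames from the same camera positions.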
11. A non-transitory computer readable storage medium comprising instructions which, when executed by a processor, cause the processor to:
receive a first camera view and a second camera view, each camera view representing an image captured by a camera and associated with a location from which the camera view was captured;
identify a synthetic view location for a synthetic view located between the location associated with the first camera view and the location associated with the second camera view;
retrieve an optical flow comprising a vector displacement field, each vector of the optical flow indicating a displacement between corresponding locations in the first camera view and the second camera view;
shift the first camera view to a first synthetic view based on the optical flow and the synthetic view location relative to the location associated with the first camera view;
shift the second camera view to a second synthetic view based on the optical flow and the synthetic view location relative to the location associated with the second camera view; and
blend the first synthetic view and the second synthetic view to form a synthetic view.
12. The non-transitory computer readable storage medium of claim 11, wherein the optical flow comprises a vector displacement field determining the magnitude and direction to shift each region of the first camera view.
13. The non-transitory computer readable storage medium of claim 12, wherein shifting the first camera view based on the optical flow comprises proportionally shifting the first camera view based on a relative distance of the first camera view's location to the synthetic view location.
14. The non-transitory computer readable storage medium of claim 13, wherein the optical flow associates corresponding pixels between the first camera view and the second camera view.
15. The non-transitory computer readable storage medium of claim 14, wherein the instructions further cause the processor to calculate an optical flow for the first camera view and the second camera view.
16. The non-transitory computer readable storage medium of claim 11, wherein the first and second camera views are received from an image capture system which captured the first and second camera views.
17. The non-transitory computer readable storage medium of claim 11, wherein the first and second camera views each depict one or more objects common to both camera views.
18. The non-transitory computer readable storage medium of claim 11, wherein blending the first and second synthetic views further comprises weighing the first and second camera views based on a relative distance of each of the first and second camera view's location from the synthetic view location.
19. The non-transitory computer readable storage medium of claim 11, wherein the synthetic view is a partial synthetic view depicting a selected region of the first and second camera views.
20. The non-transitory computer readable storage medium of claim 11, wherein the synthetic view is a synthetic view mapping describing each pixel of the synthetic view as a combination of one or more pixels in the first and second camera views.
Specification