Generation of video from spherical content using edit maps
First Claim
1. A method for managing media content generated from spherical video, the method comprising:
storing, by a video server, a plurality of spherical videos;
receiving an edit map specifying, for each frame time of an output video, an identifier of a spherical video of the plurality of spherical videos and a spatial location of a sub-frame in the identified spherical video, the sub-frame having a non-spherical field of view;
generating the output video based on the edit map, the output video including the sub-frame from the identified spherical video specified for each frame time, wherein the spherical video comprises a first hemispherical video and a second hemispherical video captured in synchronization, and wherein generating the output video comprises:
determining if a given sub-frame has a spatial region crossing a boundary between the first hemispherical video and the second hemispherical video;
responsive to the given sub-frame having the spatial region crossing the boundary, stitching corresponding portions of the first hemispherical video and the second hemispherical video contained within the given sub-frame to generate a stitched sub-frame; and
including the stitched sub-frame in the output video; and
outputting the output video.
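As a rough illustration of the claimed flow, the edit map can be modeled as a list of per-frame-time entries, each naming a stored spherical video and a sub-frame location within it; generation then walks the map and extracts one sub-frame per output frame. This is a minimal sketch, not the patented implementation: all names (`EditMapEntry`, `generate_output_video`) are hypothetical, and frames are stand-in strings rather than image data.

```python
from dataclasses import dataclass

@dataclass
class EditMapEntry:
    frame_time: int   # frame index in the output video (simplified to an int)
    video_id: str     # identifier of a stored spherical video
    yaw: float        # spatial location of the sub-frame (degrees)
    pitch: float
    fov: float        # non-spherical field of view (degrees)

def generate_output_video(edit_map, video_store, extract_subframe):
    """Build the output video frame-by-frame, as the edit map directs."""
    output = []
    for entry in sorted(edit_map, key=lambda e: e.frame_time):
        frames = video_store[entry.video_id]       # look up the stored spherical video
        sub = extract_subframe(frames[entry.frame_time],
                               entry.yaw, entry.pitch, entry.fov)
        output.append(sub)                         # one sub-frame per frame time
    return output

# Toy demonstration: "frames" are strings; extraction just tags them.
store = {"vidA": ["A0", "A1"], "vidB": ["B0", "B1"]}
emap = [EditMapEntry(0, "vidA", 0.0, 0.0, 90.0),
        EditMapEntry(1, "vidB", 45.0, 10.0, 90.0)]
out = generate_output_video(emap, store, lambda f, y, p, v: f"{f}@yaw{y}")
print(out)  # ['A0@yaw0.0', 'B1@yaw45.0']
```

Note that the edit map never copies pixel data itself; it only references stored videos, which is what lets the server defer extraction and stitching until output generation.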
Abstract
A spherical content capture system captures spherical video content. A spherical video sharing platform enables users to share the captured spherical content and enables users to access spherical content shared by other users. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest.
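The boundary test described in the claims can be sketched as an angular check: two synchronized hemispherical videos meet at seams, assumed here at yaw 90° and 270°, and a sub-frame whose horizontal extent spans a seam must be stitched from both sources. This is a hedged illustration under those assumptions; the function names and the seam positions are hypothetical, not taken from the patent.

```python
def crosses_seam(center_yaw, h_fov, seams=(90.0, 270.0)):
    """True if the sub-frame [center - h_fov/2, center + h_fov/2] spans any seam."""
    half = h_fov / 2.0
    for seam in seams:
        # signed angular distance from the sub-frame center to the seam,
        # wrapped into [-180, 180) so the test works across the 0/360 wrap
        d = (seam - center_yaw + 180.0) % 360.0 - 180.0
        if abs(d) < half:
            return True
    return False

def select_sources(center_yaw, h_fov):
    """Pick the hemisphere(s) to read; 'front' is assumed to cover yaw (270, 360) and (0, 90)."""
    if crosses_seam(center_yaw, h_fov):
        return ("front", "back")   # stitch corresponding portions of both
    c = center_yaw % 360.0
    return ("front",) if (c < 90.0 or c > 270.0) else ("back",)
```

For example, a 90° sub-frame centered at yaw 0° reads only the front hemisphere, while the same sub-frame centered at yaw 80° straddles the 90° seam and needs both.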
20 Claims
1. A method for managing media content generated from spherical video, the method comprising:
storing, by a video server, a plurality of spherical videos;
receiving an edit map specifying, for each frame time of an output video, an identifier of a spherical video of the plurality of spherical videos and a spatial location of a sub-frame in the identified spherical video, the sub-frame having a non-spherical field of view;
generating the output video based on the edit map, the output video including the sub-frame from the identified spherical video specified for each frame time, wherein the spherical video comprises a first hemispherical video and a second hemispherical video captured in synchronization, and wherein generating the output video comprises:
determining if a given sub-frame has a spatial region crossing a boundary between the first hemispherical video and the second hemispherical video;
responsive to the given sub-frame having the spatial region crossing the boundary, stitching corresponding portions of the first hemispherical video and the second hemispherical video contained within the given sub-frame to generate a stitched sub-frame; and
including the stitched sub-frame in the output video; and
outputting the output video.
Dependent claims: 2, 3, 4, 5, 6, 7, 8
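The stitching step recited above joins the portions of the two hemispherical frames that fall inside the sub-frame. As a hypothetical illustration only, one common joining idea is to linearly blend a small overlap region between the two portions; real stitching also aligns and warps the images, which this sketch omits.

```python
def stitch_portions(left, right, overlap):
    """Join two image portions given as lists of pixel columns,
    linearly blending `overlap` columns at the seam."""
    body_l = left[:len(left) - overlap]     # columns taken purely from the left portion
    body_r = right[overlap:]                # columns taken purely from the right portion
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)         # ramp weight from left (0) toward right (1)
        col_l = left[len(left) - overlap + i]
        col_r = right[i]
        blended.append([round((1 - w) * a + w * b) for a, b in zip(col_l, col_r)])
    return body_l + blended + body_r
```

With a one-column overlap, a black portion (`0`) and a bright portion (`10`) meet through a single mid-gray column (`5`), hiding the hard edge at the hemisphere boundary.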
9. A non-transitory computer-readable storage medium storing instructions for managing media content generated from spherical video, the instructions when executed by one or more processors causing the one or more processors to perform steps including:
storing a plurality of spherical videos;
receiving an edit map specifying, for each frame time of an output video, an identifier of a spherical video of the plurality of spherical videos and a spatial location of a sub-frame in the identified spherical video, the sub-frame having a non-spherical field of view;
generating the output video based on the edit map, the output video including the sub-frame from the identified spherical video specified for each frame time, wherein the spherical video comprises a first hemispherical video and a second hemispherical video captured in synchronization, and wherein generating the output video comprises:
determining if a given sub-frame has a spatial region crossing a boundary between the first hemispherical video and the second hemispherical video;
responsive to the given sub-frame having the spatial region crossing the boundary, stitching corresponding portions of the first hemispherical video and the second hemispherical video contained within the given sub-frame to generate a stitched sub-frame; and
including the stitched sub-frame in the output video; and
outputting the output video.
Dependent claims: 10, 11, 12, 13, 14
15. A video server for managing media content generated from spherical video, the video server comprising:
one or more processors; and
a non-transitory computer-readable storage medium storing instructions for managing media content generated from spherical video, the instructions when executed by one or more processors causing the one or more processors to perform steps including:
storing a plurality of spherical videos;
receiving an edit map specifying, for each frame time of an output video, an identifier of a spherical video of the plurality of spherical videos and a spatial location of a sub-frame in the identified spherical video, the sub-frame having a non-spherical field of view;
generating the output video based on the edit map, the output video including the sub-frame from the identified spherical video specified for each frame time, wherein the spherical video comprises a first hemispherical video and a second hemispherical video captured in synchronization, and wherein generating the output video comprises:
determining if a given sub-frame has a spatial region crossing a boundary between the first hemispherical video and the second hemispherical video;
responsive to the given sub-frame having the spatial region crossing the boundary, stitching corresponding portions of the first hemispherical video and the second hemispherical video contained within the given sub-frame to generate a stitched sub-frame; and
including the stitched sub-frame in the output video; and
outputting the output video.
Dependent claims: 16, 17, 18, 19, 20
Specification