Centralized database for 3-D and other information in videos
First Claim
1. A method comprising:
accessing a first version of video data captured by a camera to use in post-capture processing by a video player, the video data comprising a video identifier;
querying a server, with the video identifier by the video player, for video metadata stored within a centralized database on the server, the video metadata comprising video lighting location metadata with coordinates of a light source with respect to the camera that captured the video data, and wherein the video metadata comprises time data indicating time information corresponding to the coordinates of the light source during the capture of the video data;
receiving the video metadata by the video player from the server;
editing the first version of the video data with the received video metadata by the video player to produce a second version of the video data, wherein editing the first version of the video data to produce the second version of the video data includes:
determining, based on the time data and the coordinates of the light source, a change in the light source during the capture of the video data; and
producing the second version of the video data based on the first version of the video data, the second version of the video data produced by adjusting light provided by the light source for the second version of the video data for a time period during the capture of the video data, the light being adjusted using the determined change in the light source, wherein adjusting the light provided by the light source for the second version of the video data includes:
adjusting the light for each object of a plurality of objects in the second version of the video data, wherein the light for the object is adjusted based on object segmentation data and depth map data included in the video metadata for the object, and wherein the light for the object is adjusted by:
creating a set of reference images of the object;
rotating the light source for the light provided for each reference image in the set of reference images;
generating a composite image of the object based on the set of reference images after rotating the light source, wherein the composite image includes information about a location of the light source for the reference image after rotating the light source and includes information about a reflectance of the object;
wherein the light for the object in the second version of the video data is adjusted based on the composite image generated for the object, and wherein the composite image of the object indicates an animated progression of a movement of the light source with respect to the object in the image; and
outputting the second version of the video data to a display by the video player while accessing the first version of the video data.
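The claim's determining step derives a change in the light source from the stored time data and coordinates. A minimal sketch of that step, assuming a time-stamped list of light-source coordinates in the metadata (the names `LightSample`, `VideoMetadata`, and `light_change` are illustrative; the claim does not fix any data layout or algorithm):

```python
from dataclasses import dataclass

@dataclass
class LightSample:
    time_s: float   # time data for this coordinate sample
    coords: tuple   # (x, y, z) of the light source with respect to the camera

@dataclass
class VideoMetadata:
    video_id: str
    light_samples: list  # time-ordered LightSample records

def light_change(meta, t_start, t_end):
    """Determine the change in the light source over a time period of the
    capture, based on the stored time data and coordinates."""
    samples = [s for s in meta.light_samples if t_start <= s.time_s <= t_end]
    if len(samples) < 2:
        return (0.0, 0.0, 0.0)  # no observable change in this period
    first, last = samples[0], samples[-1]
    return tuple(b - a for a, b in zip(first.coords, last.coords))

meta = VideoMetadata("vid-001", [
    LightSample(0.0, (1.0, 2.0, 0.5)),
    LightSample(2.0, (1.5, 2.0, 0.5)),
])
print(light_change(meta, 0.0, 2.0))  # → (0.5, 0.0, 0.0): light moved along x
```

The per-object relighting would then consume this delta together with the object's segmentation and depth-map metadata.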
2 Assignments
0 Petitions
Abstract
Methods and systems for a centralized database for 3-D and other information in videos are presented. A centralized database contains video metadata such as camera, lighting, sound, object, depth, and annotation data that may be queried and used in the editing of videos, including the addition and removal of objects and sounds. The metadata stored in the centralized database may be open to the public and may accept metadata from contributors.
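The abstract describes a centralized store keyed by video, holding several metadata categories (camera, lighting, sound, object, depth, annotation). A minimal sketch of such a store, with hypothetical `store_metadata`/`query_metadata` helpers standing in for whatever server interface an implementation would expose:

```python
# In-memory stand-in for the centralized video-metadata database,
# keyed by video identifier, then by metadata category.
metadata_db = {}

def store_metadata(video_id, category, data):
    """File a metadata record for a video under one category."""
    metadata_db.setdefault(video_id, {})[category] = data

def query_metadata(video_id):
    """Return all stored metadata for a video (empty dict if unknown)."""
    return metadata_db.get(video_id, {})

store_metadata("vid-001", "lighting", {"coords": (1.0, 2.0, 0.5), "time_s": 0.0})
store_metadata("vid-001", "depth", {"map": "depth-map-ref"})
print(sorted(query_metadata("vid-001")))  # → ['depth', 'lighting']
```

A real deployment would put this behind a server endpoint that the video player queries with the video identifier, as the claims describe.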
25 Citations
24 Claims
1. A method comprising:
accessing a first version of video data captured by a camera to use in post-capture processing by a video player, the video data comprising a video identifier;
querying a server, with the video identifier by the video player, for video metadata stored within a centralized database on the server, the video metadata comprising video lighting location metadata with coordinates of a light source with respect to the camera that captured the video data, and wherein the video metadata comprises time data indicating time information corresponding to the coordinates of the light source during the capture of the video data;
receiving the video metadata by the video player from the server;
editing the first version of the video data with the received video metadata by the video player to produce a second version of the video data, wherein editing the first version of the video data to produce the second version of the video data includes:
determining, based on the time data and the coordinates of the light source, a change in the light source during the capture of the video data; and
producing the second version of the video data based on the first version of the video data, the second version of the video data produced by adjusting light provided by the light source for the second version of the video data for a time period during the capture of the video data, the light being adjusted using the determined change in the light source, wherein adjusting the light provided by the light source for the second version of the video data includes:
adjusting the light for each object of a plurality of objects in the second version of the video data, wherein the light for the object is adjusted based on object segmentation data and depth map data included in the video metadata for the object, and wherein the light for the object is adjusted by:
creating a set of reference images of the object;
rotating the light source for the light provided for each reference image in the set of reference images;
generating a composite image of the object based on the set of reference images after rotating the light source, wherein the composite image includes information about a location of the light source for the reference image after rotating the light source and includes information about a reflectance of the object;
wherein the light for the object in the second version of the video data is adjusted based on the composite image generated for the object, and wherein the composite image of the object indicates an animated progression of a movement of the light source with respect to the object in the image; and
outputting the second version of the video data to a display by the video player while accessing the first version of the video data.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 23, 24)
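The reference-image steps (create reference images, rotate the light source per image, composite them with light-location and reflectance information) can be sketched as below, using simple Lambertian shading as an assumed stand-in for whatever rendering an implementation would use; all function names are illustrative:

```python
import math

def shade(normal, light_dir, reflectance):
    """Lambertian shading: clamp(dot(normal, light)) * reflectance."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(d, 0.0) * reflectance

def reference_images(normal, reflectance, angles_deg):
    """One reference render of the object per rotated light-source angle."""
    refs = []
    for a in angles_deg:
        rad = math.radians(a)
        light = (math.cos(rad), math.sin(rad), 0.0)  # rotate light in xy-plane
        refs.append({"angle_deg": a, "value": shade(normal, light, reflectance)})
    return refs

def composite(refs):
    """Combine reference renders, retaining the light locations (angles)
    alongside the blended reflectance value, as the claim requires."""
    return {
        "angles_deg": [r["angle_deg"] for r in refs],
        "value": sum(r["value"] for r in refs) / len(refs),
    }

refs = reference_images((1.0, 0.0, 0.0), 0.8, [0, 45, 90])
comp = composite(refs)
```

The sequence of per-angle renders also supplies the "animated progression" of light movement that the claim attributes to the composite image.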
14. A system comprising:
a video player that performs operations comprising:
accessing a first version of video data, the first version of the video data comprising a video identifier;
querying a server with the video identifier for video metadata stored within a centralized database on the server, the video metadata comprising video lighting location metadata with coordinates of a light source with respect to a camera that captured the video data, and wherein the video metadata comprises time data indicating time information corresponding to the coordinates of the light source during capture of the video data;
receiving the video metadata from the server;
editing the first version of the video data with the received video metadata to produce a second version of the video data, wherein editing the first version of the video data to produce the second version of the video data includes:
determining, based on the time data and the coordinates of the light source, a change in the light source during the capture of the first version of the video data; and
producing the second version of the video data based on the first version of the video data, the second version of the video data produced by adjusting light provided by the light source for the second version of the video data for a time period during the capture of the video data, the light being adjusted using the determined change in the light source, wherein adjusting the light provided by the light source for the second version of the video data includes:
adjusting the light for each object of a plurality of objects in the second version of the video data, wherein the light for the object is adjusted based on object segmentation data and depth map data included in the video metadata for the object, and wherein the light for the object is adjusted by:
creating a set of reference images of the object;
rotating the light source for the light provided for each reference image in the set of reference images;
generating a composite image of the object based on the set of reference images after rotating the light source, wherein the composite image includes information about a location of the light source for the reference image after rotating the light source and includes information about a reflectance of the object;
wherein the light for the object in the second version of the video data is adjusted based on the composite image generated for the object, and wherein the composite image of the object indicates an animated progression of a movement of the light source with respect to the object in the image; and
outputting the second version of the video data for display; and
a server that performs operations comprising:
storing the video metadata.
View Dependent Claims (15, 16, 17, 18, 19)
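Claim 14 splits the operations between a player (query, edit, output) and a server (store metadata). The claim leaves the wire format open; a hedged sketch of one plausible request/response exchange, with assumed JSON field names:

```python
import json

def build_query(video_id):
    """Player side: request metadata for a video by its identifier."""
    return json.dumps({"video_id": video_id,
                       "want": ["lighting", "depth", "segmentation"]})

def handle_query(request_json, db):
    """Server side: look up the video in the centralized database and
    return only the requested metadata categories."""
    req = json.loads(request_json)
    stored = db.get(req["video_id"], {})
    picked = {k: v for k, v in stored.items() if k in req["want"]}
    return json.dumps({"video_id": req["video_id"], "metadata": picked})

db = {"vid-001": {"lighting": {"coords": [1.0, 2.0, 0.5], "time_s": 0.0},
                  "sound": {"track": "ambient"}}}
resp = json.loads(handle_query(build_query("vid-001"), db))
```

Here `resp["metadata"]` carries only the lighting record, since the sound category was not requested; the player would then run the editing steps of the claim against it.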
20. A method comprising:
electronically receiving, by a computing resource, a first metadata contribution from a first contributor for a video, the first metadata contribution identifying information about an environment of the video as captured, the first metadata contribution including information enabling a video player to update the video before being displayed on a display, a first version of the video comprising a video identifier, the video player:
querying the computing resource with the video identifier for the first metadata contribution, the first metadata contribution comprising video lighting location metadata with coordinates of a light source with respect to a camera that captured the video, and wherein the first metadata contribution comprises time data indicating time information corresponding to the coordinates of the light source during the capture of the video;
receiving the first metadata contribution from the computing resource;
editing the first version of the video with the received first metadata contribution to produce a second version of the video, wherein editing the first version of the video to produce the second version of the video includes:
determining, based on the time data and the coordinates of the light source as located during the capture of the first version of the video, and camera movement in relation to an object in the video, a change in the light source during the capture of the video;
producing the second version of the video based on the first version of the video, the second version of the video produced by adjusting light provided by the light source for the second version of the video for a time period during the capture of the video, the light being adjusted using the determined change in the light source, wherein adjusting the light provided by the light source for the second version of the video includes:
adjusting the light for each object of a plurality of objects in the second version of the video, wherein the light for the object is adjusted based on object segmentation data and depth map data included in the received first metadata for the object, and wherein the light for the object is adjusted by:
creating a set of reference images of the object;
rotating the light source for the light provided for each reference image in the set of reference images;
generating a composite image of the object based on the set of reference images after rotating the light source, wherein the composite image includes information about a location of the light source for the reference image after rotating the light source and includes information about a reflectance of the object;
wherein the light for the object in the second version of the video is adjusted based on the composite image generated for the object, and wherein the composite image of the object indicates an animated progression of a movement of the light source with respect to the object in the image; and
outputting the second version of the video for display;
storing, by the computing resource, the first metadata contribution in a centralized video metadata database;
receiving, by the computing resource, from a second contributor a second metadata contribution for a video, the second metadata contribution being received after the first metadata contribution;
determining that the second metadata contribution is more accurate than the first metadata contribution in describing the coordinates of the light source and the camera movement of the video as captured;
editing, by the video player, the first version of the video with the received second metadata contribution to produce a third version of the video that relights the second version of the video consistent with the coordinates of the light source as located during the capture of the first version of the video and camera movement in relation to the object in the video; and
replacing at least a portion of the first metadata contribution stored on the centralized video metadata database with the second metadata contribution.
View Dependent Claims (21, 22)
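The contribution workflow of claim 20 (accept a later contribution, determine it is more accurate, replace the stored one) can be sketched as below. The claim does not specify how accuracy is determined; a lower measured error is used here purely as an assumed stand-in, and `submit_contribution` is a hypothetical name:

```python
def submit_contribution(db, video_id, contribution, accuracy_error):
    """Store a metadata contribution for a video, replacing the existing
    one only if the new contribution describes the capture more
    accurately (lower error under the assumed accuracy measure)."""
    stored = db.get(video_id)
    if stored is None or accuracy_error < stored["error"]:
        db[video_id] = {"metadata": contribution, "error": accuracy_error}
        return True   # contribution stored (first, or replacement)
    return False      # earlier, more accurate contribution retained

db = {}
submit_contribution(db, "vid-001", {"light_coords": (1.0, 2.0, 0.5)}, 0.30)
replaced = submit_contribution(db, "vid-001", {"light_coords": (1.1, 2.0, 0.5)}, 0.10)
rejected = submit_contribution(db, "vid-001", {"light_coords": (9.0, 9.0, 9.0)}, 0.90)
```

On replacement, the player would re-edit the first version of the video with the second contribution to produce the relit third version described in the claim.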
Specification