Automatically generating notes and classifying multimedia content specific to a video production
Abstract
Automatically classifying multimedia content that is specific to a video production includes: obtaining, from a video sensor embedded in a video capturing device that captures a video associated with a first user and a second user, video sensor data comprising a time series of location data, direction data, orientation data, and a position of the first user and the second user being recorded; identifying, for any given duration in the video, corresponding first user data, corresponding second user data, and corresponding video sensor data; annotating the video with these data to obtain annotated multimedia content; comparing a data pattern of the annotated multimedia content with data patterns of a script stored in a database to obtain a recommended section; and automatically classifying the annotated multimedia content by associating it with the recommended section.
20 Claims
1. A method of automatically classifying multimedia content that is specific to a video production based on a user context, said method comprising:

obtaining, from a video sensor embedded in a video capturing device that captures a video associated with a first user and a second user, video sensor data comprising a time series of location data, direction data, orientation data, and a position of said first user and said second user being recorded;

identifying, for any given duration associated with said video, corresponding first user data, corresponding second user data, and corresponding video sensor data;

annotating said video with said corresponding first user data, said corresponding second user data, and said corresponding video sensor data to obtain an annotated multimedia content;

performing, by a processor, a comparison of a data pattern of said annotated multimedia content with data patterns that correspond to a plurality of predefined sections of a script stored in a database to obtain a recommended section for said annotated multimedia content, wherein said plurality of predefined sections are specific to said video production, and wherein said data pattern comprises first user data, second user data, and video sensor data associated with a section of said annotated multimedia content; and

automatically classifying, by said processor, said annotated multimedia content by associating said annotated multimedia content with said recommended section.

(Dependent claims 2-11 not shown.)
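The claimed pipeline (annotate a span of video with its sensor series, then match that series against the data patterns of predefined script sections) can be sketched as follows. Everything here is illustrative: the `SensorSample` and `ScriptSection` types, the feature set, and the distance measure are assumptions for this sketch, not structures named by the patent.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-ins for the claimed data structures.
@dataclass
class SensorSample:
    t: float            # timestamp within the video, in seconds
    location: tuple     # (x, y) location data
    direction: float    # camera heading, in degrees
    orientation: float  # camera orientation, in degrees

@dataclass
class ScriptSection:
    name: str      # predefined section of the script
    pattern: list  # expected SensorSample series for this section

def annotate(duration, samples):
    """Attach the sensor samples that fall inside the given duration."""
    start, end = duration
    return [s for s in samples if start <= s.t <= end]

def distance(a, b):
    """Crude pattern distance: mean per-sample feature difference."""
    diffs = [abs(x.direction - y.direction) +
             abs(x.orientation - y.orientation)
             for x, y in zip(a, b)]
    return sum(diffs) / max(len(diffs), 1)

def classify(annotated, sections):
    """Return the predefined section whose data pattern best matches."""
    return min(sections, key=lambda sec: distance(annotated, sec.pattern))
```

A production system would compare richer patterns (both users' data plus the video sensor series, per the claim), but the shape of the comparison-and-classify step is the same: annotate, score each predefined section, pick the minimum.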
12. A system for automatically classifying multimedia content that is specific to a video production based on a user context, said system comprising:
a video capturing device that captures a video associated with a first user and a second user;

a video sensor embedded in said video capturing device, wherein said video sensor captures video sensor data comprising a time series of location data, direction data, orientation data, and a position of said first user and said second user being recorded;

a memory unit that stores instructions;

a database operatively connected to said memory unit; and

a processor that, when configured by said instructions, executes a set of modules comprising:

a sensor data obtaining module that obtains first user data, second user data, and said video sensor data;

an identification module that identifies, for any given duration associated with said video, corresponding first user data, corresponding second user data, and corresponding video sensor data;

an annotation module that annotates said video with said corresponding first user data, said corresponding second user data, and said corresponding video sensor data to obtain an annotated multimedia content;

a comparison module that performs a comparison of a data pattern of said annotated multimedia content with data patterns that correspond to a plurality of predefined sections of a script stored in said database to obtain a recommended section, wherein said plurality of predefined sections are specific to said video production, and wherein said data pattern comprises first user data, second user data, and video sensor data associated with a section of said annotated multimedia content; and

a classification module that automatically classifies said annotated multimedia content by associating said annotated multimedia content with said recommended section.

(Dependent claims 13-18 not shown.)
19. A system for automatically generating notes for a multimedia content that is specific to a video production based on a user context, said system comprising:
a first audio capturing device adapted to be attached to a first user, wherein said first audio capturing device captures a first audio;

a second audio capturing device adapted to be attached to a second user, wherein said second audio capturing device captures a second audio;

a first audio sensor coupled to said first audio capturing device, wherein said first audio sensor captures first user data comprising a time series of location data, direction data, and orientation data associated with said first user;

a second audio sensor coupled to said second audio capturing device, wherein said second audio sensor captures second user data comprising a time series of location data, direction data, and orientation data associated with said second user;

a video capturing device that captures a video associated with said first user and said second user;

a video sensor embedded in said video capturing device, wherein said video sensor captures video sensor data comprising a time series of location data, direction data, orientation data, and a position of said first user and said second user being recorded;

a memory unit that stores instructions;

a database operatively connected to said memory unit; and

a processor that, when configured by said instructions, executes a set of modules comprising:

a first module that obtains said first user data, said second user data, and said video sensor data;

a second module that identifies, for any given duration associated with said first audio, said second audio, or said video, corresponding first user data, corresponding second user data, and corresponding video sensor data;

a third module that annotates at least one of said first audio, said second audio, and said video with said corresponding first user data, said corresponding second user data, and said corresponding video sensor data to obtain an annotated multimedia content;

a fourth module that performs a comparison of a data pattern of said annotated multimedia content with data patterns that correspond to a plurality of predefined sections of a script stored in said database to obtain a recommended section, wherein said plurality of predefined sections are specific to said video production, and wherein said data pattern comprises first user data, second user data, and video sensor data associated with a section of said annotated multimedia content;

a fifth module that automatically classifies said annotated multimedia content by associating said annotated multimedia content with said recommended section; and

a sixth module that automatically generates notes for said recommended section of said annotated multimedia content from at least one of said first user data, said second user data, and said video sensor data that are associated with said section, and said predefined data based on said comparison.

(Dependent claim 20 not shown.)
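Claim 19's sixth module turns the matched script section and its annotated sensor series into notes for that take. A minimal sketch of what such a note generator could look like follows; the note format and the `(timestamp, heading)` sample shape are assumptions for illustration, not the patent's specification.

```python
def generate_notes(section_name, samples):
    """Hypothetical note generator for a recommended script section.

    samples: list of (timestamp_seconds, camera_heading_degrees) tuples
    drawn from the sensor data annotated onto the matched take.
    """
    if not samples:
        return f"[{section_name}] no sensor data recorded"
    start, end = samples[0][0], samples[-1][0]
    mean_heading = sum(h for _, h in samples) / len(samples)
    return (f"[{section_name}] take covers {start:.1f}s-{end:.1f}s, "
            f"mean camera heading {mean_heading:.0f} deg")
```

In the claimed system, the generated notes would also draw on the first and second user data and on the predefined data matched during the comparison; this sketch summarizes only the video sensor series to keep the idea visible.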