System and method for generating semantic annotations
First Claim
1. A method, comprising:
receiving a new video from one or more sensors;
generating a new content graph (CG) based on the new video;
comparing the new CG with a plurality of prior CGs, wherein the plurality of prior CGs are generated from a plurality of previously received videos;
identifying a first portion of the new CG matching a portion of a first prior CG among the plurality of prior CGs and a second portion of the new CG matching a portion of a second prior CG among the plurality of prior CGs;
analyzing a first set of semantic annotations (SAs) associated with the portion of the first prior CG and a second set of SAs associated with the portion of the second prior CG; and
generating a sequence of SAs that temporally corresponds with the new video by combining the first set of SAs and the second set of SAs based on the analysis of the first and the second set of SAs.
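As an illustration only (not the patented implementation), the claimed steps can be sketched by modeling a content graph as a set of (subject, relation, object) triples and a semantic annotation as a time-stamped caption; all names and the set-overlap matching heuristic below are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    """A semantic annotation (SA) tied to a time span of a video."""
    start: float
    end: float
    text: str

def matching_portion(new_cg, prior_cg):
    """Identify the portion of the new CG matching a prior CG (triple overlap)."""
    return new_cg & prior_cg

def generate_sa_sequence(new_cg, prior_cgs):
    """Combine the SAs of every prior CG whose portion matches the new CG,
    ordered by start time so the sequence temporally corresponds to the video."""
    collected = []
    for prior_cg, annotations in prior_cgs:
        if matching_portion(new_cg, prior_cg):  # analyze SAs of matched portions only
            collected.extend(annotations)
    return sorted(collected, key=lambda a: a.start)
```

For example, a new video whose CG contains both `("person", "opens", "door")` and `("person", "enters", "room")` would inherit the SAs of two prior videos that each matched one of those triples, merged into a single time-ordered sequence.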
5 Assignments
0 Petitions
Abstract
In accordance with one aspect of the present technique, a method is disclosed. The method includes receiving a new video from one or more sensors and generating a new content graph (CG) based on the new video. The method also includes comparing the new CG with a plurality of prior CGs. The method further includes identifying a first portion of the new CG matching a portion of a first prior CG and a second portion of the new CG matching a portion of a second prior CG. The method further includes analyzing a first set of semantic annotations (SAs) associated with the portion of the first prior CG and a second set of SAs associated with the portion of the second prior CG. The method further includes generating a sequence of SAs for the new video based on the analysis of the first and second sets of SAs.
20 Claims
1. A method, comprising:
receiving a new video from one or more sensors;
generating a new content graph (CG) based on the new video;
comparing the new CG with a plurality of prior CGs, wherein the plurality of prior CGs are generated from a plurality of previously received videos;
identifying a first portion of the new CG matching a portion of a first prior CG among the plurality of prior CGs and a second portion of the new CG matching a portion of a second prior CG among the plurality of prior CGs;
analyzing a first set of semantic annotations (SAs) associated with the portion of the first prior CG and a second set of SAs associated with the portion of the second prior CG; and
generating a sequence of SAs that temporally corresponds with the new video by combining the first set of SAs and the second set of SAs based on the analysis of the first and the second set of SAs.
Dependent claims: 2, 3, 4, 5, 6, 7, 8.
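The comparing and identifying steps above hinge on finding which prior CGs share a portion with the new CG. A minimal sketch of one plausible heuristic (Jaccard similarity over edge triples, with an assumed threshold; the claim itself does not specify a matching criterion):

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of graph triples."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def best_matching_priors(new_cg, prior_cgs, threshold=0.2):
    """Compare the new CG against each named prior CG and keep those whose
    shared portion clears the (assumed) similarity threshold, best first."""
    matches = []
    for name, prior in prior_cgs.items():
        score = jaccard(new_cg, prior)
        if score >= threshold:
            matches.append((name, new_cg & prior, score))
    return sorted(matches, key=lambda m: -m[2])
```

Each returned tuple carries the prior CG's identifier, the matched portion of the new CG, and the similarity score used to rank it.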
9. A system, comprising:
at least one processor;
a graph module stored in a memory and executable by the at least one processor, the graph module configured for receiving a new video from one or more sensors and generating a new content graph (CG) based on the new video;
a comparison module stored in the memory and executable by the at least one processor, the comparison module communicatively coupled to the graph module for comparing the new CG with a plurality of prior CGs and identifying a first portion of the new CG matching a portion of a first prior CG among the plurality of prior CGs and a second portion of the new CG matching a portion of a second prior CG among the plurality of prior CGs, wherein the plurality of prior CGs are generated from a plurality of previously received videos; and
a narrative module stored in the memory and executable by the at least one processor, the narrative module communicatively coupled to the comparison module for analyzing a first set of semantic annotations (SAs) associated with the portion of the first prior CG and a second set of SAs associated with the portion of the second prior CG and generating a sequence of SAs that temporally corresponds with the new video by combining the first set of SAs and the second set of SAs based on the analysis of the first and the second set of SAs.
Dependent claims: 10, 11, 12, 13.
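The three claimed modules form a pipeline: graph module feeds comparison module feeds narrative module. A hedged structural sketch of that coupling (class and method names are assumptions; the graph module is stubbed, since the claim does not describe how the CG is extracted from video):

```python
class GraphModule:
    """Builds a content graph (CG) from a new video; stubbed for illustration."""
    def generate(self, detections):
        # A real module would run detection/tracking on video frames; here the
        # CG is simply the set of supplied (subject, relation, object) triples.
        return frozenset(detections)

class ComparisonModule:
    """Matches portions of the new CG against stored prior CGs."""
    def __init__(self, prior_cgs):
        self.prior_cgs = prior_cgs  # list of (prior_cg, annotations) pairs
    def match(self, new_cg):
        return [(cg & new_cg, sas) for cg, sas in self.prior_cgs if cg & new_cg]

class NarrativeModule:
    """Combines the SAs of all matched portions into a time-ordered sequence."""
    def narrate(self, matches):
        sas = [sa for _, sa_list in matches for sa in sa_list]
        return sorted(sas, key=lambda sa: sa[0])  # sa = (start, end, text)
```

Chaining them mirrors the claimed data flow: `narrative.narrate(comparison.match(graph.generate(video_triples)))`.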
14. A computer program product comprising a non-transitory computer readable medium encoded with instructions that, in response to execution by at least one processor, cause the processor to perform operations comprising:
receiving a new video from one or more sensors;
generating a new content graph (CG) based on the new video;
comparing the new CG with a plurality of prior CGs, wherein the plurality of prior CGs are generated from a plurality of previously received videos;
identifying a first portion of the new CG matching a portion of a first prior CG among the plurality of prior CGs and a second portion of the new CG matching a portion of a second prior CG among the plurality of prior CGs;
analyzing a first set of semantic annotations (SAs) associated with the portion of the first prior CG and a second set of SAs associated with the portion of the second prior CG; and
generating a sequence of SAs that temporally corresponds with the new video by combining the first set of SAs and the second set of SAs based on the analysis of the first and the second set of SAs.
Dependent claims: 15, 16, 17, 18, 19, 20.
Specification