Content adaptive video encoder
Abstract
A system for content adaptive encoding and decoding of video is disclosed. The system comprises modules for segmenting video content into segments based on predefined classifications or models. Examples of such classifications comprise action scenes, slow scenes, low or high detail scenes, and brightness of the scenes. Based on the segment classifications, each segment is encoded with a different encoder chosen from a plurality of encoders. Each encoder is associated with a model. The chosen encoder is particularly suited to encoding the unique subject matter of the segment. The coded bitstream for each segment includes information regarding which encoder was used to encode that segment. A matching decoder of a plurality of decoders is chosen using the information in the coded bitstream to decode each segment using a decoder suited for the classification or model of the segment. Scenes that do not fall into a predefined classification, or whose content makes classification more difficult, are segmented, coded and decoded using a generic coder and decoder.
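The round trip the abstract describes — tag each segment's coded bitstream with the model used to encode it, let the decoder pick the matching decoder, and fall back to a generic coder when no predefined classification fits — can be sketched as follows. The model names, the toy "coders", and all function names are illustrative assumptions, not the patent's actual formats:

```python
# Toy per-model coders: each "encoder" emits (model_label, coded_data) so the
# decoder can select its counterpart from the label alone. Real coders would
# be tuned to the scene class (action, slow, high detail, ...).
ENCODERS = {
    "action":  lambda frames: ("action",  [f * 2 for f in frames]),
    "slow":    lambda frames: ("slow",    [f + 1 for f in frames]),
    "generic": lambda frames: ("generic", list(frames)),
}
DECODERS = {
    "action":  lambda data: [d // 2 for d in data],
    "slow":    lambda data: [d - 1 for d in data],
    "generic": lambda data: list(data),
}

def encode_segment(frames, model):
    """Encode one segment; unknown classifications use the generic coder."""
    coder = ENCODERS.get(model, ENCODERS["generic"])
    return coder(frames)  # the coded bitstream carries the model label

def decode_segment(coded):
    """Read the model label from the coded bitstream and pick the decoder."""
    model, data = coded
    return DECODERS[model](data)
```

The key design point from the abstract is that the encoder choice travels in-band with the segment, so the decoder needs no side channel to stay matched.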
238 Citations
12 Claims
1. A system for content adaptive encoding of data comprising:
(1) an extractor that divides video content into temporal portions;
(2) a locator that associates descriptors to each portion based on portion content, the locator further locating at least one of subsegments and regions of interest;
(3) a mapper that maps each portion of the video content to a model from a plurality of models based on the portion descriptors, the mapper further comprising:
(a) a plurality of content model units, each content model unit of the plurality of content model units being associated with a model of the plurality of models;
(b) a plurality of comparators, each comparator of the plurality of comparators connected to a content model unit and an output from the extractor or an output from the locator; and
(c) a plurality of selectors, wherein each selector of the plurality of selectors is connected to two of the comparators; and
(4) a plurality of encoders, each encoder of the plurality of encoders configured to encode portions according to the model associated with the portion. - View Dependent Claims (2, 3)
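Under one reading of the mapper in claim 1, each comparator scores how well the portion descriptors from the extractor/locator match one content model unit, and each selector passes through the better of two comparator results — a small tournament ending in the winning model. A minimal sketch under that assumption (the reference descriptors and squared-distance metric are illustrative, not from the patent):

```python
# Content model units: one reference descriptor per model.
MODEL_UNITS = {
    "action":      {"motion": 0.9, "detail": 0.5},
    "slow":        {"motion": 0.1, "detail": 0.4},
    "high_detail": {"motion": 0.4, "detail": 0.9},
    "low_detail":  {"motion": 0.4, "detail": 0.1},
}

def comparator(model, portion_desc):
    """Compare one content model unit against the extractor/locator output."""
    ref = MODEL_UNITS[model]
    dist = sum((ref[k] - portion_desc[k]) ** 2 for k in ref)
    return (model, dist)

def selector(a, b):
    """Pass through whichever of two comparator results matches better."""
    return a if a[1] <= b[1] else b

def map_portion(portion_desc):
    """Run all comparators, then pair them into selectors (claim 1(c))."""
    scores = [comparator(m, portion_desc) for m in MODEL_UNITS]
    s1 = selector(scores[0], scores[1])
    s2 = selector(scores[2], scores[3])
    return selector(s1, s2)[0]
```

With four model units this needs four comparators and three selectors, consistent with each selector being "connected to two of the comparators".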
4. A system for content adaptive encoding of data comprising:
(1) means for dividing video content into temporal portions;
(2) means for associating descriptors to each portion based on portion content, the means for associating descriptors further locating at least one of subsegments and regions of interest;
(3) means for mapping each portion of the video content to a model from a plurality of models based on the portion descriptors, the means for mapping further comprising:
(a) a plurality of content model units, each content model unit of the plurality of content model units being associated with a model of the plurality of models;
(b) a plurality of comparators, each comparator of the plurality of comparators connected to a content model unit and an output from the extractor or an output from the locator; and
(c) a plurality of selectors, wherein each selector of the plurality of selectors is connected to two of the comparators; and
(4) a plurality of encoders, each encoder of the plurality of encoders configured to encode portions according to the model associated with the portion. - View Dependent Claims (5, 6)
7. A method for content adaptive encoding of data, the method comprising:
(1) dividing video content into temporal portions;
(2) associating descriptors to each portion based on portion content, the associating further locating at least one of subsegments and regions of interest;
(3) mapping each portion of the video content to a model from a plurality of models based on the portion descriptors, the step of mapping each portion utilizing:
(a) a plurality of content model units, each content model unit of the plurality of content model units being associated with a model of the plurality of models;
(b) a plurality of comparators, each comparator of the plurality of comparators connected to a content model unit and an output from the extractor or an output from the locator; and
(c) a plurality of selectors, wherein each selector of the plurality of selectors is connected to two of the comparators; and
(4) encoding each portion of the video content using one of a plurality of encoders, each encoder of the plurality of encoders configured to encode portions according to the model associated with the portion. - View Dependent Claims (8, 9)
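The four method steps of claim 7 — divide into temporal portions, associate descriptors, map to a model, encode with the model's encoder — line up as a simple pipeline. A minimal sketch, assuming toy integer "frames", a single motion descriptor, and a two-model set (none of these specifics are from the patent):

```python
def divide(frames, portion_len=3):
    """Step (1): split the frame sequence into fixed-length temporal portions."""
    return [frames[i:i + portion_len] for i in range(0, len(frames), portion_len)]

def describe(portion):
    """Step (2): derive a descriptor from portion content (here: mean frame delta)."""
    deltas = [abs(b - a) for a, b in zip(portion, portion[1:])]
    return {"motion": sum(deltas) / max(len(deltas), 1)}

def map_to_model(desc):
    """Step (3): map the descriptors to one model of a predefined plurality."""
    return "action" if desc["motion"] > 5 else "slow"

# Step (4): one encoder per model; each encoder emits its model label so
# downstream decoding can stay matched, as in the abstract.
ENCODERS = {
    "action": lambda p: ("action", [x * 2 for x in p]),
    "slow":   lambda p: ("slow", list(p)),
}

def encode(frames):
    """Run the full claim-7 pipeline over a frame sequence."""
    out = []
    for portion in divide(frames):
        model = map_to_model(describe(portion))
        out.append(ENCODERS[model](portion))
    return out
```

Each portion flows through the steps independently, which is what lets differently classified segments of one video use different encoders.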
10. A computer-readable medium storing instructions for controlling a computing device to adaptively encode data based on content, the instructions comprising:
(1) dividing video content into temporal portions;
(2) associating descriptors to each portion based on portion content, the associating further locating at least one of subsegments and regions of interest;
(3) mapping each portion of the video content to a model from a plurality of models based on the portion descriptors, the step of mapping each portion utilizing:
(a) a plurality of content model units, each content model unit of the plurality of content model units being associated with a model of the plurality of models;
(b) a plurality of comparators, each comparator of the plurality of comparators connected to a content model unit and an output from the extractor or an output from the locator; and
(c) a plurality of selectors, wherein each selector of the plurality of selectors is connected to two of the comparators; and
(4) encoding each portion of the video content using one of a plurality of encoders, each encoder of the plurality of encoders configured to encode portions according to the model associated with the portion. - View Dependent Claims (11, 12)
Specification