Method of content adaptive video encoding
First Claim
1. A method comprising:
- assigning a predefined quantization model to each of two regions of interest of video content, wherein a respective region of interest of the two regions of interest comprises a rectangular-based region covering less than a full frame identified on a frame-by-frame basis by a first sub-segment in the respective region of interest and a last sub-segment in the respective region of interest, wherein the first sub-segment and the last sub-segment define a structure based on a number of sub-segments in the respective region of interest; and
- encoding each of the two regions of interest differently based on the predefined quantization model assigned to the respective region of interest.
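The claim above describes a region of interest identified by its first and last sub-segments, with each region assigned its own quantization model. A minimal sketch of that structure follows; the class name, the use of macroblock indices as sub-segments, and the string model labels are all illustrative assumptions, not the patent's actual representation.

```python
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    # Hypothetical representation: a region identified per frame by its
    # first and last sub-segment (e.g. macroblock) indices, as in claim 1.
    first_subsegment: int   # first sub-segment in the region
    last_subsegment: int    # last sub-segment in the region
    quant_model: str        # predefined quantization model assigned to it

    @property
    def num_subsegments(self) -> int:
        # the first/last pair defines the region's structure
        # based on the number of sub-segments it spans
        return self.last_subsegment - self.first_subsegment + 1

def encode_frame(regions):
    """Sketch: quantize each region's sub-segments under its own model."""
    coded = []
    for roi in regions:
        for idx in range(roi.first_subsegment, roi.last_subsegment + 1):
            coded.append((idx, roi.quant_model))  # per-model quantization
    return coded

# two regions of interest, each encoded differently
rois = [RegionOfInterest(0, 3, "fine"), RegionOfInterest(4, 9, "coarse")]
bits = encode_frame(rois)
```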
Abstract
A method of content-adaptive video encoding is disclosed. The method comprises segmenting video content into segments based on predefined classifications or models. Examples of such classifications include action scenes, slow scenes, low- or high-detail scenes, and the brightness of the scenes. Based on the segment classifications, each segment is encoded with a different encoder chosen from a plurality of encoders. Each encoder is associated with a model. The chosen encoder is particularly suited to encoding the unique subject matter of the segment. The coded bitstream for each segment includes information regarding which encoder was used to encode that segment. A matching decoder of a plurality of decoders is chosen using the information in the coded bitstream to decode each segment using a decoder suited for the classification or model of the segment. Scenes that do not fall within a predefined classification, or whose content makes classification more difficult, are segmented, coded, and decoded using a generic coder and decoder.
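The abstract's pipeline can be sketched as follows: classify each segment, encode it with the encoder matched to that classification, record the encoder identity in the coded bitstream, and use that identity to select a matching decoder, falling back to a generic codec for unclassifiable scenes. The classifier, the codec stand-ins, and the header format below are all illustrative assumptions, not the patent's actual encoders.

```python
# Toy codecs keyed by classification; "generic" is the fallback for
# scenes that do not fit a predefined classification (hypothetical).
ENCODERS = {
    "action": lambda seg: f"A[{seg}]",
    "slow": lambda seg: f"S[{seg}]",
    "generic": lambda seg: f"G[{seg}]",
}
DECODERS = {
    "action": lambda payload: payload[2:-1],
    "slow": lambda payload: payload[2:-1],
    "generic": lambda payload: payload[2:-1],
}

def classify(segment: str) -> str:
    # toy classifier: a real system would inspect motion, detail, brightness
    if "fight" in segment:
        return "action"
    if "landscape" in segment:
        return "slow"
    return "generic"

def encode(segment: str) -> str:
    model = classify(segment)
    # the header records which encoder was used, so a matching
    # decoder can be chosen from the plurality of decoders
    return f"{model}:{ENCODERS[model](segment)}"

def decode(bitstream: str) -> str:
    model, payload = bitstream.split(":", 1)
    return DECODERS[model](payload)

coded = encode("fight scene")
```

The key design point mirrored here is that the bitstream is self-describing: the decoder never re-runs the classifier, it reads the encoder identity from the header.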
11 Claims
1. A method comprising:
- assigning a predefined quantization model to each of two regions of interest of video content, wherein a respective region of interest of the two regions of interest comprises a rectangular-based region covering less than a full frame identified on a frame-by-frame basis by a first sub-segment in the respective region of interest and a last sub-segment in the respective region of interest, wherein the first sub-segment and the last sub-segment define a structure based on a number of sub-segments in the respective region of interest; and
- encoding each of the two regions of interest differently based on the predefined quantization model assigned to the respective region of interest.

Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9
10. A method comprising:
- assigning a predefined content model to a region of interest in video content, wherein the region of interest comprises a rectangular-based region covering less than a full frame identified on a frame-by-frame basis by a first descriptor associated with a first sub-segment in the region of interest and a second descriptor associated with a last sub-segment in the region of interest; and
- encoding at an encoder the region of interest differently from a portion of the full frame not covered by the region of interest based on the assigned predefined content model, the encoder having an associated predefined content model that is different from other encoders of a plurality of encoders, the encoder adding header information to instruct a decoder of a plurality of decoders how to decode the region of interest based on the assigned predefined content model.

Dependent Claims: 11
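Claim 10's core idea, encoding the region of interest differently from the rest of the frame and signaling the content model in a header, can be sketched with a per-sub-segment quantization-parameter map. The QP values, the "face" model label, and the header layout are illustrative assumptions only.

```python
# Hypothetical sketch: sub-segments inside the region of interest are
# quantized more finely than the portion of the frame outside it, and a
# header records the assigned content model so a matching decoder can
# be selected. Values below are illustrative, not from the patent.
ROI_QP, BACKGROUND_QP = 12, 36

def qp_map(num_subsegments, roi_first, roi_last):
    """Per-sub-segment quantization parameters for one frame."""
    return [ROI_QP if roi_first <= i <= roi_last else BACKGROUND_QP
            for i in range(num_subsegments)]

def encode_frame(num_subsegments, roi_first, roi_last, model="face"):
    # header information instructs the decoder how to decode the
    # region of interest under the assigned content model
    header = {"model": model, "roi": (roi_first, roi_last)}
    return header, qp_map(num_subsegments, roi_first, roi_last)

header, qps = encode_frame(8, 2, 4)
```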
Specification