Video compression repository and model reuse
Abstract
Systems and methods of improving video encoding/decoding efficiency may be provided. A feature-based processing stream is applied to video data having a series of video frames. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks, and each track is given a representative, characteristic feature. Similar characteristic features are clustered and then stored in a model library, for reuse in the compression of other videos. A model-based compression framework makes use of the preserved model data by detecting features in a new video to be encoded, relating those features to specific blocks of data, and accessing similar model information from the model library. The formation of model libraries can be specialized to include personal, “smart” model libraries, differential libraries, and predictive libraries. Predictive model libraries can be modified to handle a variety of demand scenarios.
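As a concrete illustration of the library-building pipeline described in the abstract, the following is a minimal sketch, assuming OpenCV ORB features and BGR frames supplied as numpy arrays: features are detected per frame, associated across frames by descriptor matching into tracks, and each sufficiently long track is kept as a feature model holding a characteristic descriptor, its per-frame locations, and its pixel patches. The names FeatureModel and build_library are illustrative only, not the patent's terminology, and a production system would additionally cluster similar characteristic features (as the abstract describes) before storing them in the library.

import cv2
import numpy as np
from dataclasses import dataclass, field

@dataclass
class FeatureModel:
    descriptor: np.ndarray                             # characteristic descriptor of the track
    locations: list = field(default_factory=list)      # (frame_index, x, y) per instance
    patches: list = field(default_factory=list)        # stored pixels ("explicit" model data)

def build_library(frames, patch=16, max_feats=200):
    """Detect features per frame, associate them across frames into tracks,
    and keep stable tracks as reusable feature models."""
    orb = cv2.ORB_create(nfeatures=max_feats)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    tracks = []
    prev_des, prev_ids = None, None
    for t, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if des is None or len(kp) == 0:
            prev_des, prev_ids = None, None
            continue
        ids = [-1] * len(kp)
        if prev_des is not None:
            # continue existing tracks via mutual nearest-neighbour matches
            for m in matcher.match(prev_des, des):
                ids[m.trainIdx] = prev_ids[m.queryIdx]
        for i, k in enumerate(kp):
            if ids[i] == -1:                            # start a new track
                ids[i] = len(tracks)
                tracks.append(FeatureModel(descriptor=des[i].copy()))
            x, y = int(k.pt[0]), int(k.pt[1])
            tracks[ids[i]].locations.append((t, x, y))
            tracks[ids[i]].patches.append(gray[y:y + patch, x:x + patch].copy())
        prev_des, prev_ids = des, ids
    return [m for m in tracks if len(m.locations) >= 3]    # keep features seen in several frames

The returned list of FeatureModel objects stands in for the "global feature model library" that the claims below reuse across multiple input videos.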
17 Claims
1. A method of providing video data, comprising:
encoding a subject video stream by a feature-based compression process that utilizes feature models from a global feature model library, said encoding implicitly using the feature models to indicate macroblocks in the subject video to encode by forming tracks using location information from one or more of the feature models in multiple reference frames, the tracks indicating respective locations of macroblocks in a current frame of the subject video stream being encoded, and explicitly using pixels of the feature models to improve prediction of said macroblocks relative to a first prediction of said macroblocks, wherein the encoding results in encoded video data, wherein the use of feature models to indicate macroblocks in the subject video avoids the need to segment features from non-features during the feature-based compression process of the subject video, and wherein the feature-based compression process applies feature-based prediction across multiple different video sources based on the feature models, the multiple different video sources being at least the subject video stream and one or more input videos used to generate the global feature model library;

transmitting the encoded video data to a requesting device upon command, said feature models from the global feature model library being made accessible to the requesting device and enabling decoding of the encoded video data at the requesting device;

wherein the global feature model library is formed by:

receiving the one or more input videos, each input video being different from the subject video stream;

for each of the input videos, generating feature information and a respective feature model; and

storing in a data storage device or on cloud storage the feature models generated from the input videos, the data storage device or cloud storage providing pertinent feature models to the feature-based compression process and the requesting device.

Dependent claims: 2, 3, 4, 5, 6, 7. A sketch of the recited encoding steps follows.
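The following is a small, hypothetical sketch of the two uses of feature models recited in this claim: the "implicit" use, in which a feature's locations in multiple reference frames form a track whose extrapolation selects the macroblock of the current frame to encode, and the "explicit" use, in which the model's stored pixels refine a first (e.g., motion-compensated) prediction of that macroblock. The 16x16 macroblock size, the constant-velocity extrapolation, and the fixed blending weight are assumptions made for illustration, not requirements of the claim.

import numpy as np

MB = 16   # assumed macroblock size in pixels

def track_to_macroblock(locations, frame_shape):
    """Implicit use: extrapolate the feature's last two reference-frame
    locations to the current frame and snap to the containing macroblock."""
    (_, x0, y0), (_, x1, y1) = locations[-2], locations[-1]
    x, y = x1 + (x1 - x0), y1 + (y1 - y0)               # constant-velocity guess
    h, w = frame_shape[:2]
    mb_x = min(max(x // MB, 0), w // MB - 1)
    mb_y = min(max(y // MB, 0), h // MB - 1)
    return mb_x, mb_y

def refine_prediction(first_pred, model_patch, alpha=0.5):
    """Explicit use: blend a first prediction of the macroblock with the
    feature model's stored pixels; a real encoder would choose alpha (or a
    more elaborate combination) by rate-distortion cost."""
    patch = np.zeros_like(first_pred, dtype=np.float32)
    src = model_patch[:MB, :MB].astype(np.float32)
    patch[:src.shape[0], :src.shape[1]] = src
    blended = alpha * first_pred.astype(np.float32) + (1 - alpha) * patch
    return np.clip(blended, 0, 255).astype(np.uint8)

Because the track already indicates which macroblocks carry features, the encoder never has to segment features from non-features; it simply looks up the indicated macroblock and refines its prediction with the model's pixels.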
8. A video data system comprising:
a model library storing video data and serving as a source of streaming video; and

a codec operatively coupled to the model library, wherein, in response to a request for a subject video, the codec is executed by a processor to (i) encode stored video data in the model library corresponding to the requested subject video and (ii) stream the encoded video data from the model library, wherein the codec applies feature-based prediction and compression using feature models from a global feature model library, wherein the global feature model library is formed by:

receiving one or more input videos, each input video being different from the stored video data in the model library corresponding to the requested subject video;

for each of the input videos, generating feature information and respective feature models, such that the codec applies feature-based compression, which includes implicitly using the feature models to indicate which macroblocks in the subject video to encode by forming tracks using location information from one or more of the feature models in multiple reference frames, the tracks indicating respective locations of macroblocks in a current frame of the subject video stream being encoded, and explicitly using the pixels of the feature models to improve prediction of said macroblocks relative to a first prediction of said macroblocks, wherein association of the feature models from the global feature model library with macroblocks in the subject video avoids the need to segment features from non-features during the feature-based compression process of the subject video; and

storing in a data storage device or on cloud storage the feature models generated from the input videos, the data storage device or cloud storage providing pertinent feature models to the feature-based compression process and the requesting device.

Dependent claims: 9, 10, 11, 12, 13, 14, 15, 16, 17. A sketch of this system arrangement follows.
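As a rough, illustrative outline of the system shape recited in claim 8 (a model library serving as the source of streaming video, with a codec that encodes and streams on request), the class below is an assumption-laden sketch rather than the patent's API: VideoSystem, ingest, stream, and encode_frame are invented names, and encode_frame is a placeholder for a real feature-based codec such as the one sketched under claim 1.

from typing import Dict, Iterator, List
import numpy as np

class VideoSystem:
    """Model library plus codec: stores source video and its feature models,
    and on request encodes and streams the video using those models."""

    def __init__(self):
        self.videos: Dict[str, List[np.ndarray]] = {}    # stored video data
        self.library: Dict[str, list] = {}                # global feature model library

    def ingest(self, name: str, frames: List[np.ndarray], feature_models: list) -> None:
        # feature_models would come from a library builder such as build_library above
        self.videos[name] = frames
        self.library[name] = feature_models

    def stream(self, name: str) -> Iterator[bytes]:
        """Respond to a request: encode the stored video using the pertinent
        feature models and yield the encoded data frame by frame."""
        models = self.library.get(name, [])
        for frame in self.videos[name]:
            yield encode_frame(frame, models)

def encode_frame(frame: np.ndarray, models: list) -> bytes:
    # Placeholder: a real codec would combine the feature-based prediction
    # sketched above with transform, quantization, and entropy coding.
    return frame.tobytes()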
Specification