End-to-end video and image compression
Abstract
A system (e.g., an auto-encoder system) includes an encoder, a decoder, and a learning module. The encoder generates compressed video data using a lossy compression algorithm, the lossy compression algorithm being implemented using a trained neural network with at least one convolution, generates at least one first parameter based on the compressed video data, and communicates the compressed video data and the model to at least one device configured to decode the compressed video data using an inverse algorithm based on the lossy compression algorithm. The decoder generates decoded video data based on the compressed video data using the inverse algorithm and the model, and generates at least one second parameter based on the decoded video data. The learning module trains the model using the at least one first parameter and the at least one second parameter.
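The abstract's pipeline (select a model, compress, derive a rate-side and a distortion-side parameter, retrain) can be sketched in a few lines. This is a minimal illustrative sketch only: the "model" below is a uniform-quantizer step size standing in for the patent's neural network, and every function name and update rule is an assumption, not the patented implementation.

```python
# Hedged sketch of the abstract's encode/decode/retrain loop.
# The quantizer step size is an illustrative stand-in for a trained
# neural-network model; none of these names come from the patent.

def select_model(models, video_type):
    """Pick a per-content-type model from a plurality of models."""
    return models[video_type]

def encode(samples, step):
    """Lossy compression: uniform quantization stands in for the
    neural-network transform."""
    return [round(s / step) for s in samples]

def decode(codes, step):
    """Inverse algorithm: dequantize with the same model parameter."""
    return [c * step for c in codes]

def first_parameter(codes):
    """A rate proxy derived from the compressed data."""
    return len(set(codes))  # number of distinct symbols

def second_parameter(samples, decoded):
    """A distortion measure derived from the decoded data (MSE)."""
    return sum((s - d) ** 2 for s, d in zip(samples, decoded)) / len(samples)

def train(step, rate, distortion, lam=0.01):
    """Toy update: refine the model when distortion dominates rate
    (a stand-in for gradient-based training)."""
    return step * 0.9 if distortion > lam * rate else step * 1.1

models = {"screen_content": 4.0, "natural_video": 8.0}
samples = [0.0, 3.2, 7.9, 12.5, 30.1, 30.2]

step = select_model(models, "natural_video")
codes = encode(samples, step)                    # compressed video data
decoded = decode(codes, step)                    # decoded video data
rate = first_parameter(codes)                    # at least one first parameter
distortion = second_parameter(samples, decoded)  # at least one second parameter
step = train(step, rate, distortion)             # model trained for next round
```

In this toy version the two "parameters" are a symbol count and a mean-squared error; a real system would derive richer statistics from the bitstream and reconstruction.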
20 Claims
1. An auto-encoder comprising:
an encoder configured to:
receive video data to be compressed,
select a model from a plurality of models based on the video data to be compressed, the model being based on at least one neural network, the at least one neural network configured to implement a lossy compression algorithm, the at least one neural network having been trained based on a type of the video data to be compressed,
generate compressed video data using the selected model,
generate at least one first parameter based on the compressed video data, and
communicate the compressed video data and the model to at least one device configured to decode the compressed video data using an inverse algorithm based on the lossy compression algorithm; and
a decoder configured to:
generate decoded video data based on the compressed video data using the inverse algorithm and the model,
generate at least one second parameter based on the decoded video data, and
train the model using the at least one first parameter and the at least one second parameter.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
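Claim 1's "communicate the compressed video data and the model" step implies the bitstream identifies which model the receiving device must invert. A hedged sketch of one way to do that follows; the byte layout, the `struct` format, and the model table are illustrative assumptions, not a format the patent specifies.

```python
# Illustrative serialization of (model identifier, compressed symbols)
# so the receiving device can apply the matching inverse algorithm.
# The layout and the quantizer-step "models" are assumptions.
import struct

MODELS = {0: 2.0, 1: 8.0}  # model id -> quantizer step (neural-net stand-in)

def pack(model_id, codes):
    """Serialize model id + symbol count + compressed symbols."""
    return struct.pack(f">BH{len(codes)}h", model_id, len(codes), *codes)

def unpack(payload):
    """Recover the model id and the compressed symbols."""
    model_id, n = struct.unpack_from(">BH", payload)
    codes = struct.unpack_from(f">{n}h", payload, 3)
    return model_id, list(codes)

# encoder side: compress with model 1 and transmit model + data together
codes = [round(x / MODELS[1]) for x in (16.0, 24.0, -8.0)]
payload = pack(1, codes)

# decoder side: look up the communicated model, apply the inverse
model_id, rx = unpack(payload)
decoded = [c * MODELS[model_id] for c in rx]
```

A real codec would carry full model weights or a negotiated model index in a container or parameter set rather than this ad hoc header.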
10. A method comprising:
receiving video data to be compressed;
selecting a model from a plurality of models based on the video data to be compressed, the model being based on at least one neural network, the at least one neural network configured to implement a lossy compression algorithm, the at least one neural network having been trained based on a type of the video data to be compressed;
generating first compressed video data using the selected model;
generating at least one first parameter based on the compressed video data;
communicating the compressed video data and the model to at least one device configured to decode the compressed video data using an inverse algorithm based on the lossy compression algorithm;
generating decoded video data based on the compressed video data using the inverse algorithm and the model;
generating at least one second parameter based on the decoded video data;
training the model using the at least one first parameter and the at least one second parameter; and
generating second compressed video data using the trained model.
Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19.
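The method claim recites a closed loop: compress, measure, retrain, then produce second compressed video data with the trained model. A toy sketch of that loop is below; the one-number "model" (a scaling weight) and the fixed refinement rule are illustrative assumptions standing in for neural-network training.

```python
# Hedged sketch of the claimed closed loop: encode, decode, measure
# distortion (second parameter), retrain, re-encode. The scalar weight
# is an illustrative stand-in for the patent's neural-network model.

def encode(frame, weight):
    """Lossy step: scale, then truncate to integer symbols."""
    return [int(x * weight) for x in frame]

def decode(code, weight):
    """Inverse algorithm with the same model parameter."""
    return [c / weight for c in code]

def mse(a, b):
    """Distortion between source and decoded data."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

frame = [0.11, 0.52, 0.93, 0.34, 0.75]
weight = 4.0  # selected model parameter

history = []
for _ in range(5):
    code = encode(frame, weight)    # first (then second) compressed data
    recon = decode(code, weight)    # decoded video data
    history.append(mse(frame, recon))
    weight *= 1.5                   # "training": refine the model
```

Here "training" simply refines the quantization each pass, so later compressed data reconstructs with lower error, mirroring the claim's second compressed video data being generated with the trained model.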
20. A non-transitory computer readable medium having code segments stored thereon, the code segments, when executed by a processor, cause the processor to:
receive video data to be compressed;
select a model from a plurality of models based on the video data to be compressed, the model being based on at least one neural network, the at least one neural network configured to implement a lossy compression algorithm, the at least one neural network having been trained based on a type of the video data to be compressed;
generate first compressed video data using the selected model;
generate at least one first parameter based on the compressed video data;
communicate the compressed video data and the model to at least one device configured to decode the compressed video data using an inverse algorithm based on the lossy compression algorithm;
generate decoded video data based on the compressed video data using the inverse algorithm and the model;
generate at least one second parameter based on the decoded video data;
train the model using the at least one first parameter and the at least one second parameter; and
generate second compressed video data using the trained model.