Method and assembly for video encoding, the video encoding including texture analysis and texture synthesis, and corresponding computer program and corresponding computer-readable storage medium
Abstract
The invention relates to a method and assembly for video coding comprising texture analysis and texture synthesis, and to a corresponding computer program and computer-readable recording medium. The invention can be used in particular to reduce the data rate in video data transmission. The encoder performs a texture analysis of the video scenes to identify areas of synthesizable texture. The video scenes are then encoded, and meta data is generated that describes the identified areas and the textures of these areas. The decoder evaluates the encoded data and the meta data, and reconstructs the video scenes by synthetically generating textures for the identified areas based on the meta data.
15 Citations
45 Claims
1. A method for video encoding, comprising:

performing a texture analysis of video scenes to identify areas of synthesizable textures;

encoding the video scenes and generating meta data for describing the areas identified and for describing the synthesizable textures using information on identified areas of synthesizable textures, and information on the textures of these areas; and

ensuring temporal consistency of recognizing synthesizable textures in a sequence of frames by means of a texture catalogue, by storing the synthesizable textures of the identified areas of synthesizable textures in a first frame of the sequence in the texture catalogue in order to initialize same;

comparing the synthesizable textures of the identified areas of synthesizable textures in the following frames of the sequence with the synthesizable textures stored in the texture catalogue;

in the event of a match, assigning the respective synthesizable texture of an identified area of synthesizable texture among the following frames of the sequence to the respective synthesizable texture stored in the texture catalogue; and

in the event of no match, storing the respective synthesizable texture of an identified area of synthesizable texture among the following frames of the sequence in the texture catalogue.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18
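The texture-catalogue procedure recited in claim 1 (initialize from the first frame, compare in following frames, assign on a match, store on no match) can be sketched as follows. This is a minimal illustration, not the patented implementation: the claim does not specify how textures are represented or compared, so the feature-vector representation, the cosine-similarity measure, and the threshold below are all assumptions.

```python
import numpy as np

# Assumed similarity threshold -- the claim does not specify a measure.
SIMILARITY_THRESHOLD = 0.9

def cosine_similarity(a, b):
    """Cosine similarity between two texture feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class TextureCatalogue:
    """Keeps the recognition of synthesizable textures temporally consistent
    across a sequence of frames, as recited in claim 1."""

    def __init__(self):
        self.entries = []  # one feature vector per catalogued texture

    def initialize(self, first_frame_textures):
        # The textures of the first frame of the sequence initialize the catalogue.
        self.entries = [np.asarray(t, dtype=float) for t in first_frame_textures]

    def assign(self, texture):
        """Compare a texture from a following frame with the catalogue.
        On a match, return the index of the stored texture it is assigned to;
        on no match, store the texture as a new entry and return its index."""
        texture = np.asarray(texture, dtype=float)
        for idx, stored in enumerate(self.entries):
            if cosine_similarity(texture, stored) >= SIMILARITY_THRESHOLD:
                return idx  # match: assign to the catalogued texture
        self.entries.append(texture)  # no match: extend the catalogue
        return len(self.entries) - 1
```

For example, after initializing the catalogue with one texture vector, a texture close to it in a following frame is assigned to entry 0, while a dissimilar texture is stored as a new entry.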
16. An apparatus for video encoding, comprising:

an analyzer for performing a texture analysis of video scenes to identify areas of synthesizable textures;

an encoder for encoding the video scenes and a generator for generating meta data for describing the areas identified and for describing the synthesizable textures using information on identified areas of synthesizable textures, and information on the textures of these areas; and

a unit for ensuring temporal consistency of recognizing synthesizable textures in a sequence of frames by means of a texture catalogue, by storing the synthesizable textures of the identified areas of synthesizable textures in a first frame of the sequence in the texture catalogue in order to initialize same;

comparing the synthesizable textures of the identified areas of synthesizable textures in the following frames of the sequence with the synthesizable textures stored in the texture catalogue;

in the event of a match, assigning the respective synthesizable texture of an identified area of synthesizable texture among the following frames of the sequence to the respective synthesizable texture stored in the texture catalogue; and

in the event of no match, storing the respective synthesizable texture of an identified area of synthesizable texture among the following frames of the sequence in the texture catalogue.
19. A method for video encoding, comprising:

performing a texture analysis of video scenes to identify areas of synthesizable textures;

encoding the video scenes and generating meta data for describing the areas identified and for describing the synthesizable textures using information on identified areas of synthesizable textures, and information on the textures of these areas,

the step of generating the meta data comprising the step of estimating motion parameters describing a warping so as to adapt synthesizable areas in frames of a Group of Frames to corresponding texture areas in first or last frames of this group by means of the warping, the motion parameters being part of the meta data.

Dependent claims: 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 36, 38
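Claim 19 leaves the form of the motion parameters open; a common choice for this kind of texture warping is a 6-parameter affine model. The sketch below estimates such parameters from point correspondences between a key frame (first or last frame of the Group of Frames) and an intermediate frame by least squares. The affine model and the correspondence input are assumptions for illustration, not taken from the claim.

```python
import numpy as np

def estimate_affine_warp(src_pts, dst_pts):
    """Least-squares estimate of a 2x3 affine motion-parameter matrix A such
    that [x', y'] ~= A @ [x, y, 1] maps points of a texture area in the
    first/last frame of a Group of Frames (src_pts) onto the corresponding
    synthesizable area in an intermediate frame (dst_pts)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    M = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    # Solve M @ A.T ~= dst in the least-squares sense, one column per axis.
    A_T, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return A_T.T  # the motion parameters carried in the meta data
```

With at least three non-collinear correspondences the system is determined; additional points are averaged out by the least-squares fit.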
33. A method for video decoding, comprising:

assessing encoded data of video scenes and meta data for describing identified areas of synthesizable textures in the video scenes and for describing the synthesizable textures of these areas; and

reconstructing the video scenes by synthetically generating textures for the areas identified,

wherein the meta data comprise motion parameters describing a warping so as to adapt synthesizable areas in frames of a Group of Frames to corresponding texture areas in first or last frames of this group by means of the warping, and

wherein the step of reconstructing comprises the step of warping the corresponding texture areas in the first or last frames of the group in the direction of the adapted synthesizable areas in the frames of the Group of Frames using the motion parameters.

Dependent claims: 37, 39, 40, 41, 42
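On the decoder side, the transmitted motion parameters are used to warp the key-frame texture toward the synthesizable areas of the intermediate frames. A minimal backward-mapping sketch, again assuming a 2x3 affine parameter matrix and nearest-neighbour sampling (neither of which is specified by the claim):

```python
import numpy as np

def warp_texture_area(key_frame, A, out_shape):
    """Reconstruct a texture area of an intermediate frame by warping the
    corresponding area of the group's first/last frame with the transmitted
    2x3 affine parameters A (backward mapping, nearest-neighbour sampling)."""
    A_full = np.vstack([A, [0.0, 0.0, 1.0]])
    A_inv = np.linalg.inv(A_full)[:2]  # map output pixels back to the key frame
    h, w = out_shape
    out = np.zeros(out_shape, dtype=key_frame.dtype)
    for y in range(h):
        for x in range(w):
            sx, sy = A_inv @ np.array([x, y, 1.0])
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sy < key_frame.shape[0] and 0 <= sx < key_frame.shape[1]:
                out[y, x] = key_frame[sy, sx]  # sample the key-frame texture
    return out
```

Backward mapping (inverting the warp per output pixel) avoids holes in the reconstructed area; pixels that map outside the key frame are left at zero in this simplified sketch.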
34. An assembly for video encoding, comprising:

an analyzer for performing a texture analysis of video scenes to identify areas of synthesizable textures;

an encoder for encoding the video scenes and a generator for generating meta data for describing the areas identified and for describing the synthesizable textures using information on identified areas of synthesizable textures, and information on the textures of these areas; and

the generator for generating the meta data being configured to estimate motion parameters describing a warping so as to adapt synthesizable areas in frames of a Group of Frames to corresponding texture areas in first or last frames of this group by means of the warping, the motion parameters being part of the meta data.
35. An assembly for video decoding, comprising:

an assessor for assessing encoded data of video scenes and meta data for describing identified areas of synthesizable textures in the video scenes and for describing the synthesizable textures of these areas; and

a reconstructor for reconstructing the video scenes by synthetically generating textures for the areas identified,

wherein the meta data comprise motion parameters describing a warping so as to adapt synthesizable areas in frames of a Group of Frames to corresponding texture areas in first or last frames of this group by means of the warping, and

wherein the reconstructor is configured to warp the corresponding texture areas in the first or last frames of the group in the direction of the adapted synthesizable areas in the frames of the Group of Frames using the motion parameters.

Dependent claims: 43, 44, 45
Specification