Method and apparatus for sparsity-based de-artifact filtering for video encoding and decoding
Abstract
Methods and apparatus are provided for sparsity-based de-artifact filtering for video encoding and decoding. An apparatus includes an encoder (400) for encoding at least a portion of an image by grouping regions within the portion based on a grouping metric, transforming the grouped regions, adaptively performing de-artifact filtering on the transformed regions using a de-artifacting filter (413) included in the encoder, inverse transforming the de-artifacted regions to create replacement regions, and restoring the replacement regions to positions within the image from which the regions were taken prior to the grouping.
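The group/transform/filter/inverse-transform/restore loop described in the abstract can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: it assumes a 2-D DCT as the transform, hard thresholding of small coefficients as the sparsity-based filtering, and plain averaging to fuse the overlapping per-pixel estimates; the patch size, step, and threshold are arbitrary illustrative values.

```python
import numpy as np
from scipy.fft import dctn, idctn

def deartifact(image, patch=8, step=4, threshold=12.0):
    """Illustrative sparsity-based de-artifacting loop (assumed details).

    Overlapping regions are transformed (2-D DCT here, standing in for
    whatever transform the codec uses), small coefficients are zeroed
    (the sparsity-based filtering step), regions are inverse transformed
    into replacement regions, and each pixel's overlapping estimates are
    fused by averaging when the regions are restored to their positions.
    """
    h, w = image.shape
    acc = np.zeros((h, w), dtype=np.float64)   # sum of per-pixel estimates
    cnt = np.zeros((h, w), dtype=np.float64)   # number of estimates per pixel
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            region = image[y:y + patch, x:x + patch].astype(np.float64)
            coeffs = dctn(region, norm="ortho")          # transform the region
            coeffs[np.abs(coeffs) < threshold] = 0.0     # sparsity filtering
            est = idctn(coeffs, norm="ortho")            # inverse transform
            acc[y:y + patch, x:x + patch] += est         # restore to position
            cnt[y:y + patch, x:x + patch] += 1.0
    cnt[cnt == 0] = 1.0
    return acc / cnt                                     # fuse overlapping estimates
```

With `step` smaller than `patch`, every pixel is covered by several regions, which is what yields the "multiple estimates of the same pixel" that the claims then fuse.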
29 Claims
1. An apparatus, comprising:
an encoder encoding at least a portion of an image using de-artifact filtering, said encoder comprising a de-artifacting filter that performs the de-artifact filtering by grouping regions within the portion based on dimensions and characteristics of the region or portion, transforming the grouped regions, adaptively performing the de-artifact filtering on the transformed regions, inverse transforming the de-artifacted regions to create replacement regions, and restoring the replacement regions to positions within the image from which the regions were taken prior to the grouping, wherein multiple estimates of the same pixel are obtained from overlapping regions and, for each pixel, adaptive sparsity-based filtering is used to fuse multiple estimates from redundant representations of the overlapping regions, and wherein a type and a strength of the de-artifact filtering are adaptively selected responsive to characteristics of at least one of quantization noise statistics, coding modes and motion information, local coding conditions, and compression requirements. - View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A method performed in a video encoder, comprising:
encoding at least a portion of an image using de-artifact filtering, wherein said de-artifact filtering comprises: grouping regions within the portion based on dimensions and characteristics of the region or portion; transforming the grouped regions; adaptively performing de-artifact filtering on the transformed regions using a de-artifacting filter; inverse transforming the de-artifacted regions to create replacement regions; and restoring the replacement regions to positions within the image from which the regions were taken prior to the grouping, wherein multiple estimates of the same pixel are obtained from overlapping regions and, for each pixel, adaptive sparsity-based filtering is used to fuse multiple estimates from redundant representations of the overlapping regions, and wherein a type and a strength of the de-artifact filtering are adaptively selected responsive to characteristics of at least one of quantization noise statistics, coding modes and motion information, local coding conditions, and compression requirements. - View Dependent Claims (9, 10, 11, 12, 13, 14)
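The "adaptive sparsity-based filtering is used to fuse multiple estimates" step of the method above can be illustrated with a BM3D-style aggregation rule, under which a pixel estimate coming from a sparser region (fewer transform coefficients surviving the thresholding) receives a larger weight. The 1/(1+nnz) weight is an assumption made for this sketch; the claim itself leaves the weighting unspecified.

```python
import numpy as np

def fuse_pixel_estimates(estimates, nnz_counts):
    """Fuse several estimates of one pixel, obtained from overlapping
    regions, into a single value (illustrative weighting).

    estimates  : per-region estimates of the pixel value
    nnz_counts : nonzero transform coefficients remaining in each region
                 after thresholding -- a proxy for how sparse, and hence
                 how reliable, that region's representation is
    """
    est = np.asarray(estimates, dtype=np.float64)
    # Assumed weight: sparser regions (smaller nnz) count for more.
    w = 1.0 / (1.0 + np.asarray(nnz_counts, dtype=np.float64))
    return float(np.sum(w * est) / np.sum(w))
```

For example, an estimate from a region with no surviving AC coefficients (very sparse, likely smooth and well filtered) pulls the fused value toward itself more strongly than an estimate from a coefficient-dense region.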
15. An apparatus, comprising:
a decoder decoding at least a portion of an image using de-artifact filtering, said decoder comprising a de-artifacting filter for performing the de-artifact filtering by grouping regions within the portion based on dimensions and characteristics of the region or portion, transforming the grouped regions, adaptively performing the de-artifact filtering on the transformed regions, inverse transforming the de-artifacted regions to create replacement regions, and restoring the replacement regions to positions within the image from which the regions were taken prior to the grouping, wherein multiple estimates of the same pixel are obtained from overlapping regions and, for each pixel, adaptive sparsity-based filtering is used to fuse multiple estimates from redundant representations of the overlapping regions, and wherein a type and a strength of the de-artifact filtering are adaptively selected responsive to characteristics of at least one of quantization noise statistics, coding modes and motion information, local coding conditions, and compression requirements. - View Dependent Claims (16, 17, 18, 19, 20, 21)
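The clause that "a type and a strength of the de-artifact filtering are adaptively selected responsive to ... quantization noise statistics, coding modes and motion information" can be illustrated by deriving the filtering threshold from the local quantizer step and coding mode. The sketch below uses the H.264/AVC convention that the quantizer step roughly doubles every 6 QP; the scale factor `k` and the intra-mode relaxation are assumptions made for illustration, not values from the claims.

```python
def adaptive_threshold(qp: int, intra_coded: bool, k: float = 0.8) -> float:
    """Pick a de-artifacting threshold from local coding conditions
    (illustrative strength selection).

    Coarser quantization (higher QP) injects more quantization noise,
    so the threshold grows with the quantizer step; intra-coded regions
    are filtered more gently here, an assumed design choice.
    """
    qstep = 0.625 * 2.0 ** (qp / 6.0)  # approximate H.264/AVC quantizer step
    threshold = k * qstep
    if intra_coded:
        threshold *= 0.75
    return threshold
```

A heavily quantized inter-coded region thus gets stronger filtering than a finely quantized intra-coded one, matching the claim's idea of responding to quantization noise statistics and coding modes.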
22. A method performed in a video decoder, comprising:
decoding at least a portion of an image using de-artifact filtering, wherein said de-artifact filtering comprises: grouping regions within the portion based on dimensions and characteristics of the region or portion; transforming the grouped regions; adaptively performing the de-artifact filtering on the transformed regions using a de-artifacting filter; inverse transforming the de-artifacted regions to create replacement regions; and restoring the replacement regions to positions within the image from which the regions were taken prior to the grouping, wherein multiple estimates of the same pixel are obtained from overlapping regions and, for each pixel, adaptive sparsity-based filtering is used to fuse multiple estimates from redundant representations of the overlapping regions, and wherein a type and a strength of the de-artifact filtering are adaptively selected responsive to characteristics of at least one of quantization noise statistics, coding modes and motion information, local coding conditions, and compression requirements. - View Dependent Claims (23, 24, 25, 26, 27, 28)
29. A non-transitory computer-readable storage medium having video signal data encoded thereupon executable by a computer for performing a method, the method comprising:
encoding at least a portion of an image by an encoder using de-artifact filtering, said encoder comprising a de-artifacting filter that performs the de-artifact filtering by grouping regions within the portion based on dimensions and characteristics of the region or portion, transforming the grouped regions, adaptively performing de-artifact filtering on the transformed regions, inverse transforming the de-artifacted regions to create replacement regions, and restoring the replacement regions to positions within the image from which the regions were taken prior to the grouping, wherein multiple estimates of the same pixel are obtained from overlapping regions and, for each pixel, adaptive sparsity-based filtering is used to fuse multiple estimates from redundant representations of the overlapping regions, and wherein a type and a strength of the de-artifact filtering are adaptively selected responsive to characteristics of at least one of quantization noise statistics, coding modes and motion information, local coding conditions, and compression requirements.
Specification