Method for searching for a similar image in an image database based on a reference image
First Claim
1. A method, comprising:
extracting characteristic points from a first image;
generating for each characteristic point of the first image a plurality of components of a descriptor describing an image region around the characteristic point;
comparing the descriptors of the characteristic points of the first image;
classifying characteristic points of the first image based on the comparing and an ambiguity threshold;
comparing descriptors of characteristic points of the first image with descriptors of characteristic points of a second image, to obtain a set of pairs of characteristic points belonging respectively to the first and second images, the descriptors of which have a proximity lower than a first threshold;
defining a set of spatial transformation models linking the respective positions of the characteristic points of at least two pairs;
testing the spatial transformation models by determining whether a model links respective positions of the characteristic points of other pairs of characteristic points, within an error margin lower than a position error threshold; and
identifying the second image as a relevant visual search image based on whether at least one of the transformation models links characteristic points of a number of pairs greater than a threshold number of pairs, wherein the comparing descriptors of characteristic points of the first image to descriptors of characteristic points of the second image is limited to characteristic points classified as non-ambiguous with respect to at least one of the first and second images and the testing considers ambiguous characteristic points.
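The classification steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name `classify_ambiguous`, the 2-D toy descriptors, and the use of Euclidean distance as the proximity measure are assumptions, since the claim fixes neither a descriptor type nor a metric. Here, points whose descriptors lie too close to another descriptor of the same image (i.e., look alike within one image) are flagged as ambiguous.

```python
import numpy as np

def classify_ambiguous(descriptors, ambiguity_threshold):
    """Flag characteristic points whose descriptor lies within
    `ambiguity_threshold` (Euclidean distance) of another descriptor
    of the same image -- such points cannot be matched reliably."""
    n = len(descriptors)
    ambiguous = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(descriptors[i] - descriptors[j]) < ambiguity_threshold:
                ambiguous[i] = ambiguous[j] = True
    return ambiguous

# Two nearly identical descriptors (e.g., a repeated texture) and one distinct one.
desc = np.array([[0.0, 0.0], [0.05, 0.0], [5.0, 5.0]])
flags = classify_ambiguous(desc, ambiguity_threshold=0.5)
# → [True, True, False]
```

Only the non-ambiguous third point would then take part in the cross-image descriptor comparison, while the ambiguous ones are deferred to the model-testing stage.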
Abstract
A method for extracting characteristic points from an image includes extracting characteristic points from a first image, generating for each characteristic point a descriptor with several components describing an image region around the characteristic point, and comparing the descriptors of the first image two by two; characteristic points whose descriptors have a proximity to one another greater than an ambiguity threshold are considered ambiguous.
23 Claims
1. A method, comprising:
extracting characteristic points from a first image;
generating for each characteristic point of the first image a plurality of components of a descriptor describing an image region around the characteristic point;
comparing the descriptors of the characteristic points of the first image;
classifying characteristic points of the first image based on the comparing and an ambiguity threshold;
comparing descriptors of characteristic points of the first image with descriptors of characteristic points of a second image, to obtain a set of pairs of characteristic points belonging respectively to the first and second images, the descriptors of which have a proximity lower than a first threshold;
defining a set of spatial transformation models linking the respective positions of the characteristic points of at least two pairs;
testing the spatial transformation models by determining whether a model links respective positions of the characteristic points of other pairs of characteristic points, within an error margin lower than a position error threshold; and
identifying the second image as a relevant visual search image based on whether at least one of the transformation models links characteristic points of a number of pairs greater than a threshold number of pairs, wherein the comparing descriptors of characteristic points of the first image to descriptors of characteristic points of the second image is limited to characteristic points classified as non-ambiguous with respect to at least one of the first and second images and the testing considers ambiguous characteristic points.
Dependent claims: 2, 3, 4.
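The cross-image comparison that produces the set of pairs can be sketched as below. The function name `match_descriptors`, the brute-force double loop, the toy 2-D descriptors, and Euclidean distance as the proximity measure are all assumptions for illustration; the claim only requires that the comparison be limited to points classified as non-ambiguous and that paired descriptors have a proximity lower than the first threshold.

```python
import numpy as np

def match_descriptors(desc1, desc2, keep1, keep2, first_threshold):
    """Pair characteristic points across two images whose descriptors lie
    within `first_threshold` of each other, restricted to points marked
    as non-ambiguous (keep1/keep2)."""
    pairs = []
    for i, d1 in enumerate(desc1):
        if not keep1[i]:
            continue
        for j, d2 in enumerate(desc2):
            if not keep2[j]:
                continue
            if np.linalg.norm(d1 - d2) < first_threshold:
                pairs.append((i, j))
    return pairs

desc1 = np.array([[0.0, 0.0], [1.0, 1.0]])
desc2 = np.array([[0.1, 0.0], [9.0, 9.0]])
pairs = match_descriptors(desc1, desc2, [True, True], [True, True], first_threshold=0.5)
# → [(0, 0)]
```

Each resulting pair of point indices then feeds the definition and testing of the spatial transformation models.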
5. A method, comprising:
extracting characteristic points from a first image;
generating for each characteristic point of the first image a plurality of components of a descriptor describing an image region around the characteristic point;
comparing the descriptors of the characteristic points of the first image;
classifying characteristic points of the first image based on the comparing and an ambiguity threshold;
comparing descriptors of characteristic points of the first image with descriptors of characteristic points of a second image, to obtain a set of pairs of characteristic points belonging respectively to the first and second images, the descriptors of which have a proximity lower than a first threshold;
defining a set of spatial transformation models linking the respective positions of the characteristic points of at least two pairs;
testing the spatial transformation models by determining whether a model links respective positions of the characteristic points of other pairs of characteristic points, within an error margin lower than a position error threshold; and
identifying the second image as a relevant visual search image based on whether at least one of the transformation models links characteristic points of a number of pairs greater than a threshold number of pairs,
wherein the second image is one of a plurality of images of an image database and the method comprises:
comparing the first image with the plurality of images of the image database; and
associating a visual search relevance rating with an image of the image database identified as relevant based on a surface area of a region of the image of the database delimited by the characteristic points belonging to the pairs linked by the at least one transformation model.
Dependent claims: 6, 7, 8, 9, 10.
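Claim 5's rating is based on the surface area of the region delimited by the linked characteristic points. The claim does not say how that area is computed; one plausible reading, sketched below, takes the convex hull of the inlier points (Andrew's monotone chain) and measures it with the shoelace formula. The function name `hull_area` and the toy points are assumptions.

```python
import numpy as np

def hull_area(points):
    """Area of the convex hull of the matched points, as a proxy for the
    'surface area of the region delimited by the characteristic points'."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    # Lower and upper hull, each dropping its last point to avoid duplicates.
    hull = half(pts)[:-1] + half(reversed(pts))[:-1]
    x = np.array([p[0] for p in hull])
    y = np.array([p[1] for p in hull])
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Inlier points spanning a unit square (plus one interior point) → area 1.0
inliers = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
rating = hull_area(inliers)
# → 1.0
```

A larger covered area suggests the match spans more of the database image, hence a higher relevance rating.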
11. A method, comprising:
extracting characteristic points from a first image;
generating for each characteristic point of the first image a plurality of components of a descriptor describing an image region around the characteristic point;
comparing the descriptors of the characteristic points of the first image;
classifying characteristic points of the first image based on the comparing and an ambiguity threshold;
comparing descriptors of characteristic points of the first image with descriptors of characteristic points of a second image, to obtain a set of pairs of characteristic points belonging respectively to the first and second images, the descriptors of which have a proximity lower than a first threshold;
defining a set of spatial transformation models linking the respective positions of the characteristic points of at least two pairs;
testing the spatial transformation models by determining whether a model links respective positions of the characteristic points of other pairs of characteristic points, within an error margin lower than a position error threshold; and
identifying the second image as a relevant visual search image based on whether at least one of the transformation models links characteristic points of a number of pairs greater than a threshold number of pairs,
wherein the second image is one of a plurality of images of an image database;
comparing the first image with the plurality of images of the image database; and
associating a visual search relevance rating with an image of the image database identified as relevant based on a number of pairs linked by the at least one transformation model,
wherein the relevance rating is computed for each image of the image database identified as relevant, ambiguous characteristic points in the first and second images are searched for pairs of characteristic points having a proximity of descriptors lower than the first threshold, and the relevance rating is incremented when the at least one transformation model links respective positions of the characteristic points of a pair of ambiguous characteristic points.
Dependent claims: 12, 13, 14, 15, 16, 17.
18. A system, comprising:
a memory; and
processing circuitry, which, in operation:
compares descriptors of characteristic points of a first image with descriptors of characteristic points of a second image, the characteristic points of at least one of the first and second images being limited to characteristic points classified as non-ambiguous based on an ambiguity threshold;
identifies, based on the comparisons of the descriptors, a set of pairs of characteristic points belonging respectively to the first and second images, the descriptors of which have a proximity lower than a first threshold;
defines a set of spatial transformation models linking the respective positions of the characteristic points of at least two pairs of the set of pairs;
tests the spatial transformation models by determining whether a model links respective positions of the characteristic points of other pairs of characteristic points, within an error margin lower than a position error threshold, wherein the testing considers ambiguous characteristic points; and
identifies the second image as a relevant search image based on whether at least one of the transformation models links characteristic points of a number of pairs greater than a threshold number of pairs.
Dependent claims: 19, 20.
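The model-definition and model-testing steps can be sketched as below. The claim leaves the family of spatial transformation models open; a similarity transform (scale, rotation, translation) determined from two correspondences, as in RANSAC-style verification, is one common choice and is what this sketch assumes, encoding 2-D points as complex numbers so the model is `q = a*p + b`.

```python
import numpy as np

def fit_similarity(p1, q1, p2, q2):
    """Similarity transform mapping p1→q1 and p2→q2, with points encoded
    as complex numbers: q = a*p + b for complex a (scale+rotation) and b."""
    pc1, pc2 = complex(*p1), complex(*p2)
    qc1, qc2 = complex(*q1), complex(*q2)
    a = (qc2 - qc1) / (pc2 - pc1)
    b = qc1 - a * pc1
    return a, b

def count_inliers(a, b, pts1, pts2, pairs, pos_err_threshold):
    """Test the model on the pairs: count those whose mapped position falls
    within the position error threshold."""
    n = 0
    for i, j in pairs:
        if abs(a * complex(*pts1[i]) + b - complex(*pts2[j])) < pos_err_threshold:
            n += 1
    return n

pts1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pts2 = pts1 * 2.0 + 1.0          # second image: scaled by 2 and shifted by (1, 1)
pairs = [(0, 0), (1, 1), (2, 2), (3, 3)]
a, b = fit_similarity(pts1[0], pts2[0], pts1[1], pts2[1])
inliers = count_inliers(a, b, pts1, pts2, pairs, pos_err_threshold=0.1)
# → 4
```

If the inlier count exceeds the threshold number of pairs, the second image is identified as relevant.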
21. A method, comprising:
comparing, using digital image processing circuitry, descriptors of characteristic points of a first image with descriptors of characteristic points of a second image, the characteristic points of at least one of the first and second images being limited to characteristic points classified as non-ambiguous based on an ambiguity threshold;
identifying, using the digital image processing circuitry and based on the comparisons of the descriptors, a set of pairs of characteristic points belonging respectively to the first and second images, the descriptors of which have a proximity lower than a first threshold;
defining, using the digital image processing circuitry, a set of spatial transformation models linking the respective positions of the characteristic points of at least two pairs of the set of pairs;
testing, using the digital image processing circuitry, the spatial transformation models by determining whether a model links respective positions of the characteristic points of other pairs of characteristic points, within an error margin lower than a position error threshold, wherein the testing considers ambiguous characteristic points; and
identifying, using the digital image processing circuitry, the second image as a relevant search image based on whether at least one of the transformation models links characteristic points of a number of pairs greater than a threshold number of pairs.
Dependent claims: 22, 23.
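Putting the claimed decision together end to end: the sketch below matches descriptors, derives a translation-only model from the first pair (a deliberate simplification of the claimed transformation models), counts the pairs it links, and accepts the second image when that count exceeds the pair threshold. All names, thresholds, and the 1-D toy descriptors are assumptions for illustration.

```python
import numpy as np

def is_relevant(pts1, pts2, desc1, desc2, first_thr, pos_thr, pair_thr):
    """Accept the second image when a transformation model links more
    pairs than `pair_thr`. Uses a translation-only model fitted from the
    first matched pair, as a simplification."""
    pairs = [(i, j) for i, d1 in enumerate(desc1) for j, d2 in enumerate(desc2)
             if np.linalg.norm(d1 - d2) < first_thr]
    if len(pairs) < 2:
        return False
    i0, j0 = pairs[0]
    t = pts2[j0] - pts1[i0]                      # translation model from one pair
    linked = sum(np.linalg.norm(pts1[i] + t - pts2[j]) < pos_thr for i, j in pairs)
    return bool(linked > pair_thr)

pts1 = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0], [2.0, 2.0]])
pts2 = pts1 + np.array([3.0, -1.0])              # second image: pure translation
desc1 = np.array([[float(i)] for i in range(5)])  # well-separated 1-D descriptors
desc2 = desc1 + 0.01                              # near-identical across images
relevant = is_relevant(pts1, pts2, desc1, desc2, 0.1, 0.5, 3)
# → True
```

A full implementation would iterate this test over many candidate models and over every image of the database, keeping the relevance ratings of the accepted images.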
Specification