Method for automatic determination of main subjects in photographic images
Abstract
A method for detecting a main subject in an image, the method comprises: receiving a digital image; extracting regions of arbitrary shape and size defined by actual objects from the digital image; grouping the regions into larger segments corresponding to physically coherent objects; extracting for each of the regions at least one structural saliency feature and at least one semantic saliency feature; and integrating saliency features using a probabilistic reasoning engine into an estimate of a belief that each region is the main subject.
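The abstract's pipeline can be sketched end to end. `segment`, `extract_features`, and `reason` are hypothetical callables standing in for the segmentation, feature-extraction, and probabilistic-reasoning stages; the patent does not name them as functions.

```python
def main_subject_beliefs(image, segment, extract_features, reason):
    """Steps of the abstract: segment the image into regions, extract at
    least one structural and one semantic saliency feature per region,
    and integrate them into a per-region belief of being the main subject."""
    regions = segment(image)                               # arbitrary-shape regions
    beliefs = {}
    for region_id, mask in regions.items():
        structural, semantic = extract_features(image, mask)
        beliefs[region_id] = reason(structural, semantic)  # probabilistic integration
    return beliefs
```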
37 Claims
1. A method for detecting a main subject in an image, the method comprising the steps of:
a) receiving a digital image;
b) extracting regions of arbitrary shape and size defined by actual objects from the digital image;
c) extracting for each of the regions at least one structural saliency feature and at least one semantic saliency feature; and
d) integrating the structural saliency feature and the semantic feature using a probabilistic reasoning engine into an estimate of a belief that each region is the main subject. (Dependent claims 2-18.)
(c1) finding a minimum bounding rectangle of a region;
(c2) stretching the minimum bounding rectangle in all four directions proportionally; and
(c3) defining all regions intersecting the stretched minimum bounding rectangle as neighbors of the region.
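The neighbor-determination steps (c1)-(c3) above can be sketched as follows, with regions as boolean masks on a common grid; the stretch factor is an assumed parameter the patent does not fix.

```python
import numpy as np

def find_neighbors(masks, stretch=1.5):
    """Sketch of steps (c1)-(c3): a region's neighbors are all other
    regions intersecting its proportionally stretched bounding box."""
    h, w = masks[0].shape
    stretched = []
    for m in masks:
        ys, xs = np.nonzero(m)
        # (c1) minimum bounding rectangle of the region
        y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
        # (c2) stretch proportionally in all four directions, clipped to the image
        dy = int((y1 - y0 + 1) * (stretch - 1) / 2)
        dx = int((x1 - x0 + 1) * (stretch - 1) / 2)
        stretched.append((max(y0 - dy, 0), min(y1 + dy, h - 1),
                          max(x0 - dx, 0), min(x1 + dx, w - 1)))
    # (c3) collect every other region with pixels inside the stretched box
    neighbors = {}
    for i, (y0, y1, x0, x1) in enumerate(stretched):
        neighbors[i] = [j for j, m in enumerate(masks)
                        if j != i and m[y0:y1 + 1, x0:x1 + 1].any()]
    return neighbors
```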
8. The method as in claim 4, wherein step (c) includes using a centrality as the location feature, wherein the centrality feature is computed by the steps of:
(c1) determining a probability density function of main subject locations using a collection of training data;
(c2) computing an integral of the probability density function over an area of a region; and
(c3) obtaining a value of the centrality feature by normalizing the integral by the area of the region.
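Claim 8's centrality computation can be sketched as below; the per-pixel density map `pdf` stands in for the probability density learned from training data in step (c1).

```python
import numpy as np

def centrality(pdf, mask):
    """Sketch of steps (c2)-(c3): integrate a learned probability density
    of main-subject locations over the region, then normalize by the
    region's area so large regions are not favored merely by size."""
    integral = pdf[mask].sum()       # (c2) integral of the density over the region
    return integral / mask.sum()     # (c3) normalize by the region's area
```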
9. The method as in claim 4, wherein step (c) includes using a hyperconvexity as the convexity feature, wherein the hyperconvexity feature is computed as a ratio of a perimeter-based convexity measure and an area-based convexity measure.
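Claim 9's hyperconvexity can be illustrated on polygonal region outlines. The specific measures below (hull perimeter over region perimeter, region area over hull area) are plausible instances of perimeter-based and area-based convexity, not the patent's own definitions.

```python
import math

def convex_hull(points):
    """Andrew's monotone-chain convex hull over (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(list(reversed(pts)))

def _area(poly):
    # shoelace formula
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1] -
                   poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2

def _perimeter(poly):
    return sum(math.dist(poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def hyperconvexity(poly):
    """Ratio of a perimeter-based convexity measure to an area-based one;
    both equal 1 for a convex region, so the ratio is 1 there and deviates
    as the two measures penalize non-convexity differently."""
    h = convex_hull(poly)
    perimeter_convexity = _perimeter(h) / _perimeter(poly)   # <= 1
    area_convexity = _area(poly) / _area(h)                  # <= 1
    return perimeter_convexity / area_convexity
```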
10. The method as in claim 4, wherein step (c) includes computing a maximum fraction of a region perimeter shared with a neighboring region as the surroundedness feature.
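Claim 10's surroundedness can be sketched on a region label map by counting 4-connected perimeter edges; this discretization is an assumption, but the feature itself (the maximum perimeter fraction shared with any single neighbor) follows the claim.

```python
import numpy as np

def surroundedness(labels, region):
    """Sketch: the largest fraction of `region`'s perimeter edges shared
    with any one neighboring region. Edges on the image border count
    toward the perimeter but belong to no neighbor."""
    padded = np.pad(labels, 1, constant_values=-1)   # -1 marks outside the image
    shared = {}
    total = 0
    ys, xs = np.nonzero(labels == region)
    for y, x in zip(ys + 1, xs + 1):                 # offsets into the padded map
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            other = padded[y + dy, x + dx]
            if other != region:
                total += 1                           # a perimeter edge
                shared[other] = shared.get(other, 0) + 1
    fractions = [v / total for k, v in shared.items() if k != -1]
    return max(fractions) if fractions else 0.0
```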
11. The method as in claim 4, wherein step (c) includes using an orientation-unaware borderness feature as the borderness feature, wherein the orientation-unaware borderness feature is categorized by the number and configuration of image borders a region is in contact with, and all image borders are treated equally.
12. The method as in claim 4, wherein step (c) includes using an orientation-aware borderness feature as the borderness feature, wherein the orientation-aware borderness feature is categorized by the number and configuration of image borders a region is in contact with, and each image border is treated differently.
13. The method as in claim 4, wherein step (c) includes using the borderness feature that is determined by what fraction of an image border is in contact with a region.
14. The method as in claim 4, wherein step (c) includes using the borderness feature that is determined by what fraction of a region border is in contact with an image border.
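The borderness variants of claims 11-14 can be sketched together on a boolean region mask. The return values (set of touched borders, fraction of the image border covered, fraction of the region border on the image border) correspond to the categorized and fractional measures, with pixel-level details assumed.

```python
import numpy as np

def borderness(mask):
    """Sketch for claims 11-14. `which` supports both categorized variants:
    orientation-aware use keeps the border names distinct (claim 12);
    orientation-unaware use looks only at how many borders are touched
    and in what configuration (claim 11)."""
    outer = np.zeros_like(mask)
    outer[0] = outer[-1] = outer[:, 0] = outer[:, -1] = True
    which = {name for name, line in (('top', mask[0]), ('bottom', mask[-1]),
                                     ('left', mask[:, 0]), ('right', mask[:, -1]))
             if line.any()}
    # region border: region pixels with at least one 4-neighbor outside the region
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    region_border = mask & ~interior
    frac_of_image_border = int((mask & outer).sum()) / int(outer.sum())      # claim 13
    frac_of_region_border = (int((region_border & outer).sum()) /
                             max(int(region_border.sum()), 1))               # claim 14
    return which, frac_of_image_border, frac_of_region_border
```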
15. The method as in claim 1, wherein step (d) includes using a Bayes net as the reasoning engine.
16. The method as in claim 1, wherein step (d) includes using a conditional probability matrix that is determined by using fractional frequency counting according to a collection of training data.
17. The method as in claim 1, wherein step (d) includes using a belief sensor function to convert a measurement of a feature into evidence, which is an input to a Bayes net.
18. The method as in claim 1, wherein step (d) includes outputting a belief map, which indicates a location of and a belief in the main subject.
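Claims 15-18 can be illustrated with a sigmoid belief sensor and a deliberately simplified evidence combiner. The weighted average below is a stand-in for the Bayes net and conditional probability matrix of claims 15-16, which this sketch does not reproduce; the sigmoid parameters are assumptions.

```python
import numpy as np

def belief_sensor(x, lo, hi):
    """Claim 17 sketch: map a raw feature measurement into evidence in
    (0, 1) via a sigmoid; the knee locations `lo`/`hi` are assumed."""
    mid = (lo + hi) / 2
    scale = (hi - lo) / 4 or 1.0     # fall back to 1.0 if lo == hi
    return 1.0 / (1.0 + np.exp(-(x - mid) / scale))

def belief_map(labels, evidence, weights):
    """Claim 18 sketch: paint each region's integrated belief over its
    pixels, yielding a map of both location and belief. The weighted
    average is a stand-in for Bayes-net inference, not the patent's method."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    out = np.zeros(labels.shape, dtype=float)
    for region, ev in evidence.items():
        out[labels == region] = float(np.dot(w, ev))
    return out
```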
19. A method for detecting a main subject in an image, the method comprising the steps of:
a) receiving a digital image;
b) extracting regions of arbitrary shape and size defined by actual objects from the digital image;
c) grouping the regions into larger segments corresponding to physically coherent objects;
d) extracting for each of the regions at least one structural saliency feature and at least one semantic saliency feature; and
e) integrating the structural saliency feature and the semantic feature using a probabilistic reasoning engine into an estimate of a belief that each region is the main subject. (Dependent claims 20-37.)
(c1) finding a minimum bounding rectangle of a region;
(c2) stretching the minimum bounding rectangle in all four directions proportionally; and
(c3) defining all regions intersecting the stretched minimum bounding rectangle as neighbors of the region.
27. The method as in claim 23, wherein step (d) includes using a centrality as the location feature, wherein the centrality feature is computed by the steps of:
(c1) determining a probability density function of main subject locations using a collection of training data;
(c2) computing an integral of the probability density function over an area of a region; and
(c3) obtaining a value of the centrality feature by normalizing the integral by the area of the region.
28. The method as in claim 23, wherein step (d) includes using a hyperconvexity as the convexity feature, wherein the hyperconvexity feature is computed as a ratio of a perimeter-based convexity measure and an area-based convexity measure.
29. The method as in claim 23, wherein step (d) includes computing a maximum fraction of a region perimeter shared with a neighboring region as the surroundedness feature.
30. The method as in claim 23, wherein step (d) includes using an orientation-unaware borderness feature as the borderness feature, wherein the orientation-unaware borderness feature is categorized by the number and configuration of image borders a region is in contact with, and all image borders are treated equally.
31. The method as in claim 23, wherein step (d) includes using an orientation-aware borderness feature as the borderness feature, wherein the orientation-aware borderness feature is categorized by the number and configuration of image borders a region is in contact with, and each image border is treated differently.
32. The method as in claim 23, wherein step (d) includes using the borderness feature that is determined by what fraction of an image border is in contact with a region.
33. The method as in claim 23, wherein step (d) includes using the borderness feature that is determined by what fraction of a region border is in contact with an image border.
34. The method as in claim 19, wherein step (e) includes using a Bayes net as the reasoning engine.
35. The method as in claim 19, wherein step (e) includes using a conditional probability matrix that is determined by using fractional frequency counting according to a collection of training data.
36. The method as in claim 19, wherein step (e) includes using a belief sensor function to convert a measurement of a feature into evidence, which is an input to a Bayes net.
37. The method as in claim 19, wherein step (e) includes outputting a belief map, which indicates a location of and a belief in the main subject.
Specification