Method for determining an in-focus position and a vision inspection system
Abstract
In one embodiment of the present invention, a method is disclosed for determining a difference between a sample position and an in-focus position, as well as a vision inspection system. In a first step, image data depicting a sample is captured. Next, a feature set is extracted from the image data. Thereafter, the feature set is classified into a position difference value, corresponding to the difference between the sample position and the in-focus position, by using a machine learning algorithm that is trained to associate image data features with position difference values.
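The abstract describes a two-stage pipeline: extract a feature vector from a single image, then let a trained model map that vector to a signed focus offset. A minimal sketch in Python/NumPy, using hypothetical features and a nearest-neighbour lookup as a stand-in for the trained machine learning algorithm (the abstract does not prescribe a specific model):

```python
import numpy as np

def extract_features(image):
    # Hypothetical feature vector: overall mean and variance plus a simple
    # gradient-based contrast measure (stand-ins for the claimed features).
    gy, gx = np.gradient(image.astype(float))
    return np.array([image.mean(), image.var(),
                     np.abs(gx).mean() + np.abs(gy).mean()])

def predict_offset(image, train_features, train_offsets):
    # Nearest-neighbour stand-in for the trained classifier: return the
    # position-difference label of the closest training feature set.
    f = extract_features(image)
    d = np.linalg.norm(train_features - f, axis=1)
    return float(train_offsets[int(np.argmin(d))])
```

In the actual system the lookup would be replaced by whatever trained model associates feature sets with position difference values.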
16 Citations
27 Claims
1. A method for determining a difference between a sample position and an in-focus position based on a single image, said method comprising:
capturing image data for the single image, the image data depicting a sample when in said sample position;
determining foreground segments and background segments in said image data;
segmenting said foreground segments into segments of object classes, at least one of said object classes selected from a group including a red blood cell (RBC) object class and a white blood cell (WBC) object class;
extracting a feature set from said image data, said feature set including a plurality of contrast features, the plurality of contrast features being generated using at least a wavelet function based on said image data and a Vollath's F4 function, which is an auto-correlation function, based on said image data;
providing the feature set including the plurality of contrast features as input to a machine learning algorithm that is trained to associate feature sets including pluralities of contrast features with respective position difference values; and
using the machine learning algorithm to classify said feature set including the plurality of contrast features into a single position difference value corresponding to said difference between the sample position and the in-focus position;
wherein said feature set further includes a sub-set of content features, said sub-set of content features including overall content features and segmental content features, and said segmental content features are selected from a group including a mean intensity for the foreground segments, a mean intensity for the background segments, a variance for the foreground segments, a variance for the background segments and an area function expressing an area distribution between different segments.
(Dependent claims 2-17.)
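Among the contrast features, claim 1 names Vollath's F4 function, a well-known autocorrelation-based focus measure. One common formulation is F4 = Σ I(x,y)·I(x+1,y) − Σ I(x,y)·I(x+2,y); the sketch below uses it along one axis (the patent's exact computation may differ):

```python
import numpy as np

def vollath_f4(image):
    # Vollath's F4 autocorrelation focus measure:
    #   F4 = sum I(x, y) * I(x+1, y)  -  sum I(x, y) * I(x+2, y)
    # High for sharp images (strong short-range correlation), lower when blurred.
    img = image.astype(float)
    f1 = np.sum(img[:-1, :] * img[1:, :])   # lag-1 autocorrelation term
    f2 = np.sum(img[:-2, :] * img[2:, :])   # lag-2 autocorrelation term
    return f1 - f2
```

Unlike a simple gradient energy, F4 subtracts the lag-2 term, which suppresses the contribution of uncorrelated sensor noise.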
18. A vision inspection system comprising:
a slide holder adapted to hold at least one slide including a sample;
an image capturing device configured to capture image data depicting said sample when in a sample position, wherein said image capturing device includes an optical system and an image sensor;
a steering motor system configured to alter a distance between said sample and said optical system; and
a processor connected to said image capturing device and said steering motor system;
wherein said processor, in association with a memory, is configured to determine a difference between said sample position and an in-focus position based on a single image by receiving said image data depicting said sample from said image capturing device, determining foreground segments and background segments in said image data, segmenting said foreground segments into segments of object classes, at least one of said object classes selected from a group including a red blood cell (RBC) object class and a white blood cell (WBC) object class, extracting a feature set from said image data, said feature set including a plurality of contrast features, the plurality of contrast features being generated using at least a wavelet function based on said image data and a Vollath's F4 function, which is an auto-correlation function, based on said image data, providing the feature set including the plurality of contrast features as input to a machine learning algorithm that is trained to associate feature sets including pluralities of contrast features with respective position difference values, and using the machine learning algorithm to classify said feature set including the plurality of contrast features into a single position difference value corresponding to said difference between the sample position and the in-focus position;
wherein said feature set further includes a sub-set of content features, said sub-set of content features including overall content features and segmental content features; and
wherein said segmental content features are selected from a group including a mean intensity for the foreground segments, a mean intensity for the background segments, a variance for the foreground segments, a variance for the background segments and an area function expressing an area distribution between different segments.
(Dependent claims 19-25.)
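Because the processor derives the signed distance between the sample position and the in-focus position from one image, the steering motor system can correct focus in a single move rather than scanning a through-focus stack. A minimal control sketch with a hypothetical stage interface (all names here are illustrative, not from the patent):

```python
class Stage:
    """Hypothetical steering-motor interface; position in arbitrary units."""
    def __init__(self, position=0.0):
        self.position = position

    def move_by(self, delta):
        self.position += delta

def autofocus_single_shot(stage, capture, predict_difference):
    # One capture, one prediction, one corrective move.
    image = capture()
    diff = predict_difference(image)   # sample_position - in_focus_position
    stage.move_by(-diff)               # drive the difference to zero
    return stage.position
```

The single-move correction is the practical payoff of the single-image claim: no iterative search over focus positions is needed.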
26. A control device comprising:
a receiver configured to receive image data depicting a sample when in a sample position;
a processor, in association with a memory, configured to determine a difference between said sample position and an in-focus position based on a single image by determining foreground segments and background segments in said image data, segmenting said foreground segments into segments of object classes, at least one of said object classes selected from a group including a red blood cell (RBC) object class and a white blood cell (WBC) object class, extracting a feature set from said image data, said feature set including a plurality of contrast feature values, the plurality of contrast feature values including at least a wavelet function based on said image data and a Vollath's F4 function, which is an auto-correlation function, based on said image data, providing the feature set including the plurality of contrast feature values as input to a machine learning algorithm that is trained to associate feature sets including pluralities of contrast feature values with respective position difference values, and using the machine learning algorithm to classify said feature set including the plurality of contrast feature values into a single position difference value corresponding to said difference between the sample position and the in-focus position; and
a transmitter configured to transmit said difference;
wherein said feature set further includes a sub-set of content features, said sub-set of content features including overall content features and segmental content features, and said segmental content features are selected from a group including a mean intensity for the foreground segments, a mean intensity for the background segments, a variance for the foreground segments, a variance for the background segments and an area function expressing an area distribution between different segments.
(Dependent claim 27.)
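The segmental content features recited in the claims (per-segment mean intensity, per-segment variance, and an area distribution between segments) are straightforward to compute once a segmentation exists. A sketch assuming a boolean foreground mask (the patent's segmentation method itself is not reproduced here):

```python
import numpy as np

def segmental_content_features(image, foreground_mask):
    # foreground_mask: boolean array, True where a pixel belongs to the
    # foreground (e.g. an RBC or WBC segment), False for background.
    img = image.astype(float)
    fg = img[foreground_mask]
    bg = img[~foreground_mask]
    return {
        "fg_mean": fg.mean(),               # mean intensity, foreground
        "bg_mean": bg.mean(),               # mean intensity, background
        "fg_var": fg.var(),                 # variance, foreground
        "bg_var": bg.var(),                 # variance, background
        "fg_area": foreground_mask.mean(),  # area fraction of foreground
    }
```

The area fraction here is one simple instance of "an area function expressing an area distribution between different segments"; with per-class masks the same statistics would be computed per object class.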
Specification