Gray scale vision method and system utilizing same
First Claim
1. Method for automatically locating an object at a vision station, said method comprising the steps of:
generating reference data relating to at least two features of an ideal object, the data including at least two edge points, one of said points being on each of a pair of non-parallel edge segments wherein the reference data includes directional data for each of the edge points;
generating a gray-scale digital image containing the object to be located at the vision station;
processing the reference data and the digital image together to obtain an accumulator image, said processing step including the steps of performing an edge-detecting convolution on at least a portion of the digital image for each edge point and shifting the convoluted data by an amount and in a direction related to the directional data for each of said edge points to obtain a shifted image; and
determining the location of at least one localized bright region in the accumulator image, the location of said region corresponding to the location of one of the features within the digital image.
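The "directional data" in the claim pairs each model edge point with an edge direction, and the convolution step uses a mask tuned to respond most strongly to an edge facing that direction. A minimal sketch of how such oriented masks could be built; the 3x3 size, the linear weighting, and the `oriented_mask` helper are illustrative assumptions, not the patent's actual filters:

```python
import math

# Hypothetical oriented 3x3 edge masks: weight each cell by the component of
# its (row, col) offset along a chosen edge-normal direction, so the mask
# responds most strongly to an intensity step facing that direction.
def oriented_mask(theta):
    nx, ny = math.cos(theta), math.sin(theta)   # edge-normal unit vector
    return [[round(dc * nx + dr * ny, 3) for dc in (-1, 0, 1)]
            for dr in (-1, 0, 1)]

# 0 degrees yields a vertical-edge (Prewitt-like) mask, 90 degrees a
# horizontal-edge mask, and 45 degrees a diagonal one.
for deg in (0, 45, 90):
    print(deg, oriented_mask(math.radians(deg)))
```

Applying a family of such masks, one per model edge direction, is what makes the convolution "direction-sensitive" in the sense the claim describes.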
Abstract
A method and system are provided for automatically locating an object at a vision station by performing an edge-detecting algorithm on at least a portion of the gray-scale digitized image of the object. Preferably, the algorithm comprises an implementation of the Hough transform which includes the iterative application of a direction-sensitive, edge-detecting convolution to the digital image. Each convolution is applied with a different convolution mask or filter, each of which is calculated to give maximum response to an edge of the object in a different direction. The method and system have the ability to extract edges from low contrast images. Also, preferably, a systolic array processor applies the convolutions. The implementation of the Hough transform also includes the steps of shifting the resulting edge-enhanced images by certain amounts in the horizontal and vertical directions, summing the shifted images together into an accumulator buffer to obtain an accumulator image and detecting the maximum response in the accumulator image which corresponds to the location of an edge. If the object to be found is permitted to rotate, at least one other feature, such as another edge, must be located in order to specify the location and orientation of the object. The location of the object when correlated with the nominal position of the object at the vision station provides the position and attitude of the object. The resultant data may be subsequently transformed into the coordinate frame of a peripheral device, such as a robot, programmable controller, numerically controlled machine, etc., for subsequent use by a controller of the peripheral device.
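The convolve-shift-accumulate pipeline described in the abstract can be sketched in a few lines of pure Python. This is a toy illustration under assumed data, not the patent's implementation: the image, the two one-pixel difference operators standing in for the directional masks, and the model offsets are all invented for the example. Each model edge point contributes one edge-response image, shifted so that its votes land on the object's origin; summing the shifted images gives the accumulator, and its brightest cell locates the object.

```python
# Toy image: a bright 4x4 square on a dark background, top-left corner at (3, 4).
H, W = 10, 10
img = [[0] * W for _ in range(H)]
for r in range(3, 7):
    for c in range(4, 8):
        img[r][c] = 9

def convolve(kernel_fn):
    """Apply a tiny directional edge operator; rectify negative responses."""
    out = [[0] * W for _ in range(H)]
    for r in range(1, H):
        for c in range(1, W):
            out[r][c] = max(0, kernel_fn(r, c))
    return out

# Stand-ins for two direction-sensitive masks: one responds to top edges,
# the other to left edges.
horiz_edge = convolve(lambda r, c: img[r][c] - img[r - 1][c])
vert_edge  = convolve(lambda r, c: img[r][c] - img[r][c - 1])

# Model edge points: (response image, offset of the point from the object origin).
model = [(horiz_edge, (0, 1)),   # a point on the top edge
         (vert_edge,  (1, 0))]   # a point on the left edge

# Shift each response by its point's offset and sum into the accumulator,
# so every edge point votes for the object origin.
acc = [[0] * W for _ in range(H)]
for resp, (dr, dc) in model:
    for r in range(H):
        for c in range(W):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                acc[r][c] += resp[rr][cc]

peak = max((acc[r][c], (r, c)) for r in range(H) for c in range(W))
print(peak)   # the brightest accumulator cell lands on the corner, (3, 4)
```

Non-parallel edge segments matter here for the same reason as in the claims: votes from a single edge smear along that edge, and only where two differently oriented edges agree does the accumulator form a localized bright region.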
Citations
13 Claims
1. Method for automatically locating an object at a vision station, said method comprising the steps of:
generating reference data relating to at least two features of an ideal object, the data including at least two edge points, one of said points being on each of a pair of non-parallel edge segments wherein the reference data includes directional data for each of the edge points;
generating a gray-scale digital image containing the object to be located at the vision station;
processing the reference data and the digital image together to obtain an accumulator image, said processing step including the steps of performing an edge-detecting convolution on at least a portion of the digital image for each edge point and shifting the convoluted data by an amount and in a direction related to the directional data for each of said edge points to obtain a shifted image; and
determining the location of at least one localized bright region in the accumulator image, the location of said region corresponding to the location of one of the features within the digital image.
View Dependent Claims (3, 4, 5, 6, 7)
2. Method of automatically generating offset data for use by a programmed robot controller to enable a robot controlled by the controller to move to an object at the vision station including a camera, the offset data relating the position of the object to the coordinate frame of the robot, the method comprising the steps of:
generating calibration data relating the camera to the coordinate frame of the robot;
generating reference data relating to at least two features of an ideal object, the data including at least two edge points, one of said points being on each of a pair of non-parallel edge segments wherein the reference data includes directional data for each of the edge points;
generating a gray-scale digital image containing the object to be located at the vision station;
processing the reference data and the digital image together to obtain an accumulator image, said processing step including the steps of performing an edge-detecting convolution on at least a portion of the digital image for each edge point and shifting the convoluted data by an amount and in a direction related to the directional data for each of said edge points to obtain a shifted image;
determining the location of at least one localized bright region in the accumulator image, the location of said region corresponding to the location of one of the features within the digital image;
correlating the location of the features within the digital image to an offset of the object from the camera; and
transforming the offset of the object from the camera to the robot frame.
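The final two steps of this claim (correlating the image location to a camera-frame offset, then transforming it into the robot frame) amount to mapping a point through the stored calibration transform. A minimal 2-D sketch under invented numbers; the 90-degree rotation, the (100, 50) translation, and the `make_transform`/`apply` helpers are hypothetical, not the patent's calibration procedure:

```python
import math

# Hypothetical calibration data: the camera frame is rotated 90 degrees and
# translated (100, 50) relative to the robot frame, encoded as a 3x3
# homogeneous transform.
def make_transform(theta, tx, ty):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0,  1]]

def apply(T, point):
    """Map a 2-D point through a homogeneous transform."""
    x, y = point
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

cam_to_robot = make_transform(math.pi / 2, 100.0, 50.0)  # assumed calibration
offset_in_cam = (10.0, 0.0)   # object offset measured in the camera image
p_robot = apply(cam_to_robot, offset_in_cam)
print(p_robot)                # the same offset expressed in the robot's frame
```

With the offset expressed in the robot's coordinate frame, the controller can command the robot to move to the located object directly.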
8. A gray-scale vision system for automatically locating an object at a vision station comprising:
means for storing reference data relating to at least two features of an ideal object, the data including at least two edge points, one of said edge points being on each of a pair of non-parallel edge segments wherein the reference data includes directional data for each of the edge points;
means for generating a gray-scale digital image containing the object to be located at the vision station, said means for generating including a television camera;
means for processing the reference data and the digital image together to obtain an accumulator image, said processing means including means for performing an edge-detecting convolution on at least a portion of the digital image for each edge point and shifting the convoluted data by an amount and in a direction related to the directional data for each of said edge points to obtain a shifted image; and
means for determining the location of at least one localized bright region in the accumulator image, the location of said region corresponding to the location of one of the features within the digital image.
View Dependent Claims (9)
10. A system for automatically generating offset data for use by a programmed robot controller to enable a robot controlled by the controller to move to an object at a vision station, the offset data relating the position of the object to the coordinate frame of the robot, the system comprising:
means for storing reference data relating to at least two features of an ideal object, the data including at least two edge points, one of said edge points being on each of a pair of non-parallel edge segments wherein the reference data includes directional data for each of the edge points;
means for generating a gray-scale digital image containing the object to be located at the vision station, said means for generating including a television camera;
means for storing calibration data relating the camera to the coordinate frame of the robot;
means for processing the reference data and the digital image together to obtain an accumulator image, said processing means including means for performing an edge-detecting convolution on at least a portion of the digital image for each edge point and shifting the convoluted data by an amount and in a direction related to the directional data for each of said edge points to obtain a shifted image;
means for determining the location of at least one localized bright region in the accumulator image, the location of said region corresponding to the location of one of the features within the digital image;
means for correlating the location of the features within the digital image to an offset of the object from the camera; and
means for transforming the offset of the object from the camera to the robot frame.
View Dependent Claims (11, 12, 13)
Specification