Method and system for automatically determining the position and orientation of an object in 3-D space
First Claim
1. Method for automatically determining the position and orientation of an object in 3-D space from as few as one digital image generated by as few as one 2-D sensor, the method comprising the steps of:
generating sensor calibration data relating the position and orientation of the 2-D sensor to the 3-D space;
generating reference data relating to at least three non-collinear geometric features of an ideal object;
generating the digital image containing the at least three non-collinear geometric features of the object wherein at least two of the features reside in the image generated by the one 2-D sensor;
locating each of the features in the digital image;
computing at least three non-parallel 3-D lines as a function of the feature locations and the sensor calibration data, each of the 3-D lines passing through its respective feature of the object at a reference point;
determining a starting point on each of the 3-D lines in the vicinity of its associated reference point;
determining the actual 3-D location of each feature on its associated 3-D line utilizing the reference data and its respective starting point on the 3-D line; and
utilizing the reference data and the actual 3-D location of each of the features to determine the position and orientation of the object.
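The step of computing at least three non-parallel 3-D lines from the feature locations and the sensor calibration data can be sketched as a back-projection of image points into viewing rays. This is a minimal sketch assuming a pinhole camera model with intrinsic matrix `K` and extrinsic rotation `R` and camera center `t`; the claim does not prescribe any particular calibration parameterization, so all names here are hypothetical.

```python
import numpy as np

def feature_rays(pixels, K, R, t):
    """Back-project located 2-D image features into 3-D lines (rays)
    in the world frame, one per feature, using the sensor calibration
    data. Each ray passes through the camera center and the feature's
    projection, and therefore through the feature on the object.

    pixels : (N, 2) array of feature locations in the digital image.
    K      : (3, 3) camera intrinsic matrix (calibration data).
    R, t   : camera-to-world rotation (3, 3) and camera center (3,)
             (extrinsic calibration data).
    """
    pts_h = np.column_stack([pixels, np.ones(len(pixels))])  # homogeneous pixels
    dirs_cam = (np.linalg.inv(K) @ pts_h.T).T                # ray directions, camera frame
    dirs_world = (R @ dirs_cam.T).T                          # rotate into the 3-D space
    dirs_world /= np.linalg.norm(dirs_world, axis=1, keepdims=True)
    return t, dirs_world                                     # common origin, unit directions
```

With a single camera all rays share the camera center as origin, so rays through distinct image features are automatically non-parallel, as the claim requires.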
Abstract
A method and system for automatically determining the position and orientation of an object by utilizing as few as a single digital image generated by as few as a single camera without the use of structured light. The digital image contains at least three non-collinear geometric features of the object. The three features may be either coplanar or non-coplanar. The features or targets are viewed such that perspective information is present in the digital image. In a single-camera system the geometric features are points, and in a multi-camera system the features are typically combinations of points and lines. The locations of the features are determined and processed within a programmed computer together with reference data and camera calibration data to provide at least three non-parallel 3-D lines. The 3-D lines are utilized by an iterative algorithm to obtain data relating to the position and orientation of the object in 3-D space. The resultant data is subsequently utilized to calculate an offset of the object from the camera. The offset is then transformed into the coordinate system or frame of a peripheral device such as a robot, programmable controller, numerically controlled machine, etc. Finally, the programmed computer transfers the transformed offset to the peripheral device, which utilizes the transformed offset to modify its preprogrammed path.
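Once the actual 3-D location of each feature is known, the final determination of position and orientation from the reference data amounts to rigidly aligning the ideal object's feature points onto the measured points. The abstract's iterative algorithm is not detailed in this excerpt, so the sketch below substitutes a standard least-squares rigid fit (the Kabsch/SVD method) purely for illustration; function and variable names are assumptions.

```python
import numpy as np

def object_pose(ref_pts, actual_pts):
    """Estimate the rigid transform (R, t) mapping the ideal object's
    reference feature locations onto their measured 3-D locations,
    i.e. the object's orientation and position, via a least-squares
    SVD fit (Kabsch). ref_pts, actual_pts: (N, 3) arrays, N >= 3
    non-collinear points in corresponding order.
    """
    ref_c = ref_pts - ref_pts.mean(axis=0)        # center both point sets
    act_c = actual_pts - actual_pts.mean(axis=0)
    H = ref_c.T @ act_c                           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = actual_pts.mean(axis=0) - R @ ref_pts.mean(axis=0)
    return R, t
```

Three non-collinear correspondences suffice to determine the pose uniquely, which matches the claim's minimum feature count.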
41 Claims
12. Method for automatically generating offset data for use by a programmable peripheral device having a coordinate frame such as a robot to move to an object in 3-D space, the method utilizing as few as one digital image generated by as few as one 2-D sensor, the method comprising the steps of:
generating sensor calibration data relating the position and orientation of the 2-D sensor to the 3-D space;
generating reference data relating to at least three non-collinear geometric features of an ideal object;
generating the digital image containing the at least three non-collinear geometric features of the object wherein at least two of the features reside in the image generated by the one 2-D sensor;
locating each of the features in the digital image;
computing at least three non-parallel 3-D lines as a function of the feature locations and the sensor calibration data, each of the 3-D lines passing through its respective feature of the object at a reference point;
determining a starting point on each of the 3-D lines in the vicinity of its associated reference point;
determining the actual 3-D location of each feature on its associated 3-D line utilizing the reference data and its respective starting point on the 3-D line;
utilizing the reference data and the actual 3-D location of each of the features to determine the position and orientation of the object;
calculating an offset of the object from the 2-D sensor as a function of the position and orientation of the object; and
transforming the offset of the object to the coordinate frame of the peripheral device.
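The last two steps above, calculating the object's offset from the 2-D sensor and transforming it into the peripheral device's coordinate frame, compose naturally as homogeneous transforms. A minimal sketch, assuming the camera-to-robot transform is known from calibration; all names are hypothetical:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation (3, 3) and translation (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def offset_in_robot_frame(T_cam_obj, T_robot_cam):
    """Express the object's offset, measured in the 2-D sensor's frame
    (T_cam_obj), in the coordinate frame of the peripheral device by
    composing transforms: robot<-camera followed by camera<-object.
    """
    return T_robot_cam @ T_cam_obj
```

The resulting transform is what a robot controller would consume to modify its preprogrammed path.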
22. A vision system for automatically determining the position and orientation of an object in 3-D space from as few as one digital image generated by as few as one 2-D sensor, the system comprising:
first means for storing reference data relating to at least three non-collinear geometric features of an ideal object;
2-D sensor means for generating the digital image containing the at least three non-collinear geometric features of the object, the 2-D sensor means including the one 2-D sensor wherein at least two of the features reside in the image generated by the one 2-D sensor;
second means for storing sensor calibration data relating the position and orientation of the 2-D sensor means to the 3-D space;
means for locating each of the features in the digital image;
means for computing at least three non-parallel 3-D lines as a function of the feature locations and the sensor calibration data, each of the 3-D lines passing through its respective feature of the object at a reference point;
means for determining a starting point on each of the 3-D lines in the vicinity of its associated reference point;
means for determining the actual 3-D location of each feature on its associated 3-D line utilizing the reference data and its respective starting point on the 3-D line; and
means for utilizing the reference data and the actual 3-D locations of each of the features to determine the position and orientation of the object.
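The paired "determining" elements, a starting point on each 3-D line followed by the actual feature location on that line, suggest an iterative depth search along the rays. As one illustrative realization (not the patent's disclosed algorithm), a Gauss-Newton iteration can adjust each feature's depth from its starting point until the pairwise feature distances match the reference data; every name and the solver choice are assumptions.

```python
import numpy as np

def locate_on_rays(origin, dirs, ref_dists, s0, iters=50):
    """Find the actual 3-D location of each feature on its 3-D line by
    adjusting the depth s_i along each ray, starting from s0, until
    pairwise distances match the ideal object's reference data.

    origin    : (3,) common ray origin (camera center).
    dirs      : (N, 3) unit ray directions.
    ref_dists : dict {(i, j): distance} from the reference data.
    s0        : (N,) starting depths near the reference points.
    """
    s = np.asarray(s0, float).copy()
    pairs = sorted(ref_dists)
    for _ in range(iters):
        pts = origin + s[:, None] * dirs
        # residuals: squared pairwise distance minus squared reference distance
        r = np.array([(pts[i] - pts[j]) @ (pts[i] - pts[j]) - ref_dists[i, j]**2
                      for i, j in pairs])
        # Jacobian of each residual with respect to the ray depths
        J = np.zeros((len(pairs), len(s)))
        for k, (i, j) in enumerate(pairs):
            diff = pts[i] - pts[j]
            J[k, i] = 2.0 * diff @ dirs[i]
            J[k, j] = -2.0 * diff @ dirs[j]
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton update
        s += step
        if np.linalg.norm(step) < 1e-12:
            break
    return origin + s[:, None] * dirs
```

The starting point matters because the distance constraints admit multiple solutions; a starting depth near the reference point steers the iteration toward the physically correct one.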
32. A vision system for automatically generating offset data for use by a programmable peripheral device having a coordinate frame such as a robot to enable the peripheral device to move to an object in 3-D space from as few as one digital image generated by as few as one 2-D sensor, the system comprising:
first means for storing reference data relating to at least three non-collinear geometric features of an ideal object;
2-D sensor means for generating the digital image containing the at least three non-collinear geometric features of the object, the 2-D sensor means including the one 2-D sensor wherein at least two of the features reside in the image generated by the one 2-D sensor;
second means for storing sensor calibration data relating the position and orientation of the 2-D sensor means to the 3-D space;
means for locating each of the features in the digital image;
means for computing at least three non-parallel 3-D lines as a function of the feature locations and the sensor calibration data, each of the 3-D lines passing through its respective feature of the object at a reference point;
means for determining a starting point on each of the 3-D lines in the vicinity of its associated reference point;
means for determining the actual 3-D location of each feature on its associated 3-D line utilizing the reference data and its respective starting point on the 3-D line;
means for utilizing the reference data and the actual 3-D locations of each of the features to determine the position and orientation of the object;
means for calculating an offset of the object from the 2-D sensor means as a function of the position and orientation of the object; and
means for transforming the offset of the object to the coordinate frame of the peripheral device.
Specification