System and method for tying together machine vision coordinate spaces in a guided assembly environment
Abstract
This invention provides a system and method that ties together the coordinate spaces of two locations at calibration time using features on a runtime workpiece instead of a calibration target. Three scenarios are contemplated: the same workpiece features are imaged and identified at both locations; the imaged features of the runtime workpiece differ at each location (with a CAD or measured rendition of the workpiece available); and the first location, containing a motion stage, has been calibrated to the motion stage using hand-eye calibration, and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between the locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating features at each location, and then using the accumulated features to tie together the two coordinate spaces.
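In the second scenario above, the workpiece's pose (found by matching CAD-modeled features) is determined separately in each location's coordinate space, and the two spaces can then be tied by composing one pose with the inverse of the other. The following is a minimal sketch of that composition, assuming 2D rigid transforms; every numeric value is invented for illustration, and the patent does not prescribe any particular implementation.

```python
import math

# Sketch: tie two coordinate spaces by composing the workpiece pose found
# at each location (features matched against a CAD rendition).  All
# transforms are 2D rigid; every numeric value is an invented example.

def make_pose(theta, tx, ty):
    """3x3 homogeneous 2D rigid transform (rotation theta, translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert_rigid(m):
    """Invert a rigid transform: transpose the rotation, back-rotate the translation."""
    c, s = m[0][0], m[1][0]
    tx, ty = m[0][2], m[1][2]
    return [[c, s, -(c * tx + s * ty)],
            [-s, c, s * tx - c * ty],
            [0.0, 0.0, 1.0]]

def apply(m, x, y):
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Workpiece pose (model space -> location space) recovered at each location.
pose_loc1 = make_pose(math.radians(30), 100.0, 50.0)
pose_loc2 = make_pose(math.radians(-10), 400.0, 80.0)

# Transform tying space 1 to space 2: space1 -> model -> space2.
tie_1_to_2 = matmul(pose_loc2, invert_rigid(pose_loc1))

# A CAD point mapped into location 1's space and then through the tie lands
# where the same physical feature appears in location 2's space.
p1 = apply(pose_loc1, 25.0, 10.0)
p2_direct = apply(pose_loc2, 25.0, 10.0)
p2_via_tie = apply(tie_1_to_2, *p1)
```

Because the tie is built from exact poses here, mapping through it agrees with locating the feature directly at the second location.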
Claims (20)
1. A method for calibrating a vision system in an environment in which a first workpiece at a first location is transferred by a manipulator to a second location, wherein an operation performed on the first workpiece relies upon tying together coordinate spaces of the first location and the second location, the method comprising the steps of:
arranging at least one vision system camera to image the first workpiece when positioned at the first location and to image the first workpiece when positioned at the second location;
calibrating at least one vision system camera with respect to the first location to derive first calibration data which defines a first coordinate space, and at least one vision system camera with respect to the second location to derive second calibration data which defines a second coordinate space;
identifying features of at least the first workpiece at the first location from a first image of the first workpiece;
based on the identified features in the first image, locating the first workpiece with respect to the first coordinate space relative to the first location;
gripping and moving, with the manipulator, at least one time, the first workpiece to a predetermined manipulator position at the second location;
acquiring a second image of the first workpiece at the second location; and
based upon the identified features in the second image, locating the first workpiece with respect to the second coordinate space relative to the second location and tying together the first coordinate space and the second coordinate space.
(Dependent claims 2-11 not shown.)
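When the same workpiece features are located in both coordinate spaces, as in claim 1, a transform tying the two spaces can be fit from the point correspondences. The sketch below uses a least-squares 2D rigid fit (2D Procrustes alignment) as an assumed stand-in; the claim specifies no particular solver, and all coordinates are invented for illustration.

```python
import math

# Sketch: fit the transform tying space 1 to space 2 from the SAME features
# located in both images, via a least-squares 2D rigid fit (Procrustes).
# The claim prescribes no particular solver; all coordinates are invented.

def fit_rigid_2d(pts1, pts2):
    """Return (theta, tx, ty) best mapping pts1 onto pts2."""
    n = len(pts1)
    cx1 = sum(x for x, _ in pts1) / n
    cy1 = sum(y for _, y in pts1) / n
    cx2 = sum(x for x, _ in pts2) / n
    cy2 = sum(y for _, y in pts2) / n
    num = den = 0.0  # cross terms of the centered point sets
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        a, b = x1 - cx1, y1 - cy1
        c, d = x2 - cx2, y2 - cy2
        num += a * d - b * c
        den += a * c + b * d
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return theta, cx2 - (c * cx1 - s * cy1), cy2 - (s * cx1 + c * cy1)

# Features of the first workpiece located at the first location ...
pts1 = [(0.0, 0.0), (40.0, 5.0), (10.0, 30.0), (35.0, 28.0)]
# ... and the same features located at the second location (generated here
# from a known transform so the fit can be checked).
true_theta, true_tx, true_ty = math.radians(15), 12.0, -7.0
c, s = math.cos(true_theta), math.sin(true_theta)
pts2 = [(c * x - s * y + true_tx, s * x + c * y + true_ty) for x, y in pts1]

theta, tx, ty = fit_rigid_2d(pts1, pts2)
```

With exact correspondences the fit recovers the generating transform; with noisy feature locations it returns the least-squares best rigid tie, which is why accumulating features from multiple workpiece poses (as the abstract notes) improves quality.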
12. A method for calibrating a vision system in an environment in which a first workpiece at a first location is transferred by a manipulator to a second location, wherein an operation performed on the first workpiece relies upon tying together coordinate spaces of the first location and the second location, the method comprising the steps of:
(a) arranging at least one vision system camera to image the first workpiece at the first location and to image the second location;
(b) hand-eye calibrating at least one vision system camera with respect to the first location to derive first calibration data;
(c) positioning the first workpiece at the first location;
(d) moving the first workpiece from the first location to the second location;
(e) acquiring an image and locating features on the first workpiece;
(f) moving the first workpiece to the first location from the second location and changing a pose of the first workpiece at the first location by moving the motion rendering device to a new known pose;
(g) iterating steps (d)-(f) until feature locations and other data relevant to hand-eye calibration are accumulated; and
(h) using the accumulated data to hand-eye calibrate at least one vision system camera with respect to the second location, and tying together the first coordinate space and the second coordinate space by the common coordinate space relative to the motion rendering device obtained from the hand-eye calibration.
(Dependent claims 13-15 not shown.)
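The iteration of steps (c)-(h) above can be sketched structurally as follows. The stage poses, the simulated camera, and the simple rigid fit standing in for a full hand-eye solver are all assumptions for illustration; a real system would drive the motion stage and manipulator and run a complete hand-eye calibration.

```python
import math

# Structural sketch of the claim-12 loop.  Every concrete number and the
# simulated camera below are invented; a least-squares 2D rigid fit stands
# in for a full hand-eye solver.

def simulated_camera2(stage_x, stage_y):
    """Toy camera at location 2: feature pixel is a rigid function of stage pose."""
    theta = math.radians(25)                      # unknown to the calibrator
    c, s = math.cos(theta), math.sin(theta)
    return (c * stage_x - s * stage_y + 300.0,
            s * stage_x + c * stage_y + 120.0)

accumulated = []                                  # (known stage pose, located feature)
for pose in [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (8.0, 6.0)]:
    # (d)-(e): the manipulator carries the part to location 2; an image is
    # acquired and the feature located.  (f): the part returns to location 1
    # and the stage moves to the next known pose before the next pass.
    accumulated.append((pose, simulated_camera2(*pose)))

# (h): recover the stage-to-camera-2 relation from the accumulated pairs.
n = len(accumulated)
cx1 = sum(p[0][0] for p in accumulated) / n
cy1 = sum(p[0][1] for p in accumulated) / n
cx2 = sum(p[1][0] for p in accumulated) / n
cy2 = sum(p[1][1] for p in accumulated) / n
num = den = 0.0
for (x1, y1), (x2, y2) in accumulated:
    a, b = x1 - cx1, y1 - cy1
    c2, d = x2 - cx2, y2 - cy2
    num += a * d - b * c2
    den += a * c2 + b * d
theta_est = math.atan2(num, den)                  # recovered stage-to-camera rotation
```

Because both locations are thereby related to the same motion stage, the stage's coordinate space serves as the common space tying the two camera spaces together.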
16. A system for calibrating a vision system in an environment in which a first workpiece at a first location is transferred by a manipulator to a second location, wherein an operation performed on the first workpiece relies upon tying together coordinate spaces of the first location and the second location, comprising:
at least one vision system camera arranged to image the first workpiece when positioned at the first location and to image the first workpiece when positioned at the second location;
a calibration process that calibrates at least one vision system camera with respect to the first location to derive first calibration data, and the at least one vision system camera with respect to the second location to derive second calibration data;
a feature extraction process that identifies features of at least the first workpiece at the first location from a first image of the first workpiece, that locates, based on the identified features in the first image, the first workpiece with respect to a first coordinate space relative to the first location, and that locates, based upon the identified features in a second image at the second location, the first workpiece with respect to a second coordinate space relative to the second location; and
a calibration process that ties together the first coordinate space and the second coordinate space.
(Dependent claims 17-20 not shown.)
Specification