SYSTEM AND METHOD FOR ROBUST CALIBRATION BETWEEN A MACHINE VISION SYSTEM AND A ROBOT
First Claim
1. A method for determining calibration between a machine vision system and a robot, the method comprising:
obtaining, using a camera fixed to an effector of the robot, an initial image of a calibration object fixed in space as the robot occupies an initial pose within a workplace for the robot, and a subsequent image of the calibration object as the robot occupies a subsequent pose within the workplace for the robot, the initial pose and the subsequent pose being different, the calibration object comprising a first object feature and a second object feature, the first object feature and the second object feature located at a fixed, known distance relative to one another, the robot having a robot coordinate system, the camera having a camera coordinate system, the effector having an effector coordinate system, the calibration object having an object coordinate system;
identifying an object-robot transform between the object coordinate system and the robot coordinate system;
identifying a camera-effector transform between the camera coordinate system and the effector coordinate system;
locating a first initial image feature and a second initial image feature in the initial image and a first subsequent image feature and a second subsequent image feature in the subsequent image, the first initial image feature and the first subsequent image feature corresponding to the first object feature and the second initial image feature and the second subsequent image feature corresponding to the second object feature;
calculating, using the fixed, known distance, the initial pose, the subsequent pose, the object-robot transform, and the camera-effector transform, a predicted first initial image feature and a predicted second initial image feature for the initial image and a predicted first subsequent image feature and a predicted second subsequent image feature for the subsequent image, the predicted first initial image feature and the predicted first subsequent image feature corresponding to the first object feature and the predicted second initial image feature and the predicted second subsequent image feature corresponding to the second object feature;
minimizing, by varying the object-robot transform or the camera-effector transform, a discrepancy between at least one of the first initial image feature and the predicted first initial image feature, the second initial image feature and the predicted second initial image feature, the first subsequent image feature and the predicted first subsequent image feature, and the second subsequent image feature and the predicted second subsequent image feature, thereby producing optimized transforms;
and calibrating the machine vision system and the robot using the optimized transforms.
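The calculating step above chains the object-robot transform, the robot's reported pose, and the camera-effector transform, then projects the object feature into the image; the minimizing step compares this prediction with the located feature. A minimal sketch of that forward prediction, assuming 4x4 homogeneous transforms, a z-rotation pose helper, and illustrative pinhole intrinsics (focal length f, principal point c) that are not part of the claim:

```python
import numpy as np

def make_pose(rz_deg, t):
    """Homogeneous 4x4 transform: rotation about z by rz_deg degrees, translation t."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

def predict_feature(p_obj, T_obj2robot, T_eff2robot, T_cam2eff,
                    f=800.0, c=(320.0, 240.0)):
    """Predict the pixel location of an object feature p_obj (3-vector).

    Chains the claim's coordinate systems: object -> robot base -> effector
    -> camera, then projects with a pinhole model (f and c are assumed
    intrinsics, not recited in the claim).
    """
    p = np.append(p_obj, 1.0)
    p_robot = T_obj2robot @ p                      # feature in robot base coords
    p_eff = np.linalg.inv(T_eff2robot) @ p_robot   # feature in effector coords
    p_cam = np.linalg.inv(T_cam2eff) @ p_eff       # feature in camera coords
    x, y, z = p_cam[:3]
    return np.array([f * x / z + c[0], f * y / z + c[1]])
```

Calibration then amounts to choosing the object-robot and camera-effector transforms that make `predict_feature` agree with the features actually located in the images.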
Abstract
A system and method for robustly calibrating a vision system and a robot are provided. The system and method enable a plurality of cameras to be calibrated into a robot base coordinate system, so that a machine vision/robot control system can accurately identify the locations of objects of interest in robot base coordinates.
20 Claims
1. A method for determining calibration between a machine vision system and a robot, the method comprising:
obtaining, using a camera fixed to an effector of the robot, an initial image of a calibration object fixed in space as the robot occupies an initial pose within a workplace for the robot, and a subsequent image of the calibration object as the robot occupies a subsequent pose within the workplace for the robot, the initial pose and the subsequent pose being different, the calibration object comprising a first object feature and a second object feature, the first object feature and the second object feature located at a fixed, known distance relative to one another, the robot having a robot coordinate system, the camera having a camera coordinate system, the effector having an effector coordinate system, the calibration object having an object coordinate system;
identifying an object-robot transform between the object coordinate system and the robot coordinate system;
identifying a camera-effector transform between the camera coordinate system and the effector coordinate system;
locating a first initial image feature and a second initial image feature in the initial image and a first subsequent image feature and a second subsequent image feature in the subsequent image, the first initial image feature and the first subsequent image feature corresponding to the first object feature and the second initial image feature and the second subsequent image feature corresponding to the second object feature;
calculating, using the fixed, known distance, the initial pose, the subsequent pose, the object-robot transform, and the camera-effector transform, a predicted first initial image feature and a predicted second initial image feature for the initial image and a predicted first subsequent image feature and a predicted second subsequent image feature for the subsequent image, the predicted first initial image feature and the predicted first subsequent image feature corresponding to the first object feature and the predicted second initial image feature and the predicted second subsequent image feature corresponding to the second object feature;
minimizing, by varying the object-robot transform or the camera-effector transform, a discrepancy between at least one of the first initial image feature and the predicted first initial image feature, the second initial image feature and the predicted second initial image feature, the first subsequent image feature and the predicted first subsequent image feature, and the second subsequent image feature and the predicted second subsequent image feature, thereby producing optimized transforms;
and calibrating the machine vision system and the robot using the optimized transforms.
Dependent claims: 2, 3, 4, 5, 6, 7.
8. A method for determining calibration between a machine vision system and a robot, the method comprising:
obtaining, using a primary camera fixed to an effector of a robot, a primary initial image of a calibration object fixed in space as the robot occupies a primary initial pose within a workplace for the robot, and a primary subsequent image of the calibration object as the robot occupies a primary subsequent pose within the workplace for the robot, the primary initial pose and the primary subsequent pose being different, the calibration object comprising a first object feature and a second object feature, the first object feature and the second object feature located at a fixed, known distance relative to one another, the robot having a robot coordinate system, the primary camera having a primary camera coordinate system, the effector having an effector coordinate system, the calibration object having an object coordinate system;
obtaining, using a secondary camera fixed to the effector of the robot, a secondary initial image of the calibration object as the robot occupies a secondary initial pose within the workplace for the robot, and a secondary subsequent image of the calibration object as the robot occupies a secondary subsequent pose within the workplace for the robot, the secondary initial pose and the secondary subsequent pose being different, the secondary camera having a secondary camera coordinate system;
identifying camera-effector transforms from each of the primary and secondary camera coordinate systems into the effector coordinate system;
identifying an object-robot transform between the object coordinate system and the robot coordinate system;
locating a first primary initial image feature and a second primary initial image feature in the primary initial image, a first primary subsequent image feature and a second primary subsequent image feature in the primary subsequent image, a first secondary initial image feature and a second secondary initial image feature in the secondary initial image, and a first secondary subsequent image feature and a second secondary subsequent image feature in the secondary subsequent image, the first primary initial image feature, the first primary subsequent image feature, the first secondary initial image feature, and the first secondary subsequent image feature corresponding to the first object feature, and the second primary initial image feature, the second primary subsequent image feature, the second secondary initial image feature, and the second secondary subsequent image feature corresponding to the second object feature;
calculating, using the fixed, known distance, the primary initial pose, the primary subsequent pose, the secondary initial pose, the secondary subsequent pose, the camera-effector transforms, and the object-robot transform, a predicted first primary initial image feature and a predicted second primary initial image feature for the primary initial image, a predicted first primary subsequent image feature and a predicted second primary subsequent image feature for the primary subsequent image, a predicted first secondary initial image feature and a predicted second secondary initial image feature for the secondary initial image, and a predicted first secondary subsequent image feature and a predicted second secondary subsequent image feature for the secondary subsequent image, the predicted first primary initial image feature, the predicted first primary subsequent image feature, the predicted first secondary initial image feature, and the predicted first secondary subsequent image feature corresponding to the first object feature, and the predicted second primary initial image feature, the predicted second primary subsequent image feature, the predicted second secondary initial image feature, and the predicted second secondary subsequent image feature corresponding to the second object feature;
minimizing, by varying the camera-effector transforms and the object-robot transform, a discrepancy between at least one of the first primary initial image feature and the predicted first primary initial image feature, the second primary initial image feature and the predicted second primary initial image feature, the first primary subsequent image feature and the predicted first primary subsequent image feature, the second primary subsequent image feature and the predicted second primary subsequent image feature, the first secondary initial image feature and the predicted first secondary initial image feature, the second secondary initial image feature and the predicted second secondary initial image feature, the first secondary subsequent image feature and the predicted first secondary subsequent image feature, and the second secondary subsequent image feature and the predicted second secondary subsequent image feature, thereby producing optimized transforms;
and calibrating the machine vision system and the robot using the optimized transforms.
Dependent claims: 9, 10, 11, 12, 13, 14, 15.
16. A method for determining calibration between a machine vision system and a robot, the method comprising:
obtaining, using two or more cameras fixed to an effector of the robot, a plurality of images of a calibration object fixed in space as the robot moves the two or more cameras to a plurality of different poses within a workplace for the robot, the calibration object comprising a first object feature and a second object feature, the first object feature and the second object feature located at a fixed, known distance relative to one another, the robot having a robot coordinate system, the two or more cameras each having a camera coordinate system, the effector having an effector coordinate system, the calibration object having an object coordinate system;
identifying a camera-effector transform between each of the two or more camera coordinate systems and the effector coordinate system;
identifying an object-robot transform between the object coordinate system and the robot coordinate system;
identifying image features in the plurality of images, the image features corresponding to the first object feature and the second object feature;
calculating, using the fixed, known distance, the plurality of poses, the two or more camera-effector transforms, and the object-robot transform, predicted features in the plurality of images, the predicted features corresponding to the first object feature and the second object feature;
minimizing, by simultaneously varying the two or more camera-effector transforms and the object-robot transform, a discrepancy between the predicted features and the image features, thereby producing optimized transforms;
and calibrating the machine vision system and the robot using the optimized transforms.
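The simultaneous variation recited above can be sketched as one joint least-squares problem over all camera-effector transforms and the object-robot transform. The planar (2-D) transforms, the orthographic toy "image" model, the particular pose values, and the use of scipy.optimize.least_squares are all illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def T2(theta, tx, ty):
    """Planar homogeneous transform: rotation theta, translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

# Known effector-to-robot poses, and two object features at a known spacing.
robot_poses = [T2(t, 0.3 * t, 0.1) for t in (0.0, 0.4, 0.9, 1.5, 2.2)]
features = [np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, 1.0])]

def observe(T_obj2robot, cams, T_eff2robot):
    """Map each feature into each camera frame (orthographic toy 'image')."""
    out = []
    for T_cam2eff in cams:
        M = np.linalg.inv(T_cam2eff) @ np.linalg.inv(T_eff2robot) @ T_obj2robot
        out.extend((M @ f)[:2] for f in features)
    return out

def residuals(params, observations):
    """Stack predicted-minus-observed feature discrepancies over all poses."""
    T_obj = T2(*params[0:3])                       # object-robot transform
    cams = [T2(*params[3:6]), T2(*params[6:9])]    # two camera-effector transforms
    r = []
    for T_eff, obs_pose in zip(robot_poses, observations):
        r.extend(p - o for p, o in zip(observe(T_obj, cams, T_eff), obs_pose))
    return np.concatenate(r)

# Synthetic ground truth: (object-robot, camera1-effector, camera2-effector),
# each as (theta, tx, ty); recover all three simultaneously from a perturbed guess.
truth = np.array([0.2, 1.0, 0.5, 0.1, 0.05, -0.02, -0.3, -0.04, 0.06])
obs = [observe(T2(*truth[0:3]), [T2(*truth[3:6]), T2(*truth[6:9])], T)
       for T in robot_poses]
fit = least_squares(residuals, truth + 0.1, args=(obs,))
```

Because the synthetic observations here are noise-free, the optimizer can drive the discrepancy essentially to zero, jointly recovering both camera-effector transforms and the object-robot transform.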
17. A method for calibration between a machine vision system and a robot, the method comprising:
obtaining a set of pairs of camera poses and robot poses;
analyzing corresponding robot motions and camera motions among the obtained set of pairs;
detecting outliers based on the analysis;
re-ordering the set of pairs, with the detected outliers removed, to obtain a set of pairs of camera poses and robot poses with suitable motions;
utilizing the set of pairs of camera poses and robot poses with suitable motions to perform calibration by obtaining a plurality of images of a calibration object in at least two pairs of camera poses and robot poses and minimizing discrepancies between features of the calibration object identified in the plurality of images and features of the calibration object predicted from the robot poses, using a camera coordinate system for the camera, a calibration object coordinate system for the calibration object, a robot coordinate system for a portion of the robot that is stationary, and an effector coordinate system for a portion of the robot that moves to occupy the robot poses.
Dependent claims: 18, 19, 20.
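One way to realize the analyzing and outlier-detecting steps is a rotation-consistency check: for a camera rigidly mounted on the effector, each relative robot motion and the corresponding relative camera motion are conjugate rotations (the hand-eye relation AX = XB), so their rotation angles must agree. The pairwise voting scheme, the tolerance, and the helper functions below are illustrative assumptions, not the claim's prescribed analysis:

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a (not necessarily unit) axis, via Rodrigues' formula."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rot_angle(R):
    """Rotation angle (radians) of a 3x3 rotation matrix."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def detect_outliers(eff_rots, cam_rots, tol=0.05):
    """Vote on pose pairs by agreement between robot motion and camera motion.

    eff_rots: effector-to-base rotations; cam_rots: object-to-camera rotations.
    Relative motions A = Ej^-1 Ei and B = Cj Ci^-1 are conjugate, so their
    rotation angles must match; poses inconsistent with a majority of the
    others are reported as outliers.
    """
    n = len(eff_rots)
    score = np.zeros(n, int)
    for i in range(n):
        for j in range(i + 1, n):
            A = eff_rots[j].T @ eff_rots[i]          # relative robot motion
            B = cam_rots[j] @ cam_rots[i].T          # relative camera motion
            if abs(rot_angle(A) - rot_angle(B)) > tol:
                score[i] += 1
                score[j] += 1
    return [i for i in range(n) if score[i] > (n - 1) / 2.0]
```

Poses whose motions disagree with most of the others are flagged, matching the claim's step of removing detected outliers before the pairs are used for calibration.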
Specification