System and method for robust calibration between a machine vision system and a robot
Abstract
A system and method for robustly calibrating a vision system and a robot is provided. The system and method enable a plurality of cameras to be calibrated into a robot base coordinate system, so that a machine vision/robot control system can accurately identify the locations of objects of interest within robot base coordinates.
96 Citations
16 Claims
1. A method for determining calibration between a machine vision system and a robot, the method comprising:
obtaining, using a camera fixed in space, an initial image of a calibration object fixed to an effector of the robot as the robot occupies an initial pose within a workplace for the robot, and a subsequent image of the calibration object as the robot occupies a subsequent pose within the workplace for the robot, the initial pose and the subsequent pose being different, the calibration object comprising a first object feature and a second object feature, the first object feature and the second object feature located at a fixed, known distance relative to one another, the robot having a robot coordinate system, the camera having a camera coordinate system, the effector having an effector coordinate system, the calibration object having an object coordinate system;
identifying a camera-robot transform between the camera coordinate system and the robot coordinate system;
identifying an object-effector transform between the object coordinate system and the effector coordinate system;
locating a first initial image feature and a second initial image feature in the initial image and a first subsequent image feature and a second subsequent image feature in the subsequent image, the first initial image feature and the first subsequent image feature corresponding to the first object feature and the second initial image feature and the second subsequent image feature corresponding to the second object feature;
calculating, using the fixed, known distance, the initial pose, the subsequent pose, the camera-robot transform, and the object-effector transform, a predicted first initial image feature and a predicted second initial image feature for the initial image and a predicted first subsequent image feature and a predicted second subsequent image feature for the subsequent image, the predicted first initial image feature and the predicted first subsequent image feature corresponding to the first object feature and the predicted second initial image feature and the predicted second subsequent image feature corresponding to the second object feature;
minimizing, by varying the camera-robot transform or the object-effector transform, a discrepancy between at least one of the first initial image feature and the predicted first initial image feature, the second initial image feature and the predicted second initial image feature, the first subsequent image feature and the predicted first subsequent image feature, and the second subsequent image feature and the predicted second subsequent image feature, thereby producing optimized transforms; and
calibrating the machine vision system and the robot using the optimized transforms.
Dependent claims: 2, 3, 4, 5, 6, 7.
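The predict-then-minimize structure of claim 1 can be sketched numerically. The following is a minimal planar (2-D) analogue in Python, not the patented implementation: rigid transforms are 3x3 homogeneous matrices standing in for full SE(3) poses, the "camera" is orthographic rather than a pinhole model, and every transform, pose, and feature location is invented for illustration.

```python
import math

def se2(x, y, th):
    """Planar rigid transform as a 3x3 homogeneous matrix (2-D stand-in for SE(3))."""
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def mul(a, b):
    """Compose two 3x3 transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(t, p):
    """Map a 2-D point through a transform."""
    return (t[0][0] * p[0] + t[0][1] * p[1] + t[0][2],
            t[1][0] * p[0] + t[1][1] * p[1] + t[1][2])

# Hypothetical ground truth: camera-robot transform (robot -> camera coords),
# object-effector transform (object -> effector coords), and the two robot
# poses (initial and subsequent, effector -> robot coords).
T_CAM_ROBOT = se2(-0.5, 0.2, 0.1)
T_EFF_OBJ = se2(0.03, 0.01, 0.0)
POSES = [se2(0.4, 0.3, 0.2), se2(0.6, 0.1, -0.4)]

# First and second object features, a fixed, known distance (0.1) apart.
FEATURES = [(0.0, 0.0), (0.1, 0.0)]

def predict(t_cam_robot, t_eff_obj, pose):
    """Predicted image features: chain object -> effector -> robot -> camera.
    (Orthographic 'camera': a real system would apply a pinhole projection.)"""
    chain = mul(t_cam_robot, mul(pose, t_eff_obj))
    return [apply(chain, f) for f in FEATURES]

# 'Located' image features, simulated here from the ground-truth transforms.
OBSERVED = [predict(T_CAM_ROBOT, T_EFF_OBJ, p) for p in POSES]

def discrepancy(t_cam_robot, t_eff_obj):
    """Sum of squared distances between located and predicted image features."""
    return sum((ox - px) ** 2 + (oy - py) ** 2
               for obs, pose in zip(OBSERVED, POSES)
               for (ox, oy), (px, py) in zip(obs, predict(t_cam_robot, t_eff_obj, pose)))

# The true transforms zero the discrepancy; the 'minimizing' step is sketched
# as a coarse 1-D search over just the x component of the camera-robot transform.
best_x = min((i / 1000.0 - 0.6 for i in range(201)),
             key=lambda x: discrepancy(se2(x, 0.2, 0.1), T_EFF_OBJ))
print(round(best_x, 3))  # -0.5
```

A practical implementation would refine all transform parameters jointly with a nonlinear least-squares solver; the 1-D search above only illustrates the discrepancy being minimized.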
8. A method for determining calibration between a machine vision system and a robot, the method comprising:
obtaining, using a primary camera fixed in space, a primary initial image of a calibration object fixed to an effector of the robot as the robot occupies a primary initial pose within a workplace for the robot, and a primary subsequent image of the calibration object as the robot occupies a primary subsequent pose within the workplace for the robot, the primary initial pose and the primary subsequent pose being different, the calibration object comprising a first object feature and a second object feature, the first object feature and the second object feature located at a fixed, known distance relative to one another, the robot having a robot coordinate system, the primary camera having a primary camera coordinate system, the effector having an effector coordinate system, the calibration object having an object coordinate system;
obtaining, using a secondary camera fixed in space, a secondary initial image of the calibration object as the robot occupies a secondary initial pose within the workplace for the robot, and a secondary subsequent image of the calibration object as the robot occupies a secondary subsequent pose within the workplace for the robot, the secondary initial pose and the secondary subsequent pose being different, the secondary camera having a secondary camera coordinate system;
identifying camera-robot transforms between each of the primary and secondary camera coordinate systems and the robot coordinate system;
identifying an object-effector transform between the object coordinate system and the effector coordinate system;
locating a first primary initial image feature and a second primary initial image feature in the primary initial image, a first primary subsequent image feature and a second primary subsequent image feature in the primary subsequent image, a first secondary initial image feature and a second secondary initial image feature in the secondary initial image, and a first secondary subsequent image feature and a second secondary subsequent image feature in the secondary subsequent image, the first primary initial image feature, the first primary subsequent image feature, the first secondary initial image feature, and the first secondary subsequent image feature corresponding to the first object feature, and the second primary initial image feature, the second primary subsequent image feature, the second secondary initial image feature, and the second secondary subsequent image feature corresponding to the second object feature;
calculating, using the fixed, known distance, the primary initial pose, the primary subsequent pose, the secondary initial pose, the secondary subsequent pose, the camera-robot transforms, and the object-effector transform, a predicted first primary initial image feature and a predicted second primary initial image feature for the primary initial image, a predicted first primary subsequent image feature and a predicted second primary subsequent image feature for the primary subsequent image, a predicted first secondary initial image feature and a predicted second secondary initial image feature for the secondary initial image, and a predicted first secondary subsequent image feature and a predicted second secondary subsequent image feature for the secondary subsequent image, the predicted first primary initial image feature, the predicted first primary subsequent image feature, the predicted first secondary initial image feature, and the predicted first secondary subsequent image feature corresponding to the first object feature, and the predicted second primary initial image feature, the predicted second primary subsequent image feature, the predicted second secondary initial image feature, and the predicted second secondary subsequent image feature corresponding to the second object feature;
minimizing, by varying the camera-robot transforms and the object-effector transform, a discrepancy between at least one of the first primary initial image feature and the predicted first primary initial image feature, the second primary initial image feature and the predicted second primary initial image feature, the first primary subsequent image feature and the predicted first primary subsequent image feature, the second primary subsequent image feature and the predicted second primary subsequent image feature, the first secondary initial image feature and the predicted first secondary initial image feature, the second secondary initial image feature and the predicted second secondary initial image feature, the first secondary subsequent image feature and the predicted first secondary subsequent image feature, and the second secondary subsequent image feature and the predicted second secondary subsequent image feature, thereby producing optimized transforms; and
calibrating the machine vision system and the robot using the optimized transforms.
Dependent claims: 9, 10, 11, 12, 13, 14, 15.
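What claim 8 adds over claim 1 is a single discrepancy spanning two cameras that share one object-effector transform. A hedged planar sketch of that coupling follows; as before, all transforms and poses are invented for illustration and a full implementation would use SE(3) poses and pinhole projection.

```python
import math

def se2(x, y, th):
    """Planar rigid transform as a 3x3 homogeneous matrix."""
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(t, p):
    return (t[0][0] * p[0] + t[0][1] * p[1] + t[0][2],
            t[1][0] * p[0] + t[1][1] * p[1] + t[1][2])

# Two object features a fixed, known distance (0.1) apart in object coords.
FEATURES = [(0.0, 0.0), (0.1, 0.0)]

# Hypothetical primary/secondary camera-robot transforms, ONE shared
# object-effector transform, and per-camera initial/subsequent poses.
T_CR = {"primary": se2(-0.5, 0.2, 0.1), "secondary": se2(0.7, -0.3, -0.2)}
T_EO = se2(0.03, 0.01, 0.0)
POSES = {"primary": [se2(0.4, 0.3, 0.2), se2(0.6, 0.1, -0.4)],
         "secondary": [se2(0.2, 0.5, 0.9), se2(0.5, 0.4, 0.3)]}

def predict(t_cr, t_eo, pose):
    """Chain object -> effector -> robot -> camera for both features."""
    chain = mul(t_cr, mul(pose, t_eo))
    return [apply(chain, f) for f in FEATURES]

# 'Located' image features, simulated from the ground-truth transforms.
OBSERVED = {cam: [predict(T_CR[cam], T_EO, p) for p in POSES[cam]] for cam in T_CR}

def joint_discrepancy(t_cr_by_cam, t_eo):
    """One objective over both cameras. The shared object-effector transform
    couples the per-camera residuals, which is why jointly varying all
    transforms constrains the solution better than calibrating each camera
    in isolation."""
    err = 0.0
    for cam, t_cr in t_cr_by_cam.items():
        for obs, pose in zip(OBSERVED[cam], POSES[cam]):
            for (ox, oy), (px, py) in zip(obs, predict(t_cr, t_eo, pose)):
                err += (ox - px) ** 2 + (oy - py) ** 2
    return err

print(joint_discrepancy(T_CR, T_EO))  # 0.0 at the ground truth
# Perturbing the shared object-effector transform is felt by BOTH cameras.
print(joint_discrepancy(T_CR, se2(0.05, 0.01, 0.0)) > 0.0)  # True
```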
16. A method for determining calibration between a machine vision system and a robot, the method comprising:
obtaining, using two or more cameras fixed in space, a plurality of images of a calibration object fixed to an effector of the robot as the robot moves the calibration object to a plurality of different poses within a workplace for the robot, the calibration object comprising a first object feature and a second object feature, the first object feature and the second object feature located at a fixed, known distance relative to one another, the robot having a robot coordinate system, the two or more cameras each having a camera coordinate system, the effector having an effector coordinate system, the calibration object having an object coordinate system;
identifying a camera-robot transform between each of the two or more camera coordinate systems and the robot coordinate system;
identifying an object-effector transform between the object coordinate system and the effector coordinate system;
identifying image features in the plurality of images, the image features corresponding to the first object feature and the second object feature;
calculating, using the fixed, known distance, the plurality of different poses, the two or more camera-robot transforms, and the object-effector transform, predicted features in the plurality of images, the predicted features corresponding to the first object feature and the second object feature;
minimizing, by simultaneously varying the two or more camera-robot transforms and the object-effector transform, a discrepancy between the predicted features and the image features, thereby producing optimized transforms; and
calibrating the machine vision system and the robot using the optimized transforms.
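Claim 16's "simultaneously varying" step amounts to stacking every camera's residuals into one objective over all transform parameters at once. Below is a deliberately naive planar sketch: nine invented parameters (three per camera-robot transform for two cameras, plus three for the object-effector transform) refined by finite-difference gradient descent from a perturbed guess. A production system would use a proper nonlinear least-squares solver rather than this toy descent.

```python
import math

def se2(p):
    """Planar rigid transform from an (x, y, theta) parameter triple."""
    x, y, th = p
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def xform(t, q):
    return (t[0][0] * q[0] + t[0][1] * q[1] + t[0][2],
            t[1][0] * q[0] + t[1][1] * q[1] + t[1][2])

# Two object features a fixed, known distance (0.1) apart; five robot poses.
FEATURES = [(0.0, 0.0), (0.1, 0.0)]
POSES = [se2((0.1 * k, 0.4 - 0.05 * k, 0.3 * k - 0.5)) for k in range(5)]

# Hypothetical ground-truth parameter vector: camera 1 | camera 2 | object-effector.
TRUE = [-0.5, 0.2, 0.1, 0.7, -0.3, -0.2, 0.03, 0.01, 0.0]

def predict(params, cam, pose):
    """Chain object -> effector -> robot -> camera for one camera and pose."""
    t_cr, t_eo = se2(params[3 * cam:3 * cam + 3]), se2(params[6:9])
    pts = []
    for f in FEATURES:
        q = xform(t_eo, f)          # object -> effector
        q = xform(pose, q)          # effector -> robot
        pts.append(xform(t_cr, q))  # robot -> camera
    return pts

# 'Identified' image features, simulated from the ground truth.
OBSERVED = [[predict(TRUE, cam, p) for p in POSES] for cam in (0, 1)]

def cost(params):
    """Single discrepancy over all cameras and all poses."""
    return sum((ox - px) ** 2 + (oy - py) ** 2
               for cam in (0, 1)
               for obs, pose in zip(OBSERVED[cam], POSES)
               for (ox, oy), (px, py) in zip(obs, predict(params, cam, pose)))

# Simultaneously vary all nine parameters via finite-difference descent.
params = [v + 0.02 for v in TRUE]   # perturbed starting guess
start = cost(params)
for _ in range(500):
    grad = []
    for i in range(9):
        bumped = params[:]
        bumped[i] += 1e-6
        grad.append((cost(bumped) - cost(params)) / 1e-6)
    params = [v - 0.01 * g for v, g in zip(params, grad)]
print(cost(params) < start)  # True: the joint objective decreased
```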
Specification