Method and apparatus for single camera 3D vision guided robotics

  • US 20030144765A1
  • Filed: 05/24/2002
  • Published: 07/31/2003
  • Est. Priority Date: 01/31/2002
  • Status: Active Grant
First Claim

1. A method of three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot, comprising:

  • i) calibrating the camera by finding a) the camera intrinsic parameters;

    b) the position of the camera relative to the tool of the robot (“hand-eye” calibration);

    c) the position of the camera in a space rigid to the place where the object will be trained (“Training Space”);

    ii) teaching the object features by a) putting the object in the “Training Space” and capturing an image of the object with the robot in the calibration position where the “Camera→Training Space” transformation was calculated;

    b) selecting at least 6 visible features from the image;

    c) calculating the 3D position of each feature in “Training Space”;

    d) defining an “Object Space” aligned with the “Training Space” but connected to the object and transposing the 3D coordinates of the features into the “Object Space”;

    e) computing the “Object Space→Camera” transformation using the 3D position of the features inside the “Object Space” and the positions of the features in the image;

    f) defining an “Object Frame” inside “Object Space” to be used for teaching the intended operation path;

    g) computing the Object Frame position and orientation in “Tool Frame” using the transformation from “Object Frame→Camera” and “Camera→Tool”;

    h) sending the “Object Frame” to the robot;

    i) training the intended operation path relative to the “Object Frame” using the robot;

    iii) carrying out object finding and positioning by a) positioning the robot in a predefined position above the bin containing the object and capturing an image of the object;

    b) if an insufficient number of selected features are in the field of view, moving the robot until at least 6 features can be located;

    c) with the positions of features from the image and their corresponding positions in “Object Space” as calculated in the training step, computing the object location as the transformation between the “Object Space” and “Camera Space”;

    d) using the said transformation to calculate the movement of the robot to position the camera so that it appears orthogonal to the object;

    e) moving the robot to the position calculated in step d);

    f) finding the “Object Space→Camera Space” transformation in the same way as in step c);

    g) computing the object frame memorized at training using the found transformation and “Camera→Tool” transformation;

    h) sending the computed “Object Frame” to the robot; and

    i) using the “Tool” position to define the frame in “Robot Space” and performing the intended operation path on the object inside the “Robot Space”.
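
The two calibration sub-steps in i) a) and b) are commonly solved with standard checkerboard and hand-eye routines. Below is a minimal sketch using OpenCV's calibrateCamera and calibrateHandEye; the board dimensions, square size, and the Tsai method are illustrative choices, not details taken from the claim.

    import cv2
    import numpy as np

    def calibrate_intrinsics(images, board_size=(9, 6), square_mm=25.0):
        """Estimate the camera matrix and distortion coefficients from checkerboard views."""
        # 3D corner positions of the board in its own plane (Z = 0), in millimetres.
        objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm

        obj_points, img_points, image_size = [], [], None
        for img in images:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            image_size = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, board_size)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
        return camera_matrix, dist_coeffs

    def calibrate_hand_eye(R_tool2base, t_tool2base, R_target2cam, t_target2cam):
        """Solve for the fixed camera-to-tool transform ("hand-eye" calibration)."""
        # Inputs are lists of rotations/translations from several robot poses.
        R_cam2tool, t_cam2tool = cv2.calibrateHandEye(
            R_tool2base, t_tool2base, R_target2cam, t_target2cam,
            method=cv2.CALIB_HAND_EYE_TSAI)
        return R_cam2tool, t_cam2tool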
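
Steps ii) e), iii) c) and iii) f) amount to a perspective-n-point problem: recovering the “Object Space”→“Camera” transformation from the 3D positions of at least 6 taught features and their pixel positions in the current image. A minimal sketch, assuming OpenCV's solvePnP as the solver (the claim does not name one):

    import cv2
    import numpy as np

    def object_to_camera(feature_points_object, feature_points_image, camera_matrix, dist_coeffs):
        """Return the 4x4 homogeneous "Object Space" -> "Camera" transform."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(feature_points_object, dtype=np.float64),  # N x 3 points, N >= 6
            np.asarray(feature_points_image, dtype=np.float64),   # N x 2 pixel positions
            camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)            # rotation vector -> 3x3 rotation matrix
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.ravel()
        return T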
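
Step iii) d) asks for a robot motion that leaves the camera looking squarely (orthogonally) at the object. One possible reading, sketched below, is to pick a goal “Object Space”→“Camera” pose with the object centred on the optical axis and to derive the relative camera motion from the pose found in step c); the standoff distance and the axis convention are assumptions, not taken from the claim.

    import numpy as np

    def camera_motion_to_orthogonal_view(T_object_to_camera, standoff=0.40):
        """Relative camera motion (new camera pose expressed in the current
        camera frame) that puts the camera squarely in front of the object."""
        # Desired "Object Space" -> "Camera" pose: object axes aligned with the
        # camera axes, object origin on the optical axis at `standoff` metres.
        T_goal = np.eye(4)
        T_goal[2, 3] = standoff
        # Moving the camera by this transform turns the current view into the goal view.
        return T_object_to_camera @ np.linalg.inv(T_goal)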
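
Steps ii) g) and iii) g) chain the “Object Frame”→“Camera” and “Camera”→“Tool” transformations so that the taught frame can be sent to the robot in tool coordinates. With 4×4 homogeneous matrices (an assumed representation) this is a single matrix product:

    import numpy as np

    def object_frame_in_tool(T_objframe_to_camera, T_camera_to_tool):
        """Express the taught "Object Frame" in the "Tool Frame" by chaining transforms."""
        return T_camera_to_tool @ T_objframe_to_camera

    # Illustrative usage: object frame half a metre in front of a camera that
    # happens to coincide with the tool frame.
    if __name__ == "__main__":
        T_of_cam = np.eye(4)
        T_of_cam[:3, 3] = [0.0, 0.0, 0.5]
        T_cam_tool = np.eye(4)
        print(object_frame_in_tool(T_of_cam, T_cam_tool))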
