
Method and apparatus for single image 3D vision guided robotics

  • US 20040172164A1
  • Filed: 08/06/2003
  • Published: 09/02/2004
  • Est. Priority Date: 01/31/2002
  • Status: Active Grant
First Claim

1. A method of three-dimensional object location and guidance to allow robotic manipulation of an object with variable position and orientation by a robot using a sensor array, comprising:

   (a) calibrating the sensor array to provide a Robot—Eye Calibration by finding the intrinsic parameters of said sensor array and the position of the sensor array relative to a preferred robot coordinate system ("Robot Frame") by placing a calibration model in the field of view of said sensor array;

   (b) training object features by:

     (i) positioning the object and the sensor array such that the object is located in the field of view of the sensor array and acquiring and forming an image of the object;

     (ii) selecting at least 5 visible object features from the image;

     (iii) creating a 3D model of the object ("Object Model") by calculating the 3D position of each feature relative to a coordinate system rigid to the object ("Object Space");

   (c) training a robot operation path by:

     (i) computing the "Object Space→Sensor Array Space" transformation using the "Object Model" and the positions of the features in the image;

     (ii) computing the "Object Space" position and orientation in "Robot Frame" using the transformation from "Object Space→Sensor Array Space" and the "Robot—Eye Calibration";

     (iii) coordinating the desired robot operation path with the "Object Space";

   (d) carrying out object location and robot guidance by:

     (i) acquiring and forming an image of the object using the sensor array, searching for and finding said at least 5 trained features;

     (ii) with the positions of features in the image and the corresponding "Object Model" as determined in the training step, computing the object location as the transformation between the "Object Space" and the "Sensor Array" and the transformation between the "Object Space" and the "Robot Frame";

     (iii) communicating said computed object location to the robot and modifying robot path points according to said computed object location.
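The geometric core of steps (c)(ii) and (d)(ii)–(iii) is chaining rigid-body transforms: the object's pose in "Robot Frame" is the composition of the camera-to-robot calibration with the object-to-camera pose, and trained path points expressed in "Object Space" are then mapped through that result. A minimal sketch in Python/NumPy, assuming 4×4 homogeneous transforms (function and variable names are illustrative, not from the patent, and the pose-estimation step that produces `T_obj_in_cam` from image features is omitted):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def object_pose_in_robot_frame(T_cam_in_robot, T_obj_in_cam):
    """Steps (c)(ii)/(d)(ii): chain Object Space -> Sensor Array Space -> Robot Frame.

    T_cam_in_robot plays the role of the Robot-Eye Calibration; T_obj_in_cam is the
    Object Space -> Sensor Array Space transformation found from the image features.
    """
    return T_cam_in_robot @ T_obj_in_cam

def transform_path_points(T_obj_in_robot, points_obj):
    """Step (d)(iii): move path points trained in Object Space into Robot Frame."""
    pts = np.hstack([points_obj, np.ones((len(points_obj), 1))])  # to homogeneous coords
    return (T_obj_in_robot @ pts.T).T[:, :3]

# Illustrative values: object rotated 90 degrees about z in the camera frame,
# camera mounted 2 units above the robot origin.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
T_obj_in_cam = make_transform(Rz90, np.array([1.0, 0.0, 0.0]))
T_cam_in_robot = make_transform(np.eye(3), np.array([0.0, 0.0, 2.0]))

T_obj_in_robot = object_pose_in_robot_frame(T_cam_in_robot, T_obj_in_cam)
path_robot = transform_path_points(T_obj_in_robot, np.array([[1.0, 0.0, 0.0]]))
```

In this sketch the trained path point (1, 0, 0) in "Object Space" lands at (1, 1, 2) in "Robot Frame": rotated by the object's orientation, then offset by the object and camera translations.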
