Method and apparatus for single camera 3D vision guided robotics
Abstract
A method of three-dimensional handling of an object by a robot uses a tool and a single camera mounted on the robot. At least six target features, which are normal features of the object, are selected on the object. The features are used to train the robot in the frame of reference of the object so that, when the same object is subsequently located, the robot's path of operation can be quickly transformed into the frame of reference of the object.
19 Claims
1. A method of three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot, comprising:
i) calibrating the camera by finding a) the camera intrinsic parameters; b) the position of the camera relative to the tool of the robot (“hand-eye” calibration); c) the position of the camera in a space rigid to the place where the object will be trained (“Training Space”);
ii) teaching the object features by a) putting the object in the “Training Space” and capturing an image of the object with the robot in the calibration position where the “Camera to Training Space” transformation was calculated; b) selecting at least 6 visible features from the image; c) calculating the 3D position of each feature in “Training Space”; d) defining an “Object Space” aligned with the “Training Space” but connected to the object and transposing the 3D coordinates of the features into the “Object Space”; e) computing the “Object Space to Camera” transformation using the 3D position of the features inside the “Object Space” and the positions of the features in the image; f) defining an “Object Frame” inside “Object Space” to be used for teaching the intended operation path; g) computing the Object Frame position and orientation in “Tool Frame” using the transformations from “Object Frame to Camera” and “Camera to Tool”; h) sending the “Object Frame” to the robot; i) training the intended operation path relative to the “Object Frame” using the robot;
iii) carrying out object finding and positioning by a) positioning the robot in a predefined position above the bin containing the object and capturing an image of the object; b) if an insufficient number of selected features are in the field of view, moving the robot until at least 6 features can be located; c) with the positions of features from the image and their corresponding positions in “Object Space” as calculated in the training step, computing the object location as the transformation between the “Object Space” and “Camera Space”; d) using the said transformation to calculate the movement of the robot to position the camera so that it appears orthogonal to the object; e) moving the robot to the position calculated in step d); f) finding the “Object Space to Camera Space” transformation in the same way as in step c); g) computing the object frame memorized at training using the found transformation and the “Camera to Tool” transformation; h) sending the computed “Object Frame” to the robot; and i) using the “Tool” position to define the frame in “Robot Space” and performing the intended operation path on the object inside the “Robot Space”.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
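Steps ii-e) through ii-g) chain coordinate transformations: the estimated “Object Space to Camera” pose is composed with the hand-eye (“Camera to Tool”) result to express the Object Frame in the Tool Frame, which is what gets sent to the robot. A minimal sketch of that composition with 4×4 homogeneous matrices (all names and values below are illustrative, not from the patent):

```python
# Sketch of the frame chaining in steps ii-e)..ii-g).  A pose is a
# 4x4 homogeneous matrix [R t; 0 1] mapping points of one frame into
# the coordinates of another.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def make_pose(r, t):
    """Build a 4x4 pose from a 3x3 rotation and a translation."""
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]

def object_frame_in_tool(T_tool_cam, T_cam_obj):
    """Hand-eye calibration gives Camera -> Tool; pose estimation gives
    Object -> Camera.  Chaining them yields Object -> Tool."""
    return mat_mul(T_tool_cam, T_cam_obj)

if __name__ == "__main__":
    # Toy data: 90-degree rotation about z, plus small translations.
    Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
    I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    T_tool_cam = make_pose(I3, [0.0, 0.0, 0.1])  # camera 10 cm from tool
    T_cam_obj = make_pose(Rz, [0.2, 0.0, 0.5])   # object seen by camera
    T_tool_obj = object_frame_in_tool(T_tool_cam, T_cam_obj)
    print(T_tool_obj[0])  # → [0.0, -1.0, 0.0, 0.2]
```

The same composition reappears in step iii-g) at run time, with the freshly estimated object pose in place of the trained one.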
i) 3D pose estimation using non-linear optimization refinement based on maximum likelihood criteria;
ii) 3D pose estimation from line correspondences, in which the selected features are edges, using the image Jacobian;
iii) 3D pose estimation using “orthogonal iteration”;
iv) 3D pose approximation under weak perspective conditions; or
v) 3D pose approximation using Direct Linear Transformation (DLT).
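Option v), the Direct Linear Transformation, becomes linear in the entries of the unknown 3×4 projection matrix once each image equation is cross-multiplied by the point's depth. The sketch below is a minimal pure-Python version with the scale fixed by setting the last matrix entry to 1; it assumes exact synthetic data (a practical implementation would normalize coordinates and solve via an SVD, which the plain normal equations here only approximate):

```python
# Minimal DLT sketch: recover the 3x4 projection matrix P (the
# "Object Space to Camera" pose combined with the intrinsics) from
# at least 6 point correspondences.

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def dlt(points3d, points2d):
    """Estimate P (with its last entry fixed to 1) from 3D-2D pairs."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        # u*(p31 X + p32 Y + p33 Z + 1) = p11 X + p12 Y + p13 Z + p14
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    # Least squares via normal equations: (A^T A) p = A^T b.
    n = 11
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    p = solve(AtA, Atb) + [1.0]
    return [p[0:4], p[4:8], p[8:12]]

def project(P, pt):
    """Apply a 3x4 projection matrix to a 3D point."""
    X, Y, Z = pt
    w = P[2][0] * X + P[2][1] * Y + P[2][2] * Z + P[2][3]
    return ((P[0][0] * X + P[0][1] * Y + P[0][2] * Z + P[0][3]) / w,
            (P[1][0] * X + P[1][1] * Y + P[1][2] * Z + P[1][3]) / w)
```

Because P is recovered only up to scale, the fixed-to-1 normalization is harmless as long as the true last entry is nonzero, which holds whenever the object is in front of the camera.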
8. A method of three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot, comprising:
i) calibrating the camera by finding a) the camera intrinsic parameters; b) the position of the camera relative to the tool of the robot (“hand-eye” calibration);
ii) teaching the object features by a) putting the object in the field of view of the camera and capturing an image of the object; b) selecting at least 6 visible features from the image; c) calculating the 3D position in real world coordinates of said selected features inside a space connected to the object (“Object Space”); d) computing the “Object Space to Camera” transformation using the 3D position of the features inside this space and the position in the image; e) defining an “Object Frame” inside “Object Space” to be used for teaching the handling path; f) computing the “Object Frame” position and orientation in “Tool Frame” using the transformations from “Object Frame to Camera” and “Camera to Tool”; g) sending the computed “Object Frame” to the robot; and h) training the intended operation path inside the “Object Frame”;
iii) carrying out object finding and positioning by a) positioning the robot in a predefined position above the bin containing the target object; b) if an insufficient number of selected features are in the field of view, moving the robot until at least 6 features can be located; c) with the positions of features from the image and their corresponding position in “Object Space” as calculated in the training session, computing the object location as the transformation between the “Object Space” and “Camera Space”; d) using the said transformation to calculate the movement of the robot to position the camera so that it appears orthogonal to the object; e) finding the “Object Space to Camera Space” transformation in the same way as in step c); f) computing the object frame memorized at training using the found transformation and the “Camera to Tool” transformation; g) sending the computed “Object Frame” to the robot; h) using the “Tool” position to define the frame in “Robot Space” and performing the intended operation path on the object inside the “Robot Space”.
- View Dependent Claims (9, 10, 11, 12, 13, 14, 15, 16, 17)
i) 3D pose estimation using non-linear optimization refinement based on maximum likelihood criteria;
ii) 3D pose estimation from line correspondences, in which the selected features are edges, using the image Jacobian;
iii) 3D pose estimation using “orthogonal iteration”;
iv) 3D pose approximation under weak perspective conditions; or
v) 3D pose approximation using Direct Linear Transformation (DLT).
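Option iv) exploits the fact that when the object's depth relief is small compared with its distance from the camera, every feature can be assumed to sit near one common depth Z0, so the per-point perspective division collapses into a single global scale s = f / Z0. A toy comparison of the two projection models (focal length and principal point values are illustrative, not from the patent):

```python
# Weak perspective vs. full perspective: under weak perspective the
# nonlinear division by each point's own depth Z is replaced by a
# single scale computed from a common reference depth z0.

F = 800.0            # focal length in pixels (assumed)
CX, CY = 320.0, 240.0  # principal point (assumed)

def perspective(pt):
    """Full pinhole projection of a camera-frame point (X, Y, Z)."""
    X, Y, Z = pt
    return (F * X / Z + CX, F * Y / Z + CY)

def weak_perspective(pt, z0):
    """Scaled-orthographic approximation using common depth z0."""
    X, Y, _ = pt
    s = F / z0
    return (s * X + CX, s * Y + CY)
```

For a point exactly at the reference depth the two models agree; for an object with ±2% depth relief at half a meter, the approximation error stays well under a pixel, which is why the weak-perspective pose can serve as a cheap initial estimate for the iterative methods in options i) and iii).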
18. A system for three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot, comprising:
i) calibration means for calibrating the camera by finding a) the camera intrinsic parameters; b) the position of the camera relative to the tool of the robot (“hand-eye” calibration); c) the position of the camera in a space rigid to the place where the object will be trained (“Training Space”);
ii) means for teaching the object features by a) putting the object in the “Training Space” and capturing an image of the object with the robot in the calibration position where the “Camera to Training Space” transformation was calculated; b) selecting at least 6 visible features from the image; c) calculating the 3D position of each feature in “Training Space”; d) defining an “Object Space” aligned with the “Training Space” but connected to the object and transposing the 3D coordinates of the features into the “Object Space”; e) computing the “Object Space to Camera” transformation using the 3D position of the features inside the “Object Space” and the positions of the features in the image; f) defining an “Object Frame” inside “Object Space” to be used for teaching the intended operation path; g) computing the Object Frame position and orientation in “Tool Frame” using the transformations from “Object Frame to Camera” and “Camera to Tool”; h) sending the “Object Frame” to the robot; i) training the intended operation path relative to the “Object Frame” using the robot;
iii) means for carrying out object finding and positioning by a) positioning the robot in a predefined position above the bin containing the object and capturing an image of the object; b) if an insufficient number of selected features are in the field of view, moving the robot until at least 6 features can be located; c) with the positions of features from the image and their corresponding positions in “Object Space” as calculated in the training step, computing the object location as the transformation between the “Object Space” and “Camera Space”; d) using the said transformation to calculate the movement of the robot to position the camera so that it appears orthogonal to the object; e) moving the robot to the position calculated in step d); f) finding the “Object Space to Camera Space” transformation in the same way as in step c); g) computing the object frame memorized at training using the found transformation and the “Camera to Tool” transformation; h) sending the computed “Object Frame” to the robot; and i) using the “Tool” position to define the frame in “Robot Space” and performing the intended operation path on the object inside the “Robot Space”.
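The “camera intrinsic parameters” found in step i-a) are, in the simplest pinhole model, the focal lengths and the principal point; lens distortion coefficients are omitted here. A toy sketch of how those parameters map a camera-frame point to pixel coordinates (all numeric values are illustrative):

```python
# Minimal pinhole intrinsics: fx, fy are focal lengths in pixels and
# (cx, cy) is the principal point.  Calibration in step i-a) is the
# process of estimating these values from images of a known target.

def pinhole_project(pt_cam, fx, fy, cx, cy):
    """Project a camera-frame 3D point to pixel coordinates."""
    X, Y, Z = pt_cam
    return (fx * X / Z + cx, fy * Y / Z + cy)

# A point on the optical axis lands exactly at the principal point:
print(pinhole_project((0.0, 0.0, 1.0), 800.0, 800.0, 320.0, 240.0))
# → (320.0, 240.0)
```

These same intrinsics are the fixed, pose-independent half of the projection matrix that the pose-estimation options recover; calibrating them once is what allows a single camera to yield metric 3D poses.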
19. A system for three-dimensional handling of an object by a robot using a tool and one camera mounted on the robot, comprising:
i) calibration means for calibrating the camera by finding a) the camera intrinsic parameters; b) the position of the camera relative to the tool of the robot (“hand-eye” calibration);
ii) means for teaching the object features by a) putting the object in the field of view of the camera and capturing an image of the object; b) selecting at least 6 visible features from the image; c) calculating the 3D position in real world coordinates of said selected features inside a space connected to the object (“Object Space”); d) computing the “Object Space to Camera” transformation using the 3D position of the features inside this space and the position in the image; e) defining an “Object Frame” inside “Object Space” to be used for teaching the handling path; f) computing the “Object Frame” position and orientation in “Tool Frame” using the transformations from “Object Frame to Camera” and “Camera to Tool”; g) sending the computed “Object Frame” to the robot; and h) training the intended operation path inside the “Object Frame”;
iii) means for carrying out object finding and positioning by a) positioning the robot in a predefined position above the bin containing the target object; b) if an insufficient number of selected features are in the field of view, moving the robot until at least 6 features can be located; c) with the positions of features from the image and their corresponding position in “Object Space” as calculated in the training session, computing the object location as the transformation between the “Object Space” and “Camera Space”; d) using the said transformation to calculate the movement of the robot to position the camera so that it appears orthogonal to the object; e) finding the “Object Space to Camera Space” transformation in the same way as in step c); f) computing the object frame memorized at training using the found transformation and the “Camera to Tool” transformation; g) sending the computed “Object Frame” to the robot; h) using the “Tool” position to define the frame in “Robot Space” and performing the intended operation path on the object inside the “Robot Space”.
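The “hand-eye” calibration in step i-b) seeks the fixed transform X, the camera's pose in the tool frame, from pairs of robot motions A and corresponding camera motions B satisfying the classical relation A·X = X·B. The sketch below only verifies that relation on toy data rather than solving for X (solvers such as Tsai-Lenz do the latter); all numbers are illustrative:

```python
# Hand-eye relation A.X = X.B: A is the tool-frame motion between two
# robot poses, B the camera-frame motion observed between the same two
# poses, and X the fixed camera-in-tool transform being calibrated.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(T):
    """Invert a rigid 4x4 transform: R -> R^T, t -> -R^T t."""
    R = [[T[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(R[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

X = [[0, -1, 0, 0.03],   # toy camera-in-tool pose (90 deg about z)
     [1, 0, 0, 0.00],
     [0, 0, 1, 0.10],
     [0, 0, 0, 1]]

A = [[1, 0, 0, 0.25],    # toy tool motion (90 deg about x)
     [0, 0, -1, 0.00],
     [0, 1, 0, 0.05],
     [0, 0, 0, 1]]

# The camera motion consistent with A is B = X^-1 . A . X, so the
# residual A.X - X.B is zero by construction.
B = mat_mul(rigid_inverse(X), mat_mul(A, X))
```

In practice several independent motion pairs (A_i, B_i) with non-parallel rotation axes are needed for X to be uniquely determined, which is why calibration routines drive the robot through a set of distinct poses.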
Specification