Vision-guided robots and methods of training them
Abstract
Via intuitive interactions with a user, robots may be trained to perform tasks such as visually detecting and identifying physical objects and/or manipulating objects. In some embodiments, training is facilitated by the robot's simulation of task-execution using augmented-reality techniques.
11 Claims
1. A method of training a robot to complete a specified task comprising performing an action on an object, the robot having at least one appendage for manipulating the object and a machine-vision system for capturing images within a camera field of view, the method comprising:

(a) selecting, by the robot, an object in the field of view and displaying the selection on a robot-controlled visual display, the displayed selection including a camera view of the selected object;

(b) overlaying, by the robot, a representation of an end-effector onto the camera view and displaying, on the display, performance of an action on the object by the end-effector;

(c) based on the displayed performance, providing feedback to the robot indicating whether the action is performed correctly, the robot modifying the performance of the action in response to the feedback; and

(d) repeating steps (b) through (c) until the action is performed correctly on the display.

View Dependent Claims (2, 3, 4)
5. A vision-guided robot trainable, via interactions with a human trainer, to complete a specified task comprising performing an action on an object, the robot comprising:

at least one movable appendage comprising an end-effector for performing the action on the object;

a machine-vision system comprising at least one camera for capturing images within a camera field of view and an image-processing module for identifying objects within the captured images;

a user interface comprising (i) a robot-controlled visual display for visually displaying performance of the action on the object in a camera view thereof using a robot-generated graphic representation of the end-effector, and (ii) at least one input device for receiving feedback from the trainer indicating whether the action is performed correctly on the display; and

a control system for modifying performance of the action on the display in response to the feedback until the action is performed correctly on the display.

View Dependent Claims (6, 7, 8)
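Claims 1 and 5 describe a train-by-simulation loop: the robot displays a simulated performance of the action, the trainer signals whether it looks correct, and the robot adjusts until the displayed performance is correct. A minimal, self-contained sketch of that loop in Python; every class, method, and the "grasp offset" convergence model are illustrative stand-ins, not part of the patent:

```python
# Hypothetical sketch of the claimed training loop (steps (a)-(d)).
# All names here are invented for illustration.

class Display:
    """Stand-in for the robot-controlled visual display."""
    def __init__(self):
        self.frames = []

    def show(self, frame):
        self.frames.append(frame)

class Robot:
    """Stand-in robot whose simulated grasp offset shrinks with feedback."""
    def __init__(self, initial_offset):
        self.offset = initial_offset        # error in the simulated action

    def select_object(self):
        return "widget"                     # (a) object selected in view

    def simulate_action(self, obj):
        return (obj, self.offset)           # (b) overlaid end-effector action

    def adjust(self):
        self.offset = max(0, self.offset - 1)  # modify action per feedback

def train(robot, display, max_rounds=10):
    obj = robot.select_object()                 # (a)
    for _ in range(max_rounds):
        display.show(robot.simulate_action(obj))  # (b) display performance
        correct = robot.offset == 0               # (c) trainer's verdict
        if correct:
            return True                 # (d) stop once correct on display
        robot.adjust()
    return False
```

With this toy convergence model, `train(Robot(3), Display())` returns `True` after four displayed rounds; the point is only the shape of the (a)-(d) loop, not any real control policy.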
9. A robot-implemented method of manipulating objects based on visual recognition thereof, the method comprising causing a robot to execute steps comprising:

(a) selecting an object in a camera field of view;

(b) computationally selecting a visual model from among a plurality of computer-implemented visual models, each of the visual models including an associated algorithm, the associated algorithm for each visual model being distinct from the other algorithms;

(c) using the selected visual model to computationally identify the object based on a stored representation thereof associated with the model;

(d) determining whether an object-specific manipulation routine is stored in association with the stored representation; and

(e) if so, executing the object-specific manipulation routine, and if not, executing a generic manipulation routine.
10. A vision-guided robot for manipulating objects based on visual recognition thereof, the robot comprising:

at least one movable appendage for manipulating objects;

a camera for capturing an image within a camera field of view; and

a control system for (i) computationally selecting a visual model from among a plurality of computer-implemented visual models, each of the visual models including an associated algorithm, the associated algorithm for each visual model being distinct from the other algorithms, (ii) computationally identifying an object in the image based on a representation thereof stored in association with the selected visual model, (iii) determining whether an object-specific manipulation routine is stored in association with the stored representation, and (iv) if so, causing the at least one movable appendage to execute the object-specific manipulation routine, and if not, causing the at least one movable appendage to execute a generic manipulation routine.

View Dependent Claims (11)
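Claims 9 and 10 reduce to a dispatch pattern: select one of several visual models (each carrying its own distinct matching algorithm and stored object representations), identify the object, then run an object-specific manipulation routine if one is stored against that representation, else fall back to a generic routine. A hypothetical sketch, with every name, algorithm, and data structure invented for illustration:

```python
# Illustrative dispatch for claim 9's steps (b)-(e); the "images" are
# plain strings and the "algorithms" trivial matchers, purely for shape.

# Each visual model pairs a distinct matching algorithm with stored
# object representations.
VISUAL_MODELS = {
    "edge":  {"algorithm": lambda img, rep: rep in img,
              "reps": ["mug", "bolt"]},
    "color": {"algorithm": lambda img, rep: img.startswith(rep),
              "reps": ["red-cup"]},
}

# Object-specific routines stored against some representations only.
SPECIFIC_ROUTINES = {"mug": lambda: "grasp-by-handle"}

def generic_routine():
    return "generic-grasp"

def manipulate(image, model_name):
    model = VISUAL_MODELS[model_name]          # (b) select a visual model
    for rep in model["reps"]:                  # (c) identify via stored reps
        if model["algorithm"](image, rep):
            routine = SPECIFIC_ROUTINES.get(rep)   # (d) routine stored?
            return routine() if routine else generic_routine()  # (e)
    return None                                # nothing recognized
```

Here `manipulate("a mug on the table", "edge")` dispatches to the object-specific routine, while a recognized object with no stored routine (e.g. "bolt") falls through to the generic one, mirroring step (e) of the claim.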