VISION-GUIDED ROBOTS AND METHODS OF TRAINING THEM
Abstract
Via intuitive interactions with a user, robots may be trained to perform tasks such as visually detecting and identifying physical objects and/or manipulating objects. In some embodiments, training is facilitated by the robot's simulation of task-execution using augmented-reality techniques.
35 Claims
1-24. (canceled)
25. A method of training a robot to complete a specified task comprising performing an action on an object, the robot having at least one appendage for manipulating the object and a machine-vision system for capturing images within a camera field of view, the method comprising:
(a) selecting, by the robot, an object in the field of view and displaying the selection on a robot-controlled visual display, the displayed selection including a camera view of the selected object;
(b) overlaying, by the robot, a representation of an end-effector onto the camera view and displaying performance of an action on the object by the end-effector;
(c) based on the display, providing feedback to the robot indicating whether the action is performed correctly on the display; and
(d) repeating steps (b) through (c) until the action is performed correctly on the display.
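The feedback loop of steps (b) through (d) amounts to a simulate-and-refine procedure: display a simulated action, solicit trainer feedback, and adjust until the trainer approves. The sketch below is a minimal, self-contained illustration; the names (`SimulatedDisplay`, `train_action`, the numeric grasp parameter) are assumptions for demonstration, not anything disclosed in the claims.

```python
class SimulatedDisplay:
    """Stands in for the robot-controlled visual display of step (a)."""
    def __init__(self):
        self.frames = []

    def show(self, frame):
        self.frames.append(frame)

def train_action(params, trainer_approves, adjust, max_rounds=10):
    """Steps (b)-(d): overlay the simulated action on the display,
    collect trainer feedback, and refine until approved."""
    display = SimulatedDisplay()
    for _ in range(max_rounds):
        display.show(("end-effector overlay", params))   # step (b)
        if trainer_approves(params):                     # step (c)
            return params, display.frames                # step (d): done
        params = adjust(params)                          # refine, then retry
    raise RuntimeError("action not approved within round limit")

# Toy usage: the trainer approves once the grasp angle reaches 90 degrees.
params, frames = train_action(
    params=60,
    trainer_approves=lambda p: p >= 90,
    adjust=lambda p: p + 10,
)
print(params, len(frames))  # 90 4
```

The trainer's yes/no feedback is modeled as a predicate; in the claimed method it would arrive through the robot's input device while the simulated action plays on the display.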
29. A vision-guided robot trainable, via interactions with a human trainer, to complete a specified task comprising performing an action on an object, the robot comprising:
at least one movable appendage comprising an end-effector for performing the action on the object;
a machine-vision system comprising at least one camera for capturing images within a camera field of view and an image-processing module for identifying objects within the captured images;
a user interface comprising (i) a robot-controlled visual display for visually displaying performance of the action on the object in a camera view thereof using a robot-generated graphic representation of the end-effector, and (ii) at least one input device for receiving feedback from the trainer indicating whether the action is performed correctly on the display; and
a control system for modifying performance of the action on the display in response to the feedback until the action is performed correctly on the display.
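The apparatus of claim 29 recites four cooperating components: appendage with end-effector, machine-vision system, user interface, and control system. A hypothetical decomposition, in which every class and field name is an assumption made for clarity rather than structure disclosed by the patent, might look like:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Appendage:
    """Movable appendage whose end-effector performs the action."""
    end_effector: str

@dataclass
class MachineVisionSystem:
    """Camera(s) plus an image-processing module for identification."""
    cameras: List[str]
    identify: Callable[[str], str]

@dataclass
class UserInterface:
    """Robot-controlled display plus a trainer-feedback input device."""
    display: object
    input_device: Callable[[], bool]   # True = action performed correctly

@dataclass
class TrainableRobot:
    appendage: Appendage
    vision: MachineVisionSystem
    ui: UserInterface

    def refine_until_correct(self, action, adjust, max_rounds=10):
        """Control system: modify the displayed action in response to
        trainer feedback until it is reported as correct."""
        for _ in range(max_rounds):
            if self.ui.input_device():
                return action
            action = adjust(action)
        raise RuntimeError("action not approved within round limit")

# Toy usage: the trainer rejects once, then approves the adjusted action.
feedback = iter([False, True])
robot = TrainableRobot(
    appendage=Appendage(end_effector="gripper"),
    vision=MachineVisionSystem(cameras=["head"], identify=lambda img: "object"),
    ui=UserInterface(display=None, input_device=lambda: next(feedback)),
)
print(robot.refine_until_correct(action=1, adjust=lambda a: a + 1))  # 2
```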
33. A robot-implemented method of manipulating objects based on visual recognition thereof, the method comprising causing a robot to execute steps comprising:
(a) selecting an object in a camera field of view;
(b) using a visual model to computationally identify the object based on a stored representation thereof associated with the model;
(c) determining whether an object-specific manipulation routine is stored in association with the stored representation; and
(d) if so, executing the object-specific manipulation routine, and if not, executing a generic manipulation routine.
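Steps (b) through (d) amount to a lookup-with-fallback dispatch: identify the object from a stored representation, look for a manipulation routine keyed to it, and fall back to a generic routine when none is stored. A minimal sketch under assumed names (`identify`, `manipulate`, `generic_grasp`):

```python
def generic_grasp(obj_id):
    """Fallback manipulation when no object-specific routine is stored."""
    return f"generic grasp of {obj_id}"

def identify(image_features, visual_model):
    """Step (b): match the image against stored representations."""
    return visual_model.get(image_features)

def manipulate(obj_id, routines):
    """Steps (c)-(d): execute the stored object-specific routine if one
    exists for this object, otherwise fall back to the generic routine."""
    routine = routines.get(obj_id, generic_grasp)
    return routine(obj_id)

# Toy data: one object with a specific routine, one without.
visual_model = {"handle+cylinder": "mug"}
routines = {"mug": lambda obj_id: f"handle-aware grasp of {obj_id}"}

print(manipulate(identify("handle+cylinder", visual_model), routines))
# handle-aware grasp of mug
print(manipulate("box", routines))
# generic grasp of box
```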
34. A vision-guided robot for manipulating objects based on visual recognition thereof, the robot comprising:
at least one movable appendage for manipulating objects;
a camera for capturing an image within a camera field of view; and
a control system for (i) computationally identifying an object in the image based on a representation thereof stored in association with a visual model, (ii) determining whether an object-specific manipulation routine is stored in association with the stored representation, and (iii) if so, causing the at least one movable appendage to execute the object-specific manipulation routine, and if not, causing the at least one movable appendage to execute a generic manipulation routine.
Specification