VISION-GUIDED ROBOTS AND METHODS OF TRAINING THEM
Abstract
Via intuitive interactions with a user, robots may be trained to perform tasks such as visually detecting and identifying physical objects and/or manipulating objects. In some embodiments, training is facilitated by the robot's simulation of task-execution using augmented-reality techniques.
24 Claims
1. A method of training a robot to complete a specified task comprising selection of an object, the robot having a machine-vision system for capturing images within a camera field of view, the method comprising:

(a) selecting, by the robot, an object in the field of view and displaying the selection on a robot-controlled visual display, the displayed selection including a camera view of the selected object and a robot-generated object outline overlaid thereon;

(b) based on the display, providing feedback to the robot indicating whether the selection is correct; and

(c) repeating steps (a) and (b) until the selection is correct.

Dependent claims: 2, 3, 4, 5, 6.
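The claimed train-by-feedback loop of steps (a) through (c) can be sketched in code. This is a minimal illustrative sketch only, not the patent's implementation; the function name, the candidate list, and the dictionary standing in for the robot's visual display are all assumptions introduced here.

```python
def train_selection(candidates, is_correct):
    """Cycle through the robot's candidate object selections until the
    trainer confirms one.

    candidates -- iterable of object hypotheses (stand-ins for detections)
    is_correct -- trainer-feedback callback; True confirms the selection
    """
    for selection in candidates:
        # Step (a): what the robot would show on its display -- the camera
        # view of the selected object plus a generated outline overlay
        # (both represented here by simple placeholder strings).
        display = {"camera_view": selection,
                   "outline": f"outline-of-{selection}"}
        # Step (b): trainer feedback based on the displayed selection.
        if is_correct(display):
            return selection  # training converged on a correct selection
        # Step (c): selection rejected; repeat with the next candidate.
    return None  # no candidate was accepted
```

For example, `train_selection(["mug", "box"], lambda d: d["camera_view"] == "box")` returns `"box"` after one rejected proposal.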
7. A vision-guided robot trainable, via interactions with a human trainer, to complete a specified task comprising selection of an object, the robot comprising:

a machine-vision system comprising at least one camera for capturing images within a camera field of view and an image-processing module for identifying objects within the captured images;

a user interface comprising (i) a robot-controlled visual display for displaying the images and, overlaid thereon, robot-generated graphics indicative of a selection of an object by the robot, and (ii) at least one input device for receiving feedback from the trainer indicating whether the selection is correct; and

a control system for controlling and, in response to the feedback, modifying the selection until the selection is correct.

Dependent claims: 8, 9, 10.
11. A robot-implemented method of learning to visually recognize objects, the method comprising:

(a) establishing a visual outline of an object in a camera view thereof;

(b) based at least in part on an analysis of pixels within the outline, selecting a visual model from among a plurality of computer-implemented visual models and generating a representation of the object in accordance with the selected model;

(c) based on the model, identifying the object in a camera view thereof;

(d) receiving user feedback related to the identification, the feedback either confirming or rejecting the identification; and

(e) repeating steps (b) through (d) until the identification is confirmed.

Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20, 21.
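The model-selection loop of steps (b) through (e) can likewise be sketched. The model names, the variance heuristic, and its threshold below are illustrative assumptions, not taken from the patent; the point is only the claimed structure: rank models from pixel statistics, then try them in turn until the user confirms an identification.

```python
def choose_model(pixels):
    """Step (b): rank candidate visual models using simple pixel
    statistics from within the object outline (illustrative heuristic)."""
    mean = sum(pixels) / len(pixels)
    variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    # Assumption: high-variance (textured) regions favor a feature-based
    # model; flat regions favor a shape-based one. Threshold is arbitrary.
    if variance > 100:
        return ["feature-model", "shape-model"]
    return ["shape-model", "feature-model"]

def learn_to_recognize(pixels, confirm):
    """Steps (b)-(e): try ranked models until the user confirms one."""
    for model in choose_model(pixels):
        identification = f"object-under-{model}"  # step (c): identify
        if confirm(identification):               # step (d): user feedback
            return model                          # identification confirmed
    return None  # step (e): all candidate models were rejected
```

For a flat patch such as `[10] * 20`, `choose_model` ranks the shape-based model first, and `learn_to_recognize` returns it as soon as the confirmation callback accepts the resulting identification.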
22. A vision-guided robot trainable, via interactions with a human trainer, to complete a specified task comprising visual recognition of an object, the robot comprising:

a camera for capturing camera views of the object;

a control system for (i) selecting, based at least in part on an analysis of pixels within a visual outline of the object in a camera view thereof, a visual model from among a plurality of computer-implemented visual models and generating a representation of the object in accordance with the selected model, and (ii) identifying the object in a camera view thereof based on the model; and

a user interface comprising (i) a robot-controlled visual display for displaying a camera view of the object and, overlaid thereon, robot-generated graphics indicative of the identification, and (ii) at least one input device for receiving feedback from the trainer indicating whether the identification is correct.

Dependent claims: 23, 24.
Specification