METHOD AND SYSTEM FOR TRAINING A ROBOT USING HUMAN-ASSISTED TASK DEMONSTRATION
Abstract
A method for training a robot to execute a robotic task in a work environment includes moving the robot across its configuration space through multiple states of the task and recording motor schema describing a sequence of behavior of the robot. Sensory data describing performance and state values of the robot is recorded while moving the robot. The method includes detecting perceptual features of objects located in the environment, assigning virtual deictic markers to the detected perceptual features, and using the assigned markers and the recorded motor schema to subsequently control the robot in an automated execution of another robotic task. Markers may be combined to produce a generalized marker. A system includes the robot, a sensor array for detecting the performance and state values, a perceptual sensor for imaging objects in the environment, and an electronic control unit that executes the present method.
20 Claims
1. A method for training a robot to execute a robotic task in a work environment, the method comprising:

moving the robot across its configuration space through multiple states of the robotic task to thereby demonstrate the robotic task to the robot, wherein the configuration space is the set of all possible configurations for the robot;

recording motor schema describing a sequence of behavior of the robot;

recording sensory data describing performance and state values of the robot while moving the robot across its configuration space;

detecting perceptual features of objects located in the environment;

assigning virtual deictic markers to the detected perceptual features; and

using the assigned virtual deictic markers and the recorded motor schema to subsequently control the robot in an automated execution of another robotic task.

Dependent claims: 2-11.
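The recording steps recited in claim 1 can be sketched in code. The sketch below is illustrative only: every function name, data shape, and sensor signal here is an assumption for exposition, not the patented implementation.

```python
# Illustrative sketch of claim 1's recording steps; all names and data
# shapes are assumptions, not the patent's actual implementation.

def record_demonstration(states, sense, perceive):
    """Log motor schema and sensory data while the robot is moved by hand
    through the demonstrated states, then assign virtual deictic markers."""
    motor_schema = []   # sequence of behaviors, one per demonstrated state
    sensory_log = []    # performance/state values sampled during the motion
    for state in states:
        motor_schema.append(state)        # behavior executed at this state
        sensory_log.append(sense(state))  # e.g. joint angles, gripper force
    # Virtual deictic markers: tie each perceived object feature to its pose.
    markers = {feature: pose for feature, pose in perceive()}
    return motor_schema, sensory_log, markers
```

The recorded schema and markers would then be replayed during the automated execution recited in the final clause of the claim.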
12. A method for training a robot to execute a robotic task in a work environment, the method comprising:

capturing data sequences of positions of a manipulator of the robot from user-controlled movements of the robot, wherein the user-controlled movements move the robot across its configuration space, and wherein the configuration space is the set of all possible configurations for the robot;

extracting data segments from the captured data sequences that represent actions of the robot during execution of a robotic skill;

detecting objects in the work environment of the robot;

assigning a virtual deictic marker to at least some of the detected objects to thereby associate an observed object's spatial orientation in the work environment with movements performed by the robot relative to the object;

combining the deictic markers to produce a generalized marker, wherein the generalized marker maintains a record of visual features that are common between the deictic markers along with rotational and translational offsets required for the markers to match each other; and

using the generalized marker and the recorded motor schema to subsequently control the robot in an automated execution of another robotic task.

Dependent claims: 13-16.
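The marker-combining clause of claim 12 describes a concrete computation: retain the visual features common to all deictic markers, plus the rotational and translational offsets that align the markers with one another. A minimal sketch, assuming each marker stores a feature set and a planar pose (the class and field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class DeicticMarker:
    """Hypothetical marker: visual feature labels plus a 2-D pose."""
    features: frozenset  # visual feature labels observed on the object
    x: float             # translational offset components
    y: float
    theta: float         # rotation, in radians

def generalize(markers):
    """Produce a generalized marker as claim 12 describes: the features
    common to all markers, plus the rotational/translational offsets
    needed to align each marker with the first one."""
    common = frozenset.intersection(*(m.features for m in markers))
    ref = markers[0]
    offsets = [(m.x - ref.x, m.y - ref.y, m.theta - ref.theta)
               for m in markers]
    return common, offsets
```

In practice the offsets would come from a feature-matching step in full 3-D; the subtraction above stands in for that alignment.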
17. A system comprising:

a robot having an arm and a manipulator connected to the arm;

a sensor array which measures sensory data describing performance and state values of the robot;

a perceptual sensor which collects images of objects located in the environment; and

an electronic control unit (ECU) in communication with the robot, the sensor array, and the perceptual sensor, and which includes recorded motor schema describing a sequence of behavior of the robot, wherein the ECU is configured to:

record the sensory data when the arm and the manipulator are moved across the configuration space of the robot by a human operator through multiple states of a robotic task;

detect perceptual features in the collected images from the perceptual sensor;

assign virtual deictic markers to the detected perceptual features; and

use the assigned virtual deictic markers and the recorded motor schema to control the robot in an automated execution of another robotic task.

Dependent claims: 18-20.
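The control flow claim 17 assigns to the ECU can be sketched as a small class. This is a hedged illustration of the record/assign/replay responsibilities only; the class, method names, and sensor interfaces are assumptions, not the claimed hardware design.

```python
# Hypothetical sketch of the claimed ECU's responsibilities: record during
# human-guided motion, assign markers from perception, replay automatically.

class ECU:
    def __init__(self, sensor_array, perceptual_sensor):
        self.sensor_array = sensor_array            # reads performance/state values
        self.perceptual_sensor = perceptual_sensor  # yields (feature, pose) pairs
        self.motor_schema = []                      # recorded behavior sequence
        self.markers = {}                           # virtual deictic markers

    def record_step(self, behavior):
        """Log one behavior plus the sensor reading taken as the human
        operator moves the arm and manipulator through a task state."""
        self.motor_schema.append((behavior, self.sensor_array()))

    def assign_markers(self):
        """Attach a virtual deictic marker to each detected perceptual feature."""
        for feature, pose in self.perceptual_sensor():
            self.markers[feature] = pose

    def replay(self, execute):
        """Automated execution: run the recorded schema against the markers."""
        for behavior, _ in self.motor_schema:
            execute(behavior, self.markers)
```

The `execute` callback stands in for the robot's motion controller, which the claim leaves unspecified at this level.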
Specification