Natural human to robot remote control
Abstract
The subject disclosure is directed towards controlling a robot based upon sensing a user's natural and intuitive movements and expressions. User movements and/or facial expressions are captured by an image and depth camera, resulting in skeletal data and/or image data that is used to control a robot's operation, e.g., in a real time, remote (e.g., over the Internet) telepresence session. Robot components that may be controlled include robot “expressions” (e.g., audiovisual data output by the robot), robot head movements, robot mobility drive operations (e.g., to propel and/or turn the robot), and robot manipulator operations, e.g., an arm-like mechanism and/or hand-like mechanism.
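The patent discloses no source code. The following is a minimal sketch of the pipeline the abstract describes (image and depth frames converted to skeletal data, skeletal data mapped to action commands, commands transmitted to the robot); every name, joint, and command here is an illustrative assumption, not the patented implementation.

```python
# Hypothetical sketch of the remote-control pipeline in the abstract:
# image + depth frames -> skeletal data -> action commands -> robot.
# All identifiers are illustrative; the patent does not specify an API.

from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    x: float
    y: float
    z: float

def frames_to_skeleton(image_frame, depth_frame):
    """Stand-in for the skeletal-tracking step a depth-camera SDK would
    perform; here we simply fabricate a single head joint."""
    # A real system would run pose estimation on the captured frames.
    return [Joint("head", 0.0, 1.6, 2.0)]

def skeleton_to_commands(skeleton):
    """Map joints to action commands (head, drive, manipulator)."""
    commands = []
    for joint in skeleton:
        if joint.name == "head":
            # e.g. pan the robot head toward the user's head position
            commands.append(("head_pan", joint.x))
    return commands

def control_step(image_frame, depth_frame, send):
    """One control cycle: convert frames and transmit the commands."""
    skeleton = frames_to_skeleton(image_frame, depth_frame)
    for command in skeleton_to_commands(skeleton):
        send(command)  # e.g. over an Internet connection to the robot

sent = []
control_step(image_frame=None, depth_frame=None, send=sent.append)
print(sent)  # [('head_pan', 0.0)]
```

In a live telepresence session, `control_step` would run per captured frame, with `send` writing to the network connection named in claim 1.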
20 Claims
1. In a computing environment, a system comprising:
a control program implemented in a remote device relative to a robotic device and configured to:
- receive image data and depth data captured at a location remote from the robotic device,
- convert the image data and the depth data into skeletal data,
- process the skeletal data into one or more action commands that when processed by the robotic device operate one or more components of the robotic device, and
- transmit the one or more action commands to the robotic device over a connection to control the robotic device.
(Dependent claims 2–8.)
9. In a computing environment, a method performed at least in part on at least one processor, comprising:
- receiving image data and depth data;
- identifying head movement of a user via the image data and the depth data;
- detecting changes to the head movement of the user; and
- transmitting data corresponding to the detected changes, including transmitting data representing the head movement, to a robot that uses the data corresponding to the changes detected to control operation of one or more components of the robot to match head movement of the robot with the head movement of the user.
(Dependent claims 10–15.)
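Claim 9's method transmits data only when a change in the user's head movement is detected. A minimal sketch of that change-detection step follows; the `HeadTracker` class, pose tuples, and jitter threshold are assumptions for illustration, not claimed elements.

```python
# Hypothetical sketch of claim 9's detection step: track the user's
# head pose across frames and report only detected changes, which the
# caller would then transmit to the robot.

class HeadTracker:
    def __init__(self, threshold=0.05):
        self.last = None
        self.threshold = threshold  # assumed jitter cutoff, in meters

    def update(self, head_pose):
        """Return the change to transmit, or None if the head is still."""
        if self.last is None:
            self.last = head_pose   # first frame: send the initial pose
            return head_pose
        delta = tuple(n - o for n, o in zip(head_pose, self.last))
        if max(abs(d) for d in delta) < self.threshold:
            return None             # no meaningful movement; send nothing
        self.last = head_pose
        return delta                # detected change to transmit

tracker = HeadTracker()
print(tracker.update((0.0, 0.0, 0.0)))   # (0.0, 0.0, 0.0): initial pose
print(tracker.update((0.0, 0.01, 0.0)))  # None: jitter suppressed
print(tracker.update((0.2, 0.0, 0.0)))   # (0.2, 0.0, 0.0): change sent
```

Transmitting deltas rather than every frame matches the claim's "data corresponding to the detected changes" and keeps network traffic low during a remote session.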
16. One or more computer storage devices having computer-executable instructions, which in response to execution by a computer, cause the computer to perform steps comprising:
- receiving information at a robot corresponding to skeletal data converted from image data and depth data, the image data and the depth data captured from a user via a control device remote from the robot, the control device including user interaction technology and configured to convert the image data and the depth data to the skeletal data;
- processing the information to control operation of one or more components of the robot; and
- transmitting other information corresponding to the operation of the one or more components of the robot to the control device, the other information including sensory data captured by one or more sensors coupled to the robot as the one or more components of the robot are controlled using the received information.
(Dependent claims 17–20.)