Gesture enabled telepresence robot and system
Abstract
A telepresence robot includes a mobile platform, a camera, and a computer in communication with the platform and the camera. The computer receives video information from a remotely located camera relating to a gesture made by a remote human operator, and the robot is configured to operate based on the gesture. The computer may further predict an emotional state of a person interacting with the robot, based on video information received from the robot camera. A telepresence robot system includes: a telepresence robot having a mobile platform, a robot camera, a robot display, and a robot computer; and a control station, located remote to the robot, having a control station camera, a control station display, and a control station computer.
23 Claims
1. A telepresence robot comprising:

a mobile platform;
a first robot camera; and
a computer in communication with the platform and the first robot camera, said computer having a processor and computer-readable memory,
wherein said computer is configured to receive video information from at least one remotely located camera and sensed 3D information from at least one remotely located 3D sensor, the video information and sensed 3D information signifying a physical body gesture made by a remotely-located human operator, and
wherein said telepresence robot is configured to perform an imitating action of the physical body gesture made by the remotely-located human operator, and
wherein said computer is further configured to receive video information from the first robot camera comprising a video image of a person interacting with the telepresence robot and to predict an emotional state of the person based on the received video information.

(Dependent claims: 2–13)
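The imitation element of claim 1 implies mapping sensed 3D operator pose data onto robot motion. A minimal sketch of one such mapping, assuming (purely for illustration, not from the patent) that the remote 3D sensor reports operator joints as (x, y, z) tuples with y pointing up:

```python
import math

def shoulder_pitch(shoulder, wrist):
    """Approximate a shoulder-pitch angle (radians) from two sensed
    3D joint positions, as one simple way a robot arm could imitate
    an operator's arm gesture. Coordinate convention (y up) and the
    joint names are illustrative assumptions, not from the patent."""
    dx = wrist[0] - shoulder[0]
    dy = wrist[1] - shoulder[1]
    dz = wrist[2] - shoulder[2]
    horizontal = math.hypot(dx, dz)   # reach along the floor plane
    return math.atan2(dy, horizontal) # 0 = arm level, pi/2 = straight up
```

An arm held straight out maps to a pitch of 0, and an arm raised straight overhead maps to pi/2; the robot controller would then drive its corresponding joint toward that angle.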
14. A telepresence robot system comprising:

a telepresence robot having a mobile platform, a robot camera, a robot display, and a robot computer in communication with the platform and the robot camera, said robot computer having a processor and computer-readable memory; and
a control station located remote to the telepresence robot, said control station having a control station camera, a control station 3D sensor, a control station display, and a control station computer in communication with the control station camera and control station display, said control station computer having a processor and computer-readable memory;
wherein said control station camera and said control station 3D sensor are configured to detect video information and 3D sensed information signifying a physical body gesture made by a human operator, and
wherein said telepresence robot is configured to perform an imitating action of the detected physical body gesture, and
wherein said control station computer is further configured to receive video and/or biometric information from the control station camera and/or a biometric sensor relating to the human operator and to predict an emotional state of the operator based on the received information, and
wherein the video and/or biometric information used to predict the emotional state of the operator comprises at least one of the following: pupillary dilation, retinal patterns, blood flow, body fluid distribution, respiration, nostril movement, heart rate, and skin brightness.

(Dependent claims: 15–22)
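Claim 14's emotion-prediction element combines biometric signals such as those listed. A toy sketch using two of the named signals (heart rate and pupillary dilation); the thresholds, weights, and the single "arousal" output are illustrative assumptions and do not come from the patent:

```python
def predict_arousal(heart_rate_bpm, pupil_dilation_mm):
    """Toy arousal score in [0, 1] from two of the biometric signals
    named in claim 14. A resting heart rate of 60 bpm and a pupil
    diameter of 2 mm map to 0; 120 bpm and 6 mm map to 1. These
    ranges are illustrative, not from the patent."""
    hr = min(max((heart_rate_bpm - 60) / 60.0, 0.0), 1.0)
    pd = min(max((pupil_dilation_mm - 2.0) / 4.0, 0.0), 1.0)
    return 0.5 * hr + 0.5 * pd   # equal-weight blend of the two cues
```

A real system would likely replace this hand-tuned blend with a trained classifier over many of the listed signals, but the structure (normalize each signal, combine into a state estimate) is the same.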
23. A telepresence robotic system comprising:

a telepresence robot having a mobile platform, a robot camera, a robot display, and a robot computer in communication with the platform and the robot camera, said robot computer having a processor and computer-readable memory; and
a control station located remote to the telepresence robot, said control station having a control station camera, a control station display, and a control station computer in communication with the control station camera and control station display, said control station computer having a processor and computer-readable memory;
wherein said control station computer receives video information from the robot camera and displays the received video information as a video image on the control station display,
wherein said control station computer is configured to receive an input signal relating to a selection of a particular location in the video image on the control station display, and
wherein said telepresence robot is configured to navigate to a particular location based on said selection.
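Claim 23's click-to-navigate element implies converting a pixel selected on the control station display into a ground target for the robot. A minimal sketch assuming a pinhole camera model with the robot camera mounted at a known height and its optical axis parallel to the floor; the function name and parameters are illustrative, not from the patent:

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Back-project a selected pixel (u, v) onto the ground plane.
    fx, fy, cx, cy are pinhole intrinsics (focal lengths and
    principal point, in pixels); cam_height is the camera height
    above the floor. Returns (forward, lateral) in the same units
    as cam_height, or None if the pixel lies at or above the
    horizon (the viewing ray never intersects the floor)."""
    y = (v - cy) / fy            # downward component of the viewing ray
    if y <= 0:
        return None
    forward = cam_height / y     # distance along the floor to the hit point
    lateral = forward * (u - cx) / fx
    return (forward, lateral)
```

The robot's navigation stack would then drive toward the returned (forward, lateral) point in the camera frame.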
Specification