SEMI-AUTONOMOUS ROBOT THAT SUPPORTS MULTIPLE MODES OF NAVIGATION
Abstract
Described herein are technologies pertaining to robot navigation. The robot includes a video camera that is configured to transmit a live video feed to a remotely located computing device. A user interacts with the live video feed, and the robot navigates in its environment based upon the user interaction. In a first navigation mode, the user selects a location, and the robot autonomously navigates to the selected location. In a second navigation mode, the user causes the point of view of the video camera on the robot to change, and thereafter causes the robot to semi-autonomously drive in a direction corresponding to the new point of view of the video camera. In a third navigation mode, the user causes the robot to navigate to a selected location in the live video feed.
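The abstract describes three distinct navigation modes selected by the remote user. A minimal sketch of how a robot controller might route commands among such modes is shown below; the mode names, command fields, and dispatch structure are illustrative assumptions, not taken from the patent.

```python
from enum import Enum, auto

class NavigationMode(Enum):
    """The three navigation modes described in the abstract (names are illustrative)."""
    GO_TO_LOCATION = auto()    # user selects a location; robot navigates there autonomously
    DRIVE_ALONG_VIEW = auto()  # user re-aims the camera, then drives toward its new point of view
    DRAG_IN_VIDEO = auto()     # user selects a point in the live video feed to navigate to

def dispatch(mode: NavigationMode, command: dict) -> str:
    """Route a user command from the remote computing device to a handler.

    Returns a label for the selected behavior; a real robot would invoke
    motion-planning code here instead of formatting a string.
    """
    handlers = {
        NavigationMode.GO_TO_LOCATION: lambda c: f"navigate to tagged location {c['location']}",
        NavigationMode.DRIVE_ALONG_VIEW: lambda c: f"drive toward camera heading {c['pan_deg']} deg",
        NavigationMode.DRAG_IN_VIDEO: lambda c: f"navigate to video pixel {c['pixel']}",
    }
    return handlers[mode](command)

print(dispatch(NavigationMode.DRIVE_ALONG_VIEW, {"pan_deg": 30}))
```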
105 Citations
20 Claims
1. A method that is executable by a processor residing in a mobile robot, the method comprising:

causing video data captured by a video camera resident upon the robot to be transmitted to a remote computing device by way of a communications channel established between the robot and the remote computing device, wherein the video camera is capturing video data at a first point of view;

receiving a first command by way of the communications channel from the remote computing device to alter a point of view of the video camera from the first point of view to a second point of view;

responsive to receiving the first command, causing the point of view of the video camera to be altered from the first point of view to the second point of view while continuing to transmit video data to the remote computing device by way of the communications channel;

subsequent to the point of view of the robot being altered from the first point of view to the second point of view, receiving a second command by way of the communications channel to drive the robot in a direction that corresponds to a center of the second point of view; and

causing a motor to drive the robot in the direction that corresponds to the second point of view until either
1) a command is received from the remote computing device to discontinue driving the robot in the direction that corresponds to the center of the second point of view; or
2) data is received from a sensor on the robot that indicates that the robot is unable to continue travelling in the direction that corresponds to the center of the second point of view, wherein the robot travels autonomously in the direction that corresponds to the second point of view.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
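The final limitation of claim 1 amounts to a drive loop with two termination conditions: a stop command from the remote user, or sensor data indicating the path is blocked. A minimal sketch of such a loop follows; the function names, the callback interface, and the 0.3 m clearance threshold are assumptions for illustration, not details from the patent.

```python
def drive_toward_view(get_remote_command, read_range_sensor, motor_step,
                      min_clearance_m=0.3):
    """Drive in the direction of the camera's current point of view until
    either the remote user sends a stop command or a range sensor reports
    that the path ahead is blocked.

    Returns the reason driving stopped: "stop_command" or "blocked".
    """
    while True:
        if get_remote_command() == "stop":
            return "stop_command"
        if read_range_sensor() < min_clearance_m:
            return "blocked"
        motor_step()  # advance the drive motor one control-loop step

# Simulated run: the path is clear for two control steps, then a
# range reading of 0.1 m (below the clearance threshold) halts the robot.
readings = iter([1.0, 1.0, 0.1])
print(drive_toward_view(lambda: "go", lambda: next(readings), lambda: None))
```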
-
-
12. A robot, comprising:
a processor; and

a memory, wherein the memory comprises a plurality of components that are executable by the processor, the components comprising:

a direct and drive component that is configured to direct the robot along a path in a first direction based at least in part upon commands received from a remote computing device, wherein the remote computing device is in communication with the robot by way of a network, wherein the commands comprise:
a first command that causes the direct and drive component to change a point of view of a video camera on the robot from a first point of view to a second point of view; and
a second command that causes the direct and drive component to drive the robot along the path in the first direction subsequent to the point of view of the video camera changing from the first point of view to the second point of view, wherein the first direction corresponds to a center point of the second point of view;

an obstacle detector component that receives data from a sensor that indicates that an obstacle resides in the path of the robot and outputs an indication that the obstacle resides in the path of the robot; and

a direction modifier component that receives the indication, and responsive to receipt of the indication, causes the robot to automatically change direction from the first direction to a second direction to avoid the obstacle.

View Dependent Claims (13, 14, 15, 16, 17, 18, 19)
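Claim 12 describes a pipeline in which an obstacle detector turns raw sensor data into a binary indication, and a direction modifier reacts to that indication by choosing a new heading. A minimal sketch of that wiring, with an assumed 0.5 m detection threshold and a fixed 45-degree detour (both illustrative values, not from the patent):

```python
def obstacle_detector(range_reading_m, threshold_m=0.5):
    """Counterpart to the claimed obstacle detector component: turn a raw
    range reading into a yes/no indication that an obstacle resides in
    the robot's path (the 0.5 m threshold is an assumed value)."""
    return range_reading_m < threshold_m

def direction_modifier(heading_deg, obstacle_indicated, detour_deg=45.0):
    """Counterpart to the claimed direction modifier component: on receipt
    of an obstacle indication, change from the first direction to a second
    direction; otherwise keep the commanded heading."""
    if obstacle_indicated:
        return (heading_deg + detour_deg) % 360.0
    return heading_deg

# Control-loop wiring: the detector's output feeds the modifier.
heading = 90.0
indication = obstacle_detector(range_reading_m=0.2)
heading = direction_modifier(heading, indication)
print(heading)  # 135.0
```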
20. A robot, comprising:
a processor; and

a memory that comprises a plurality of components that are executable by the processor, wherein the plurality of components comprises:

a drive and direct component that is configured to drive a robot in a direction specified by way of a first command from a remote computing device, wherein the drive and direct component is configured to autonomously cause the robot to avoid obstacles while driving in the direction specified in the first command;

a location direction component that is configured to drive the robot to a tagged location in a map of an environment that is being experienced by the robot, wherein the map is retained in the memory, wherein the location direction component is configured to drive the robot to the tagged location responsive to receipt of a second command from the remote computing device, wherein the second command indicates the tagged location, and wherein the location direction component is configured to autonomously cause the robot to avoid obstacles while driving to the tagged location; and

a drag and direct component that is configured to drive the robot to a particular location that is in a field of view of a video camera in the robot responsive to a third command from the remote computing device, wherein the third command indicates the location in the field of view of the video camera, and wherein the drag and direct component is configured to autonomously cause the robot to avoid obstacles while driving to the particular location.
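The drag and direct component in claim 20 must translate a location the user selects in the video frame into a drive direction. One common way to do that, sketched below, is to map the selected pixel's horizontal offset to a bearing using a simple pinhole-camera model; the function name, parameters, and the model itself are illustrative assumptions, not a method disclosed in the patent.

```python
import math

def pixel_to_bearing(pixel_x, image_width_px, horizontal_fov_deg):
    """Convert the x-coordinate of a location selected in the video frame
    into a bearing (degrees) relative to the camera's optical axis,
    assuming an ideal pinhole camera with a known horizontal field of view."""
    # Offset from the image center, as a fraction of half the image width.
    offset = (pixel_x - image_width_px / 2.0) / (image_width_px / 2.0)
    half_fov = horizontal_fov_deg / 2.0
    return math.degrees(math.atan(offset * math.tan(math.radians(half_fov))))

# A selection at the image center maps to a 0-degree bearing;
# a selection at the right edge maps to half the field of view.
print(round(pixel_to_bearing(320, 640, 60.0), 1))  # 0.0
print(round(pixel_to_bearing(640, 640, 60.0), 1))  # 30.0
```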
Specification