Method and system for providing remote robotic control
First Claim
1. A method of providing mixed-initiative robotic control, including:
at a computing device having one or more processors and memory, wherein the computing device is communicably coupled to a robot and is configured to generate a planned path for the robot in accordance with a first set of preprogrammed path-planning instructions, and the robot is configured to navigate within a physical environment in accordance with the planned path received from the computing device and locally-stored path-execution instructions:
displaying a control user interface via a display generation component coupled to the computing device, including displaying a virtualized environment corresponding to a first physical environment currently surrounding the robot, wherein the virtualized environment is generated and updated in accordance with streaming environment data received from a first set of sensors collocated with the robot;
while displaying the virtualized environment, detecting a first user input inserting a first virtual object at a first location in the virtualized environment;
in response to detecting the first user input, modifying the first virtualized environment in accordance with the insertion of the first virtual object at the first location, wherein the first virtual object at the first location causes the robot to execute a first navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions;
while displaying the first virtual object at the first location in the virtualized environment and while the robot is executing the first navigation path in the physical environment, detecting a second user input, including detecting a first movement input directed to the first virtual object via a haptic-enabled input device; and
in response to detecting the second user input:
moving the first virtual object along a first movement path to a second location in the virtualized environment in accordance with the first movement input, wherein the first movement path is constrained by one or more simulated surfaces in the virtualized environment, wherein the first virtual object at the second location causes the robot to execute a modified navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions.
Abstract
A virtualized environment corresponding to a physical environment currently surrounding a robot is displayed. The virtualized environment is updated in accordance with streaming environment data received from sensors collocated with the robot. A first user input inserting a first virtual object at a first location in the virtualized environment is detected. The virtualized environment is modified in accordance with the insertion of the first virtual object at the first location. The first virtual object at the first location causes the robot to execute a first navigation path in the physical environment. A second user input is detected that moves the first virtual object along a movement path to a second location in the virtualized environment. The movement path is constrained by simulated surfaces in the virtualized environment, and the first virtual object at the second location causes the robot to execute a modified navigation path in the physical environment.
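The abstract's "movement path constrained by simulated surfaces" admits a simple geometric reading: the free-space position reported by the input device is projected onto, or clamped against, the simulated surface before the virtual object is moved. A minimal sketch of that reading, with the planar-surface assumption and all function names ours rather than the patent's:

```python
def project_onto_plane(point, plane_point, plane_normal):
    """Orthogonally project a 3-D point onto the plane through plane_point
    with unit normal plane_normal, so a dragged object slides along the
    surface instead of passing through it."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - d * n for p, n in zip(point, plane_normal))

def constrain_to_floor(cursor, floor_height=0.0):
    """Clamp a dragged object's position so it cannot sink below a
    horizontal floor surface; lateral motion passes through unchanged."""
    x, y, z = cursor
    return (x, y, max(z, floor_height))
```

A real system would constrain against whatever surface meshes the streamed sensor data reconstructs, but the per-frame operation is the same: snap the requested position to the nearest admissible point before updating the virtual object.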
20 Claims
1. A method of providing mixed-initiative robotic control, including:
at a computing device having one or more processors and memory, wherein the computing device is communicably coupled to a robot and is configured to generate a planned path for the robot in accordance with a first set of preprogrammed path-planning instructions, and the robot is configured to navigate within a physical environment in accordance with the planned path received from the computing device and locally-stored path-execution instructions:
displaying a control user interface via a display generation component coupled to the computing device, including displaying a virtualized environment corresponding to a first physical environment currently surrounding the robot, wherein the virtualized environment is generated and updated in accordance with streaming environment data received from a first set of sensors collocated with the robot;
while displaying the virtualized environment, detecting a first user input inserting a first virtual object at a first location in the virtualized environment;
in response to detecting the first user input, modifying the first virtualized environment in accordance with the insertion of the first virtual object at the first location, wherein the first virtual object at the first location causes the robot to execute a first navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions;
while displaying the first virtual object at the first location in the virtualized environment and while the robot is executing the first navigation path in the physical environment, detecting a second user input, including detecting a first movement input directed to the first virtual object via a haptic-enabled input device; and
in response to detecting the second user input:
moving the first virtual object along a first movement path to a second location in the virtualized environment in accordance with the first movement input, wherein the first movement path is constrained by one or more simulated surfaces in the virtualized environment, wherein the first virtual object at the second location causes the robot to execute a modified navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions.
Dependent claims (not shown): 2, 3, 4, 5, 6, 7
8. A computing device for providing mixed-initiative robotic control, including:
one or more processors; and
memory storing instructions, wherein:
the computing device is communicably coupled to a robot and is configured to generate a planned path for the robot in accordance with a first set of preprogrammed path-planning instructions, the robot is configured to navigate within a physical environment in accordance with the planned path received from the computing device and locally-stored path-execution instructions, and the instructions, when executed by the one or more processors, cause the processors to perform operations comprising:
displaying a control user interface via a display generation component coupled to the computing device, including displaying a virtualized environment corresponding to a first physical environment currently surrounding the robot, wherein the virtualized environment is generated and updated in accordance with streaming environment data received from a first set of sensors collocated with the robot;
while displaying the virtualized environment, detecting a first user input inserting a first virtual object at a first location in the virtualized environment;
in response to detecting the first user input, modifying the first virtualized environment in accordance with the insertion of the first virtual object at the first location, wherein the first virtual object at the first location causes the robot to execute a first navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions;
while displaying the first virtual object at the first location in the virtualized environment and while the robot is executing the first navigation path in the physical environment, detecting a second user input, including detecting a first movement input directed to the first virtual object via a haptic-enabled input device; and
in response to detecting the second user input:
moving the first virtual object along a first movement path to a second location in the virtualized environment in accordance with the first movement input, wherein the first movement path is constrained by one or more simulated surfaces in the virtualized environment, wherein the first virtual object at the second location causes the robot to execute a modified navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions.
Dependent claims (not shown): 9, 10, 11, 12, 13, 14
15. A computer-readable storage medium storing instructions that, when executed by one or more processors of a computing device, cause the computing device to perform operations, wherein:
the computing device is communicably coupled to a robot and is configured to generate a planned path for the robot in accordance with a first set of preprogrammed path-planning instructions, the robot is configured to navigate within a physical environment in accordance with the planned path received from the computing device and locally-stored path-execution instructions, and the operations include:
displaying a control user interface via a display generation component coupled to the computing device, including displaying a virtualized environment corresponding to a first physical environment currently surrounding the robot, wherein the virtualized environment is generated and updated in accordance with streaming environment data received from a first set of sensors collocated with the robot;
while displaying the virtualized environment, detecting a first user input inserting a first virtual object at a first location in the virtualized environment;
in response to detecting the first user input, modifying the first virtualized environment in accordance with the insertion of the first virtual object at the first location, wherein the first virtual object at the first location causes the robot to execute a first navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions;
while displaying the first virtual object at the first location in the virtualized environment and while the robot is executing the first navigation path in the physical environment, detecting a second user input, including detecting a first movement input directed to the first virtual object via a haptic-enabled input device; and
in response to detecting the second user input:
moving the first virtual object along a first movement path to a second location in the virtualized environment in accordance with the first movement input, wherein the first movement path is constrained by one or more simulated surfaces in the virtualized environment, wherein the first virtual object at the second location causes the robot to execute a modified navigation path in the physical environment that is generated in accordance with the first set of pre-programmed path-planning instructions.
Dependent claims (not shown): 16, 17, 18, 19, 20
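Each independent claim recites a haptic-enabled input device directing the object move. A common way to render a surface constraint on such a device (an assumption on our part — the claims do not specify any force model) is a spring-like penalty force proportional to how far the cursor has penetrated the simulated surface:

```python
def haptic_feedback_force(cursor_z, surface_z=0.0, stiffness=500.0):
    """Spring-model penalty force (N) pushing the haptic cursor back out
    of a horizontal simulated surface; zero while the cursor stays above
    it. The stiffness value is an arbitrary illustrative choice."""
    penetration = surface_z - cursor_z
    return stiffness * penetration if penetration > 0 else 0.0
```

Rendered every servo cycle, this force makes the simulated surface feel solid, which is one plausible mechanism by which the claimed movement path stays constrained to the surfaces of the virtualized environment.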
Specification