Robot motion replanning based on user motion
First Claim
1. A method comprising:
determining a current position of a robot with a processor of the robot;
receiving sensor readings on positions, directions, and velocities of a visually-impaired user and other users who are not visually-impaired in an environment where the visually-impaired user is known from the other users;
generating a model of motions of the visually-impaired user and the other users, the model including a user path for the visually-impaired user and a robot path for the robot;
generating a collision prediction map to predict collisions between at least one of the robot, the visually-impaired user, and the other users;
determining whether there is a risk of collision for either the visually-impaired user or the robot; and
responsive to the risk of collision, updating at least one of the user path and the robot path based on the risk of collision;
wherein generating the model of the motions further comprises:
receiving torso-directed movement data for the visually-impaired user;
determining a face direction for the visually-impaired user;
determining a walking direction for the visually-impaired user;
determining whether the visually-impaired user exhibits low-consistency movement or high-consistency movement based on the torso direction and the walking direction; and
determining human motion uncertainty based on the consistency of movement.
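The final limitations describe classifying movement consistency from the torso direction and walking direction, then deriving a motion uncertainty. A minimal sketch of one way this could work follows; the function names, the 30-degree threshold, and the uncertainty values are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the claimed consistency test: compare torso
# direction with walking direction and derive an uncertainty value.
CONSISTENCY_THRESHOLD_DEG = 30.0  # assumed cutoff, not from the patent

def movement_consistency(torso_deg: float, walking_deg: float) -> str:
    """Classify movement as "high" or "low" consistency by comparing
    the torso direction with the walking direction (both in degrees)."""
    # Smallest angular difference, handling wrap-around at 360 degrees.
    diff = abs((torso_deg - walking_deg + 180.0) % 360.0 - 180.0)
    return "high" if diff <= CONSISTENCY_THRESHOLD_DEG else "low"

def motion_uncertainty(consistency: str) -> float:
    """Map movement consistency to a scalar uncertainty that a planner
    could use to widen the predicted positions of the user."""
    return 0.2 if consistency == "high" else 0.8
```

For example, a user whose torso points at 10° while walking toward 350° differs by only 20°, so the movement would be classified as high-consistency and assigned the lower uncertainty.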
Abstract
The disclosure includes a system and method for determining a robot path based on user motion by determining a current position of a robot with a processor-based computing device programmed to perform the determining, receiving sensor readings on positions, directions, and velocities of a visually-impaired user and other users, generating a model of the motions of the visually-impaired user and the other users, the model including a user path for the visually-impaired user and a robot path for the robot, generating a collision prediction map to predict collisions between at least one of the robot, the visually-impaired user, and the other users, determining whether there is a risk of collision for either the visually-impaired user or the robot, and responsive to the risk of collision, updating at least one of the user path and the robot path.
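The pipeline the abstract describes — model the motions of the users and the robot, generate a collision prediction map, and flag a risk of collision — can be sketched as follows. This is a minimal stand-in, assuming a constant-velocity motion model and reducing the collision prediction map to pairwise distance checks; the `Track` type and all parameter values are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Position (m) and velocity (m/s) taken from sensor readings."""
    x: float
    y: float
    vx: float
    vy: float

def predict(t: Track, dt: float) -> tuple[float, float]:
    # Constant-velocity motion model; a stand-in for the patent's
    # (unspecified) model of user and robot motions.
    return (t.x + t.vx * dt, t.y + t.vy * dt)

def collision_risk(a: Track, b: Track, horizon: float = 3.0,
                   step: float = 0.5, radius: float = 0.5) -> bool:
    """Flag a risk if the predicted positions of two tracks come within
    `radius` meters of each other inside the look-ahead horizon."""
    for i in range(int(horizon / step) + 1):
        ax, ay = predict(a, i * step)
        bx, by = predict(b, i * step)
        if (ax - bx) ** 2 + (ay - by) ** 2 <= radius ** 2:
            return True
    return False
```

A robot at the origin moving toward a user approaching head-on would be flagged, while a stationary bystander 10 m to the side would not.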
15 Citations
20 Claims
1. A method comprising:
determining a current position of a robot with a processor of the robot;
receiving sensor readings on positions, directions, and velocities of a visually-impaired user and other users who are not visually-impaired in an environment where the visually-impaired user is known from the other users;
generating a model of motions of the visually-impaired user and the other users, the model including a user path for the visually-impaired user and a robot path for the robot;
generating a collision prediction map to predict collisions between at least one of the robot, the visually-impaired user, and the other users;
determining whether there is a risk of collision for either the visually-impaired user or the robot; and
responsive to the risk of collision, updating at least one of the user path and the robot path based on the risk of collision;
wherein generating the model of the motions further comprises:
receiving torso-directed movement data for the visually-impaired user;
determining a face direction for the visually-impaired user;
determining a walking direction for the visually-impaired user;
determining whether the visually-impaired user exhibits low-consistency movement or high-consistency movement based on the torso direction and the walking direction; and
determining human motion uncertainty based on the consistency of movement.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
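The last method step — updating a path responsive to the risk of collision — could look like the following minimal sketch. The sidestep strategy, the waypoint representation, and the clearance value are illustrative assumptions; the patent does not prescribe a replanning algorithm.

```python
# Hypothetical path update responsive to a predicted collision: detour
# the waypoint nearest the predicted collision sideways by a clearance.
def replan(path: list[tuple[float, float]], collision_idx: int,
           clearance: float = 1.0) -> list[tuple[float, float]]:
    """Return a copy of `path` with the waypoint at `collision_idx`
    offset sideways by `clearance` meters, leaving the rest unchanged."""
    x, y = path[collision_idx]
    detour = list(path)
    detour[collision_idx] = (x, y + clearance)  # sidestep in +y
    return detour
```

Given a straight path [(0, 0), (1, 0), (2, 0)] with a predicted collision at the middle waypoint, `replan(path, 1)` routes through (1, 1.0) instead; the same update could apply to either the user path or the robot path.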
9. A computer program product comprising a non-transitory computer-usable medium including a computer-readable program, wherein the computer-readable program when executed on a computer causes the computer to:
determine a current position of a robot;
receive sensor readings on positions, directions, and velocities of a visually-impaired user and other users in an environment where the visually-impaired user is known from the other users, wherein the other users are not visually-impaired;
generate a model of motions of the visually-impaired user and the other users, the model including a user path for the visually-impaired user and a robot path for the robot;
generate a collision prediction map to predict collisions between at least one of the robot, the visually-impaired user, and the other users;
determine whether there is a risk of collision for either the visually-impaired user or the robot; and
responsive to the risk of collision, update at least one of the user path and the robot path based on the risk of collision;
wherein generating the model of the motions further comprises:
receiving torso-directed movement data for the visually-impaired user;
determining a face direction for the visually-impaired user;
determining a walking direction for the visually-impaired user;
determining whether the visually-impaired user exhibits low-consistency movement or high-consistency movement based on the torso direction and the walking direction; and
determining human motion uncertainty based on the consistency of movement.
View Dependent Claims (10, 11, 12, 13, 14)
15. A system comprising:
a processor; and
a non-transitory memory storing instructions that, when executed, cause the system to:
determine a current position of a robot;
receive sensor readings on positions, directions, and velocities of a visually-impaired user and other users in an environment where the visually-impaired user is known from the other users, wherein the other users are not visually-impaired;
generate a model of motions of the visually-impaired user and the other users, the model including a user path for the visually-impaired user and a robot path for the robot;
generate a collision prediction map to predict collisions between at least one of the robot, the visually-impaired user, and the other users;
determine whether there is a risk of collision for either the visually-impaired user or the robot; and
responsive to the risk of collision, update at least one of the user path and the robot path based on the risk of collision;
wherein generating the model of the motions further comprises:
receiving torso-directed movement data for the visually-impaired user;
determining a face direction for the visually-impaired user;
determining a walking direction for the visually-impaired user;
determining whether the visually-impaired user exhibits low-consistency movement or high-consistency movement based on the torso direction and the walking direction; and
determining human motion uncertainty based on the consistency of movement.
View Dependent Claims (16, 17, 18, 19, 20)
Specification