Simultaneous localization and mapping for a mobile robot
First Claim
1. A method comprising:
maneuvering a robot about a scene;
emitting light onto the scene;
capturing images of the scene using a depth-perceptive imaging sensor, the images comprising an active illumination image and an ambient illumination image, each image comprising three-dimensional depth data and brightness data;
executing a particle filter having a set of particles, each particle having an associated occupancy grid map, an associated feature map, and a robot location hypothesis;
updating the occupancy grid map associated with each particle based on the images;
for each image:
instantiating an image pyramid comprising a set of scaled images, each scaled image having a scale relative to the image;
identifying at least one feature point in the scaled images; and
updating the corresponding feature map of each particle with the identified at least one feature point;
determining a location of an object in the scene based on the images and at least one particle of the particle filter;
assigning a confidence level for the location of the object based on the three-dimensional depth data and the brightness data of the images; and
maneuvering the robot in the scene based on the location of the object and the corresponding confidence level.
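As a rough illustration only, the per-image steps recited in claim 1 (instantiating an image pyramid of scaled images, identifying feature points, and updating each particle's feature map and occupancy grid map) might be sketched as follows. The `Particle` fields, the 2x downscaling, and the brighter-than-neighbours detector are illustrative assumptions, not the patent's disclosed implementation.

```python
class Particle:
    """One robot-location hypothesis with its own maps, as in claim 1."""
    def __init__(self, pose):
        self.pose = pose          # hypothesized (x, y) robot location
        self.occupancy = {}       # occupancy grid map: cell -> occupied flag
        self.feature_map = []     # feature map: (scale, point) entries

def image_pyramid(image, levels=3):
    """Instantiate a set of scaled images, each a 2x downscale of the last."""
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        if len(prev) < 2 or len(prev[0]) < 2:
            break
        pyramid.append([row[::2] for row in prev[::2]])
    return pyramid

def feature_points(image):
    """Toy detector: pixels strictly brighter than their 4-neighbours."""
    pts = []
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            v = image[r][c]
            if v > max(image[r-1][c], image[r+1][c],
                       image[r][c-1], image[r][c+1]):
                pts.append((r, c))
    return pts

def update_from_image(particles, image):
    """For one image: build the pyramid, find feature points at each
    scale, and update every particle's feature map and occupancy grid."""
    for scale, scaled in enumerate(image_pyramid(image)):
        for pt in feature_points(scaled):
            for p in particles:
                p.feature_map.append((scale, pt))
                # Mark the corresponding full-resolution cell as occupied.
                p.occupancy[(pt[0] << scale, pt[1] << scale)] = True
```

Note that every particle receives the same detected feature points; the particles differ only in their location hypotheses and accumulated maps, which is what the per-particle weighting described in the abstract later discriminates between.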
Abstract
A method of localizing a mobile robot includes receiving sensor data of a scene about the robot and executing a particle filter having a set of particles. Each particle has associated maps representing a robot location hypothesis. The method further includes updating the maps associated with each particle based on the received sensor data, assessing a weight for each particle based on the received sensor data, selecting a particle based on its weight, and determining a location of the robot based on the selected particle.
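The localization cycle described in the abstract (weight each particle against the sensor data, select a particle by its weight, and take that particle's hypothesis as the robot's location) can be sketched minimally as below. The particle representation and the distance-based likelihood are stand-in assumptions; the patent does not specify a weighting function here.

```python
def assess_weight(particle, sensor_data):
    """Illustrative likelihood: the closer the particle's map-predicted
    reading is to the actual sensor data, the higher its weight."""
    return 1.0 / (1.0 + abs(particle["predicted_reading"] - sensor_data))

def localize(particles, sensor_data):
    """One cycle per the abstract: assess a weight for each particle,
    select a particle based on its weight, and determine the robot's
    location from the selected particle's hypothesis."""
    weights = [assess_weight(p, sensor_data) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]              # normalize
    best = max(range(len(particles)), key=weights.__getitem__)
    return particles[best]["pose"]                      # robot location
```

In a full FastSLAM-style filter the map update and a resampling step would run each cycle as well; they are elided here to keep the sketch focused on the weight-select-locate portion of the abstract.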
19 Claims
Claim 1 is recited above; claims 2–19 depend from claim 1.
Specification