Systems and methods for correction of drift via global localization with a visual landmark
Abstract
The invention relates to methods and apparatus that use a visual sensor and dead reckoning sensors to perform simultaneous localization and mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map. Unlike laser rangefinders, the visual techniques are economically practical in a wide range of applications and can be used in relatively dynamic environments, such as environments in which people move. One embodiment further advantageously uses multiple particles to maintain multiple hypotheses with respect to localization and mapping. Further advantageously, one embodiment maintains the particles in a relatively computationally efficient manner, thereby permitting the SLAM processes to be performed in software on relatively inexpensive microprocessor-based computer systems.
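The multiple-hypothesis particle approach described in the abstract can be illustrated with a minimal sketch (the class and function names here are illustrative assumptions, not taken from the patent): each particle carries one pose hypothesis, dead-reckoning increments propagate every hypothesis with added noise, and resampling concentrates particles on the better-weighted hypotheses while keeping the computation bounded.

```python
import math
import random

class Particle:
    """One pose hypothesis (x, y, heading) with an importance weight."""
    def __init__(self, x, y, heading, weight=1.0):
        self.x, self.y, self.heading, self.weight = x, y, heading, weight

def motion_update(particles, dx, dy, dtheta, noise=0.05):
    """Propagate every hypothesis by the dead-reckoning increment plus noise."""
    for p in particles:
        p.x += dx + random.gauss(0.0, noise)
        p.y += dy + random.gauss(0.0, noise)
        p.heading = (p.heading + dtheta) % (2.0 * math.pi)

def resample(particles):
    """Draw a fresh particle set in proportion to the weights."""
    total = sum(p.weight for p in particles)
    picks = random.choices(particles,
                           weights=[p.weight / total for p in particles],
                           k=len(particles))
    return [Particle(p.x, p.y, p.heading) for p in picks]
```

Keeping the per-particle state this small is one way the particle set can remain cheap enough for the inexpensive microprocessor-based systems the abstract mentions.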
61 Claims
1. A method of autonomous localization and mapping, the method comprising:
visually observing an environment via a visual sensor;
maintaining a map of landmarks in a data store, where the map of landmarks is based at least in part on visual observations of the environment;
receiving data from a dead reckoning sensor, where the dead reckoning sensor relates to movement of the visual sensor within the environment;
using data from the dead reckoning sensor and a prior pose estimate to predict a new device pose in a global reference frame at least partly in response to a determination that a known landmark has not at least recently been encountered; and
using data from the visual sensor to predict a new device pose in the global reference frame at least partly in response to a determination that a known landmark has been recognized, where the new device pose estimate is based at least in part on a previous pose estimate associated with the known landmark, and using the visual sensor data to update one or more maps. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
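The two-branch prediction in claim 1 can be sketched as follows, assuming planar (x, y) poses and a dictionary-based landmark record (all names are illustrative, not claim language): dead reckoning extends the prior estimate when no known landmark has recently been encountered, while a recognized landmark anchors the new estimate on the pose previously associated with it.

```python
def predict_pose(prior_pose, odometry_delta, matched_landmark=None):
    """Return a new (x, y) pose estimate in the global reference frame."""
    if matched_landmark is None:
        # No known landmark recently encountered: integrate dead reckoning
        # on top of the prior estimate (drift accumulates along this branch).
        x, y = prior_pose
        dx, dy = odometry_delta
        return (x + dx, y + dy)
    # A known landmark was recognized: base the estimate on the pose
    # previously stored with that landmark plus the observed offset.
    lx, ly = matched_landmark["pose"]
    ox, oy = matched_landmark["offset"]
    return (lx + ox, ly + oy)
```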
9. A computer program embodied in a tangible medium for autonomous localization and mapping, the computer program comprising:
a module with instructions configured to visually observe an environment via a visual sensor;
a module with instructions configured to maintain a map of landmarks in a data store, where the map of landmarks is based at least in part on visual observations of the environment;
a module with instructions configured to receive data from a dead reckoning sensor, where the dead reckoning sensor relates to movement of the visual sensor within the environment;
a module with instructions configured to use data from the dead reckoning sensor and a prior pose estimate to predict a new device pose in a global reference frame at least partly in response to a determination that a known landmark has not at least recently been encountered; and
a module with instructions configured to use data from the visual sensor to predict a new device pose in the global reference frame at least partly in response to a determination that a known landmark has been recognized, where the new device pose estimate is based at least in part on a previous pose estimate associated with the known landmark, and using the visual sensor data to update one or more maps. - View Dependent Claims (10)
11. A method of localization and mapping in a mobile device that travels in an environment, the method comprising:
receiving images of the environment from a visual sensor coupled to the mobile device as the mobile device travels in the environment;
extracting visual features from one or more images;
matching at least a portion of the visual features to previously observed features;
estimating one or more poses of the mobile device relative to the previously-observed sets of features based at least in part on matches found between features observed in the image and features previously observed;
using the one or more estimated relative poses to localize the mobile device within one or more maps; and
updating the one or more maps. - View Dependent Claims (12, 13, 14, 15, 16)
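The feature-matching step of claim 11 might look like the following sketch, under the assumption that visual features are fixed-length descriptor vectors compared by Euclidean distance (the threshold and helper name are assumptions, not from the patent):

```python
def match_features(observed, stored, max_dist=0.5):
    """Pair each observed descriptor with its nearest stored one, if close enough.

    Returns a list of (observed_index, stored_index) matches, which a later
    step can use to estimate poses relative to previously observed features.
    """
    matches = []
    for i, obs in enumerate(observed):
        best_j, best_d = None, max_dist
        for j, ref in enumerate(stored):
            d = sum((a - b) ** 2 for a, b in zip(obs, ref)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches
```

A real system would use a scale- and viewpoint-robust descriptor (the later claims mention SIFT) and an indexed search rather than this brute-force loop.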
17. A circuit for localization and mapping in a mobile device that travels in an environment, the circuit comprising:
a circuit configured to receive images of the environment from a visual sensor coupled to the mobile device as the mobile device travels in the environment;
a circuit configured to extract visual features from one or more images;
a circuit configured to match at least a portion of the visual features to previously-observed features;
a circuit configured to estimate one or more poses of the mobile device relative to the previously-observed sets of features based at least in part on matches found between features observed in the image and features previously observed;
a circuit configured to use the one or more estimated relative poses to localize the mobile device within one or more maps; and
a circuit configured to update the one or more maps. - View Dependent Claims (18, 19, 20, 21)
22. A computer program embodied in a tangible medium for localization and mapping in a mobile device that travels in an environment, the computer program comprising:
a module with instructions configured to receive images of the environment from a visual sensor coupled to the mobile device as the mobile device travels in the environment;
a module with instructions configured to extract visual features from one or more images;
a module with instructions configured to match at least a portion of the visual features to previously-observed features;
a module with instructions configured to estimate one or more poses of the mobile device relative to the previously-observed sets of features based at least in part on matches found between features observed in the image and features previously observed;
a module with instructions configured to use the one or more estimated relative poses to localize the mobile device within one or more maps; and
a module with instructions configured to update the one or more maps. - View Dependent Claims (23, 24)
25. A method of autonomous localization, the method comprising:
using dead reckoning data for navigation between observations of visually-identifiable landmarks; and
using a visual observation of a landmark with a reference in a global reference frame to adjust an estimate of a pose so as to reduce an amount of drift in a pose later estimated with the dead reckoning data. - View Dependent Claims (26, 27, 28, 29, 30, 31)
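The drift-reduction idea of claim 25 can be sketched as a correction that blends a drifted dead-reckoning estimate toward the pose implied by a globally referenced landmark (the blend factor is an illustrative assumption, not claim language):

```python
def correct_drift(estimated_pose, landmark_fix, trust=1.0):
    """Blend the drifted (x, y) estimate toward the landmark-implied pose.

    trust=1.0 snaps fully to the landmark fix; values in (0, 1) weigh the
    fix against the dead-reckoning estimate. Poses estimated afterward
    from dead reckoning start from this corrected point, so accumulated
    drift is reduced.
    """
    ex, ey = estimated_pose
    fx, fy = landmark_fix
    return (ex + trust * (fx - ex), ey + trust * (fy - ey))
```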
32. A circuit for autonomous localization, the circuit comprising:
a circuit configured to use dead reckoning data for navigation between observations of visually-identifiable landmarks; and
a circuit configured to use a visual observation of a landmark with a reference in a global reference frame to adjust an estimate of a pose so as to reduce an amount of drift in a pose later estimated with the dead reckoning data. - View Dependent Claims (33, 34, 35)
36. A computer program embodied in a tangible medium for autonomous localization, the computer program comprising:
a module with instructions configured to use dead reckoning data for navigation between observations of visually-identifiable landmarks; and
a module with instructions configured to use a visual observation of a landmark with a reference in the global reference frame to adjust an estimate of a pose so as to reduce an amount of drift in a pose later estimated with the dead reckoning data. - View Dependent Claims (37, 38)
39. A circuit for autonomous localization, the circuit comprising:
a means for using dead reckoning data between observations of visually-identifiable landmarks; and
a means for using a visual observation of a landmark with a reference in the global reference frame to adjust an estimate of a pose such that an amount of drift in a pose later estimated with the dead reckoning data is substantially reduced. - View Dependent Claims (40, 41)
42. A method of autonomous localization and mapping, the method comprising:
receiving images from a visual sensor;
receiving data from a dead reckoning sensor;
generating a map based on landmarks observed in the images, where a landmark is associated with a device pose as at least partly determined by data from the dead reckoning sensor, where the landmarks are identified by visual features of an unaltered or unmodified environment and not by detection of artificial navigational beacons; and
localizing within the map by using a combination of recognition of visual features of the environment and dead reckoning data.
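Claim 42's map-generation and localization steps can be sketched with a dictionary keyed by a landmark's visual signature, storing the dead-reckoning pose at first observation; no artificial beacons are involved (the structure and names are assumptions for illustration):

```python
def add_landmark(landmark_map, signature, device_pose):
    """Record a naturally occurring landmark against the current device pose."""
    if signature not in landmark_map:
        # First sighting: pose comes at least partly from dead reckoning.
        landmark_map[signature] = {"pose": device_pose, "sightings": 1}
    else:
        landmark_map[signature]["sightings"] += 1
    return landmark_map

def localize(landmark_map, signature, observed_offset):
    """If the visual signature is recognized, derive a pose from the map.

    Returns None when the signature is unknown, in which case the caller
    falls back to dead reckoning alone.
    """
    entry = landmark_map.get(signature)
    if entry is None:
        return None
    px, py = entry["pose"]
    ox, oy = observed_offset
    return (px + ox, py + oy)
```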
43. The method as defined in 42, further comprising using the localization and mapping for a mobile robot.
44. The method as defined in 42, wherein the visual sensor corresponds to a single camera.
45. The method as defined in 44, wherein the visual sensor is coupled to a mobile robot, further comprising having the mobile robot move to provide images with different perspective views.
46. The method as defined in 42, wherein the visual sensor corresponds to multiple cameras.
47. The method as defined in 42, wherein the dead reckoning data corresponds to data from at least one of an odometer and a pedometer.
48. The method as defined in 42, wherein generating the map and localizing within the map are performed in real time.
49. The method as defined in 42, further comprising updating the map by using a combination of recognition of visual features of the environment and dead reckoning data.
50. A computer program embodied in a tangible medium for autonomous localization and mapping, the computer program comprising:
a module with instructions configured to receive images from a visual sensor;
a module with instructions configured to receive data from a dead reckoning sensor;
a module with instructions configured to generate a map based on landmarks observed in the images, where a landmark is associated with a device pose as at least partly determined by data from the dead reckoning sensor, where the landmarks are identified by visual features of an unaltered or unmodified environment and not by detection of artificial navigational beacons; and
a module with instructions configured to localize within the map by using a combination of recognition of visual features of the environment and dead reckoning data.
51. The computer program as defined in 50, wherein the visual sensor is coupled to a mobile robot, further comprising a module with instructions configured to have the mobile robot move to provide images with different perspective views.
52. The computer program as defined in 50, wherein the dead reckoning data corresponds to data from at least one of an odometer and a pedometer.
53. A method of adding a landmark to a map of landmarks, the method comprising:
using visual features observed in an environment as landmarks;
referencing poses for landmarks in a map of landmarks in a global reference frame;
storing one or more coordinates of the landmark's 3-D features in the landmark reference frame; and
storing an initial estimate of landmark pose.
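The landmark record assembled by claim 53 can be sketched as a structure holding the 3-D feature coordinates in the landmark's own reference frame together with an initial global pose estimate; claim 54's subsequent refinement is shown as well (the field and function names are mine, not the patent's):

```python
def make_landmark(features_3d, initial_pose):
    """Bundle a landmark's local 3-D features with its global pose estimate."""
    return {
        "features_3d": list(features_3d),  # (x, y, z) in the landmark frame
        "pose_estimate": initial_pose,     # global-frame pose, refined later
    }

def refine_pose(landmark, measured_pose):
    """Alter the initial pose estimate with a subsequent measurement."""
    landmark["pose_estimate"] = measured_pose
    return landmark
```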
54. The method as defined in 53, further comprising altering the initial estimate of landmark pose by a subsequent measurement.
55. The method as defined in 53, wherein storing one or more coordinates further comprises measuring 3-dimensional displacements from a visual sensor coupled to a mobile robot.
56. The method as defined in 53, wherein the observed visual features correspond to scale-invariant features (SIFT).
57. The method as defined in 53, wherein the method is performed in real time.
58. The method as defined in 53, further comprising using images from a single camera to detect the visual features.
59. A computer program embodied in a tangible medium for adding a landmark to a map of landmarks, the computer program comprising:
a module with instructions configured to use visual features observed in an environment as landmarks;
a module with instructions configured to reference poses for landmarks in a map of landmarks in a global reference frame;
a module with instructions configured to store one or more coordinates of the landmark's 3-D features in the landmark reference frame; and
a module with instructions configured to store an initial estimate of landmark pose.
60. The computer program as defined in 59, wherein the module with instructions configured to store one or more coordinates further comprises instructions configured to measure 3-dimensional displacements from a visual sensor coupled to a mobile robot.
61. The computer program as defined in 59, wherein the observed visual features correspond to scale-invariant features (SIFT).
Specification