Fault tolerance to provide robust tracking for autonomous positional awareness
First Claim
1. A system of guiding a mobile platform in an unmapped area and without producing a map of the unmapped area, the system including:
a mobile platform;
a sensory interface coupling to one or more sensors including at least some visual sensors and at least some inertial sensors, wherein the one or more sensors are configured to sense one or more of position, motion or environment of the mobile platform;
a processor coupled to the sensory interface and the mobile platform to provide guidance and control, and further coupled to a computer readable storage medium storing computer instructions configured for performing:
maintaining a set of time dependent tracking states that include:
(i) a pose and (ii) one or more frames of sensor readings including at least some frames from visual sensors wherein at least some frames include sets of 2D feature points located using image information from the visual sensors;
wherein each frame of sensor readings can include sensory information received from any one of a plurality of sensors available to the mobile platform;
selecting two or more tracking states from the set of time dependent tracking states;
creating a 2D feature correspondences set comprising common 2D feature points among the visual sensor frames from the selected tracking states;
triangulating 2D feature points from the 2D feature correspondences set to form a 3D point set;
selecting a subset of the 3D point set that includes 3D points having re-projection error within a threshold;
updating current poses for at least two of the tracking states within the set of time dependent tracking states subject to a criterion including reduction of a total of re-projection errors of the selected 3D points; and
guiding the mobile platform using the updated current pose determined for the at least two tracking states.
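The correspondence, triangulation, and inlier-selection steps recited above can be sketched in Python. This is an illustrative simplification, not the patented implementation: it assumes rectified stereo geometry (two frames whose poses differ only by a known baseline translation along x) and a unit-style pinhole camera, and the helper names `triangulate_stereo`, `reproject`, and `select_inliers` are hypothetical.

```python
import math

def triangulate_stereo(p_left, p_right, baseline, f):
    """Triangulate a 3D point from one 2D correspondence between two
    visual-sensor frames whose poses differ by a pure x-translation
    (rectified stereo; a simplifying assumption, not the general
    multi-view case recited in the claim)."""
    xl, yl = p_left
    xr, yr = p_right
    disparity = xl - xr
    if disparity <= 0:
        return None  # point at or beyond infinity; cannot triangulate
    z = f * baseline / disparity
    return (xl * z / f, yl * z / f, z)

def reproject(point3d, cam_x, f):
    """Project a 3D point into a camera translated cam_x along x."""
    x, y, z = point3d
    return (f * (x - cam_x) / z, f * y / z)

def select_inliers(correspondences, baseline, f, threshold):
    """Triangulate each 2D correspondence, then keep only 3D points
    whose total re-projection error stays within the threshold --
    the claim's subset-selection step."""
    inliers = []
    for p_left, p_right in correspondences:
        pt = triangulate_stereo(p_left, p_right, baseline, f)
        if pt is None:
            continue
        rl = reproject(pt, 0.0, f)       # back into the left frame
        rr = reproject(pt, baseline, f)  # back into the right frame
        err = math.hypot(rl[0] - p_left[0], rl[1] - p_left[1]) \
            + math.hypot(rr[0] - p_right[0], rr[1] - p_right[1])
        if err <= threshold:
            inliers.append(pt)
    return inliers
```

For example, the correspondence `((0.5, 0.2), (0.3, 0.2))` with a 0.1 baseline triangulates to depth 0.5 and reprojects exactly, so it survives even a near-zero threshold, while a zero-disparity pair is discarded.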
2 Assignments
0 Petitions
Abstract
The positional awareness techniques described here employ visual-inertial sensory data gathering and analysis hardware, presented with reference to specific example implementations. Improvements in the use of sensors, processing techniques, and hardware design enable specific embodiments to provide positional awareness to machines with improved speed and accuracy.
21 Claims
1. A system of guiding a mobile platform in an unmapped area and without producing a map of the unmapped area (recited in full under First Claim above). - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
20. A non-transitory computer readable medium having instructions stored thereon for performing a method of guiding a mobile platform in an unmapped area and without producing a map of the unmapped area, including:
maintaining a set of time dependent tracking states that include:
(i) a pose and (ii) one or more frames of sensor readings including at least some frames from visual sensors wherein at least some frames include sets of 2D feature points located using image information from the visual sensors;
wherein each frame of sensor readings can include sensory information received from any one of a plurality of sensors available to the mobile platform;
selecting two or more tracking states from the set of time dependent tracking states;
creating a 2D feature correspondences set comprising common 2D feature points among the visual sensor frames from the selected tracking states;
triangulating 2D feature points from the 2D feature correspondences set to form a 3D point set;
selecting a subset of the 3D point set that includes 3D points having re-projection error within a threshold;
updating current poses for at least two of the tracking states within the set of time dependent tracking states subject to a criterion including reduction of a total of re-projection errors of the selected 3D points; and
guiding the mobile platform using the updated current pose determined for the at least two tracking states.
21. A method of guiding a mobile platform in an unmapped area and without producing a map of the unmapped area, including:
maintaining a set of time dependent tracking states that include:
(i) a pose and (ii) one or more frames of sensor readings including at least some frames from visual sensors wherein at least some frames include sets of 2D feature points located using image information from the visual sensors;
wherein each frame of sensor readings can include sensory information received from any one of a plurality of sensors available to the mobile platform;
selecting two or more tracking states from the set of time dependent tracking states;
creating a 2D feature correspondences set comprising common 2D feature points among the visual sensor frames from the selected tracking states;
triangulating 2D feature points from the 2D feature correspondences set to form a 3D point set;
selecting a subset of the 3D point set that includes 3D points having re-projection error within a threshold;
updating current poses for at least two of the tracking states within the set of time dependent tracking states subject to a criterion including reduction of a total of re-projection errors of the selected 3D points; and
guiding the mobile platform using the updated current pose determined for the at least two tracking states.
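The pose-update criterion shared by claims 1, 20, and 21 — adjust current poses to reduce the total re-projection error of the selected 3D points — can be illustrated with a deliberately reduced example. The assumptions here are not in the claims: the pose is collapsed to a single translation component along x, focal length is 1, and the optimizer is plain numerical gradient descent rather than whatever solver an actual embodiment would use; `total_reproj_error` and `refine_pose` are hypothetical names.

```python
def total_reproj_error(cam_x, points3d, observations, f=1.0):
    """Sum of squared re-projection errors of the selected 3D points
    for a camera translated cam_x along x (a 1-DOF stand-in for the
    full 6-DOF pose in the claims)."""
    err = 0.0
    for (X, Y, Z), (u, v) in zip(points3d, observations):
        pu, pv = f * (X - cam_x) / Z, f * Y / Z
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err

def refine_pose(cam_x, points3d, observations, f=1.0,
                step=1e-4, lr=0.1, iters=500):
    """Update the current pose by numerical gradient descent on the
    total re-projection error -- the claims' update criterion of
    'reduction of a total of re-projection errors'."""
    for _ in range(iters):
        e0 = total_reproj_error(cam_x, points3d, observations, f)
        e1 = total_reproj_error(cam_x + step, points3d, observations, f)
        grad = (e1 - e0) / step  # forward-difference gradient
        cam_x -= lr * grad
    return cam_x
```

Given observations synthesized from a known camera offset, the refined pose recovers that offset to within the finite-difference bias of the gradient estimate; a real system would instead run a nonlinear least-squares solver over all selected tracking states jointly.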
Specification