Efficient vision-aided inertial navigation using a rolling-shutter camera with inaccurate timestamps
First Claim
1. A vision-aided inertial navigation system (VINS) comprising:
an image source configured to produce image data at a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein the image data captures feature observations within the 3D environment at each of the first set of time instances;
an inertial measurement unit (IMU) configured to produce IMU data for the VINS along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances, wherein the IMU data indicates a motion of the VINS along the trajectory; and
a processing unit comprising an estimator configured to process the IMU data and the image data to compute state estimates for poses of the IMU at each of the first set of time instances and poses of the image source at each of the second set of time instances along the trajectory, wherein the estimator is configured to compute each of the poses for the image source as an interpolation from a subset of the poses for the IMU along the trajectory.
Abstract
Vision-aided inertial navigation techniques are described. In one example, a vision-aided inertial navigation system (VINS) comprises an image source that produces image data at a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein the image data captures features within the 3D environment at each of the first set of time instances. An inertial measurement unit (IMU) produces IMU data for the VINS along the trajectory at a second set of time instances that is misaligned with the first set of time instances, wherein the IMU data indicates a motion of the VINS along the trajectory. A processing unit comprises an estimator that processes the IMU data and the image data to compute state estimates for 3D poses of the IMU at each of the first set of time instances and 3D poses of the image source at each of the second set of time instances along the trajectory. The estimator computes each of the poses for the image source as a linear interpolation from a subset of the poses for the IMU along the trajectory.
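As a rough illustration of the interpolation the abstract describes, the sketch below blends the two IMU poses that bracket an image timestamp. The function name, the quaternion layout, and the normalized-linear-blend shortcut are illustrative assumptions for this sketch, not the patented estimator:

```python
import numpy as np

def interpolate_camera_pose(t_img, t0, t1, p0, p1, q0, q1):
    """Interpolate a pose at image time t_img from the two IMU poses
    that bracket it in time (t0 <= t_img <= t1).

    p0, p1 : 3-vector positions at t0 and t1
    q0, q1 : unit quaternions (w, x, y, z) at t0 and t1
    """
    lam = (t_img - t0) / (t1 - t0)      # interpolation weight in [0, 1]
    p = (1.0 - lam) * p0 + lam * p1     # linear interpolation of position
    q = (1.0 - lam) * q0 + lam * q1     # normalized linear blend of the
    q = q / np.linalg.norm(q)           # quaternions, which approximates
    return p, q                         # slerp for closely spaced IMU poses
```

In a full estimator the interpolated pose would also be chained with the camera-to-IMU extrinsic calibration; that step is omitted here for brevity.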
28 Claims
1. A vision-aided inertial navigation system (VINS) comprising:

an image source configured to produce image data at a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein the image data captures feature observations within the 3D environment at each of the first set of time instances;
an inertial measurement unit (IMU) configured to produce IMU data for the VINS along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances, wherein the IMU data indicates a motion of the VINS along the trajectory; and
a processing unit comprising an estimator configured to process the IMU data and the image data to compute state estimates for poses of the IMU at each of the first set of time instances and poses of the image source at each of the second set of time instances along the trajectory, wherein the estimator is configured to compute each of the poses for the image source as an interpolation from a subset of the poses for the IMU along the trajectory.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
14. A method for computing state estimates for a vision-aided inertial navigation system (VINS) comprising:

receiving image data associated with each of a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein the image data captures features within the 3D environment at each of the first set of time instances;
receiving, from an inertial measurement unit (IMU), IMU data indicative of a motion of the VINS along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances; and
processing the IMU data and the image data to compute state estimates for poses of the IMU at each of the first set of time instances and poses of the image source at each of the second set of time instances along the trajectory, wherein each of the poses for the image source is computed as a linear interpolation from a subset of the poses for the IMU along the trajectory.

Dependent claims: 15, 16, 17, 18, 19, 20, 21, 22
23. A non-transitory computer-readable storage device comprising program code to cause a processor to:

receive image data associated with each of a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein the image data captures features within the 3D environment at each of the first set of time instances;
receive, from an inertial measurement unit (IMU), IMU data indicative of a motion of the VINS along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances; and
process the IMU data and the image data to compute state estimates for 3D poses of the IMU at each of the first set of time instances and 3D poses of the image source at each of the second set of time instances along the trajectory, wherein each of the poses for the image source is computed as a linear interpolation from a subset of the poses for the IMU along the trajectory.
24. An inertial navigation system comprising:

a first sensor to produce a first set of spatial data at a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein the first set of spatial data captures features within the 3D environment at each of the first set of time instances;
a second sensor to produce a second set of spatial data for the inertial navigation system along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances, wherein the second set of spatial data indicates a motion of the inertial navigation system along the trajectory; and
a processing unit comprising an estimator that processes the first set of spatial data and the second set of spatial data to compute state estimates for poses of the second sensor at each of the first set of time instances and poses of the first sensor at each of the second set of time instances along the trajectory, wherein the estimator computes each of the poses for the second sensor as an interpolation from a subset of the poses for the first sensor along the trajectory.
25. A vision-aided inertial navigation system (VINS) comprising:

an image source to produce image data at a first set of time instances along a trajectory within a three-dimensional (3D) environment, wherein the image data captures features within the 3D environment at each of the first set of time instances;
an inertial measurement unit (IMU) to produce IMU data for the VINS along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances, wherein the IMU data indicates a motion of the VINS along the trajectory; and
a processing unit comprising an estimator that processes the IMU data, the image data, or a set of features data to compute state estimates for poses of the IMU at each of the first set of time instances and poses of the image source at each of the second set of time instances along the trajectory, wherein the estimator computes each of the poses for the IMU as an interpolation from a subset of the poses for the image source along the trajectory.
26. A vision-aided inertial navigation system (VINS) comprising:

an image source configured to produce image data that captures feature observations along a trajectory within a three-dimensional (3D) environment, wherein the image source comprises a sensor having a plurality of rows of image data, and wherein the sensor is configured to read the image data row-by-row so that each of the rows of image data corresponds to a different one of a first set of time instances along the trajectory;
an inertial measurement unit (IMU) configured to produce IMU data for the VINS along the trajectory at a second set of time instances that is misaligned in time with the first set of time instances; and
a processing unit comprising an estimator configured to process the IMU data and the image data to compute state estimates for poses of the IMU at each of the first set of time instances corresponding to the different rows of image data.

Dependent claims: 27, 28
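Claim 26 ties each sensor row to its own time instance. The sketch below shows how per-row capture times arise when a rolling-shutter sensor reads its rows out sequentially; the uniform-readout model and the function name are assumptions for illustration, not part of the claimed system:

```python
def row_timestamps(t_frame_start, readout_time, num_rows):
    """Per-row capture times for a rolling-shutter sensor whose rows are
    read out sequentially over readout_time seconds, starting at
    t_frame_start. Each row therefore observes the scene at a slightly
    different pose along the trajectory."""
    dt = readout_time / num_rows                 # time between consecutive rows
    return [t_frame_start + r * dt for r in range(num_rows)]
```

Each of these per-row times is a distinct "first set" instance at which the estimator must produce an IMU pose, which is why interpolating from a sparse subset of IMU poses is attractive for rolling-shutter cameras.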