RADAR-aided visual inertial odometry outlier removal
First Claim
1. A device comprising:
one or more processors configured to:
obtain one or more images from at least one camera;
translate a radio detection and ranging (RADAR) velocity map in at least one image plane of at least one camera, to form a three-dimensional RADAR velocity image, wherein the 3D RADAR velocity image includes a relative velocity of each pixel in the one or more images, wherein the relative velocity of each pixel is based on a RADAR velocity estimate in the three-dimensional RADAR velocity map;
detect visual features in the one or more images;
determine whether the visual features correspond to a moving object based on the relative velocity of each pixel determined;
remove the visual features that correspond to a moving object, prior to providing them as an input into a state updater, in a RADAR-aided visual inertial odometer; and
refine an estimated position, based on the removal of the visual features that correspond to the moving object.
Abstract
Various embodiments disclose a device with one or more processors which may be configured to translate a RADAR velocity map in at least one image plane of at least one camera, to form a three-dimensional RADAR velocity image. The 3D RADAR velocity image includes a relative velocity of each pixel in the one or more images, and the relative velocity of each pixel is based on a RADAR velocity estimate in the three-dimensional RADAR velocity map. The one or more processors may be configured to determine whether visual features correspond to a moving object based on the relative velocity of each pixel determined, and may be configured to remove the visual features that correspond to a moving object, prior to providing them as an input into a state updater, in a RADAR-aided visual inertial odometer.
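As a rough illustration of the translation step described in the abstract, the sketch below projects 3D RADAR returns into the camera image plane to form a per-pixel velocity image. This is a minimal sketch under assumed conventions (pinhole camera model, a given intrinsic matrix `K`, nearest-pixel fill, NaN for unmeasured pixels), not the patented implementation:

```python
import numpy as np

def radar_velocity_image(radar_xyz, radar_vel, K, image_shape):
    """Project 3D RADAR returns (each carrying a relative-velocity
    estimate) into the camera image plane, yielding a per-pixel
    velocity image; pixels with no RADAR return remain NaN."""
    h, w = image_shape
    vel_img = np.full((h, w), np.nan)
    in_front = radar_xyz[:, 2] > 0          # keep points in front of the camera
    pts, vels = radar_xyz[in_front], radar_vel[in_front]
    uvw = (K @ pts.T).T                     # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    for (u, v), s in zip(uv, vels):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:     # nearest-pixel fill
            vel_img[vi, ui] = s
    return vel_img
```

A dense velocity image would additionally interpolate or splat returns over pixel neighborhoods; this sketch leaves pixels without a RADAR return as NaN.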
26 Claims
1. A device comprising:
one or more processors configured to:
obtain one or more images from at least one camera;
translate a radio detection and ranging (RADAR) velocity map in at least one image plane of at least one camera, to form a three-dimensional RADAR velocity image, wherein the 3D RADAR velocity image includes a relative velocity of each pixel in the one or more images, wherein the relative velocity of each pixel is based on a RADAR velocity estimate in the three-dimensional RADAR velocity map;
detect visual features in the one or more images;
determine whether the visual features correspond to a moving object based on the relative velocity of each pixel determined;
remove the visual features that correspond to a moving object, prior to providing them as an input into a state updater, in a RADAR-aided visual inertial odometer; and
refine an estimated position, based on the removal of the visual features that correspond to the moving object.
Dependent claims: 2-14.
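The determine-and-remove limitations can be pictured as a simple velocity-gating test against the per-pixel RADAR velocity image. This is a hedged sketch, not the claimed method: the 0.5 m/s threshold, the `[v, u]` indexing convention, and the choice to keep features without a RADAR measurement are all illustrative assumptions:

```python
import numpy as np

def flag_static_features(features, vel_img, expected_rel_vel, thresh=0.5):
    """Return True for each (u, v) feature whose per-pixel RADAR
    relative velocity matches what a stationary point would show
    (within `thresh` m/s); features with no RADAR measurement (NaN)
    are conservatively treated as static."""
    keep = []
    for u, v in features:
        rel_v = vel_img[v, u]
        keep.append(bool(np.isnan(rel_v)
                         or abs(rel_v - expected_rel_vel) <= thresh))
    return keep
```

Features flagged as moving would then be dropped before the state updater sees them, e.g. `static = [f for f, k in zip(features, keep) if k]`.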
15. A method comprising:
obtaining one or more images from at least one camera;
translating a radio detection and ranging (RADAR) velocity map in at least one image plane of at least one camera, to form a three-dimensional RADAR velocity image, wherein the 3D RADAR velocity image includes a relative velocity of each pixel in the one or more images, wherein the relative velocity of each pixel is based on a RADAR velocity estimate in the three-dimensional RADAR velocity map;
detecting visual features in the one or more images;
determining whether the visual features correspond to a moving object based on the relative velocity of each pixel determined;
removing the visual features that correspond to a moving object, prior to providing them as an input into a state updater, in a RADAR-aided visual inertial odometer; and
refining at least one of:
(a) an estimated position, (b) an estimated orientation, and (c) velocity of a device, based on the removal of the visual features that correspond to the moving object.
Dependent claims: 16-24.
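The refining step can be illustrated with a toy stand-in for the state updater: estimate ego translation from only the features that survived moving-object removal, so that a fast-moving object no longer biases the estimate. The 2D translation-only motion model and mean-displacement estimator are assumptions for illustration, not the patented updater:

```python
import numpy as np

def refine_translation(prev_pts, curr_pts, static_mask):
    """Toy stand-in for the state updater: estimate 2D ego translation
    as the mean displacement of the features flagged as static."""
    disp = curr_pts[static_mask] - prev_pts[static_mask]
    return disp.mean(axis=0)
```

With a moving-object feature included, the mean displacement is pulled away from the true ego motion; masking that feature out before the update recovers the unbiased estimate.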
25. An apparatus comprising:
means for obtaining one or more images from at least one camera;
means for translating a radio detection and ranging (RADAR) velocity map in at least one image plane of the at least one camera, to form a three-dimensional RADAR velocity image, wherein the 3D RADAR velocity image includes a depth estimate of each pixel in the one or more images, and a relative velocity of each pixel in the one or more images, wherein the relative velocity of each pixel is based on a RADAR velocity estimate in the three-dimensional RADAR velocity map;
means for detecting visual features in the one or more images;
means for determining whether the visual features correspond to a moving object based on the relative velocity of each pixel determined;
means for removing the visual features that correspond to a moving object, prior to providing them as an input into a state updater, in a RADAR-aided visual inertial odometer; and
means for refining at least one of:
(a) an estimated position, (b) an estimated orientation, and (c) velocity of the device, based on the removal of the visual features that correspond to the moving object.
26. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors of a device to:
obtain one or more images from at least one camera;
translate a radio detection and ranging (RADAR) velocity map in at least one image plane of the at least one camera, to form a three-dimensional RADAR velocity image, wherein the 3D RADAR velocity image includes a depth estimate of each pixel in the one or more images, and a relative velocity of each pixel in the one or more images, wherein the relative velocity of each pixel is based on a RADAR velocity estimate in the three-dimensional RADAR velocity map;
detect visual features in the one or more images;
determine whether the visual features correspond to a moving object based on the relative velocity of each pixel determined;
remove the visual features that correspond to a moving object, prior to providing them as an input into a state updater; and
refine at least one of (a) an estimated position, (b) an estimated orientation, and (c) velocity of the device, based on the removal of the visual features that correspond to the moving object.
Specification