System and method for vehicle position and velocity estimation based on camera and lidar data
First Claim
1. A system comprising:
a data processor; and
a vehicle position and velocity estimation module, executable by the data processor, the vehicle position and velocity estimation module being configured to perform a proximate object position and velocity estimation operation for an autonomous vehicle, the proximate object position and velocity estimation operation being configured to:
receive input object data from a subsystem of the autonomous vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device, the distance measuring device being one or more light imaging, detection, and ranging (LIDAR) sensors;
determine a two-dimensional (2D) position of a proximate object near the autonomous vehicle using the image data received from the image generating device and semantic segmentation processing of the image data;
track a three-dimensional (3D) position of the proximate object using the distance data received from the distance measuring device over a plurality of cycles and generate tracking data;
correlate the proximate object identified from the image data with the proximate object identified and tracked from the distance data, the correlation being configured to match the 2D position of the proximate object detected in the image data with the 3D position of the same proximate object detected in the distance data;
determine a 3D position of the proximate object using the 2D position, the distance data received from the distance measuring device, and the tracking data;
determine a velocity of the proximate object using the 3D position and the tracking data; and
output the 3D position and velocity of the proximate object relative to the autonomous vehicle.
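The claim recites a fixed sequence of operations. The following Python sketch lays out that sequence as a single processing cycle; it is a minimal skeleton under assumed names (the class `PositionVelocityEstimator`, `run_cycle`, and the stubbed step methods are illustrative, not taken from the patent).

```python
# Hypothetical sketch of the claimed estimation cycle; all names are illustrative.
from dataclasses import dataclass, field


@dataclass
class EstimatorOutput:
    position_3d: tuple   # (x, y, z) of the proximate object, vehicle frame
    velocity: tuple      # (vx, vy, vz) relative to the autonomous vehicle


@dataclass
class PositionVelocityEstimator:
    tracks: dict = field(default_factory=dict)  # tracking data accumulated over cycles

    def run_cycle(self, image_data, lidar_points, timestamp):
        # 1. Determine 2D positions of proximate objects via semantic segmentation.
        detections_2d = self.detect_2d(image_data)
        # 2. Track 3D positions from LIDAR distance data over a plurality of cycles.
        self.update_tracks(lidar_points, timestamp)
        # 3. Correlate each 2D detection with a tracked 3D object.
        matches = self.correlate(detections_2d, self.tracks)
        # 4./5. Determine 3D position and velocity from the match and tracking data.
        outputs = [self.estimate(track_id, det) for det, track_id in matches]
        # 6. Output positions and velocities relative to the autonomous vehicle.
        return outputs

    # Placeholder step methods; per-step logic is sketched elsewhere on this page.
    def detect_2d(self, image_data):
        return []    # 2D pixel positions from semantic segmentation (not shown here)

    def update_tracks(self, lidar_points, timestamp):
        pass         # LIDAR track association over cycles (not shown here)

    def correlate(self, detections_2d, tracks):
        return []    # list of (detection, track_id) matches (not shown here)

    def estimate(self, track_id, detection):
        return EstimatorOutput((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))  # placeholder values
```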
Abstract
A system and method for vehicle position and velocity estimation based on camera and LIDAR data are disclosed. A particular embodiment includes: receiving input object data from a subsystem of an autonomous vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device; determining a two-dimensional (2D) position of a proximate object near the autonomous vehicle using the image data received from the image generating device; tracking a three-dimensional (3D) position of the proximate object using the distance data received from the distance measuring device over a plurality of cycles and generating tracking data; determining a 3D position of the proximate object using the 2D position, the distance data received from the distance measuring device, and the tracking data; determining a velocity of the proximate object using the 3D position and the tracking data; and outputting the 3D position and velocity of the proximate object relative to the autonomous vehicle.
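The final steps of the abstract derive velocity from the 3D position and the tracking data. The sketch below assumes the tracking data is a time-stamped series of 3D positions in the vehicle frame and uses a per-axis least-squares slope; the patent does not specify this particular estimator, so treat it as one plausible reading.

```python
# Hypothetical velocity estimate from tracking data accumulated over several cycles.
# Assumes tracking data is a list of (timestamp_seconds, x, y, z) samples in the
# autonomous vehicle's frame; a per-axis least-squares slope gives the velocity.
import numpy as np

def velocity_from_track(samples):
    arr = np.asarray(samples, dtype=float)          # shape (N, 4): t, x, y, z
    if arr.shape[0] < 2:
        return np.zeros(3)                          # not enough cycles yet
    t = arr[:, 0] - arr[0, 0]
    vel = np.array([np.polyfit(t, arr[:, k], 1)[0] for k in (1, 2, 3)])
    return vel                                      # (vx, vy, vz) in units per second

# Example: an object moving +2 m/s in x over five 0.1 s cycles.
track = [(0.0, 10.0, 2.0, 0.0), (0.1, 10.2, 2.0, 0.0), (0.2, 10.4, 2.0, 0.0),
         (0.3, 10.6, 2.0, 0.0), (0.4, 10.8, 2.0, 0.0)]
print(velocity_from_track(track))   # approximately [2. 0. 0.]
```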
17 Claims
1. A system comprising:
a data processor; and
a vehicle position and velocity estimation module, executable by the data processor, the vehicle position and velocity estimation module being configured to perform a proximate object position and velocity estimation operation for an autonomous vehicle, the proximate object position and velocity estimation operation being configured to:
receive input object data from a subsystem of the autonomous vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device, the distance measuring device being one or more light imaging, detection, and ranging (LIDAR) sensors;
determine a two-dimensional (2D) position of a proximate object near the autonomous vehicle using the image data received from the image generating device and semantic segmentation processing of the image data;
track a three-dimensional (3D) position of the proximate object using the distance data received from the distance measuring device over a plurality of cycles and generate tracking data;
correlate the proximate object identified from the image data with the proximate object identified and tracked from the distance data, the correlation being configured to match the 2D position of the proximate object detected in the image data with the 3D position of the same proximate object detected in the distance data;
determine a 3D position of the proximate object using the 2D position, the distance data received from the distance measuring device, and the tracking data;
determine a velocity of the proximate object using the 3D position and the tracking data; and
output the 3D position and velocity of the proximate object relative to the autonomous vehicle.
Dependent claims: 2, 3, 4, 5, 6, 7.
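One way to read the "semantic segmentation processing" limitation is that the 2D position of a proximate object is taken from the segmented image region. The sketch below assumes a per-pixel class mask produced by an unspecified segmentation network and reports the centroid of each vehicle-labeled region; the class id and library choice are illustrative, not drawn from the patent.

```python
# Hypothetical 2D-position step: from a semantic-segmentation mask, take the pixel
# centroid of each connected region labeled as a vehicle. The mask itself is assumed
# to come from an unspecified segmentation network.
import numpy as np
from scipy import ndimage

VEHICLE_CLASS = 13  # illustrative class id for "vehicle"

def vehicle_centroids_2d(seg_mask):
    vehicle_pixels = (seg_mask == VEHICLE_CLASS)
    labels, n = ndimage.label(vehicle_pixels)          # split into connected objects
    # center_of_mass returns (row, col); flip to (u, v) = (col, row) image coordinates
    centers = ndimage.center_of_mass(vehicle_pixels, labels, range(1, n + 1))
    return [(c, r) for r, c in centers]

# Example: a 6x8 mask containing one 2x2 vehicle blob.
mask = np.zeros((6, 8), dtype=int)
mask[2:4, 3:5] = VEHICLE_CLASS
print(vehicle_centroids_2d(mask))   # [(3.5, 2.5)]
```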
8. A method comprising:
receiving input object data from a subsystem of an autonomous vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device, the distance measuring device being one or more light imaging, detection, and ranging (LIDAR) sensors;
determining a two-dimensional (2D) position of a proximate object near the autonomous vehicle using the image data received from the image generating device and semantic segmentation processing of the image data;
tracking a three-dimensional (3D) position of the proximate object using the distance data received from the distance measuring device over a plurality of cycles and generating tracking data;
correlating the proximate object identified from the image data with the proximate object identified and tracked from the distance data, the correlating including matching the 2D position of the proximate object detected in the image data with the 3D position of the same proximate object detected in the distance data;
determining a 3D position of the proximate object using the 2D position, the distance data received from the distance measuring device, and the tracking data;
determining a velocity of the proximate object using the 3D position and the tracking data; and
outputting the 3D position and velocity of the proximate object relative to the autonomous vehicle.
Dependent claims: 9, 10, 11, 12, 13, 14.
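The correlating step matches 2D image detections with 3D LIDAR tracks. A common way to do this, shown below as an assumption rather than as the claimed method, is to project each tracked 3D position into the image with a pinhole camera model and pick the nearest 2D detection within a gate. The intrinsic matrix, camera-frame convention, and threshold are illustrative.

```python
# Hypothetical correlation step: project each tracked 3D LIDAR position into the
# camera image with an assumed pinhole model, then match it to the nearest 2D
# detection within a pixel gate.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],     # fx, 0, cx (illustrative intrinsics)
              [0.0, 800.0, 360.0],     # 0, fy, cy
              [0.0, 0.0, 1.0]])

def project_to_image(point_cam):
    """Project a 3D point in the camera frame (x right, y down, z forward) to pixels."""
    uvw = K @ np.asarray(point_cam, dtype=float)
    return uvw[:2] / uvw[2]

def correlate(tracked_points_3d, detections_2d, max_pixel_dist=50.0):
    """Return (track_index, detection_index) pairs whose projections lie close enough."""
    pairs = []
    for ti, p3d in enumerate(tracked_points_3d):
        uv = project_to_image(p3d)
        dists = [np.linalg.norm(uv - np.asarray(d)) for d in detections_2d]
        di = int(np.argmin(dists))
        if dists[di] < max_pixel_dist:
            pairs.append((ti, di))
    return pairs

# Example: one tracked object about 20 m ahead, matched to the first 2D detection.
tracks_3d = [np.array([-1.0, 0.5, 20.0])]
detections = [(598.0, 382.0), (100.0, 100.0)]
print(correlate(tracks_3d, detections))   # [(0, 0)]
```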
15. A non-transitory machine-useable storage medium embodying instructions which, when executed by a machine, cause the machine to:
receive input object data from a subsystem of an autonomous vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device, the distance measuring device being one or more light imaging, detection, and ranging (LIDAR) sensors;
determine a two-dimensional (2D) position of a proximate object near the autonomous vehicle using the image data received from the image generating device and semantic segmentation processing of the image data;
track a three-dimensional (3D) position of the proximate object using the distance data received from the distance measuring device over a plurality of cycles and generate tracking data;
correlate the proximate object identified from the image data with the proximate object identified and tracked from the distance data, the correlation being configured to match the 2D position of the proximate object detected in the image data with the 3D position of the same proximate object detected in the distance data;
determine a 3D position of the proximate object using the 2D position, the distance data received from the distance measuring device, and the tracking data;
determine a velocity of the proximate object using the 3D position and the tracking data; and
output the 3D position and velocity of the proximate object relative to the autonomous vehicle.
Dependent claims: 16, 17.
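The tracking step accumulates 3D positions of the proximate object over a plurality of cycles. The sketch below uses simple nearest-neighbour association with a distance gate to build that per-object history; the gating value and data layout are assumptions, and the patent does not limit tracking to this scheme.

```python
# Hypothetical tracking step: associate per-cycle LIDAR object centroids to existing
# tracks by nearest neighbour, accumulating the time-stamped 3D history ("tracking
# data") that later position and velocity steps consume.
import numpy as np

class LidarTracker:
    def __init__(self, gate_m=2.0):
        self.gate_m = gate_m
        self.tracks = {}      # track_id -> list of (timestamp, np.array([x, y, z]))
        self._next_id = 0

    def update(self, centroids_3d, timestamp):
        for c in map(np.asarray, centroids_3d):
            best_id, best_d = None, self.gate_m
            for tid, hist in self.tracks.items():
                d = np.linalg.norm(hist[-1][1] - c)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:               # no track close enough: start a new one
                best_id = self._next_id
                self._next_id += 1
                self.tracks[best_id] = []
            self.tracks[best_id].append((timestamp, c))

# Example: the same object observed over three cycles, drifting forward 0.5 m per cycle.
tracker = LidarTracker()
for k, x in enumerate([20.0, 20.5, 21.0]):
    tracker.update([[x, 1.0, 0.0]], timestamp=0.1 * k)
print({tid: len(h) for tid, h in tracker.tracks.items()})   # {0: 3}
```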
Specification