METHOD FOR REGISTRATION OF RANGE IMAGES FROM MULTIPLE LiDARS
First Claim
1. A method for fusing sensor signals from at least two LiDAR sensors having overlapping fields of view so as to track objects detected by the sensors, said method comprising:
defining a transformation value for at least one of the LiDAR sensors that identifies an orientation angle and position of the sensor;
providing target scan points from the objects detected by the sensors where the target scan points for each sensor provide a separate target point map;
projecting the target point map from the at least one sensor to another one of the LiDAR sensors using a current transformation value to overlap the target scan points from the sensors;
determining a plurality of weighting values using the current transformation value where each weighting value identifies a positional change of one of the scan points for the at least one sensor to a location of a scan point for the another one of the sensors;
calculating a new transformation value using the weighting values;
comparing the new transformation value to the current transformation value to determine a difference therebetween; and
revising the plurality of weighting values based on the difference between the new transformation value and the current transformation value until the new transformation value matches the current transformation value.
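The iterative procedure recited in this claim resembles a weighted iterative-closest-point (ICP) registration: project one scan with the current transform, weight point correspondences, re-estimate the transform, and repeat until the new and current transformation values match. A minimal sketch under that reading, assuming 2-D scans, nearest-neighbour correspondence, and Gaussian residual weights (the names `estimate_rigid_2d` and `register` and all parameters are illustrative, not taken from the patent):

```python
import numpy as np

def estimate_rigid_2d(src, dst, w):
    """Weighted least-squares rigid transform (R, t) mapping src points
    onto dst points: a standard weighted Kabsch/Procrustes step."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    # Weighted cross-covariance of the centred point sets.
    H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def register(scan_a, scan_b, iters=50, tol=1e-6):
    """Iterate: project scan_a with the current transform, match nearest
    points in scan_b, weight the matches by residual, re-estimate, and
    stop when the new transform matches the current one."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        proj = scan_a @ R.T + t
        # Nearest-neighbour correspondences between the overlapped maps.
        d2 = ((proj[:, None, :] - scan_b[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        dist = np.sqrt(d2[np.arange(len(proj)), idx])
        # Per-point weighting values derived from the positional change.
        w = np.exp(-dist ** 2 / 0.5)
        R_new, t_new = estimate_rigid_2d(scan_a, scan_b[idx], w)
        converged = (np.abs(R_new - R).max() < tol
                     and np.abs(t_new - t).max() < tol)
        R, t = R_new, t_new
        if converged:
            break
    return R, t
```

The convergence test compares the newly calculated transformation value against the current one, mirroring the final "comparing" and "revising ... until ... matches" steps of the claim.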
Abstract
A system and method for registering range images from objects detected by multiple LiDAR sensors on a vehicle. The method includes aligning frames of data from at least two LiDAR sensors having overlapping fields of view in a sensor signal fusion operation so as to track objects detected by the sensors. The method defines a transformation value for at least one of the LiDAR sensors that identifies an orientation angle and position of the sensor and provides target scan points from the objects detected by the sensors, where the target scan points for each sensor provide a separate target point map. The method projects the target point map from the at least one sensor to another one of the LiDAR sensors using a current transformation value to overlap the target scan points from the sensors.
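The projection step described in the abstract, applying a pose given by an orientation angle and a position, can be sketched as follows (an illustrative 2-D sketch; the name `project_points` and its parameters are chosen here and do not appear in the patent):

```python
import numpy as np

def project_points(points, angle, position):
    """Map scan points from one LiDAR frame into another sensor's frame
    using a pose given by an orientation angle and a position offset."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s],
                  [s,  c]])       # planar rotation by `angle`
    return points @ R.T + np.asarray(position, dtype=float)
```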
19 Claims
1. A method for fusing sensor signals from at least two LiDAR sensors having overlapping fields of view so as to track objects detected by the sensors (set forth in full above). Dependent claims: 2-13.
14. A method for fusing sensor signals from at least two LiDAR sensors having overlapping fields of view so as to track objects detected by the sensors, said LiDAR sensors being on a vehicle, said method comprising:
defining a current transformation value for at least one of the LiDAR sensors that identifies an orientation angle and position of the sensor at one sample time;
providing target scan points from the objects detected by the sensors where the target scan points for each sensor provide a separate target point map;
projecting the target point map from the at least one sensor to another one of the LiDAR sensors using the current transformation value to overlap the target scan points from the sensors;
calculating a new transformation value at a next sample time;
comparing the new transformation value to the current transformation value to determine a difference therebetween; and
updating the current transformation value based on the difference.
Dependent claims: 15-19.
Specification