Autonomous vehicle: object-level fusion
First Claim
1. A method comprising:
converting sensor data of a plurality of detected objects to a common coordinate frame, the sensor data of each detected object collected from a given sensor of a plurality of heterogeneous sensors at a current measurement time, each detected object including at least one of kinematic information, geometric information, and object classification information based on the converted sensor data, the plurality of heterogeneous sensors being mounted on a highly-automated vehicle;
predicting position, velocity, orientation and bounding boxes of existing object tracks at the current measurement time, the predicting resulting in a given predicted object track associated with a given existing object track, the given predicted object track including at least one of kinematic information, geometric information, and object classification information;
associating the detected objects to existing object tracks by determining a similarity of a given detected object and a given predicted object track, the information of the given detected object being a different type than the information of the given predicted object track;
updating the kinematic, geometric and object classification information for existing object tracks by updating the given existing object track with the information of the given detected object determined to be similar to the predicted object track; and
reporting a fused object list having a resulting set of updated object tracks.
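The predict–associate–update cycle the claim recites can be sketched as follows. This is a minimal illustration under stated assumptions: a constant-velocity prediction, greedy nearest-neighbour association on position, and a fixed blending weight. All field names, the gate, and `alpha` are illustrative; a production tracker would use a Kalman filter and a global assignment step.

```python
import math

def predict(tracks, dt):
    # Constant-velocity prediction of each track's position to the measurement time.
    for t in tracks:
        t["x"] += t["vx"] * dt
        t["y"] += t["vy"] * dt

def associate(tracks, detections, gate=2.0):
    # Greedy nearest-neighbour association within a distance gate (metres).
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_d = None, gate
        for di, d in enumerate(detections):
            if di in used:
                continue
            dist = math.hypot(t["x"] - d["x"], t["y"] - d["y"])
            if dist < best_d:
                best, best_d = di, dist
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs

def update(tracks, detections, pairs, alpha=0.5):
    # Blend associated detections into tracks; also fuse classification labels.
    for ti, di in pairs:
        t, d = tracks[ti], detections[di]
        t["x"] = (1 - alpha) * t["x"] + alpha * d["x"]
        t["y"] = (1 - alpha) * t["y"] + alpha * d["y"]
        if d.get("cls"):
            t["cls"] = d["cls"]

tracks = [{"x": 0.0, "y": 0.0, "vx": 10.0, "vy": 0.0, "cls": "unknown"}]
dets = [{"x": 1.05, "y": 0.1, "cls": "car"}]   # e.g. from the vision system
predict(tracks, dt=0.1)                        # track moves to (1.0, 0.0)
update(tracks, dets, associate(tracks, dets))  # fused list: one 'car' track
```

The cross-type element of the claim shows up here in miniature: a track carrying kinematic state is updated with a detection that also contributes classification information.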
Abstract
Previous self-driving car systems detect objects separately with vision, RADAR, or LIDAR systems. In an embodiment of the present invention, an object fusion module normalizes sensor output from the vision, RADAR, and LIDAR systems into a common format. The system then fuses the object-level sensor data across all systems by associating all detected objects and predicting tracks for them. The present system improves over previous systems by combining the data from all sensors into a single set of knowledge about the objects around the self-driving car, instead of letting each sensor operate separately.
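The normalization step described in the abstract amounts to a rigid-body transform from each sensor's mounting frame into a common vehicle frame. The sketch below assumes a 2D case and an example mounting pose; the function name and parameters are illustrative, not from the patent.

```python
import numpy as np

def sensor_to_vehicle(points, yaw, tx, ty):
    """Transform 2D detection positions from a sensor's local frame into the
    common vehicle frame, given the sensor's mounting yaw (radians) and
    translation (metres). A real system would use calibrated extrinsics."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])   # rotation from sensor frame to vehicle frame
    return points @ R.T + np.array([tx, ty])

# Example: a RADAR mounted 3.5 m forward of the vehicle origin, facing straight ahead.
radar_detections = np.array([[10.0, 0.0], [5.0, -2.0]])  # sensor-local x, y
common = sensor_to_vehicle(radar_detections, yaw=0.0, tx=3.5, ty=0.0)
# first detection lands at (13.5, 0.0) in the vehicle frame
```

Once every sensor's detections are expressed in this one frame, objects from vision, RADAR, and LIDAR can be compared and associated directly.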
20 Claims
1. A method comprising:
converting sensor data of a plurality of detected objects to a common coordinate frame, the sensor data of each detected object collected from a given sensor of a plurality of heterogeneous sensors at a current measurement time, each detected object including at least one of kinematic information, geometric information, and object classification information based on the converted sensor data, the plurality of heterogeneous sensors being mounted on a highly-automated vehicle;
predicting position, velocity, orientation and bounding boxes of existing object tracks at the current measurement time, the predicting resulting in a given predicted object track associated with a given existing object track, the given predicted object track including at least one of kinematic information, geometric information, and object classification information;
associating the detected objects to existing object tracks by determining a similarity of a given detected object and a given predicted object track, the information of the given detected object being a different type than the information of the given predicted object track;
updating the kinematic, geometric and object classification information for existing object tracks by updating the given existing object track with the information of the given detected object determined to be similar to the predicted object track; and
reporting a fused object list having a resulting set of updated object tracks.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A system comprising:
a preprocessing module configured to convert sensor data of a plurality of detected objects to a common coordinate frame, the sensor data of each detected object collected from a given sensor of a plurality of heterogeneous sensors at a current measurement time, each detected object including at least one of kinematic information, geometric information, and object classification information based on the converted sensor data, the plurality of heterogeneous sensors being mounted on a highly-automated vehicle;
a track prediction module configured to predict position, velocity, orientation and bounding boxes of existing object tracks at the current measurement time, the predicting resulting in a given predicted object track associated with a given existing object track, the given predicted object track including at least one of kinematic information, geometric information, and object classification information;
a data association module configured to associate the detected objects to existing object tracks by determining a similarity of a given detected object and a given predicted object track, the information of the given detected object being a different type than the information of the given predicted object track;
a track update module configured to update the kinematic, geometric and object classification information for existing object tracks by updating the given existing object track with the information of the given detected object determined to be similar to the predicted object track; and
a reporting module configured to report a fused object list having a resulting set of updated object tracks.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
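The data association module's distinctive element is comparing information of different types: a detection carrying, say, only a geometric bounding box must still be scored against a track prediction carrying a kinematic position. One way to sketch such a similarity function is to score only the fields both sides actually share. The field names (`pos`, `box`, `cls`) and weights are illustrative assumptions, not from the patent.

```python
def similarity(det, pred):
    """Score a detected object against a predicted track (higher is better),
    using only the information types both sides carry. Illustrative sketch."""
    score = 0.0
    if "pos" in det and "pos" in pred:          # kinematic vs. kinematic
        dx = det["pos"][0] - pred["pos"][0]
        dy = det["pos"][1] - pred["pos"][1]
        score += 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)
    elif "box" in det and "pos" in pred:        # geometric vs. kinematic:
        x0, y0, x1, y1 = det["box"]             # compare box centre to position
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        d = ((cx - pred["pos"][0]) ** 2 + (cy - pred["pos"][1]) ** 2) ** 0.5
        score += 1.0 / (1.0 + d)
    if det.get("cls") and det.get("cls") == pred.get("cls"):
        score += 0.5                            # classification agreement bonus
    return score

# A LIDAR box detection scored against a RADAR-derived track prediction:
det = {"box": (9.0, -1.0, 11.0, 1.0), "cls": "car"}   # geometric + class info
pred = {"pos": (10.0, 0.0), "cls": "car"}             # kinematic + class info
# box centre coincides with the predicted position, classes agree
```

In a full system, scores like this for every detection–track pair would feed a global assignment step before the track update module runs.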
15. A non-transitory computer-readable medium configured to store instructions for operating an autonomous vehicle, the instructions, when loaded and executed by a processor, causing the processor to:
convert sensor data of a plurality of detected objects to a common coordinate frame, the sensor data of each detected object collected from a given sensor of a plurality of heterogeneous sensors at a current measurement time, each detected object including at least one of kinematic information, geometric information, and object classification information based on the converted sensor data, the plurality of heterogeneous sensors being mounted on a highly-automated vehicle;
predict position, velocity, orientation and bounding boxes of existing object tracks at the current measurement time, the predicting resulting in a given predicted object track associated with a given existing object track, the given predicted object track including at least one of kinematic information, geometric information, and object classification information;
associate the detected objects to existing object tracks by determining a similarity of a given detected object and a given predicted object track, the information of the given detected object being a different type than the information of the given predicted object track;
update the kinematic, geometric and object classification information for existing object tracks by updating the given existing object track with the information of the given detected object determined to be similar to the predicted object track; and
report a fused object list having a resulting set of updated object tracks.
- View Dependent Claims (16, 17, 18, 19, 20)
Specification