AUTONOMOUS VEHICLE: OBJECT-LEVEL FUSION
Abstract
Previous self-driving car systems detect objects with vision, RADAR, or LIDAR systems, each operating separately. In an embodiment of the present invention, an object fusion module normalizes sensor output from vision, RADAR, and LIDAR systems into a common format. The system then fuses the object-level sensor data across all systems by associating all detected objects and predicting tracks for them. The present system improves over previous systems by combining the data from all sensors to develop a single set of knowledge about the objects around the self-driving car, instead of each sensor operating separately.
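The first normalization step the abstract describes is converting each sensor's detections into a common coordinate frame. A minimal sketch of that step, assuming a 2D sensor mounted at a known pose `(x, y, yaw)` on the vehicle (function and field names here are illustrative, not from the patent):

```python
import math

def to_vehicle_frame(detection, sensor_pose):
    """Rotate and translate a sensor-frame detection into the vehicle frame.

    detection: dict with sensor-frame position ("x", "y") and "heading" (rad).
    sensor_pose: (x, y, yaw) of the sensor relative to the vehicle origin.
    """
    sx, sy, syaw = sensor_pose
    c, s = math.cos(syaw), math.sin(syaw)
    x, y = detection["x"], detection["y"]
    return {
        # Standard 2D rigid-body transform: rotate by the sensor yaw,
        # then translate by the sensor mounting position.
        "x": sx + c * x - s * y,
        "y": sy + s * x + c * y,
        "heading": detection["heading"] + syaw,
    }
```

Once every sensor's detections pass through a transform like this, downstream association can compare positions from RADAR, LIDAR, and vision directly.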
20 Claims
1. A method comprising:
converting sensor data of detected objects from a plurality of heterogeneous sensors to a common coordinate frame;
predicting position, velocity, orientation, and bounding boxes of existing object tracks at a current measurement time;
associating detected objects to existing object tracks by determining a similarity of at least two of kinematic information, geometric information, and object classification information based on the converted sensor data;
updating the kinematic, geometric, and object classification information for object tracks that are associated to detected objects; and
reporting a fused object list having a resulting set of updated object tracks.

Dependent claims: 2-7.
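The "predicting … at a current measurement time" step of claim 1 can be illustrated with a constant-velocity motion model, the simplest common choice for propagating a track's state forward to the timestamp of the incoming measurements (the track fields and the constant-velocity assumption are illustrative; the patent does not specify a particular motion model):

```python
def predict_track(track, dt):
    """Propagate a track's kinematic state forward by dt seconds.

    Position advances under constant velocity; orientation and the
    bounding box are carried forward unchanged in this simple model.
    """
    return {
        "x": track["x"] + track["vx"] * dt,
        "y": track["y"] + track["vy"] * dt,
        "vx": track["vx"],
        "vy": track["vy"],
        "yaw": track["yaw"],
        "box": track["box"],  # (length, width)
    }
```

A production tracker would typically also propagate an uncertainty estimate (e.g., a Kalman filter covariance) so that the association step can gate candidates statistically.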
8. A system comprising:
a preprocessing module configured to convert sensor data of detected objects from a plurality of heterogeneous sensors to a common coordinate frame;
a track prediction module configured to predict position, velocity, orientation, and bounding boxes of existing object tracks at a current measurement time;
a data association module configured to associate detected objects to existing object tracks by determining a similarity of at least two of kinematic information, geometric information, and object classification information using the converted sensor data;
a track update module configured to update the kinematic, geometric, and object classification information for object tracks that are associated to detected objects; and
a reporting module configured to report a fused object list having a resulting set of updated object tracks.

Dependent claims: 9-14.
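The data association module in claim 8 scores track-detection pairs on at least two of kinematic, geometric, and classification similarity. A minimal sketch, assuming a hand-weighted score over position distance, bounding-box size difference, and class agreement, resolved by greedy one-to-one matching (the weights, threshold, and greedy strategy are illustrative; an optimal assignment such as the Hungarian algorithm could be substituted):

```python
import math

def similarity(track, det, w_kin=1.0, w_geom=0.5, w_cls=0.5):
    """Combined similarity: kinematic (position distance), geometric
    (bounding-box size difference), and classification agreement.
    Higher scores mean more similar."""
    d_pos = math.hypot(track["x"] - det["x"], track["y"] - det["y"])
    d_box = (abs(track["box"][0] - det["box"][0])
             + abs(track["box"][1] - det["box"][1]))
    cls_match = 1.0 if track["cls"] == det["cls"] else 0.0
    return -w_kin * d_pos - w_geom * d_box + w_cls * cls_match

def associate(tracks, detections, threshold=-5.0):
    """Greedy one-to-one assignment of detections to existing tracks.

    Returns a list of (track_index, detection_index) pairs; detections
    scoring below `threshold` against every track are left unmatched
    (candidates for spawning new tracks)."""
    pairs, used = [], set()
    for ti, tr in enumerate(tracks):
        best, best_score = None, threshold
        for di, de in enumerate(detections):
            if di in used:
                continue
            score = similarity(tr, de)
            if score > best_score:
                best, best_score = di, score
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs
```

The matched pairs then feed the track update module, while unmatched detections seed new tracks and unmatched tracks age out.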
15. A non-transitory computer-readable medium configured to store instructions for operating an autonomous vehicle, the instructions, when loaded and executed by a processor, causing the processor to:
convert sensor data of detected objects from a plurality of heterogeneous sensors to a common coordinate frame;
predict position, velocity, orientation, and bounding boxes of existing object tracks at a current measurement time;
associate detected objects to existing object tracks by determining a similarity of at least two of kinematic information, geometric information, and object classification information based on the converted sensor data;
update the kinematic, geometric, and object classification information for object tracks that are associated to detected objects; and
report a fused object list having a resulting set of updated object tracks.

Dependent claims: 16-20.
Specification