Enhanced traffic detection by fusing multiple sensor data
Abstract
A framework for precision traffic analysis fuses traffic sensor data from multiple sensor types, combining the strengths of each sensor type under the various conditions found in an intersection or roadway in which traffic activity occurs. The framework calibrates coordinate systems in images taken of the same area by multiple sensors, so that fields of view from one sensor system are transposed onto fields of view from other sensor systems, fusing the images into a combined detection zone in which objects are properly detected and classified for enhanced traffic signal control.
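The "calibrates coordinate systems" step the abstract describes amounts to a planar perspective (homography) transform between the two sensors' views of the road plane. A minimal sketch of how a single point is mapped, using a hypothetical matrix H (a pure translation, chosen only so the output is easy to check; nothing here is taken from the patent):

```python
import numpy as np

# Hypothetical homography relating two sensors' road-plane coordinates;
# a pure translation by (10, 5) for the sake of the example.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])

def apply_homography(H, point):
    """Map a 2-D point through a 3x3 homography in homogeneous coordinates."""
    x, y = point
    u, v, w = H @ np.array([x, y, 1.0])
    return (float(u / w), float(v / w))   # divide out the projective scale

print(apply_homography(H, (2.0, 3.0)))  # -> (12.0, 8.0)
```

A general homography also handles the perspective distortion between viewpoints, not just shifts; the translation here keeps the arithmetic transparent.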
27 Claims
1. A method, comprising:
receiving input data representing a field of view of a roadway at or near a traffic intersection, the input data including data collected from a first sensor, and data collected from a second sensor;
analyzing the input data within a computing environment in one or more data processing modules executed in conjunction with at least one specifically-configured processor, the one or more data processing modules configured to fuse the data from the first sensor with the data from the second sensor, by
identifying a first object detection zone in the field of view from the data collected from the first sensor, and a second object detection zone in the field of view from the data collected from the second sensor,
selecting a plurality of boundary points in the first object detection zone, and a plurality of points in the second object detection zone that correspond to the boundary points in the first detection zone, and
estimating a transformation matrix for a planar area comprising the field of view, by mapping the plurality of points in a plane of the roadway as seen by the second sensor to the plurality of boundary points in a plane of the roadway as seen by the first sensor, to correct a perspective distortion between the data collected by the first sensor and the data collected by the second sensor and to transpose the first object detection zone onto the second object detection zone to create a combined detection zone, by applying the selected points that correspond to the plurality of boundary points to calibrate arbitrary planar coordinates between the first object detection zone and the second object detection zone, and mapping a plurality of vectors representing common data points to transform the arbitrary planar coordinates to find an equivalent zone location and an equivalent shape between the first object detection zone and the second object detection zone; and
evaluating one or more attributes of the combined detection zone to detect an object in the field of view of the roadway. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
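The "estimating a transformation matrix" step is, in computer-vision terms, a planar homography fit. One standard way to compute it from corresponding boundary points (the patent does not name a specific algorithm) is the direct linear transform (DLT), which needs at least four point pairs in general position. A minimal sketch with hypothetical coordinates:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 planar transformation matrix mapping src_pts onto
    dst_pts via the direct linear transform (needs >= 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the smallest
    # singular value (the null vector of A for an exact four-point fit).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] == 1

# Hypothetical boundary points of a zone in the second sensor's road plane
# (src) and the corresponding points in the first sensor's plane (dst).
src = [(0, 0), (100, 0), (100, 50), (0, 50)]
dst = [(10, 5), (115, 8), (120, 60), (8, 55)]
H = estimate_homography(src, dst)
```

With the matrix in hand, any point (x, y) in the second sensor's plane maps into the first sensor's plane as (u, v, w) = H·(x, y, 1), then (u/w, v/w), which is what corrects the perspective distortion between the two views.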
10. A system, comprising:
a computing environment including at least one non-transitory computer-readable storage medium having program instructions stored therein, and a computer processor operable to execute the program instructions within one or more data processing modules configured to fuse input data representing a field of view of a roadway at or near a traffic intersection, the input data including data collected from a first sensor, and data collected from a second sensor, the one or more data processing modules including:
one or more data processing modules configured to
identify a first object detection zone in the field of view from the data collected from the first sensor, and a second object detection zone in the field of view from the data collected from the second sensor,
select a plurality of boundary points in the first object detection zone, and a plurality of points in the second object detection zone that correspond to the plurality of boundary points in the first detection zone, and
estimate a transformation matrix for a planar area comprising the field of view, by mapping the plurality of points in a plane of the roadway as seen by the second sensor to the plurality of boundary points in a plane of the roadway as seen by the first sensor, to correct a perspective distortion between the data collected by the first sensor and the data collected by the second sensor and to transpose the first object detection zone onto the second object detection zone to create a combined detection zone, by applying the selected points that correspond to the plurality of boundary points to calibrate arbitrary planar coordinates between the first object detection zone and the second object detection zone, and mapping a plurality of vectors representing common data points to linearly transform the arbitrary planar coordinates to find an equivalent zone location and an equivalent shape between the first object detection zone and the second object detection zone; and
an object detection module configured to evaluate one or more attributes of the combined detection zone to detect an object in the field of view of the roadway. - View Dependent Claims (11, 12, 13, 14, 15, 16, 17, 18)
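The transposition step above can be sketched as warping one zone's vertices through the estimated matrix and then fusing the two zones in the shared frame. The fusion rule used here, the overlap of the zones' axis-aligned bounding boxes, is an illustrative assumption, not the patent's rule; the claim only requires that the zones be brought into a common coordinate system:

```python
import numpy as np

def transpose_zone(H, zone):
    """Map each vertex of a detection-zone polygon through homography H,
    transposing the zone from one sensor's road-plane coordinates into
    the other sensor's road-plane coordinates."""
    out = []
    for x, y in zone:
        u, v, w = H @ np.array([x, y, 1.0])
        out.append((float(u / w), float(v / w)))
    return out

def combine_zones(zone_a, zone_b):
    """Fuse two zones that now share one coordinate frame: here, the
    overlap of their axis-aligned bounding boxes (an assumption made
    for illustration only)."""
    ax0, ay0 = map(min, zip(*zone_a)); ax1, ay1 = map(max, zip(*zone_a))
    bx0, by0 = map(min, zip(*zone_b)); bx1, by1 = map(max, zip(*zone_b))
    x0, y0, x1, y1 = max(ax0, bx0), max(ay0, by0), min(ax1, bx1), min(ay1, by1)
    return None if x0 >= x1 or y0 >= y1 else (x0, y0, x1, y1)

# Hypothetical example: sensor 1's zone, shifted into sensor 2's frame by
# a pure-translation homography, then fused with sensor 2's own zone.
H_shift = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
zone1 = transpose_zone(H_shift, [(0, 0), (10, 0), (10, 10), (0, 10)])
zone2 = [(5, 5), (20, 5), (20, 20), (5, 20)]
print(combine_zones(zone1, zone2))  # -> (5, 5, 12.0, 10.0)
```

A production system would more likely intersect or union the polygons themselves; the bounding-box overlap keeps the sketch dependency-free.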
19. A method of detecting an object in a traffic detection zone using a plurality of different sensors, comprising:
configuring a first object detection zone in a field of view of a roadway at or near a traffic intersection from data collected from a first sensor, and a second object detection zone in the field of view of the roadway at or near the traffic intersection from data collected from a second sensor;
calibrating arbitrary planar coordinates between the first object detection zone and the second object detection zone to create a combined detection zone for the field of view, by
selecting a plurality of boundary points in the first object detection zone, and a plurality of points in the second object detection zone that correspond to the plurality of boundary points in the first object detection zone,
correcting a perspective distortion in the field of view between the data collected by the first sensor and the data collected by the second sensor from the selected plurality of boundary points, by estimating a transformation matrix for a planar area comprising the field of view to map the plurality of points in a plane of the roadway as seen by the second sensor to the plurality of boundary points in a plane of the roadway as seen by the first sensor,
finding an equivalent zone location and an equivalent shape between the first detection zone and the second detection zone, by mapping a plurality of vectors representing common data points to linearly transform the arbitrary planar coordinates, and
transposing the plurality of vectors to fuse the first detection zone and the second detection zone into the combined traffic detection zone; and
detecting an object in the roadway by analyzing one or more attributes of the combined traffic detection zone. - View Dependent Claims (20, 21, 22, 23, 24, 25, 26, 27)
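The final detection step, analyzing attributes of the combined zone, can be as simple as an occupancy test: count how many sensed data points fall inside the fused zone. A minimal sketch; the rectangle format and hit threshold are illustrative assumptions, not values from the patent:

```python
def object_in_zone(detections, zone, min_hits=1):
    """Evaluate one attribute of the combined detection zone -- how many
    sensed data points fall inside it -- and declare an object present
    when at least `min_hits` points land in the zone.  `zone` is an
    axis-aligned rectangle (x0, y0, x1, y1), assumed for simplicity."""
    x0, y0, x1, y1 = zone
    hits = sum(1 for x, y in detections if x0 <= x <= x1 and y0 <= y <= y1)
    return hits >= min_hits

combined_zone = (5.0, 5.0, 12.0, 10.0)          # hypothetical fused zone
points = [(6.0, 7.0), (30.0, 2.0)]              # e.g. sensor returns
print(object_in_zone(points, combined_zone))    # -> True
```

Richer attributes (occupancy duration, object size, classification confidence) would feed the same decision for traffic signal control.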
Specification