Enhanced traffic detection by fusing multiple sensor data
First Claim
1. A method, comprising:

receiving input data representing a field of view of a roadway at or near a traffic intersection, the input data including data collected from a first sensor, and data collected from a second sensor;

analyzing the input data within a computing environment in one or more data processing modules executed in conjunction with at least one specifically-configured processor, the one or more data processing modules configured to detect an object in the field of view of the roadway, by

evaluating an accuracy of the data obtained from the first sensor, by relating one or more attributes representing object characteristics in the data obtained from the first sensor to known attributes representing one or more objects, and

fusing the data from the first sensor with the data from the second sensor to create a combined detection zone in the field of view of the roadway where the one or more attributes representing object characteristics are below a threshold level for accurate detection, relative to the known attributes, by

identifying a first object detection zone in the field of view from the data collected from the first sensor, and a second object detection zone in the field of view from the data collected from the second sensor,

selecting a plurality of boundary points in the first object detection zone, and a plurality of points in the second object detection zone that correspond to the boundary points in the first detection zone,

estimating a transformation matrix for a planar area comprising the field of view by mapping the plurality of points in a plane of the roadway as seen by the second sensor, to the plurality of boundary points in a plane of the roadway as seen by the first sensor, to correct a perspective distortion between the data collected by the first sensor and the data collected by the second sensor and to transpose the first object detection zone onto the second object detection zone, by applying the selected points that correspond to the plurality of boundary points to calibrate arbitrary planar coordinates between the first object detection zone and the second object detection zone, and mapping a plurality of vectors representing common data points to transform the arbitrary planar coordinates to find an equivalent zone location and an equivalent shape between the first object detection zone and the second object detection zone; and

generating an output signal to a traffic signal controller where an object is present in the combined detection zone.
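The "transformation matrix for a planar area" estimated from corresponding boundary points is, in effect, a planar homography. A minimal sketch of that estimation using the standard direct linear transform (this is a generic textbook method, not the patent's specific implementation, and the sensor point values below are purely illustrative):

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 planar homography H mapping src_pts -> dst_pts
    via the direct linear transform (DLT). Needs >= 4 correspondences,
    no three of them collinear."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H (flattened) is the null vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def map_point(H, pt):
    """Apply H to a 2-D point in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Illustrative boundary points of a zone in the second sensor's road plane,
# and the corresponding points in the first sensor's road plane.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]   # second-sensor plane
dst = [(2, 3), (3, 3), (3, 4), (2, 4)]   # first-sensor plane
H = estimate_homography(src, dst)
print(map_point(H, (0.5, 0.5)))          # interior point transposed between planes
```

Once H is known, every boundary point of one detection zone can be transposed into the other sensor's coordinates, giving the equivalent zone location and shape the claim refers to.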
Abstract
A framework for precision traffic analysis fuses traffic sensor data from multiple sensor types, combining the strengths of each sensor type under the varying conditions of an intersection or roadway in which traffic activity occurs. The framework calibrates coordinate systems in images taken of the same area by multiple sensors, so that fields of view from one sensor system are transposed onto fields of view from other sensor systems to fuse the images into a combined detection zone, and so that objects are properly detected and classified for enhanced traffic signal control.
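The gating logic implied by the abstract and claim can be sketched as follows. The threshold value, function name, and score semantics here are assumptions for illustration; the patent does not specify them:

```python
# Assumed accuracy threshold (not from the patent): below this, the
# first sensor's attribute match against known objects is unreliable
# (e.g. a camera at night or in glare), so the second sensor's
# transposed zone is fused in.
THRESHOLD = 0.6

def object_present(first_score, first_hit, second_hit):
    """Decide whether to signal the traffic controller.

    first_score: how well first-sensor object attributes match known
                 attributes (0.0-1.0); first_hit / second_hit: whether
                 each sensor reports an object in its detection zone."""
    if first_score >= THRESHOLD:
        return first_hit                  # first sensor alone is reliable
    return first_hit or second_hit        # combined detection zone

print(object_present(0.9, True, False))   # confident first sensor -> True
print(object_present(0.3, False, True))   # degraded first sensor, second sees object -> True
```

In the claimed method the second sensor's detections reach this decision only after being transposed into the first sensor's road-plane coordinates, so both hits refer to the same physical zone.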
Specification