SYSTEM AND METHOD FOR REAL-TIME 3-D OBJECT TRACKING AND ALERTING VIA NETWORKED SENSORS
First Claim
1. An object detection and tracking system comprising:
two or more spatially separated two-dimensional (2-D) sensors with overlapping fields of view for detecting and three-dimensional (3-D) tracking of moving objects through observations;

at least one image processor for converting each sensor's observations into 2-D spatial characteristics and extracting features of the different objects; and

at least one tracking processor for fusing the 2-D spatial characteristics and feature data from the respective sensors into 3-D kinematic and feature data of the objects.
Abstract
The invention is an integrated system consisting of a network of two-dimensional sensors (such as cameras) and processors, together with target detection, data fusion, tracking, and alerting algorithms, that provides three-dimensional, real-time, high-accuracy target tracks and alerts over a wide area of interest. The system uses both target kinematics (motion) and features (e.g., size, shape, color, behavior) to detect, track, display, and alert users to potential objects of interest.
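The core fusion step described in the abstract, combining 2-D observations from spatially separated sensors into a 3-D position, is classically done by triangulation. The sketch below is illustrative only and not taken from the patent: it assumes pinhole cameras with known 3x4 projection matrices and uses the standard linear (DLT) method; all intrinsics, baseline, and coordinates are made-up example values.

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X through a 3x4 camera matrix P to 2-D pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its 2-D observations
    x1, x2 in two cameras with projection matrices P1, P2 (each 3x4)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value; dehomogenize to get a 3-vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two illustrative cameras: identical intrinsics, 1 m horizontal baseline.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

target = np.array([0.5, -0.2, 4.0])            # true 3-D position (metres)
x1, x2 = project(P1, target), project(P2, target)
estimate = triangulate(P1, P2, x1, x2)         # fused 3-D estimate
```

With noise-free observations and overlapping fields of view, the estimate recovers the true position exactly; with real sensors, each pixel observation carries noise and the same least-squares machinery yields the best linear estimate.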
20 Claims
1. An object detection and tracking system comprising:

two or more spatially separated two-dimensional (2-D) sensors with overlapping fields of view for detecting and three-dimensional (3-D) tracking of moving objects through observations;

at least one image processor for converting each sensor's observations into 2-D spatial characteristics and extracting features of the different objects; and

at least one tracking processor for fusing the 2-D spatial characteristics and feature data from the respective sensors into 3-D kinematic and feature data of the objects.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10
11. An object detection and tracking method comprising:
detecting and 3-D tracking of moving objects through observations from two or more spatially separated 2-D sensors with overlapping fields of view;

converting each sensor's observations into 2-D spatial characteristics and extracting features of the different objects; and

fusing the 2-D spatial characteristics and feature data from the respective sensors into 3-D kinematic and feature data of the objects.

Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20
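The claimed tracking processor maintains 3-D kinematic data (position and velocity) as fused measurements arrive. A minimal sketch of one way such a track update could work is an alpha-beta filter over a constant-velocity model; the gains, time step, and measurement sequence below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def update_track(state, measurement, dt=0.1, alpha=0.5, beta=0.1):
    """One alpha-beta filter step over a constant-velocity model.

    state is a (position, velocity) pair of 3-vectors; measurement is a
    fused 3-D position observation. Returns the updated (position, velocity)."""
    pos, vel = state
    pred_pos = pos + vel * dt            # predict position forward one step
    residual = measurement - pred_pos    # innovation from the new observation
    new_pos = pred_pos + alpha * residual
    new_vel = vel + (beta / dt) * residual
    return new_pos, new_vel

# Track a target moving at a constant 1 m/s along x, sampled every 0.1 s.
state = (np.zeros(3), np.zeros(3))
for k in range(1, 201):
    z = np.array([0.1 * k, 0.0, 0.0])    # noise-free 3-D position measurements
    state = update_track(state, z)

final_pos, final_vel = state
```

After the measurement sequence above, the velocity estimate converges to the target's true 1 m/s along x; a production system would typically use a full Kalman filter with per-sensor noise models, but the predict-then-correct structure is the same.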
Specification