Sensor fusion architecture for vision-based occupant detection
Abstract
A vision-based system for automatically detecting the type of object within a specified area, such as the type of occupant within a vehicle. Determination of the type of occupant can then be used to determine whether an airbag deployment system should be enabled or not. The system extracts different features from images captured by image sensors. These features are then processed by classification algorithms to produce occupant class confidences for various occupant types. The occupant class confidences are then fused and processed to determine the type of occupant. In a preferred embodiment, image features derived from image edges, motion, and range are used. Classification algorithms may be implemented by using trained C5 decision trees, trained Nonlinear Discriminant Analysis networks, Hausdorff template matching and trained Fuzzy Aggregate Networks. In an exemplary embodiment, class confidences are provided for a rear-facing infant seat, a front-facing infant seat, an adult out of position, and an adult in a normal or twisted position. Fusion of these class confidences derived from multiple image features increases the accuracy of the system and provides for correct determination of an airbag deployment decision.
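The fusion-then-decide flow summarized in the abstract can be sketched as follows. This is an illustrative stand-in only: the occupant class names follow the exemplary embodiment, but the simple averaging here is a hypothetical placeholder for the trained Fuzzy Aggregation Network the patent actually describes.

```python
# Sketch of the confidence-fusion pipeline from the abstract.
# Averaging is a hypothetical stand-in for the trained Fuzzy Aggregation Network.

OCCUPANT_CLASSES = (
    "rear_facing_infant_seat",
    "front_facing_infant_seat",
    "adult_out_of_position",
    "adult_normal_or_twisted",
)

def fuse_confidences(classifier_outputs):
    """Average each class confidence across the feature-specific classifiers."""
    return {cls: sum(out[cls] for out in classifier_outputs) / len(classifier_outputs)
            for cls in OCCUPANT_CLASSES}

def airbag_decision(fused):
    """Enable deployment only for a normally seated (or twisted) adult."""
    detected = max(fused, key=fused.get)
    return detected, detected == "adult_normal_or_twisted"
```

Because each classifier sees a different image feature (edges, motion, range), their errors tend to be uncorrelated, which is why fusing their per-class confidences improves the final deployment decision.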
Claims (64)
1. A method of object detection comprising the steps of:
capturing images of an area occupied by at least one object;
extracting image features from said images;
classifying said image features to produce object class confidence data; and
performing data fusion on said object class confidence data to produce a detected object estimate.
(Dependent claims: 2-19)
detecting edges of said at least one object within said images;
masking said edges with a background mask to find important edges;
calculating edge pixels from said important edges; and
producing edge density maps from said important edges, said edge density map providing said image features, and wherein said step of classifying said image features comprises processing said edge density map with at least one of said one or more classification algorithms to produce object class confidence data.
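The edge-feature steps above can be sketched as below. This is a hypothetical illustration: the claims do not specify the edge detector or the block size of the density map, so a simple gradient threshold and a fixed block average are assumed.

```python
# Hypothetical sketch of the edge-feature path: detect edges with a simple
# gradient threshold, mask out static background edges, then summarize the
# surviving "important" edge pixels as a coarse density map.

def detect_edges(image, threshold=1):
    """Binary edge map from horizontal/vertical intensity differences."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(image[y][x + 1] - image[y][x])
            gy = abs(image[y + 1][x] - image[y][x])
            if gx + gy >= threshold:
                edges[y][x] = 1
    return edges

def mask_edges(edges, background_mask):
    """Keep only edges not flagged as background (the 'important' edges)."""
    return [[int(e and not m) for e, m in zip(edge_row, mask_row)]
            for edge_row, mask_row in zip(edges, background_mask)]

def edge_density_map(edges, block=2):
    """Fraction of edge pixels in each block-by-block cell
    (image dimensions are assumed divisible by `block`)."""
    h, w = len(edges), len(edges[0])
    return [[sum(edges[y + dy][x + dx] for dy in range(block) for dx in range(block)) / block ** 2
             for x in range(0, w, block)]
            for y in range(0, h, block)]
```

The density map deliberately discards pixel-level detail: the downstream classifier only needs the spatial distribution of edges, which is far lower-dimensional than the raw edge image.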
9. The method of claim 2, wherein said step of extracting image features comprises the steps of:
detecting motion of said at least one object within said images;
calculating motion pixels from said motion; and
producing motion density maps from said motion pixels, said motion density map providing said image features, and wherein said step of classifying said image features comprises processing said motion density map with at least one of said one or more classification algorithms to produce object class confidence data.
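The motion-feature steps of claim 9 can be sketched similarly. The claims do not specify the motion detector, so simple frame differencing is assumed here for illustration.

```python
# Hypothetical sketch of the motion-feature path: motion pixels from
# frame differencing, summarized as a coarse density map.

def motion_pixels(prev_frame, curr_frame, threshold=1):
    """Binary motion map: 1 where the pixel changed by at least `threshold`."""
    return [[int(abs(c - p) >= threshold) for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]

def motion_density_map(motion, block=2):
    """Fraction of motion pixels in each block-by-block cell
    (frame dimensions are assumed divisible by `block`)."""
    h, w = len(motion), len(motion[0])
    return [[sum(motion[y + dy][x + dx] for dy in range(block) for dx in range(block)) / block ** 2
             for x in range(0, w, block)]
            for y in range(0, h, block)]
```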
10. The method of claim 2, wherein said step of extracting image features comprises the steps of:
detecting edges of said at least one object within said images;
masking said edges with a background mask to find important edges;
calculating edge pixels from said important edges;
detecting a change of at least one object;
creating an object change trigger, and wherein said step of classifying said image features comprises: monitoring said object change trigger; and
performing Hausdorff template matching upon a change in said object change trigger.
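The trigger-gated Hausdorff matching of claim 10 can be sketched as follows. The distance used is the standard symmetric Hausdorff distance over edge-pixel coordinates; the template names and shapes are hypothetical, not taken from the patent.

```python
# Sketch of trigger-gated Hausdorff template matching over edge pixels.
# Template names/shapes are hypothetical illustrations.

def directed_hausdorff(set_a, set_b):
    """max over a in A of the distance from a to its nearest point in B."""
    return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in set_b)
               for ax, ay in set_a)

def hausdorff_match(edge_pixels, templates, change_trigger):
    """When the object-change trigger fires, return the best-matching template name."""
    if not change_trigger:
        return None  # no change detected: skip the relatively expensive matching
    def symmetric(points):
        return max(directed_hausdorff(edge_pixels, points),
                   directed_hausdorff(points, edge_pixels))
    return min(templates, key=lambda name: symmetric(templates[name]))
```

Gating the matching on the change trigger matters because template comparison is costly relative to the per-frame density-map features; it only needs to run when the occupant configuration may actually have changed.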
11. The method of claim 2, wherein said step of extracting image features comprises the steps of:
calculating a range to an area in said images;
detecting motion of said at least one object within said images;
calculating motion pixels from said motion; and
producing motion density maps from said motion pixels, said motion density map and said range providing said image features, and wherein said step of classifying said image features comprises processing said motion density map and range with at least one of said one or more classification algorithms to produce object class confidence data.
12. The method of claim 2, wherein the step of extracting image features comprises the steps of:
detecting edges of said at least one object within said images;
masking said edges with a background mask to find important edges;
calculating edge pixels from said important edges;
producing edge density maps from said edge pixels;
detecting motion of said at least one object within said images;
calculating motion pixels from said motion;
producing motion density maps from said motion pixels;
detecting a change of at least one object; and
creating an object change trigger, wherein said object change trigger, said edge pixels, said edge density map, and said motion density map comprise said image features, wherein said step of classifying said image features comprises: processing said edge density map with one of said one or more classification algorithms to produce a first subset of object class confidence data;
processing said motion density map with one of said one or more classification algorithms to produce a second subset of object class confidence data;
monitoring said object change trigger, and performing Hausdorff template matching upon a change in said object change trigger to produce a third subset of object class confidence data, and wherein said step of performing data fusion on said object class confidence data comprises: processing said first subset, said second subset, and said third subset with a Fuzzy Aggregation Network to produce a detected object estimate.
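The Fuzzy Aggregation Network fusion step of claim 12 can be sketched with a common fuzzy aggregation connective, the weighted generalized (power) mean: each class aggregates the confidences it receives from the edge, motion, and Hausdorff classifiers. In the real system the weights and exponent would be learned during training; the values below are hypothetical.

```python
# Minimal stand-in for the trained Fuzzy Aggregation Network: per-class
# fusion of classifier confidences with a weighted generalized mean.
# Weights and exponent p are hypothetical, not trained values.

def generalized_mean(values, weights, p=0.5):
    """Weighted power mean: (sum w_i * v_i^p / sum w_i)^(1/p)."""
    return (sum(w * v ** p for v, w in zip(values, weights)) / sum(weights)) ** (1.0 / p)

def fuse_class_confidences(per_class_inputs, weights, p=0.5):
    """per_class_inputs maps class name -> list of confidences (one per classifier)."""
    return {cls: generalized_mean(vals, weights[cls], p)
            for cls, vals in per_class_inputs.items()}
```

With p < 1 the aggregation behaves "and-like" (a single low confidence pulls the result down), which is a conservative choice for a safety decision; larger p makes it more "or-like".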
13. The method of claim 12, wherein the step of extracting image features further comprises the step of calculating a range to an area in said images, and wherein said step of classifying said image features further comprises tracking said range to produce equivalent rectangle features, and wherein said step of performing data fusion on said object class confidence data further comprises processing said equivalent rectangle features, said first subset, said second subset, and said third subset with a Fuzzy Aggregation Network to produce a detected object estimate.
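The claims do not define how "equivalent rectangle features" are computed. One plausible reading, sketched here purely as an assumption, is a rectangle summarizing the tracked occupant region: its centroid plus a width and height, with the measured range converting pixel extents to physical size. The scale model and parameter below are hypothetical.

```python
# Speculative sketch of "equivalent rectangle features": centroid plus
# range-scaled extent of the tracked occupant region. The pinhole-style
# scale factor is an assumption, not from the patent.

def equivalent_rectangle(motion_points, range_m, pixels_per_meter_at_1m=100.0):
    """Centroid and physical width/height of the axis-aligned extent of the points."""
    xs = [x for x, _ in motion_points]
    ys = [y for _, y in motion_points]
    scale = range_m / pixels_per_meter_at_1m  # meters per pixel at this range (assumed)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return cx, cy, (max(xs) - min(xs)) * scale, (max(ys) - min(ys)) * scale
```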
14. The method of claim 2, wherein at least one classification algorithm of said one or more classification algorithms is trained by providing training patterns to said at least one classification algorithm.
15. The method of claim 1 wherein said object comprises a vehicle occupant and said area comprises a vehicle occupancy area and further comprising the step of processing said detected object estimate to provide signals to vehicle systems.
16. The method of claim 15, wherein said signals comprise an airbag enable and disable signal.
17. The method of claim 1, wherein said images are captured by one or more image sensors operating in a visible region of the optical spectrum.
18. The method of claim 1, wherein said images are captured by one or more image sensors producing a two-dimensional pixel representation of said captured images.
19. The method of claim 1, wherein said images are captured by one or more image sensors having a logarithmic response.
20. A system for classifying objects, said system comprising:
means for capturing images of an area occupied by at least one object;
means for extracting features from said images to provide feature data;
means for classifying object status based on said feature data to produce object class confidences; and
means for processing said object class confidences to produce system output controls.
(Dependent claims: 21-41, 44, 45)
an edge detector module providing an edge density map and edge pixels;
a motion detector module providing a motion density map; and
an object change detection module providing an object change trigger, wherein said edge density map, said edge pixels, said motion density map and said object change trigger comprise said feature data, and said means for classifying object status comprises an edge classifier module using one of said one or more classification algorithms to produce a first subset of class confidences from said edge density map;
a motion classifier module using one of said one or more classification algorithms to produce a second subset of class confidences from said motion density map; and
a Hausdorff template matching module producing a third subset of class confidences from said edge pixels and said object change trigger, wherein said first subset, said second subset, and said third subset comprise said object class confidences.
32. The system according to claim 31, wherein said means for capturing images comprises a means for capturing stereo images of said area, and said means for extracting features further comprises a range map module providing a range value and said feature data further comprises said range value, and said means for classifying object status further comprises an object-out-of-position tracking module using a tracking algorithm to produce equivalent rectangle features from said motion pixels and said range, and said object class confidences further comprises said equivalent rectangle features.
33. The system according to claim 31, wherein said first subset of class confidences comprises values for rear-facing infant seat, front-facing infant seat, adult out-of-position, and adult in normal or twisted position;
said second subset of class confidences comprises values for rear-facing infant seat, front-facing infant seat, adult out-of-position, and adult in normal or twisted position; and
said third subset of class confidences comprises values for a first rear-facing infant seat, a second rear-facing infant seat, and a front-facing infant seat.
34. The system of claim 21, wherein at least one classification algorithm of said one or more classification algorithms is trained by providing training patterns to said at least one classification algorithm.
35. The system according to claim 20, wherein said means for processing said object class confidences comprises a trained Fuzzy Aggregation Network.
36. The system according to claim 20, wherein said means for capturing images comprises at least one CMOS vision sensor.
37. The system according to claim 20, wherein said means for capturing images comprises at least one CCD vision sensor.
38. The system according to claim 20, wherein said system comprises an airbag deployment control system and wherein said system output controls comprise control signals that enable or disable at least one vehicle airbag.
39. The system of claim 20, wherein said means for capturing images comprises one or more image sensors operating in a visible region of the optical spectrum.
40. The system of claim 20, wherein said means for capturing images comprises one or more image sensors producing a two-dimensional pixel representation of said captured images.
41. The system of claim 20, wherein said means for capturing images comprises one or more image sensors having a logarithmic response.
44. The object detection system according to claim 34, wherein at least one of said one or more classification algorithms comprises a decision tree.
45. The object detection system according to claim 44, wherein said decision tree comprises a trained C5 decision tree.
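Claims 44-45 name a trained C5 decision tree. C5 itself is a proprietary successor to C4.5, so the hand-built tree below is only a stand-in showing how such a classifier maps density-map features to an occupant class; the feature indices, thresholds, and structure are hypothetical, not trained values.

```python
# Stand-in for a trained C5 decision tree: walk (feature_index, threshold,
# left, right) nodes down to a leaf class label. Thresholds are hypothetical.

def classify(features, node):
    """Follow the tree to a leaf: left if feature <= threshold, else right."""
    while isinstance(node, tuple):
        index, threshold, left, right = node
        node = left if features[index] <= threshold else right
    return node

# feature 0: overall edge density; feature 1: upper-seat edge density (assumed)
tree = (0, 0.3,
        (1, 0.2, "rear_facing_infant_seat", "front_facing_infant_seat"),
        (1, 0.5, "adult_out_of_position", "adult_normal_or_twisted"))
```

In the patented system such a tree would be induced from labeled training patterns (claim 34's "training patterns"), one tree or network per feature stream.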
42. An object detection system providing control signals, said object detection system comprising:
at least one imaging sensor for capturing images of an area and providing digital representations of said images;
at least one image feature extractor module receiving said digital representations and providing image features;
at least one image feature classifier module receiving said image features and providing object class confidences; and
a sensor fusion engine receiving said object class confidences and providing control signals.
(Dependent claims: 43, 46-64)
an edge detector module producing edge density maps and edge pixels;
a motion detector module producing motion density maps; and
an object change detection module producing an object change trigger, wherein said edge density maps, said edge pixels, said motion density maps and said object change trigger comprise said image features, and said at least one image feature classifier module comprises:
an edge classifier module comprising at least one of said one or more classification algorithms producing a first subset of class confidences from said edge density maps; a motion classifier module comprising at least one of said one or more classification algorithms producing a second subset of class confidences from said motion density maps; and
a Hausdorff template matching module producing a third subset of class confidences from said edge pixels and said object change trigger, wherein said first subset, said second subset, and said third subset comprise said object class confidences.
54. The object detection system according to claim 53, wherein said at least one imaging sensor comprises a pair of image sensors located a fixed distance apart and viewing a substantially similar portion of said area, and said at least one feature extractor module further comprises a range map module producing range values and said image features further comprise said range values, and said at least one image feature classifier module further comprises an object-out-of-position tracking module comprising a tracking algorithm to produce equivalent rectangle features from said motion pixels and said range, and said object class confidences further comprise said equivalent rectangle features.
55. The object detection system according to claim 53, wherein said first subset of class confidences comprises values for rear-facing infant seat, front-facing infant seat, adult out-of-position, and adult in normal or twisted position;
said second subset of class confidences comprises values for rear-facing infant seat, front-facing infant seat, adult out-of-position, and adult in normal or twisted position; and
said third subset of class confidences comprises values for a first rear-facing infant seat, a second rear-facing infant seat, and a front-facing infant seat.
56. The system of claim 43, wherein at least one classification algorithm of said one or more classification algorithms is trained by providing training patterns to said at least one classification algorithm.
57. The object detection system according to claim 42, wherein said sensor fusion engine comprises a Fuzzy Aggregation Network algorithm.
58. The object detection system according to claim 42, wherein said at least one imaging sensor comprises at least one CMOS vision sensor.
59. The object detection system according to claim 42, wherein said at least one imaging sensor comprises at least one CCD vision sensor.
60. The object detection system according to claim 42, wherein said area comprises an occupancy area of a vehicle and wherein said control signals comprise an airbag enable or disable signal.
61. The system of claim 42, wherein said at least one imaging sensor operates in a visible region of the optical spectrum.
62. The system of claim 42, wherein said at least one imaging sensor produces a two-dimensional pixel representation of said captured images.
63. The system of claim 42, wherein said at least one imaging sensor has a logarithmic response.
64. The object detection system according to claim 42, wherein said system comprises a software system having computer-executable instructions executing on a suitable computer system.