Pixel-level based micro-feature extraction
First Claim
1. A computer-implemented method for extracting pixel-level micro-features from image data captured by a video camera, the method comprising:
receiving the image data;
identifying a set of pixels in the image data associated with a foreground patch that depicts a foreground object;
evaluating appearance values of the pixels included in the set of pixels to compute a plurality of micro-feature values representing the foreground object, each based on at least one pixel-level characteristic of the foreground patch, wherein the micro-feature values are computed independent of training data that defines a plurality of object types;
generating a micro-feature vector that includes the plurality of micro-feature values;
classifying the foreground object as depicting an object type based on the micro-feature vector, wherein the object type is determined by mapping the micro-feature vector to a cluster in a self-organizing map (SOM) adaptive resonance theory (ART) network generated from a plurality of micro-feature vectors; and
updating one or more cluster properties associated with the cluster based on the plurality of micro-feature values in the generated micro-feature vector.
Abstract
Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
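The unsupervised extraction described above can be sketched in a few lines: given the set of foreground-patch pixels and their appearance values, compute simple pixel-level statistics as micro-feature values, with no training data involved. The function name and the particular features (area, mean intensity, bounding-box aspect ratio, fill ratio) are illustrative assumptions, not a feature set fixed by the patent.

```python
import numpy as np

def extract_micro_features(patch_pixels, image):
    """Compute pixel-level micro-feature values for a foreground patch.

    patch_pixels: (N, 2) array of (row, col) coordinates of foreground pixels
    image: 2-D grayscale image

    Illustrative features only; requires no training data.
    """
    rows, cols = patch_pixels[:, 0], patch_pixels[:, 1]
    values = image[rows, cols].astype(float)   # appearance values of the patch

    area = float(len(patch_pixels))            # patch size in pixels
    mean_intensity = values.mean()             # average appearance value
    h = rows.max() - rows.min() + 1            # bounding-box height
    w = cols.max() - cols.min() + 1            # bounding-box width
    aspect_ratio = h / w                       # shape hint (tall vs. wide)
    fill = area / (h * w)                      # how solid the patch is

    # Micro-feature vector: one value per pixel-level characteristic.
    return np.array([area, mean_intensity, aspect_ratio, fill])
```

Each returned value corresponds to one "micro-feature value" in the claims; the vector as a whole is the "micro-feature vector" handed to the micro-classifier.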
25 Claims
1. A computer-implemented method for extracting pixel-level micro-features from image data captured by a video camera, the method comprising:
receiving the image data;
identifying a set of pixels in the image data associated with a foreground patch that depicts a foreground object;
evaluating appearance values of the pixels included in the set of pixels to compute a plurality of micro-feature values representing the foreground object, each based on at least one pixel-level characteristic of the foreground patch, wherein the micro-feature values are computed independent of training data that defines a plurality of object types;
generating a micro-feature vector that includes the plurality of micro-feature values;
classifying the foreground object as depicting an object type based on the micro-feature vector, wherein the object type is determined by mapping the micro-feature vector to a cluster in a self-organizing map (SOM) adaptive resonance theory (ART) network generated from a plurality of micro-feature vectors; and
updating one or more cluster properties associated with the cluster based on the plurality of micro-feature values in the generated micro-feature vector.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
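The classifying and updating steps of the claim can be sketched as a minimal online clusterer: a micro-feature vector is mapped to the nearest cluster prototype if it falls within a vigilance radius, otherwise it seeds a new cluster, and the matched cluster's properties (here, a running-mean prototype and a count) are updated. This is an illustrative ART-flavoured stand-in, not the patented SOM/ART network; the class name, the Euclidean distance test, and the running-mean update are assumptions for demonstration.

```python
import numpy as np

class MicroClassifier:
    """Minimal ART-style online clusterer over micro-feature vectors."""

    def __init__(self, vigilance=1.0):
        self.vigilance = vigilance
        self.prototypes = []   # cluster prototypes (running-mean vectors)
        self.counts = []       # number of vectors mapped to each cluster

    def classify(self, vector):
        """Map a micro-feature vector to a cluster index, updating that cluster."""
        vector = np.asarray(vector, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(vector - p) for p in self.prototypes]
            k = int(np.argmin(dists))
            if dists[k] <= self.vigilance:
                # Update cluster properties with the new micro-feature values.
                self.counts[k] += 1
                self.prototypes[k] += (vector - self.prototypes[k]) / self.counts[k]
                return k
        # No existing cluster matched closely enough: create a new one.
        self.prototypes.append(vector.copy())
        self.counts.append(1)
        return len(self.prototypes) - 1
```

Because clusters are created and refined on the fly, the classifier self-trains during extraction, consistent with the abstract's point that no training phase or predefined object definitions are required.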
13. A non-transitory computer-readable storage medium containing a program which, when executed by a processor, performs an operation for extracting pixel-level micro-features from image data captured by a video camera, the operation comprising:
receiving the image data;
identifying a set of pixels in the image data associated with a foreground patch that depicts a foreground object;
evaluating appearance values of the pixels included in the set of pixels to compute a plurality of micro-feature values representing the foreground object, each based on at least one pixel-level characteristic of the foreground patch, wherein the micro-feature values are computed independent of training data that defines a plurality of object types;
generating a micro-feature vector that includes the plurality of micro-feature values;
classifying the foreground object as depicting an object type based on the micro-feature vector, wherein the object type is determined by mapping the micro-feature vector to a cluster in a self-organizing map (SOM) adaptive resonance theory (ART) network generated from a plurality of micro-feature vectors; and
updating one or more cluster properties associated with the cluster based on the plurality of micro-feature values in the generated micro-feature vector.
- View Dependent Claims (14, 15, 16, 17)
18. A system, comprising:
a video input source configured to provide image data;
a processor; and
a memory containing a program which, when executed on the processor, is configured to perform an operation for extracting pixel-level micro-features from the image data captured by the video input source, the operation comprising:
receiving the image data;
identifying a set of pixels in the image data associated with a foreground patch that depicts a foreground object;
evaluating appearance values of the pixels included in the set of pixels to compute a plurality of micro-feature values representing the foreground object, each based on at least one pixel-level characteristic of the foreground patch, wherein the micro-feature values are computed independent of training data that defines a plurality of object types;
generating a micro-feature vector that includes the plurality of micro-feature values;
classifying the foreground object as depicting an object type based on the micro-feature vector, wherein the object type is determined by mapping the micro-feature vector to a cluster in a self-organizing map (SOM) adaptive resonance theory (ART) network generated from a plurality of micro-feature vectors; and
updating one or more cluster properties associated with the cluster based on the plurality of micro-feature values in the generated micro-feature vector.
- View Dependent Claims (19, 20, 21, 22, 23, 24, 25)
Specification