PIXEL-LEVEL BASED MICRO-FEATURE EXTRACTION
Abstract
Techniques are disclosed for extracting micro-features at the pixel level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independently of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and to classify objects without being constrained by specific object definitions. Because it requires no training data, the micro-feature extractor is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier, which groups objects into object type clusters based on their micro-feature vectors.
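The abstract above can be illustrated with a minimal sketch of pixel-level micro-feature extraction. The function name and the particular features chosen (area, aspect ratio, fill ratio, intensity, gradient magnitude) are assumptions for illustration; the patent does not enumerate a specific feature set. The key property shown is that every value is derived directly from the pixels, with no training data:

```python
import numpy as np

def extract_micro_features(patch: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Compute pixel-level micro-feature values for a foreground patch.

    No training data is used: each value is derived directly from the
    pixels, so the extractor needs no supervised training phase.
    The specific features below are illustrative, not the patent's list.
    """
    h, w = mask.shape
    fg = mask.astype(bool)
    area = float(fg.sum())                      # foreground pixel count
    aspect_ratio = w / h                        # bounding-box shape
    fill_ratio = area / (h * w)                 # compactness of the object
    pixels = patch[fg]
    mean_intensity = float(pixels.mean()) if pixels.size else 0.0
    # Mean gradient magnitude as a crude texture measure
    gy, gx = np.gradient(patch.astype(float))
    grad = np.hypot(gx, gy)
    mean_gradient = float(grad[fg].mean()) if pixels.size else 0.0
    # The micro-feature vector bundles all values for the micro-classifier
    return np.array([area, aspect_ratio, fill_ratio,
                     mean_intensity, mean_gradient])
```

A micro-feature vector produced this way, one per foreground patch, would then be handed to the micro-classifier for clustering into object types.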
32 Claims
1. A computer-implemented method for extracting pixel-level micro-features from image data captured by a video camera, the method comprising:
receiving the image data;
identifying a foreground patch that depicts a foreground object;
processing the foreground patch to compute a plurality of micro-feature values based on at least one pixel-level characteristic of the foreground patch, wherein each micro-feature value is computed independent of training data that defines a plurality of object types;
generating a micro-feature vector that includes the plurality of micro-feature values; and
classifying the foreground object as depicting an object type based on the micro-feature vector.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
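The "identifying a foreground patch" step of the claim above is commonly realized with foreground/background segmentation. A minimal sketch, assuming a simple static background model and a per-pixel difference threshold (the patent does not prescribe a particular segmentation method, and both function names are illustrative):

```python
import numpy as np

def foreground_mask(frame: np.ndarray, background: np.ndarray,
                    threshold: float = 25.0) -> np.ndarray:
    """Mark pixels whose difference from the background model exceeds
    a threshold. A static-background sketch; any foreground/background
    segmentation could supply this mask."""
    return np.abs(frame.astype(float) - background.astype(float)) > threshold

def foreground_patch(frame: np.ndarray, mask: np.ndarray):
    """Crop the bounding box of the foreground mask, yielding the
    foreground patch (and its mask) that depicts the foreground object."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, None  # no foreground object in this frame
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return frame[y0:y1, x0:x1], mask[y0:y1, x0:x1]
```

The resulting patch and mask are exactly the inputs the micro-feature computation step operates on.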
13. A computer-readable storage medium containing a program which, when executed by a processor, performs an operation for extracting pixel-level micro-features from image data captured by a video camera, the operation comprising:
receiving the image data;
identifying a foreground patch that depicts a foreground object;
processing the foreground patch to compute a micro-feature value based on at least one pixel-level characteristic of the foreground patch, wherein the micro-feature value is computed independent of training data that defines a plurality of object types;
generating a micro-feature vector that includes the micro-feature value; and
classifying the foreground object as depicting an object type based on the micro-feature vector.
(Dependent claims: 14, 15, 16, 17)
18. A system, comprising:
a video input source configured to provide image data;
a processor; and
a memory containing a program which, when executed on the processor, is configured to perform an operation for extracting pixel-level micro-features from the image data captured by the video input source, the operation comprising:
receiving the image data;
identifying a foreground patch that depicts a foreground object;
processing the foreground patch to compute a micro-feature value based on at least one pixel-level characteristic of the foreground patch, wherein the micro-feature value is computed independent of training data that defines a plurality of object types;
generating a micro-feature vector that includes the micro-feature value; and
classifying the foreground object as depicting an object type based on the micro-feature vector.
(Dependent claims: 19, 20, 21, 22, 23, 24, 25)
26. A computer-implemented method for analyzing a sequence of video frames depicting a scene captured by a video camera, the method comprising:
identifying a plurality of foreground objects depicted in the sequence of video frames;
for each foreground object, deriving feature data for the foreground object from each frame of video depicting the foreground object;
generating, from the derived feature data of at least a first foreground object, an object type model;
correlating the derived feature data for at least a second foreground object with the object type model; and
assigning an object type identifier to the second foreground object, indicating that the second foreground object is an instance of an object type associated with the object type model.
(Dependent claims: 27, 28, 29, 30, 31, 32)
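Claim 26 describes building an object type model from one object's feature data and then correlating another object's features against it. A minimal sketch of that idea, using a running-mean model and a distance threshold; the class name, incremental-mean representation, and threshold are all illustrative assumptions, not the patent's specified mechanism:

```python
import numpy as np

class ObjectTypeModel:
    """Running model of one object type, built from observed feature
    vectors. Illustrative: the patent does not fix this representation."""
    def __init__(self, type_id: int, first_vec: np.ndarray):
        self.type_id = type_id
        self.mean = first_vec.astype(float).copy()
        self.count = 1

    def distance(self, vec: np.ndarray) -> float:
        # Euclidean distance of a feature vector from the model's mean
        return float(np.linalg.norm(vec - self.mean))

    def update(self, vec: np.ndarray) -> None:
        # Incremental mean update: the model self-trains as objects arrive
        self.count += 1
        self.mean += (vec - self.mean) / self.count

def assign_object_type(vec: np.ndarray, models: list,
                       threshold: float = 1.0) -> int:
    """Correlate a feature vector with existing object type models and
    return a type identifier; create a new model when nothing matches
    within the threshold."""
    if models:
        best = min(models, key=lambda m: m.distance(vec))
        if best.distance(vec) <= threshold:
            best.update(vec)
            return best.type_id
    model = ObjectTypeModel(len(models), vec)
    models.append(model)
    return model.type_id
```

With this sketch, two objects with similar feature data receive the same type identifier, while a dissimilar object spawns a new object type model, mirroring the generate/correlate/assign steps of the claim.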
Specification