Detecting objects in images with covariance matrices
Abstract
A method detects objects in an image. First, features are extracted from the image. A frequency transform is applied to the features to generate transformed features. A covariance matrix is constructed from the transformed features, and the covariance matrix is classified to determine whether the image includes the object.
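The abstract's pipeline (feature extraction, frequency transform, covariance construction, classification) can be sketched as follows; `detect_object` and its stand-in `classifier` callable are illustrative names, not from the patent, and the specific features and 1-D FFT here are assumptions:

```python
import numpy as np

def detect_object(image, classifier):
    """Sketch of the claimed pipeline: features -> frequency transform
    -> covariance matrix -> classification. `classifier` stands in for
    the trained neural network of claim 1."""
    # Extract per-pixel features: intensity and x/y gradients.
    gy, gx = np.gradient(image.astype(float))
    features = np.stack([image.ravel().astype(float), gx.ravel(), gy.ravel()])
    # Apply a frequency transform (here, the magnitude of a 1-D FFT
    # along the spatial dimension) to generate transformed features.
    transformed = np.abs(np.fft.fft(features, axis=1))
    # Construct the covariance matrix of the transformed features.
    cov = np.cov(transformed)
    # Classify the covariance matrix and return a likelihood.
    return classifier(cov)
```

With three feature rows, the covariance matrix passed to the classifier is 3 x 3 regardless of image size, which is what makes a fixed-input classifier workable.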
10 Claims
1. A method for detecting objects in an image, the method comprising the steps of:

extracting features from an image;

applying a frequency transform to the features to generate transformed features;

constructing a covariance matrix from the transformed features;

classifying the covariance matrix to determine whether the image includes the object, wherein the classifying is performed using a neural network trained with training images stored in a database in a one-time preprocessing step; and

outputting a likelihood that the image includes the object, wherein the extracting, applying, constructing, classifying, and outputting steps are performed in a processor.
2. The method of claim 1, in which the features are extracted from multiple windows in the image, and the covariance matrix is constructed for each window to detect the object in the window.
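Claim 2's per-window variant can be sketched as below; the window size, stride, feature choice, and the name `window_covariances` are assumptions for illustration:

```python
import numpy as np

def window_covariances(image, win=8, step=4):
    """Per claim 2: extract features from multiple windows in the image
    and construct a covariance matrix for each window."""
    gy, gx = np.gradient(image.astype(float))
    covs = {}
    h, w = image.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            # Feature rows for this window: intensity and x/y gradients.
            feats = np.stack([
                image[y:y + win, x:x + win].ravel().astype(float),
                gx[y:y + win, x:x + win].ravel(),
                gy[y:y + win, x:x + win].ravel(),
            ])
            covs[(y, x)] = np.cov(feats)
    return covs
```

Each window's covariance matrix would then be classified independently to localize the object.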
3. The method of claim 1, in which the covariance matrix is invariant to a rotation of the object.
4. The method of claim 1, in which the covariance matrix is sensitive to a rotation of the object.
5. The method of claim 1, in which the trained neural network is a feed-forward back-propagation type of neural network with three internal layers.
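The forward pass of a feed-forward network with three internal layers, as recited in claim 5, might look like the sketch below. The sigmoid activation, layer sizes, and function name are assumptions; the back-propagation training step is omitted:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a feed-forward network with three internal
    (hidden) layers. `weights`/`biases` hold one (W, b) pair per layer,
    hidden layers first, output layer last."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(W @ a + b)        # the three internal layers
    W, b = weights[-1], biases[-1]
    return sigmoid(W @ a + b)         # output: likelihood in (0, 1)
```

The sigmoid output maps naturally onto the "likelihood that the image includes the object" of claim 1.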
6. The method of claim 1, further comprising:
constructing an integral image from the image, and extracting the features from the integral image.
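The integral image of claim 6 makes the sum over any rectangular region a constant-time lookup, which speeds up per-window feature extraction. A minimal sketch (the helper `rect_sum` is an illustrative name, not from the patent):

```python
import numpy as np

def integral_image(image):
    """Integral image: entry (y, x) holds the sum of all pixels at or
    above row y and at or left of column x."""
    return image.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of image[y0:y1, x0:x1] from four integral-image lookups."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total
```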
7. The method of claim 1, further comprising:
restructuring the covariance matrix as a vector of unique coefficients, and classifying the vector.
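Because a covariance matrix is symmetric, a d x d matrix has only d(d+1)/2 unique coefficients; claim 7's restructuring step can be sketched as extracting the upper triangle into a vector (the function name is illustrative):

```python
import numpy as np

def cov_to_vector(cov):
    """Per claim 7: restructure a symmetric d x d covariance matrix as
    a vector of its d*(d+1)/2 unique (upper-triangular) coefficients."""
    i, j = np.triu_indices(cov.shape[0])
    return cov[i, j]
```

For a 3 x 3 covariance matrix this yields a 6-element vector, which is what the neural network would classify.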
8. The method of claim 1, in which the features include spatial features and appearance features, and the frequency transform is according to the spatial features.
9. The method of claim 1, in which the features are selected from the group consisting of locations, intensities, gradients, colors, image gradients, edge magnitude, edge orientations, filter responses, histograms, texture scores, radial distances, angles, and temporal differences.
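A subset of the feature group recited in claim 9 (locations, intensities, gradients, edge magnitudes, edge orientations) can be assembled into a per-pixel feature matrix as follows; the function name and the particular subset chosen are assumptions:

```python
import numpy as np

def pixel_features(image):
    """Build a feature matrix from a subset of the claim 9 group:
    one row per feature, one column per pixel."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]            # locations
    gy, gx = np.gradient(image.astype(float))  # image gradients
    mag = np.hypot(gx, gy)                 # edge magnitude
    ori = np.arctan2(gy, gx)               # edge orientation
    return np.stack([xs.ravel().astype(float), ys.ravel().astype(float),
                     image.ravel().astype(float),
                     gx.ravel(), gy.ravel(), mag.ravel(), ori.ravel()])
```

The covariance of such a matrix captures how the chosen features co-vary across the image or window, which is the representation the claims classify.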
10. The method of claim 1, wherein the object is detected in a sequence of images.
Specification