Method for identifying objects and features in an image
Abstract
The present invention features the use of the fundamental concepts of color perception and multi-level resolution to perform scene segmentation and object/feature extraction in self-determining and self-calibrating modes. The technique uses only a single image, instead of multiple images, as the input for generating segmented images. Moreover, a flexible and arbitrary scheme of segmentation analysis is incorporated, rather than a fixed one. The process allows users to perform digital analysis using any appropriate means for object extraction after an image is segmented. First, an image is retrieved. The image is then transformed into at least two distinct bands. Each transformed image is then projected into a color domain or a multi-level resolution setting. A segmented image is then created from all of the transformed images. The segmented image is analyzed to identify objects. Object identification is achieved by matching a segmented region against an image library. A featureless library contains full-shape, partial-shape, and real-world images in a dual-library system; depth contours and height-above-ground structural components constitute the dual library. Also provided is a mathematical model, a Parzen window-based statistical/neural-network classifier, which forms an integral part of this featureless dual-library object-identification system. All images are considered three-dimensional; laser-radar-based 3-D images represent a special case.
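To make the classifier idea concrete: a Parzen-window classifier estimates each class's density at a query point by summing kernel functions centered on that class's library samples, then assigns the point to the densest class. The sketch below is a minimal 1-D illustration with a Gaussian kernel; the function names, feature values, and class labels are invented for the example, and the patent's classifier also incorporates a neural-network component not shown here.

```python
import math

def parzen_density(x, samples, h):
    """Parzen-window density estimate at x: average of Gaussian
    kernels of width h centered on each library sample."""
    n = len(samples)
    return sum(
        math.exp(-((x - s) ** 2) / (2 * h * h)) / (h * math.sqrt(2 * math.pi))
        for s in samples
    ) / n

def classify(x, class_samples, h=1.0):
    """Assign x to the class whose estimated density at x is largest."""
    return max(class_samples, key=lambda c: parzen_density(x, class_samples[c], h))

# Hypothetical 1-D feature values for two object classes in the library.
library = {"vehicle": [2.0, 2.5, 3.0], "building": [8.0, 8.5, 9.0]}
print(classify(2.7, library))  # near the "vehicle" samples
print(classify(8.2, library))  # near the "building" samples
```

In practice the features would be multi-dimensional shape descriptors drawn from the dual library, but the density-then-argmax structure is the same.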
196 Citations
16 Claims
1. A method of generalizing objects or features in an image, the steps comprising:
a) retrieving an original image, each pixel of which has a value represented by a predetermined number of bits, n;
b) transforming said original image into at least two distinct bands, each of said pixels in each of the band-images comprising less than n bits;
c) transforming said original image into at least two distinct resolutions by a series of down-sampling schemes;
d) projecting each of said transformed band-images into a composite domain;
e) expanding each down-sampled image back to its previous resolution by doubling, tripling, or quadrupling the pixels in both the x and y directions;
f) creating a composite image from all of said transformed band-images; and
g) creating edge-based images by a series of comparisons and integrating among all resultant edge-based images.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9
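Steps (c) and (e) form a down-sample/expand pair: resolution is reduced by a sampling scheme, then restored by duplicating each pixel in both directions. The sketch below is illustrative only; the function names and the nested-list image representation are assumptions, not part of the patent.

```python
def downsample(img, factor=2):
    """Step (c), sketched: reduce resolution by keeping every
    `factor`-th pixel in both the x and y directions."""
    return [row[::factor] for row in img[::factor]]

def expand(img, factor=2):
    """Step (e), sketched: restore the previous resolution by
    duplicating ("doubling", for factor=2) each pixel in x and y."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

img = [[10, 20], [30, 40]]
small = downsample(img)       # [[10]]
restored = expand(small)      # [[10, 10], [10, 10]]
```

Comparing the expanded image against the original at each resolution is what makes the edge-based comparisons of step (g) possible, since the images are then the same size.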
10. A self-calibrating, self-determining method of generalizing objects or features in an image, the steps comprising:
a) retrieving an original image in pixel form;
b) transforming said original image into at least one stable structure band;
c) executing a predetermined algorithm on said transformed image to perform iterative region-growing or region-merging by interrogating said image with a set of linearly increasing color values;
d) generating groups having a set of values indicating the number of regions in each segmented image;
e) monitoring the slope and slope change of a scene-characteristic (SC) curve;
f) establishing at least one stopping point;
g) projecting each of said transformed band-images into a composite domain; and
h) creating a composite image from all of said transformed band-images.

Dependent claims: 11, 12, 13, 14, 15, 16
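The interrogation loop of steps (c)-(f), sweeping linearly increasing values, recording the number of regions each value produces, and watching the slope of the resulting SC curve for a stopping point, might be pictured in one dimension as follows. This is a toy illustration: `count_regions`, `sc_curve`, and `stopping_point` are hypothetical names, and the actual method operates on 2-D segmented images, not a single row.

```python
def count_regions(row, t):
    """Number of contiguous runs of pixels whose value is >= t."""
    regions, inside = 0, False
    for p in row:
        if p >= t and not inside:
            regions += 1
        inside = p >= t
    return regions

def sc_curve(row, thresholds):
    """Steps (c)-(d), sketched: interrogate the image with linearly
    increasing values and record the region count for each one."""
    return [count_regions(row, t) for t in thresholds]

def stopping_point(curve):
    """Steps (e)-(f), sketched: stop at the first index where the
    slope of the SC curve changes."""
    slopes = [b - a for a, b in zip(curve, curve[1:])]
    for i in range(1, len(slopes)):
        if slopes[i] != slopes[i - 1]:
            return i
    return len(curve) - 1

row = [1, 5, 5, 1, 7, 7, 1, 9]
curve = sc_curve(row, [2, 4, 6, 8])   # [3, 3, 2, 1]
stop = stopping_point(curve)          # 1
```

The self-calibrating aspect is that the stopping point is read off the SC curve itself rather than being fixed in advance, so no per-image tuning is required.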
Specification