Automatic design of morphological algorithms for machine vision

Abstract
The present invention provides a technique for automated selection of a parameterized operator sequence to achieve a pattern classification task. A collection of labeled data patterns is input, and statistical descriptions of the inputted labeled data patterns are then derived. Classifier performance for each of a plurality of candidate operator/parameter sequences is determined. The optimal classifier performance among the candidate classifier performances is then identified. Performance metric information, including, for example, the selected operator sequence/parameter combination, is outputted. The operator sequences selected can be chosen from a default set of operators, or may be a user-defined set. The operator sequences may include any morphological operators, such as erosion, dilation, closing, opening, close-open, and open-close.
21 Claims
 1. A method for automated selection of a parameterized operator sequence to achieve a pattern classification task, comprising the steps of:
 inputting a collection of labeled data patterns;
deriving statistical descriptions of the inputted labeled data patterns;
determining a criterion function which is used to derive classifier performances by performing the steps of:
determining a classifier performance for each of a plurality of candidate operator sequences and corresponding parameter values using the derived statistical descriptions by performing the steps of:
for each candidate operator sequence and corresponding parameter values, performing:
constructing an Embeddable Markov Chain (EMC), given the derived statistical descriptions for the input data patterns and the output statistic to be calculated; and
calculating output statistics using the EMC, the output statistics being a function of the derived statistical descriptions for the inputted data patterns and a Boolean transformation;
identifying an optimal classifier performance among the determined classifier performances according to specified criteria; and
selecting the operator sequence and corresponding parameter values associated with the identified optimal classifier performance.  View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
 20. A method for determining optimal classifier performance of a plurality of candidate operator sequences and corresponding parameter values, comprising the steps of:
 for each candidate operator sequence and corresponding parameter values, performing:
(a) constructing an Embeddable Markov Chain (EMC), given statistical descriptions for inputted data patterns and the output statistic to be calculated;
(b) calculating the output statistics using the EMC; and
(c) selecting an optimal operator sequence and corresponding parameter values using the output statistics, according to specified criteria.  View Dependent Claims (21)
Specification
This application claims the benefit of U.S. Provisional Application Ser. No. 60/346,995, filed on Jan. 9, 2002, which is incorporated by reference herein in its entirety.
FIELD OF THE INVENTION
The present invention relates to pattern analysis, and more particularly, to a method for automated selection of operators and their parameters to achieve a pattern classification task.
BACKGROUND OF THE INVENTION
Mathematical morphology is a framework for extracting information from images. Under this framework, machine vision problems may be solved by image transformations that employ a sequence of parameterized operators. Pixel-neighborhood-level feature detection followed by region-level grouping and/or morphological filtering is a typical operation sequence in a variety of video and image analysis systems (e.g., document image analysis, video surveillance and monitoring, machine vision and inspection). However, the robustness of these systems is often questionable because of the use of operator/parameter combinations that are set by trial and error.
SUMMARY OF THE INVENTION
According to an aspect of the invention, there is provided a method for automated selection of a parameterized operator sequence to achieve a pattern classification task. A collection of labeled data patterns is input, and statistical descriptions of the inputted labeled data patterns are then derived. Classifier performance for each of a plurality of candidate operator/parameter sequences is determined. Optimal classifier performance among the candidate classifier performances is identified. Performance metric information, including, for example, the selected operator sequence/parameter combination, is outputted. The operator sequences selected can be chosen from a default set of operators, or may be a user-defined set. The operators may include any morphological operators, such as erosion, dilation, closing, opening, close-open, and open-close. The operators may also include operators that map an input Boolean vector to an output Boolean vector. Additionally, the operators can include an operator that is defined as successive applications of a 1-D filter in two orthogonal directions.
The collection of data patterns can include various patterns of interest. It can also include patterns of non-interest. The data patterns can include any gray-level or color data transformed to a binary representation.
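As an illustrative sketch (an assumption-laden toy, not the patent's implementation), the one-dimensional binary versions of the operators named above can be written in terms of run lengths: opening with a flat length-L structuring element removes 1-runs shorter than L, and closing fills interior 0-runs (gaps) shorter than L.

```python
def runs(b):
    """Run-length decomposition of a binary sequence into (value, length) pairs."""
    out = []
    for v in b:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return [(v, n) for v, n in out]

def opening(b, L):
    """Opening with a flat length-L structuring element:
    removes 1-runs shorter than L."""
    out = []
    for v, n in runs(b):
        out += [0] * n if (v == 1 and n < L) else [v] * n
    return out

def closing(b, L):
    """Closing with a flat length-L structuring element:
    fills interior 0-runs (gaps) shorter than L."""
    r = runs(b)
    out = []
    for i, (v, n) in enumerate(r):
        gap = (v == 0 and 0 < i < len(r) - 1 and n < L)
        out += [1] * n if gap else [v] * n
    return out

def open_close(b, L):
    """Open-close: opening followed by closing."""
    return closing(opening(b, L), L)
```

For example, closing([1, 1, 0, 1, 1], 2) fills the single-pixel gap, while boundary gaps are left untouched.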
The statistical descriptions of the input patterns can be derived using any suitable probability model, such as, for example, a mixture of Hidden Markov Models (HMMs), a Bayesian network etc. A probability model that employs a nonparametric density representation may also be used.
The operator sequences selected can be chosen from a default set of operators, or may be a user-defined set.
The criteria for determining an optimal classifier performance can be defined by a user. The criteria will preferably relate to maximizing expected classifier performance. This may involve, for example, balancing the tradeoff of false alarm errors and miss detection errors.
The step of determining classifier performance may include, for each candidate operator sequence and corresponding parameter values, given a description of the output statistic to be measured and the derived statistical descriptions for the inputted data patterns:
 1. Constructing an Embeddable Markov Chain (EMC) by:
 a) constructing a state space,
 b) building a state-transition graph with associated state-transition probabilities for the candidate operator sequence, and
 c) partitioning the state space.
 2. Calculating the distribution of the output statistic using the EMC.
According to another aspect of the invention, a method is provided for determining optimal classifier performance of a plurality of candidate operator sequences and corresponding parameter values. For each candidate operator sequence and corresponding parameter values, the method:
 1. Constructs an Embeddable Markov Chain (EMC) by:
 a) constructing a state space,
 b) building a state-transition graph with associated state-transition probabilities for the candidate operator sequence, and
 c) partitioning the state space.
 2. Calculates the distribution of the output statistic using the EMC.
An optimal operator sequence and corresponding parameter values are then selected for output.
Numerous applications of the pattern classification framework can be realized, including any pattern analysis or classification task, e.g., document processing, license plate detection, text analysis from video, and machine vision (object detection, localization and recognition tasks).
These and other aspects, features and advantages of the present invention will become apparent from the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating a system flow of a pattern analysis system;
FIG. 2 shows an example of a run length statistics calculation;
FIGS. 3-5 show examples of state transition diagrams illustrating the state transitions of a closing operator;
FIG. 6 shows an example of the state transitions for a Hamming distance calculation, after closing; and
FIGS. 7-8 show the results of license plate detection processing according to an embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
According to the present invention, characterization of a morphological operator sequence may be viewed as the derivation of output image statistics as a function of the input image statistics, the operator sequence, and its tuning parameters. The difficulty is in defining the statistical models for the input data and the corresponding derivation of the output statistical models. In this invention, the input to the morphological algorithm is viewed as a binary random series. The output is viewed as another binary random series whose statistics (e.g., Hamming distance or run/gap length distributions) are functions of the input statistics.
For each particular morphological operator and its parameters, the corresponding Embeddable Markov Chain (EMC) model is built, and the EMC approach is applied to analyze the performance of the operator. The analysis gives insight into how one could automate the selection of a morphological operator sequence and its parameters to achieve a given error rate. License plate detection is provided as a case study to illustrate the utility of the invention.
To facilitate a clear understanding of the present invention, illustrative examples are provided herein which describe certain applications of the invention (e.g., license plate detection). However, it is to be appreciated that these illustrations are not meant to limit the scope of the invention, and are provided herein to illustrate certain concepts associated with the invention.
It is also to be understood that the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Preferably, the present invention is implemented in embedded code as a program tangibly embodied on a program storage device. In addition, various other peripheral devices may be connected to a computer platform such as an additional data storage device and a printing device, as well as various still and video imaging input devices.
Embeddable Markov Chain Approach
A system diagram for a typical pattern analysis system is illustrated in FIG. 1.
Let f(t) denote the mapping from the pixel index set {1, 2, . . . , N} to the gray-level measurements R. Let b(t) denote the ideal unknown function representing the mapping that assigns the true labels (e.g., foreground/background (1/0)) to each index. The detection algorithm is viewed as a function that maps f(t) to the binary series b̂_d(t) by using a decision rule (that could be spatially varying). Define pf(t) as the conditional probability that the detector output b̂_d(t) = 1 given that the true label b(t) = 0. Let pm(t) be the conditional probability that b̂_d(t) = 0 given b(t) = 1.
A natural representation of the statistics of b̂_d and b̂_g is the distribution of the run lengths as a function of the filter parameters used (e.g., structuring element sizes in a morphological operator). This representation is convenient in that it provides a natural way to interpret the results of morphological operations. The size distributions of shapes (granulometries) have been used in the morphology literature to describe signal statistics. The derivation of the statistics of the run lengths may be viewed as a function of b(t), b̂_g(t), and the morphological operator parameter T_g. Let Ĉ_l denote the number of runs of length l in the output of the morphological algorithm. An objective is to derive the conditional distribution p(Ĉ_l; T_g, N, {pm(t)}, {pf(t)}). Most research in the theory of runs has addressed this problem by using combinatorial analysis. However, the prior art assumed stationarity (i.e., pm(t) = pm and pf(t) = pf), while the present invention makes no assumption about the form of pm(t) and pf(t).
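The statistic p(Ĉ_l) described above can be estimated by brute force with a Monte-Carlo sketch of the perturbation model (illustrative helper names; pf and pm are passed as functions of the pixel index t). As noted next, such simulation becomes prohibitively slow for rare events, which is what motivates the exact EMC computation.

```python
import random
from collections import Counter

def perturb(b, pf, pm, rng):
    """Sample detector output b_hat_d from ground truth b:
    a 0 flips to 1 with probability pf(t); a 1 flips to 0 with pm(t)."""
    return [(1 if rng.random() < pf(t) else 0) if b[t] == 0
            else (0 if rng.random() < pm(t) else 1)
            for t in range(len(b))]

def count_runs(b, l):
    """C_l: the number of maximal 1-runs of length exactly l."""
    cur, c = 0, 0
    for v in list(b) + [0]:     # sentinel 0 terminates a trailing run
        if v:
            cur += 1
        else:
            if cur == l:
                c += 1
            cur = 0
    return c

def mc_run_pmf(b, pf, pm, l, trials=20000, seed=0):
    """Monte-Carlo estimate of the pmf of C_l for the perturbed series."""
    rng = random.Random(seed)
    hist = Counter(count_runs(perturb(b, pf, pm, rng), l)
                   for _ in range(trials))
    return {k: v / trials for k, v in sorted(hist.items())}
```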
Advantageously, the present invention uses a technique that embeds a discrete random variable into a finite Markov chain to numerically compute the probability mass function (pmf) of the discrete random variable. The pmf is essentially computed as a function of the N-step transition probabilities of the Embeddable Markov Chain (EMC). The main advantage of using this algorithm is that Monte-Carlo simulations are prohibitively slow when probabilities of unlikely events are being estimated.
For a given n, let Γ_n = {0, 1, . . . , n} be an index set and Ω = {a_1, . . . , a_m} be a finite state space.
A nonnegative integer random variable X_n can be embedded into a finite Markov chain if:
 1. there exists a finite Markov chain {Y_t : t ∈ Γ_n} defined on the finite state space Ω;
 2. there exists a finite partition {C_x, x = 0, 1, . . . , l} of the state space Ω;
 3. for every x = 0, 1, . . . , l, we have p(X_n = x) = p(Y_n ∈ C_x).
Let Λ_t be the m×m transition probability matrix of the finite Markov chain ({Y_t : t ∈ Γ_n}, Ω). Let U_r be a 1×m unit vector having 1 at the r-th coordinate and 0 elsewhere, and let U′_r be the transpose of U_r. Finally, for every C_x, define the 1×m vector

U(C_x) = Σ_{r : a_r ∈ C_x} U_r.
If X_n can be embedded into a finite Markov chain, then

p(X_n = x) = π_0 (∏_{t=1}^{n} Λ_t) U′(C_x),

where π_0 = [p(Y_0 = a_1), . . . , p(Y_0 = a_m)] is the initial probability vector of the Markov chain.
If the Markov chain is homogeneous, that is, Λ_t = Λ for all t ∈ Γ_n, then for all x = 0, . . . , l the exact distribution of the random variable X_n can be expressed as p(X_n = x) = π_0 Λ^n U′(C_x).
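As a concrete, hedged illustration of the homogeneous identity p(X_n = x) = π_0 Λ^n U′(C_x) (the variable names and the choice of statistic are this sketch's, not the patent's): let X_n be the number of 1-runs of length at least k in n independent Bernoulli(p) trials. The chain state (x, r) records the run count so far and the trailing 1-run length capped at k; the partition C_x collects the states with count x.

```python
import numpy as np

def run_count_pmf(n, p, k=2):
    """pmf of X_n = number of 1-runs of length >= k in n Bernoulli(p)
    trials, computed via p(X_n = x) = pi_0 Lambda^n U'(C_x)."""
    max_runs = (n + 1) // (k + 1)          # most such runs that fit in n pixels
    # State (x, r): x = runs counted so far, r = trailing 1-run length (capped at k).
    states = [(x, r) for x in range(max_runs + 1) for r in range(k + 1)]
    idx = {s: i for i, s in enumerate(states)}
    m = len(states)
    Lam = np.zeros((m, m))                 # homogeneous transition matrix Lambda
    for (x, r), i in idx.items():
        Lam[i, idx[(x, 0)]] += 1 - p       # observe 0: trailing run resets
        if r < k - 1:                      # observe 1: run grows, not yet length k
            Lam[i, idx[(x, r + 1)]] += p
        elif r == k - 1 and x < max_runs:  # run reaches length k: count it once
            Lam[i, idx[(x + 1, k)]] += p
        else:                              # run already counted: nothing to add
            Lam[i, idx[(x, k)]] += p
    pi0 = np.zeros(m)
    pi0[idx[(0, 0)]] = 1.0                 # initial distribution pi_0
    piN = pi0 @ np.linalg.matrix_power(Lam, n)
    # Partition C_x: sum the mass of all states whose count equals x.
    return [sum(piN[idx[(x, r)]] for r in range(k + 1))
            for x in range(max_runs + 1)]
```

For small n the result can be checked against exhaustive enumeration of all 2^n sequences, which is exactly the brute force the EMC avoids.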
In order to find the distribution of any embeddable random variable, one has to construct: (i) a proper state space Ω, (ii) a proper partition {C_x} of the state space, and (iii) the transition probability matrix Λ_t associated with the EMC. The exact process by which the state space is defined, along with the partitioning, is dependent on the nature of the statistic of interest and the operator used.
Statistics Calculation Using EMC Approach
Before the problem of deriving the run length distributions in the output of a morphological algorithm is addressed, the EMC approach is described. This approach can be used to derive the run length distribution of the observation of an uncorrelated random binary series. Here, the problem of calculation of the joint run length distribution, i.e., “What is the probability of having m runs with size M and n runs with size N?” is addressed.
State Space Construction
The state space construction for the computation of the distribution of run lengths is rather straightforward. View the sequence of binary observations up to pixel T as partial observations of the 0- and 1-runs. A variable x_i is needed to denote the number of observed runs of a given length i at pixel T, and an indicator variable m_i to denote whether the preceding number of ones is exactly equal to i or not. Thus m_i takes on the value 1 if the last sequence of 1s is exactly of length i, and 0 otherwise. The pair (X, M), X = [x_1, . . . , x_n, x_n^+], M = [m_1, . . . , m_n, m_n^+], denotes the states for the problem. Here x_n^+ denotes the number of runs larger than n, and m_n^+ is the corresponding indicator variable. Given these states, it is easy to see that the graph shown in FIG. 2 constitutes the Markov chain for the run length statistics computation problem. In the graph, focus is on the joint distribution of run lengths whose size is equal to or less than 3. However, those skilled in the related art will appreciate that the graph shown in FIG. 2 can be extended to meet the requirement of the joint distribution of a longer run length.
The partition of the state space corresponds to the singleton sets of C with assigned count values. The values for the probabilities in Λ_t are given by the following expressions. For example, the probability of observing a 0 at location t is the sum of two terms: the probability that the true value is 0 and there is no false alarm, and the probability that the true value is 1 and there is a misdetection:

q_t(0) = p_{b(t)}{0}(1 − pf(t)) + p_{b(t)}{1} pm(t)    (1)

q_t(1) = p_{b(t)}{0} pf(t) + p_{b(t)}{1}(1 − pm(t))    (2)

where p_{b(t)}{·} is the distribution of the ground truth.
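Equations (1) and (2) translate directly into a small helper (illustrative names; p0 stands for p_{b(t)}{0} at a given pixel):

```python
def observation_probs(p0, pf, pm):
    """Per-pixel observation probabilities of Eqs. (1)-(2):
    p0 = ground-truth probability of label 0,
    pf = false-alarm probability, pm = miss-detection probability."""
    q0 = p0 * (1 - pf) + (1 - p0) * pm    # Eq. (1): probability of observing 0
    q1 = p0 * pf + (1 - p0) * (1 - pm)    # Eq. (2): probability of observing 1
    return q0, q1
```

The two probabilities always sum to 1, which is a quick sanity check on any transition matrix built from them.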
It is clear that a large state space is needed for calculating the joint distribution when n is large. For example, when n = 50, more than 10^24 states are needed, thus requiring a large memory for implementation.
Statistics for One-Dimensional Morphology
It was shown that run-length statistics for a binary random series can be derived before the application of the morphological algorithm. The statistics can be derived for an uncorrelated binary series, or for a correlated series defined in terms of a homogeneous or inhomogeneous Markov chain. Now the probability of observing a given number of runs of length greater than or equal to S, G′_{n,S}, after the closing operation with closing parameter T_g = L is used as an example to illustrate how the output statistics of a morphological operator for binary series can be derived by using EMCs. The trick again is to devise the appropriate EMC. Similar EMCs can be derived for openings, openings followed by closings, etc.
To construct the state space, the property of the closing operation has to be considered. Closing essentially fills gaps of sizes less than a given length L. At any given pixel, the output b̂_g(t) is 1 if and only if b̂_d(t) = 1, or b̂_d(t) = 0 and there exist two neighbors with indices t − i and t + j, i, j ≥ 1, such that b̂_d(t − i) = 1 and b̂_d(t + j) = 1 with i + j < L. This implies that, in addition to the number of runs of length greater than or equal to S, the state space has to include information about the run length of the last 1-run observed, as well as the length of the last gap (if any 0-runs), to identify the partial state. One has to wait until the gap length is greater than L before deciding to terminate a previous run.
Formally, define the state variable to be (x, s, f), where

x = 0, 1, . . . , ⌊(n + 1)/(k + 1)⌋

denotes the number of success runs of size greater than or equal to k, s tracks the length of the last (partial) success run, and f = −1, 0, . . . , L is the number of failures in the last failure run, L being the size of the structuring element. The value −1 for f corresponds to a gap of length greater than L that cannot be filled, while the value −1 for s corresponds to having a 1-run of length greater than or equal to S. The left graph of FIG. 3 corresponds to the initial condition for the state transitions. The right graph of FIG. 3 corresponds to the state transition diagram for the case that the length of the 1-run observed before the start of the transitions is already greater than or equal to S (i.e., the overflow condition). FIG. 4 corresponds to the elements of the state transition diagram for the case that the partial 1-runs observed have length k < S and k + L + 1 < S. FIG. 5 corresponds to the elements of the state transition diagram for the case that the partial 1-runs are such that the constraint k + L + 1 ≥ S is satisfied. Note that the diagrams illustrate only portions of the large state transition diagram for the embeddable Markov chain; for illustrative purposes, only the parts that are elements of the larger graph are presented. The larger graph is the concatenation of those individual elements.
The partition of the state space is based on the number of success runs, x. The values of the transition probabilities are given by q_t and 1 − q_t, where

q_t = p_{b(t)}{0} pf(t) + p_{b(t)}{1}(1 − pm(t)).

Statistics of Binary Series in Morphology Operator Output
The EMC approach is illustrated by showing how it can be used to compute the probability distribution of the Hamming distance between the output of the closing operation and a ground truth signal perturbed by a stochastic process. It is then shown how similar statistics can be derived when multiple morphological operations are applied consecutively.
Statistics for Closing Morphology Operator
The effect of the closing operator with parameter T_g, when applied to the detector output b̂_d(t) to produce b̂_g(t), is analyzed. The deviation of b̂_g(t) from the ideal signal b(t) is measured by the Hamming distance (the number of locations where the two Boolean series differ in value). Let X be a discrete random variable corresponding to the Hamming distance between the binary random series b̂_g(t) and the ground truth b(t). In this analysis, b(t) is assumed to be composed of independent (but not necessarily identically distributed) random variables, or of spatially correlated random variables where the correlation is described by a Markov model. b̂_d(t) is a perturbed version of b(t) where the perturbation model parameters are described by pf(t) and pm(t).
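The quantity analyzed here can be stated operationally with a Monte-Carlo sketch (illustrative code with constant pf/pm; the EMC constructed below computes the same distribution exactly, without sampling): perturb b(t) with pf/pm, apply closing, and histogram the Hamming distance to b(t).

```python
import random
from collections import Counter

def close_1d(b, L):
    """Fill interior 0-runs shorter than L (1-D binary closing)."""
    out, i, n = b[:], 0, len(b)
    while i < n:
        if b[i] == 0:
            j = i
            while j < n and b[j] == 0:
                j += 1
            if 0 < i and j < n and j - i < L:   # interior gap shorter than L
                out[i:j] = [1] * (j - i)
            i = j
        else:
            i += 1
    return out

def hamming_pmf_after_closing(b, pf, pm, L, trials=20000, seed=0):
    """Monte-Carlo estimate of the pmf of the Hamming distance between
    the closed detector output and the ground truth b."""
    rng = random.Random(seed)
    hist = Counter()
    for _ in range(trials):
        # Perturb b: 0 -> 1 with prob pf (false alarm), 1 -> 0 with prob pm (miss).
        bd = [1 - v if rng.random() < (pm if v else pf) else v for v in b]
        bg = close_1d(bd, L)
        hist[sum(x != y for x, y in zip(bg, b))] += 1
    return {k: v / trials for k, v in sorted(hist.items())}
```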
Statistical Independent Case Example
Let the input binary series consist of Boolean random variables that are statistically independent, with q<sub>t </sub>the probability that the tth pixel has value 0. The EMC for calculation of the Hamming distance in this case is given below. The EMC and state transition diagram for correlated binary sequences have also been devised, but details are not provided herein.
The State Space is:<FORM>Ω={(x,p,q):x=0, . . . ,N;p=ω,0, . . . ,T<sub>g</sub>;q=ω,0, . . . ,T<sub>g</sub>}.</FORM>
The state variable in the left graph is (x,p,q), where x is the Hamming distance, p is the number of 0s in the trailing run of the observed sequence, and q is the number of 0s in the ground truth binary series in this trailing run's domain. The value p is needed since it gives the partial gap length observed so far; if this gap were closed, the value of q would be used to update the Hamming distance. The notation p=ω is used for an overflow state that corresponds to the condition that a given gap will not be filled by the closing operation.
The Partition is:<FORM>C<sub>x</sub>={(x,p,q):p=ω,0, . . . ,T<sub>g</sub>;q=ω,0, . . . ,T<sub>g</sub>},x=0, . . . ,N.</FORM>
The State Transition Probabilities are specified by the following equations.<FORM>When b(t)=1 and {circumflex over (b)}<sub>d</sub>(t)=1, (correct detection):</FORM><FORM>p<sub>t</sub>(x+2q−p,0,0;x,p,q)=1−q<sub>t</sub>,p≠ω,q≠ω p<sub>t</sub>(x,0,0;x,ω,ω)=1−q<sub>t</sub></FORM><FORM>When b(t)=0 and {circumflex over (b)}<sub>d</sub>(t)=0, (correct rejection):</FORM><FORM>p<sub>t</sub>(x,p+1,q+1;x,p,q)=1−q<sub>t</sub>,p≠ω,q≠ω p<sub>t</sub>(x,ω,ω;x,ω,ω)=1−q<sub>t</sub></FORM><FORM>When b(t)=0 and {circumflex over (b)}<sub>d</sub>(t)=1, (false alarm error):</FORM><FORM>p<sub>t</sub>(x+2q−p+1,0,0;x,p,q)=q<sub>t</sub>,p≠ω,q≠ω p<sub>t</sub>(x+1,0,0;x,ω,ω)=q<sub>t</sub></FORM><FORM>When b(t)=1 and {circumflex over (b)}<sub>d</sub>(t)=0, (miss detection error):</FORM><FORM>p<sub>t</sub>(x+1,p+1,q;x,p,q)=q<sub>t</sub>,p≠ω,q≠ω p<sub>t</sub>(x+1,ω,ω;x,ω,ω)=q<sub>t</sub></FORM>
The interpretation of the above equations is as follows. View the value x as the partial Hamming distance between b(t) and {circumflex over (b)}<sub>g</sub>(t) up to the current pixel instant t. If, given the current observation, the closing operation alters the output {circumflex over (b)}<sub>g</sub>(t), then the estimated Hamming distance must be altered to correspond to the correct value. The state jumps are essentially of two types:
 1) States (2nd and 4th equations in the above set), where the Hamming distance values are incrementally updated given the current measurement at pixel t (the decision concerning the effect of the closing operation on the output cannot yet be made, because only a partial sequence of 0s of length less than T<sub>g </sub>has been observed), and
 2) States (1st and 3rd equations in the above set), where the Hamming distance values take discrete jumps because of the switching of all the zero values in the trailing window to 1 as a result of the closing operation.
For example, when correct detection occurs, there are two possible cases to be considered:
 1) The last trail of 0s is of length smaller than or equal to T<sub>g</sub>. In this case, the gap will be closed by the closing operation. This corresponds to a state jump from (x,p,q) to (x+2q−p,0,0). The term 2q−p is an update factor that increments the Hamming distance to take into account the flipping of 0's to 1's after the closing operation.
 2) The last trail of 0s is of length greater than T<sub>g </sub>and hence this gap cannot be closed. This corresponds to being in the state (x,ω,ω). After the state transition, the new state is (x,0,0); i.e., no change is made to the estimate of the Hamming distance, but the p and q values are reset to zero.
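The state jumps above can be checked by following a single realization through them. The following Python sketch (the names track_state and OMEGA, standing for the overflow symbol ω, are illustrative assumptions) maintains (x,p,q) exactly as in the transition equations, entering the overflow state once the trailing 0-run exceeds T<sub>g</sub>; the final value of x is the Hamming distance between b(t) and the closed output:

```python
OMEGA = None  # overflow marker: the trailing 0-run exceeds Tg

def track_state(b, bd, tg):
    """Follow one realization of (b, bd) through the state jumps (x, p, q).
    x = partial Hamming distance, p = 0s in the trailing run of bd,
    q = ground-truth 0s in that run.  Starting in the overflow state keeps
    a leading 0-run from being treated as a closable gap."""
    x, p, q = 0, OMEGA, OMEGA
    for bt, dt in zip(b, bd):
        if dt == 1:                    # the trailing gap, if any, ends here
            if p is not OMEGA:         # short gap: closing fills it with 1s
                x += 2 * q - p         # q new errors, p - q errors fixed
            if bt == 0:                # false alarm at the current pixel
                x += 1
            p, q = 0, 0
        else:                          # the trailing 0-run grows
            if bt == 1:                # miss detection error
                x += 1
            if p is not OMEGA:
                p += 1
                if bt == 0:
                    q += 1
                if p > tg:             # gap too long to be closed: overflow
                    p, q = OMEGA, OMEGA
    return x
```

For example, with b=(1,1,1,1), bd=(1,0,0,1), and closing parameter 2, the gap is bridged and track_state returns 0.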
FIG. 6 shows the EMC model for calculating the Hamming distance after closing. The symbols with a right arrow in the figure represent events. The left graph corresponds to the case when the input binary random variables b(t) are statistically independent, and the right graph to the case when the background model b(t) has the first order Markov property. Because the right graph assumes the Markov property, an additional variable, L, which records the previous ground truth value, is added to the state variable.
In FIG. 6, the dashed lines show four special cases and the solid lines indicate the closing operation cases. The upper dashed lines correspond to state transitions when the length of the trailing 0-run is longer than the closing parameter and the operator receives a 1. The lower dashed lines correspond to state jumps into the overflow state (p=ω), when the trailing 0-run length is above T<sub>g </sub>and the operator receives a 0. The lower solid lines correspond to state jumps as described below. When there is a correct rejection, p and q are each increased by 1. When there is a miss detection error, not only is p increased, but x is also increased because one more error is introduced (q, however, is not increased). The upper solid lines need more explanation. By definition, the p−q errors in the trailing run become correct labelings, and at the same time, the q correct 0's become error labelings after the closing operation. When there is a correct detection (the closing operation is applied in this case), the net adjustment is 2q−p errors. When there is a false alarm error, the adjustment is 2q−p+1 errors. The difference between these is due to the current false alarm error. From FIG. 6, it is clear that the state variable is not only a function of the operation used, but also a function of the input statistics. The independent observation assumption reduces the complexity of the graph and the computation requirements.
Statistical Markov Case Example
The right graph in FIG. 6 shows the EMC model for calculating the Hamming distance after closing when the background model b(t) has the first order Markov property. Note the use of an additional variable, L, in the state variable. It is needed to keep track of the previous state (i.e., the ground truth) value at t−1. Details of the state transition probability equations are omitted due to lack of space.
Statistics for Close-Open Operator Sequences
Previously, a method was introduced to calculate the Hamming distance distribution after a morphological closing by using the EMC approach. Next, it is shown how to generalize the EMC approach to obtain the statistics of close-open or open-close operator sequences. The main idea is to use additional state variables to store the intermediate information and to extend the graph from a single-layer representation to a multi-layered graph.
As one example of how to generate the state space and transition probability matrix for the EMC given an operator sequence, the Hamming distance after the closing-opening operator sequence (closing with parameter K<sub>1</sub>, followed by opening with parameter K<sub>2</sub>) will be used. To simplify the description, the term "operator K<sub>1</sub>" is used to indicate the closing operator with closing parameter K<sub>1 </sub>and the term "operator K<sub>2</sub>" to indicate the opening operator with opening parameter K<sub>2</sub>.
State Space Construction
View the sequence of binary observations up to a pixel instant T as partial observations of 0- and 1-runs. x is used to denote the Hamming distance and the state variable is (x,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>,q<sub>1</sub><sup>a</sup>). A component with the superscript a indicates how many of the pixels counted by the corresponding component without the superscript are correct detections. q<sub>1 </sub>is the number of 0's in the last 0-run and q<sub>1</sub><sup>a </sup>is the number of those 0's that are correct 0's. p<sub>2 </sub>is the number of 1's in the last 1-run and p<sub>2</sub><sup>a </sup>is the number of those 1's that are correct 1's. It is clear that there are q<sub>1</sub>−q<sub>1</sub><sup>a </sup>errors in the last 0-run and p<sub>2</sub>−p<sub>2</sub><sup>a </sup>errors in the last 1-run. If the closing operator is applied to the last 0-run, the q<sub>1</sub><sup>a </sup>correct 0's become errors in the resulting 1-run, while the q<sub>1</sub>−q<sub>1</sub><sup>a </sup>errors become correct 1's. Thus, the Hamming distance should be adjusted by q<sub>1</sub><sup>a</sup>−(q<sub>1</sub>−q<sub>1</sub><sup>a</sup>). Similar changes happen to the other components when the opening operator is applied.
Definition of State Transition Probabilities Λ<sub>t</sub>:
In the Hamming distance calculation, the events are:
<tables id="TABLEUS00001" num="00001"><table frame="none" colsep="0" rowsep="0"><tgroup align="left" colsep="0" rowsep="0" cols="4"><colspec colname="1" colwidth="42pt" align="center"/><colspec colname="2" colwidth="63pt" align="left"/><colspec colname="3" colwidth="42pt" align="center"/><colspec colname="4" colwidth="70pt" align="left"/><thead><row><entry namest="1" nameend="4" align="center" rowsep="1"/></row></thead><tbody valign="top"><row><entry>I<sub>0,0</sub></entry><entry>correct rejection</entry><entry>I<sub>0,1</sub></entry><entry>false alarm error</entry></row><row><entry>I<sub>1,0</sub></entry><entry>miss detection error</entry><entry>I<sub>1,1</sub></entry><entry>correct detection</entry></row><row><entry namest="1" nameend="4" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
When the event is I<sub>0,0</sub>, the state (x,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>,q<sub>1</sub><sup>a</sup>) moves to the state (x,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>+1,q<sub>1</sub><sup>a</sup>+1). When the event is I<sub>1,0</sub>, the state (x,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>,q<sub>1</sub><sup>a</sup>) moves to the state (x+1,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>+1,q<sub>1</sub><sup>a</sup>).
Since the first operator in the operator sequence is a closing operator, no comparisons or judgments are necessary when the system gets a 0 (event I<sub>0,0 </sub>or I<sub>1,0</sub>). When the event is I<sub>0,1 </sub>or I<sub>1,1</sub>, operator effects need to be considered. Since the only differences between I<sub>0,1 </sub>and I<sub>1,1 </sub>are that I<sub>0,1 </sub>increases the Hamming distance by 1 and contributes one less correct 1 to p<sub>2</sub><sup>a</sup>, the event I<sub>1,1 </sub>is used to show the state evolution. One of ordinary skill in the related art would be able to easily derive the states for event I<sub>0,1</sub>. Suppose the event is I<sub>1,1</sub>. If q<sub>1</sub>≦K<sub>1</sub>, the operator will close the last 0-run, and the state (x,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>,q<sub>1</sub><sup>a</sup>) will move to the state (x+2q<sub>1</sub><sup>a</sup>−q<sub>1</sub>,p<sub>2</sub>+q<sub>1</sub>+1,p<sub>2</sub><sup>a</sup>+q<sub>1</sub>−q<sub>1</sub><sup>a</sup>+1,0,0). If q<sub>1</sub>>K<sub>1</sub>, the closing operator will have no impact on the sequence and no impact on the state variable. Then, the operator K<sub>2 </sub>must be considered. If (q<sub>1</sub>>K<sub>1</sub>)&&(p<sub>2</sub>≦K<sub>2</sub>), the opening will take place and the state (x,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>,q<sub>1</sub><sup>a</sup>) will move to the state (x−p<sub>2</sub>+2p<sub>2</sub><sup>a</sup>,1,1,0,0). If (q<sub>1</sub>>K<sub>1</sub>)&&(p<sub>2</sub>>K<sub>2</sub>), the morphology operator sequence will have no effect, and the state (x,p<sub>2</sub>,p<sub>2</sub><sup>a</sup>,q<sub>1</sub>,q<sub>1</sub><sup>a</sup>) will move to the state (x,1,1,0,0).
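The state jumps for the closing(K<sub>1</sub>)-opening(K<sub>2</sub>) sequence can be transcribed directly into code. The following Python sketch is an illustration only: the function name step and the string event labels are assumptions, and the false alarm event I<sub>0,1 </sub>is omitted for brevity, as in the text.

```python
def step(state, event, k1, k2):
    """One state jump of the close(K1)-open(K2) EMC.
    state = (x, p2, p2a, q1, q1a); event is 'I00' (correct rejection),
    'I10' (miss detection), or 'I11' (correct detection)."""
    x, p2, p2a, q1, q1a = state
    if event == 'I00':                 # correct rejection: 0-run grows
        return (x, p2, p2a, q1 + 1, q1a + 1)
    if event == 'I10':                 # miss detection: one more error
        return (x + 1, p2, p2a, q1 + 1, q1a)
    # event 'I11': a correct detection ends the trailing 0-run
    if q1 <= k1:                       # closing bridges the gap
        return (x + 2 * q1a - q1, p2 + q1 + 1, p2a + q1 - q1a + 1, 0, 0)
    if p2 <= k2:                       # gap survives; opening erases the 1-run
        return (x - p2 + 2 * p2a, 1, 1, 0, 0)
    return (x, 1, 1, 0, 0)             # both operators leave the data alone
```

Each return value mirrors one of the five state moves enumerated above.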
Application
Numerous applications of the present invention can be realized, including pattern analysis or classification tasks, e.g., document processing, text analysis from video, and machine vision. The following describes license plate detection as one example of how the present invention may be applied.
Hidden Markov Models (HMMs) have been extensively used in the document understanding literature for text extraction. In this example application, license plate and background binary pattern distributions are modeled as mixtures of HMMs. The main point is that while one way to implement the binary series classification might be by direct implementation of a mixture of HMM-based classifiers, a morphological operator sequence with associated parameters serves as an approximation to the classification mechanism. Denote the HMM parameters for the background binary series and license plate binary series as Θ<sub>B </sub>and Θ<sub>L </sub>respectively. The objective is to obtain the mapping from (Θ<sub>B</sub>,Θ<sub>L</sub>) to a morphological operator sequence and its parameters (O<sub>S</sub>,Θ<sub>S</sub>). The major gain is in computational performance, as well as in providing strong statistical justification for the use of a morphological algorithm with associated parameters for the task.
In the application, pixels on the license plate are assumed to be 1s while non-plate pixels are assumed to be 0s. As before, the notation is: ideal signal b(t), detection result {circumflex over (b)}<sub>d</sub>(t), and grouping result {circumflex over (b)}<sub>g</sub>(t). The Hamming distance between b(t) and {circumflex over (b)}<sub>g</sub>(t) is used as the criterion function to evaluate the performance of the morphological sequence. Ideally, all the plate pixels should be detected and all of the non-plate pixels should be ignored. There are two types of errors. The type one error is the false alarm error, i.e., labeling a non-plate pixel (0) as a plate pixel (1); the type two error is the miss detection error, i.e., labeling a plate pixel (1) as a non-plate pixel (0). For example, in the binary images shown in FIGS. 7(a)-(f), all the black pixels in the non-plate region are false alarm errors and all the white pixels in the plate region are miss detection errors in the input to the morphology operator.
The license plate detection algorithm comprises the following steps. Initially, the image is inputted (FIG. 7a), thresholded using an adaptive mechanism (FIG. 7b), and downsampled (FIG. 7c); text areas are then classified by applying open-close operations in the horizontal direction (FIG. 7d) followed by close-open operations in the vertical direction (FIG. 7e). The objective is to determine the parameters of the horizontal morphology operator sequence (Θ<sub>h</sub>) and the vertical morphology operator sequence (Θ<sub>v</sub>) that minimize the probability of misclassification<FORM>P(Plate;Θ<sub>h</sub>,Θ<sub>v</sub>|truth=NonPlate)+P(NonPlate;Θ<sub>h</sub>,Θ<sub>v</sub>|truth=Plate).</FORM>
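The morphological portion of this pipeline can be sketched in Python, assuming 1-D closing fills interior 0-runs of length at most the parameter and 1-D opening removes 1-runs of length at most the parameter; thresholding, downsampling, and the actual parameter values are omitted, and all names are illustrative:

```python
def closing_1d(seq, t_c):
    """Fill every 0-run of length <= t_c bounded by 1s on both sides."""
    out, i, n = list(seq), 0, len(seq)
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1
            if 0 < i and j < n and (j - i) <= t_c:
                out[i:j] = [1] * (j - i)
            i = j
        else:
            i += 1
    return out

def opening_1d(seq, t_o):
    """Remove every 1-run of length <= t_o."""
    out, i, n = list(seq), 0, len(seq)
    while i < n:
        if out[i] == 1:
            j = i
            while j < n and out[j] == 1:
                j += 1
            if (j - i) <= t_o:
                out[i:j] = [0] * (j - i)
            i = j
        else:
            i += 1
    return out

def open_close(seq, t_o, t_c):
    return closing_1d(opening_1d(seq, t_o), t_c)

def close_open(seq, t_c, t_o):
    return opening_1d(closing_1d(seq, t_c), t_o)

def classify_text_regions(img, theta_h, theta_v):
    """Open-close each row, then close-open each column of a binary image."""
    rows = [open_close(list(r), *theta_h) for r in img]
    cols = [close_open(list(c), *theta_v) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

The row pass suppresses isolated detections and merges characters horizontally; the column pass then consolidates the candidate plate region vertically.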
The parameter optimization algorithm consists of the following steps:
 Fix the parameters of the adaptive-thresholding step (e.g., window size, percentage threshold).
 Choose a training set of images, apply adaptive thresholding, and consider samples of binary series in the background and in the plate region. Estimate HMM model parameters for the binary series in the background and plate region for each image. The distribution of binary patterns in the background and foreground for the collection of images is then approximated by a mixture of HMMs.
 Given the centroid of these HMM model parameter clusters, an EMC approach can be used to compute the probability of error for various morphological operator parameter combinations.
 The operator sequence and parameters that minimize the weighted sum of the probability of false alarm and miss detection are considered to be the best operator. According to the requirements of real applications, different weightings can be applied to the false alarm and miss detection probabilities.
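The parameter search in the steps above can be sketched as an exhaustive grid search. In this illustrative Python sketch the probability of error computed by the EMC is replaced by an empirical weighted error over sample (ground-truth, detection) pairs; all names and the close-open parameterization are assumptions:

```python
import itertools

def closing_1d(seq, t_c):  # fill interior 0-runs of length <= t_c
    out, i, n = list(seq), 0, len(seq)
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1
            if 0 < i and j < n and (j - i) <= t_c:
                out[i:j] = [1] * (j - i)
            i = j
        else:
            i += 1
    return out

def opening_1d(seq, t_o):  # remove 1-runs of length <= t_o
    out, i, n = list(seq), 0, len(seq)
    while i < n:
        if out[i] == 1:
            j = i
            while j < n and out[j] == 1:
                j += 1
            if (j - i) <= t_o:
                out[i:j] = [0] * (j - i)
            i = j
        else:
            i += 1
    return out

def weighted_error(truth, pred, w_fa=1.0, w_md=1.0):
    """Weighted sum of false alarms and miss detections."""
    fa = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    md = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    return w_fa * fa + w_md * md

def best_close_open(samples, k1_range, k2_range, w_fa=1.0, w_md=1.0):
    """Pick the close-open parameters (K1, K2) minimizing the weighted
    error over (ground-truth, detection) sample pairs."""
    def score(k1, k2):
        return sum(weighted_error(b, opening_1d(closing_1d(bd, k1), k2),
                                  w_fa, w_md) for b, bd in samples)
    return min(itertools.product(k1_range, k2_range),
               key=lambda ks: score(*ks))
```

Adjusting w_fa and w_md implements the different weightings of false alarm and miss detection probabilities mentioned above.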
Experimentally, 60 license plate images were initially supplied. From these, 50 images were randomly chosen as the training set and 10 images as the testing set. The size of the images was 768×512, and the plate in the images was typically 135 pixels wide and 65 pixels high. Transition probabilities of the Markov model for the original-size image were estimated. From the Markov property and the description of how the EMC approach works, it is apparent that if the morphology parameters were calculated for the original-size image, the state space would be prohibitively large, leading to high computational cost. Instead, the system was configured to downsample by a factor of 4 and use the downsampled image as the output of the detection stage (and also the input to the morphology stage). Although downsampling increases the false alarm rate in the non-plate region and decreases the miss detection rate in the plate region, the main objective was simply to illustrate the utility of the EMC for this application.
From various experiments, it was concluded that the "optimum" morphological operator sequence for the horizontal direction was an open-close sequence with parameters 11 and 19, respectively. For the vertical direction, the best operator sequence was found to be a close-open sequence with parameters 2 and 5, respectively. FIG. 8 shows results on the test set of images obtained by using the chosen morphological operator sequences.
Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention.