Feed forward feed back multiple neural network with context driven recognition
Abstract
A recognition system is disclosed that represents an object in terms of its constituent parts in a translationally invariant manner and provides scale-invariant recognition. The system effectively recognizes patterns that are only partially present in the input signal or partially occluded, and also provides an effective representation for sequences within the input signal. It uses dynamically determined, context-based expectations to identify the individual features/parts of an object to be recognized. The system is computationally efficient and capable of highly parallel implementation, and further includes a mechanism for improving the preprocessing of individual sections of an input pattern, either by applying one or more preprocessors selected from a set of several preprocessors, or by changing the parameters within a single preprocessor.
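As a toy illustration of the unit hierarchy described above, the sketch below lets each simple unit report the greatest weighted activation over the detection units in its receptive field, and lets a complex unit combine those maxima by summation. The function names, the multiplicative weighting, and the use of summation for the "combining" step are assumptions for illustration; the patent does not fix these choices.

```python
def simple_unit_response(detections, weights):
    """Greatest weighted activation level of one simple unit: the
    maximum of weight * detection-unit activation, taken over the
    detection units within its receptive field."""
    return max(d * w for d, w in zip(detections, weights))

def complex_unit_activation(fields, field_weights):
    """Combine the greatest weighted activations of the simple units
    associated with one complex unit (summation is an assumption;
    the claims only require 'combining')."""
    return sum(simple_unit_response(d, w)
               for d, w in zip(fields, field_weights))
```

For example, with two simple units whose fields report activations [1.0, 2.0] and [3.0, 0.0] under weights [0.5, 1.0] and [1.0, 1.0], the complex unit's level is max(0.5, 2.0) + max(3.0, 0.0) = 5.0.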
13 Claims
1. A method for providing automated pattern recognition, comprising:
selecting a central component of an input signal;
positioning a plurality of complex units with respect to said central component of said input signal, each of said complex units associated with a plurality of simple units, one of said plurality of simple units designated as a central unit, wherein said positioning associates each of said central units with said central component of said input signal;
determining, for each of said simple units, a greatest weighted activation level responsive to at least one detection unit activation level within an associated receptive field;
combining, for each of said complex units, said greatest weighted activation levels of each of said associated plurality of simple units, said combining resulting in a respective complex unit activation level for each of said complex units;
selecting one of said complex units associated with a greatest one of said complex unit activation levels;
recording an object feature associated with said central unit of said selected one of said complex units;
selecting, responsive to a receptive field of one of said simple units associated with said selected one of said complex units other than said central unit, a new central component of said input signal;
repositioning said plurality of complex units with respect to said new central component of said input signal, wherein said repositioning associates each of said central units with said new central component of said input signal; and
providing a recognition indication when a set of recorded object features comprises a set of object features associated with an object to be recognized.
2. The method of claim 1, further comprising:
determining at least one contextual expectation with regard to said input signal; and
wherein said selecting said new central component of said input signal is further responsive to said at least one contextual expectation with regard to said input signal.
3. The method of claim 2, wherein said determining said at least one contextual expectation comprises determining a temporal expectation with regard to said input signal.
4. The method of claim 2, wherein said determining said at least one contextual expectation comprises determining a locational expectation with regard to said input signal.
5. The method of claim 2, wherein said determining said at least one contextual expectation comprises determining a contextual expectation with respect to at least one component of a recognizable object.
6. The method of claim 5, wherein said input signal represents handwriting, wherein said recognizable object comprises at least one word, and wherein said at least one contextual expectation with respect to said at least one component of said recognizable object comprises at least one expected letter within said at least one word.
7. The method of claim 1, wherein said input signal represents audio information.
8. The method of claim 1, wherein said input signal represents video information.
9. The method of claim 1, further comprising:
determining at least one contextual expectation with regard to said input signal; and
wherein said determining said greatest weighted activation level is further responsive to said at least one contextual expectation with regard to said input signal.
10. The method of claim 9, wherein said determining said at least one contextual expectation comprises determining a temporal expectation with regard to said input signal.
11. The method of claim 9, wherein said determining said at least one contextual expectation comprises determining a locational expectation with regard to said input signal.
12. The method of claim 9, wherein said determining said at least one contextual expectation comprises determining a contextual expectation with respect to at least one component of a recognizable object.
13. The method of claim 12, wherein said input signal represents handwriting, wherein said recognizable object comprises at least one word, and wherein said at least one contextual expectation with respect to said at least one component of said recognizable object comprises at least one expected letter within said at least one word.
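Read as an algorithm, independent claim 1 describes a loop: position the complex units at a central component, select the most active complex unit, record the object feature of its central unit, and let the receptive field of a non-central simple unit choose the next central component, until the recorded features cover an object. The toy below runs that loop over a one-dimensional signal of letters; the (feature, next_offset) templates standing in for complex units, the 0/1 activation rule, and the subset test for the recognition indication are all illustrative assumptions, not the patent's implementation.

```python
def recognize(signal, templates, target, start=0, max_steps=20):
    """Toy version of the claim 1 loop over a string 'signal'.
    Each template (feature, next_offset) stands in for a complex unit:
    its activation is 1.0 when its feature matches the component at the
    current center, else 0.0."""
    recorded, center = set(), start
    for _ in range(max_steps):
        # select the complex unit with the greatest activation level
        best = max(templates,
                   key=lambda t: 1.0 if t[0] == signal[center] else 0.0)
        if best[0] != signal[center]:
            return False                  # no unit responded here
        recorded.add(best[0])             # record the object feature
        if target <= recorded:            # all object features recorded
            return True                   # recognition indication
        # reposition: a non-central unit's field picks the new center
        center += best[1]
        if not 0 <= center < len(signal):
            return False
    return False
```

With templates for the letters of "cat", the loop walks the signal one component at a time and signals recognition once every expected feature has been recorded.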
Specification