Artificial neural networks based on a low-order model of biological neural networks
First Claim
1. An artificial neural network for processing data, comprising at least one processing unit, a first processing unit including
(a) at least one artificial neuronal encoder for encoding a vector into a neuronal code;
(b) a means for evaluating a code deviation vector that is the deviation of a neuronal code obtained by said artificial neuronal encoder from a neuronal code average;
(c) a plurality of artificial synapse memories each for storing a component of a code deviation accumulation vector;
(d) a first means for evaluating a first product of a component of a code deviation accumulation vector, a masking factor, and a component of a code deviation vector;
(e) an artificial nonspiking neuron processor for evaluating a first sum of first products obtained by said first means;
(f) a plurality of artificial synapse memories each for storing an entry of a code covariance matrix;
(g) a second means for evaluating a second product of an entry of a code covariance matrix, a masking factor, and a component of a code deviation vector; and
(h) at least one artificial spiking neuron processor for evaluating a second sum of second products obtained by said second means, and for using at least said second sum and a first sum obtained by said artificial nonspiking neuron processor to evaluate a representation of a first empirical probability distribution of a component of a label of a vector that is input to said first processing unit.
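For readers tracing the data flow, items (a)-(h) can be condensed into a short sketch. This is a minimal, non-authoritative illustration in NumPy; the names (encode, c_avg, D, C, m) and the final ratio used to represent the empirical probability distribution are assumptions for illustration, not definitions taken from the patent's specification.

```python
import numpy as np

def processing_unit(x, encode, c_avg, D, C, m):
    """One hypothetical forward pass through the processing unit of claim 1.

    Assumed names (not the patent's):
      encode : artificial neuronal encoder, maps input vector x to a code
      c_avg  : neuronal code average
      D      : code deviation accumulation vector (synapse memories, item (c))
      C      : code covariance matrix (synapse memories, item (f))
      m      : masking factors
    """
    c = encode(x)            # (a) encode the vector into a neuronal code
    d = c - c_avg            # (b) code deviation vector
    s1 = np.sum(D * m * d)   # (d)-(e) nonspiking neuron: sum of first products
    s2 = C @ (m * d)         # (g)-(h) spiking neurons: sums of second products
    # (h) combine the sums into a representation of an empirical probability
    # distribution over a label component; the ratio below is only one
    # plausible reading -- the specification defines the exact form.
    return s2 / (s1 + 1e-12)
```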
Abstract
A low-order model (LOM) of biological neural networks and its mathematical equivalents, including the clusterer interpreter probabilistic associative memory (CIPAM), are disclosed. They are artificial neural networks (ANNs) organized as networks of processing units (PUs), each PU comprising artificial neuronal encoders, synapses, spiking/nonspiking neurons, and a scheme for maximal generalization. If the weights in the artificial synapses in a PU have been learned (and then fixed) or can be adjusted by the unsupervised accumulation rule and the unsupervised covariance rule (or supervised covariance rule), the PU is called an unsupervised (or supervised) PU. The disclosed ANNs, with these Hebbian-type learning rules, can learn large numbers of large input vectors with temporally/spatially hierarchical causes with ease, and can recognize such causes with maximal generalization despite corruption, distortion and occlusion. An ANN with a network of unsupervised PUs (called a clusterer) and offshoot supervised PUs (called an interpreter) is an architecture for many applications.
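The abstract names an unsupervised accumulation rule and an (un)supervised covariance rule as the Hebbian-type learning rules. A minimal sketch of what such updates could look like follows, assuming a decay factor lam for boundedness; the exact update equations are given only in the specification.

```python
import numpy as np

def hebbian_updates(D, C, d, d_out, lam=0.99):
    """Hedged sketch of the two Hebbian-type rules named in the abstract.

    Assumptions (not taken from the specification):
      d     : current code deviation vector of the PU's input
      d_out : deviation of the PU's output code (unsupervised case) or of a
              provided label (supervised covariance rule)
      lam   : a forgetting/decay factor, assumed here for boundedness
    """
    D = lam * D + d                    # unsupervised accumulation rule
    C = lam * C + np.outer(d_out, d)   # (un)supervised covariance rule
    return D, C
```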
34 Claims
1. An artificial neural network for processing data (set forth in full as the First Claim above). - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
10. A learning machine for processing data, said learning machine comprising at least one processing unit, a first processing unit including
(a) an encoding means for encoding a vector into a code;
(b) a means for evaluating a code deviation vector;
(c) a memory means for storing at least one first weighted sum of an entry of a code covariance matrix and a component of a code deviation accumulation vector;
(d) a memory means for storing at least one masking factor;
(e) a multiplying means for evaluating a product of a second weighted sum of an entry of a code covariance matrix and a component of a code deviation accumulation vector, a masking factor, and a component of a code deviation vector;
(f) a summing means for evaluating a sum of products obtained by said multiplying means;
(g) an evaluation means for using at least a sum obtained by said summing means to evaluate a representation of an empirical probability distribution of a component of a label of a vector input to said first processing unit. - View Dependent Claims (11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21)
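Claim 10 differs from claim 1 chiefly in item (c): each synapse memory stores a weighted sum of a covariance entry and an accumulation component rather than storing the two separately. One plausible reading, with assumed weights a and b, is sketched below; the actual combination is defined in the specification.

```python
import numpy as np

def processing_unit_weighted(x, encode, c_avg, C, D, m, a=1.0, b=1.0):
    # Hypothetical reading of claim 10: each synapse memory holds a weighted
    # sum of a covariance entry and an accumulation component, assumed here
    # to be w[j, i] = a*C[j, i] + b*D[i] with assumed weights a and b.
    d = encode(x) - c_avg              # (a)-(b): code and code deviation vector
    W = a * C + b * D[np.newaxis, :]   # (c): stored weighted sums
    s = W @ (m * d)                    # (e)-(f): mask, multiply, and sum
    # (g): one plausible way to read off a probability representation;
    # the specification defines the actual evaluation.
    return s / (np.abs(s).sum() + 1e-12)
```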
22. A method for processing data, said method comprising steps of:
(a) encoding a subvector of a first vector into a code;
(b) evaluating a code deviation vector that is the deviation of a code from a code average;
(c) evaluating a product of a weighted sum of a component of a code deviation accumulation vector and an entry of a code covariance matrix, a masking factor, and a component of a code deviation vector;
(d) evaluating a sum of products obtained by said step of evaluating a product; and
(e) using at least a sum of products obtained by said step of evaluating a sum to evaluate a representation of an empirical probability distribution of a component of a label of said first vector. - View Dependent Claims (23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34)
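Steps (a)-(e) mirror the forward pass of the apparatus claims. A toy end-to-end run, reusing the hypothetical processing_unit and hebbian_updates sketches above (the shapes, the binary encoder, and the random training loop are all assumptions for illustration), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_code, n_label = 16, 4
encode = lambda x: (x > 0).astype(float)   # a toy binary neuronal encoder
c_avg = np.full(n_code, 0.5)               # assumed fixed code average
D = np.zeros(n_code)                       # code deviation accumulation vector
C = np.zeros((n_label, n_code))            # code covariance matrix
m = np.ones(n_code)                        # masking factors (no masking)

for _ in range(100):                       # steps (a)-(b): encode and deviate
    x = rng.standard_normal(n_code)
    label = np.eye(n_label)[rng.integers(n_label)]
    D, C = hebbian_updates(D, C, encode(x) - c_avg, label - label.mean())

# steps (c)-(e): masked products, their sum, and the distribution representation
print(processing_unit(rng.standard_normal(n_code), encode, c_avg, D, C, m))
```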
Specification