Multi-layer development network having in-place learning
First Claim
1. A multi-layer in-place learning developmental network, comprising:
- an input layer having a plurality of artificial neurons;
multiple intermediate layers having a plurality of artificial neurons, each intermediate layer having artificial neurons that receive an input vector signal from neurons in previous layers, an input vector signal from neurons in the same layer, and an input vector signal from neurons in subsequent layers; and
an output layer having a plurality of artificial neurons, where the neurons in the network implement an in-place learning algorithm that estimates a feature vector of a given neuron by an amnesic average of input vectors weighted by the corresponding response of the given neuron, where amnesic is a recursive, incremental computation of the input vector weighted by the response, such that a direction of the feature vector and a variance of signals in the region projected onto the feature vector are both recursively estimated with plasticity scheduling.
Abstract
An in-place learning algorithm is provided for a multi-layer developmental network. The algorithm includes: defining a sample space as a plurality of cells fully connected to a common input; dividing the sample space into mutually non-overlapping regions, where each region is represented by a neuron having a single feature vector; and estimating a feature vector of a given neuron by an amnesic average of an input vector weighted by a response of the given neuron, where amnesic is a recursive computation of the input vector weighted by the response such that the direction of the feature vector and the variance of signal in the region projected onto the feature vector are both recursively estimated with plasticity scheduling.
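The amnesic average described in the abstract can be sketched as a recursive update in which the learning rate depends on the neuron's update count (its "age"). The sketch below is a minimal illustration in Python; the piecewise plasticity schedule and its breakpoints (`t1`, `t2`, `c`, `r`) are illustrative assumptions, not values specified in this text.

```python
import numpy as np

def amnesic_mu(n, t1=20.0, t2=200.0, c=2.0, r=2000.0):
    # Illustrative piecewise plasticity schedule mu(n): zero while the
    # neuron is young, rising toward a small constant plus a slow drift.
    # The breakpoints are assumptions for this sketch, not from the patent.
    if n < t1:
        return 0.0
    if n < t2:
        return c * (n - t1) / (t2 - t1)
    return c + (n - t2) / r

def amnesic_update(v, x, y, n):
    """One recursive, incremental amnesic-average step.

    v : current feature-vector estimate of the neuron
    x : current input vector
    y : the neuron's response to x (the update is response-weighted)
    n : the neuron's update count
    """
    mu = amnesic_mu(n)
    w1 = (n - 1.0 - mu) / n   # retention weight on the old estimate
    w2 = (1.0 + mu) / n       # learning weight on the new, response-weighted input
    return w1 * v + w2 * y * x
```

With `n = 1` the retention weight is zero, so the first update simply adopts the (response-weighted) input; as `n` grows, new inputs are blended in with a shrinking weight, while the amnesic term `mu(n)` keeps the estimate from freezing entirely.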
24 Citations
28 Claims
1. A multi-layer in-place learning developmental network, comprising:
- an input layer having a plurality of artificial neurons;
multiple intermediate layers having a plurality of artificial neurons, each intermediate layer having artificial neurons that receive an input vector signal from neurons in previous layers, an input vector signal from neurons in the same layer, and an input vector signal from neurons in subsequent layers; and
an output layer having a plurality of artificial neurons, where the neurons in the network implement an in-place learning algorithm that estimates a feature vector of a given neuron by an amnesic average of input vectors weighted by the corresponding response of the given neuron, where amnesic is a recursive, incremental computation of the input vector weighted by the response, such that a direction of the feature vector and a variance of signals in the region projected onto the feature vector are both recursively estimated with plasticity scheduling. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
10. An in-place learning algorithm for a multi-layer network, comprising:
- defining a sample space as a plurality of cells fully connected to a common input;
dividing the sample space into mutually non-overlapping regions, where each region is represented by a neuron having a single feature vector; and
estimating a feature vector of a given neuron by an amnesic average of an input vector weighted by the response of the given neuron, where amnesic is a recursive computation of the input vector weighted by the response such that a direction of the feature vector and a variance of signals in the region projected onto the feature vector are both recursively estimated with plasticity scheduling. - View Dependent Claims (11, 12, 13, 14, 15, 16)
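The division of the sample space into mutually non-overlapping regions, one per neuron, can be illustrated as a winner-take-all assignment: each input falls in the region of the neuron whose feature vector gives it the largest response. This is a hedged sketch under that interpretation; the function name `assign_regions` and the inner-product response measure are assumptions for illustration.

```python
import numpy as np

def assign_regions(X, V):
    """Assign each input sample to exactly one neuron's region.

    X : (num_samples, d) inputs drawn from the common input space
    V : (num_neurons, d) one feature vector per neuron

    Responses are inner products with the normalized feature vectors;
    taking the argmax makes the regions mutually non-overlapping by
    construction (each sample belongs to exactly one winning neuron).
    """
    Vn = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    responses = X @ Vn.T                 # response of every neuron to every sample
    return np.argmax(responses, axis=1)  # index of the winning region per sample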
17. An in-place learning algorithm for a multi-layer network having a plurality of neurons organized into multiple layers, comprising:
- computing a response for all neurons in a given layer of the network using a current input sample;
selecting one or more neurons in the given layer having the largest response;
updating the selected neurons using temporally scheduled plasticity, wherein computing a response further comprises estimating a feature vector of a given neuron by an amnesic average of an input vector weighted by the corresponding response of the given neuron, where amnesic is a recursive computation of the input vector weighted by the response such that a direction of the feature vector and a variance of signals in the region projected onto the feature vector are both recursively estimated with plasticity scheduling. - View Dependent Claims (18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28)
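The three steps of claim 17 (compute responses for all neurons in a layer, select the top responders, update only the winners with age-dependent plasticity) can be sketched as a single layer step. This is a minimal illustration, not the patent's implementation: the function name `layer_step`, the top-k selection via sorting, and the toy plasticity schedule are all assumptions.

```python
import numpy as np

def layer_step(V, ages, x, k=1):
    """One in-place learning step for a layer, following claim 17's outline.

    V    : (num_neurons, d) feature vectors, updated in place
    ages : (num_neurons,) per-neuron update counts
    x    : (d,) current input sample
    k    : number of winners to update
    """
    # Step 1: compute a response for all neurons using the current input.
    norms = np.linalg.norm(V, axis=1) + 1e-12
    responses = (V / norms[:, None]) @ x
    # Step 2: select the k neurons with the largest response.
    winners = np.argsort(responses)[-k:]
    # Step 3: update only the selected neurons with temporally scheduled
    # plasticity (a toy schedule here: mu jumps once the neuron is "old").
    for i in winners:
        ages[i] += 1
        n = ages[i]
        mu = 2.0 if n > 200 else 0.0          # illustrative schedule, not from the text
        w1 = (n - 1.0 - mu) / n               # retention weight
        w2 = (1.0 + mu) / n                   # learning weight
        V[i] = w1 * V[i] + w2 * responses[i] * x   # amnesic, response-weighted update
    return responses, winners
```

Because only the winning neurons are touched and each neuron's update uses only its own response, age, and the shared input, the learning is "in-place": no separate global optimizer or extra per-connection state is required.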
Specification