Method and system for training a neural network with adaptive weight updating and adaptive pruning in principal component space
Abstract
A signal processing system and method are presented that accomplish signal processing using a neural network incorporating adaptive weight updating and adaptive pruning for tracking non-stationary signals. The method updates the structural parameters of the neural network in principal component space (eigenspace) for every newly available input sample. The non-stationary signal is recursively transformed into a matrix of eigenvectors with a corresponding matrix of eigenvalues. The method applies principal component pruning, deleting the eigenmodes that correspond to the smallest saliencies, where the sum of the deleted saliencies is less than a predefined threshold level. Removing eigenmodes with low saliencies reduces the effective number of parameters and generally improves generalization. The output is then computed using the remaining eigenmodes, and the weights of the neural network are updated using adaptive filtering techniques.
77 Citations
21 Claims
-
1. A neural network having a plurality of weights for receiving a sequence of signal inputs xt, xt+1, xt+2, . . . , each input xt comprising n signal components x1(t), x2(t-1), . . . , xn(t-(n-1)), and for generating an output signal that anticipates the behavior of said input signal for a number of time samples ahead, said neural network comprising:

transformation means for transforming a set of n signal inputs into a set of principal components having a saliency assigned to each of said principal components;

pruning means, coupled to said transformation means, for pruning a number of said principal components that correspond to the smallest saliencies, where the number of said pruned principal components is limited by a sum of said saliencies of said pruned principal components to be less than or equal to a predefined threshold level, leaving a remaining set of principal components;

first computing means, coupled to said pruning means, for computing the output signal using said set of remaining principal components; and

updating means, coupled to said first computing means, for updating the weights of the neural network adaptively based on an error between a target output and the output signal.
-
2. A neural network having a plurality of weights for receiving a sequence of signal inputs xt, xt+1, xt+2, . . . , each input xt comprising n signal components x1(t), x2(t-1), . . . , xn(t-(n-1)), and for generating an output signal that anticipates the behavior of said input signal for a number of time samples ahead, said neural network comprising:

transformation means for transforming a set of n signal inputs into a set of principal components having a saliency assigned to each of said principal components;

pruning means, coupled to said transformation means, for pruning a number of said principal components that correspond to the smallest saliencies, where the number of said pruned principal components is limited by a sum of said saliencies of said pruned principal components to be less than or equal to a predefined threshold level, leaving a remaining set of principal components;

first computing means, coupled to said pruning means, for computing the output signal using said set of remaining principal components; and

updating means, coupled to said first computing means, for updating the weights of the neural network adaptively based on an error between a target output and the output signal,

wherein said transformation means includes an estimation means for recursively estimating a current set of principal components from a set of principal components of a previously transformed set of n signal inputs.
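Claim 2's recursive estimation of the current principal components can be realized in several ways. The sketch below is one common choice, not necessarily the patent's exact recursion: a rank-one forgetting-factor update of a running covariance estimate, followed by re-diagonalization to obtain Qt (eigenvectors) and Λt (eigenvalues). The function name and the parameter μ are illustrative assumptions.

```python
import numpy as np

def update_eigenspace(C_prev, x, mu=0.95):
    """Recursively track Q_t and Lambda_t from the previous estimate (sketch).

    C_prev: (n, n) running covariance estimate from time t-1.
    x: (n,) new input sample.
    mu: forgetting factor weighting the old estimate.
    """
    # Rank-one forgetting-factor update of the covariance estimate.
    C = mu * C_prev + (1.0 - mu) * np.outer(x, x)
    # Re-diagonalize: Q_t holds eigenvectors, Lambda_t the eigenvalues
    # (np.linalg.eigh returns them in ascending order).
    eigvals, Q = np.linalg.eigh(C)
    return C, Q, eigvals
```

Feeding repeated samples along one direction makes the top eigenvector align with that direction while the other eigenvalues decay geometrically at rate μ.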
-
4. The neural network of claim 2, wherein said estimation means estimates said current set of principal components by directly calculating a matrix Q_t and a matrix Λ_t, where Q_t is a matrix of eigenvectors and Λ_t is a matrix of eigenvalues.
-
5. The neural network of claim 2, wherein said saliencies are calculated in accordance with the formula

s_i(t) = [v_i x_i]^T [v_i x_i] = v_i^T v_i x_i^2

where x̃_t is the Karhunen-Loève expansion of x_t and v_i is a p×1 vector of W_t^T defined as W_t^T = [v_1, v_2, . . . , v_n].
-
6. The neural network of claim 2, wherein said saliencies are calculated in accordance with the formula

s_i(t) = v_i^T v_i x̄_i^2(t)

where x̃_t is the Karhunen-Loève expansion of x_t, v_i is a p×1 vector of W_t^T defined as W_t^T = [v_1, v_2, . . . , v_n], x̄_i^2 is defined as x̄_i^2(t) = μ x̄_i^2(t-1) + (1-μ) x_i^2(t), and μ is a forgetting factor.
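The two saliency variants above differ only in whether the eigenmode power is instantaneous or exponentially averaged with forgetting factor μ. A minimal sketch of both, with a hypothetical function name and single-output weights so that v_i^T v_i reduces to v_i^2:

```python
import numpy as np

def saliencies(v, x_tilde, power_prev=None, mu=0.9):
    """Per-eigenmode saliency, sketching the two claimed formulas.

    v: (n,) weights in eigenspace (scalar per mode for a single output).
    x_tilde: (n,) Karhunen-Loeve expansion of the current input.
    power_prev: previous running estimate of the mode powers, or None
        for the instantaneous variant.
    mu: forgetting factor for the running power estimate.
    """
    if power_prev is None:
        power = x_tilde**2                     # instantaneous power
    else:
        # Running power: bar{x}_i^2(t) = mu*bar{x}_i^2(t-1) + (1-mu)*x_i^2(t)
        power = mu * power_prev + (1.0 - mu) * x_tilde**2
    s = v**2 * power                           # s_i = v_i^T v_i * power_i
    return s, power
```

Returning the updated power estimate lets the caller thread it through successive samples, which is what makes the averaged variant track non-stationary inputs.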
7. The neural network of claim 2, wherein said saliencies are calculated in accordance with the formula

s_i = λ_i v_i^T v_i,

where λ_i is the i-th element on the diagonal of Λ_t and v_i is a p×1 vector of W_t^T defined as W_t^T = [v_1, v_2, . . . , v_n].
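As a one-line check (my observation, not part of the claims): the eigenvalue form above agrees in expectation with the data-driven saliencies, since the Karhunen-Loève coordinates are uncorrelated with variances λ_i, so

E[s_i(t)] = v_i^T v_i E[x_i^2(t)] = λ_i v_i^T v_i = s_i.

The same holds for the forgetting-factor average in steady state, since the running estimate x̄_i^2(t) also converges to λ_i in expectation.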
-
8. The neural network of claim 2, wherein said pruning means comprises:

second computing means for computing an output in principal component space;

identifying means, coupled to said second computing means, for identifying said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level; and

third computing means, coupled to said identifying means, for computing a pruning vector from said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level.
-
9. The neural network of claim 8, wherein said updating means comprises:

identifying means for identifying said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level; and

fourth computing means, coupled to said identifying means, for computing a weight matrix in regular space from said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level.
-
10. A method of signal processing, utilizing a neural network having a plurality of weights, for receiving a sequence of signal inputs xt, xt+1, xt+2, . . . , each input xt comprising n signal components x1(t), x2(t-1), . . . , xn(t-(n-1)), and for generating an output signal that anticipates the behavior of said input signal for a number of time samples ahead, said method comprising the steps of:

(a) transforming a set of n signal inputs into a set of principal components having a saliency assigned to each of said principal components;

(b) pruning a number of said principal components that correspond to the smallest saliencies, where the number of said pruned principal components is limited by a sum of said saliencies of said pruned principal components to be less than or equal to a predefined threshold level, leaving a remaining set of principal components;

(c) computing said output signal using said remaining set of principal components; and

(d) updating the weights of the neural network adaptively based on an error between a target output and the output signal.
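The abstract says the weights are updated "using adaptive filtering techniques" without fixing a particular one, so step (d) can be sketched with a plain LMS rule applied to the eigenspace weights. The function name, the learning rate η, and the choice to freeze pruned modes are my assumptions for illustration.

```python
import numpy as np

def lms_update(v, x_tilde, target, keep, eta=0.01):
    """One LMS step on eigenspace weights v (illustrative sketch of step (d)).

    v: (n,) weights in principal component space.
    x_tilde: (n,) Karhunen-Loeve expansion of the current input.
    target: desired output for this sample.
    keep: boolean mask of the unpruned eigenmodes.
    eta: learning rate (an assumed hyperparameter).
    """
    # Output from the remaining eigenmodes only (step (c)).
    y = v[keep] @ x_tilde[keep]
    err = target - y
    # Gradient step on the surviving modes; pruned modes stay frozen.
    v = v.copy()
    v[keep] += eta * err * x_tilde[keep]
    return v, y, err
```

Freezing the pruned modes keeps the effective parameter count reduced between pruning decisions; a later transformation step can of course revive a mode whose saliency grows.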
-
11. A method of signal processing, utilizing a neural network having a plurality of weights, for receiving a sequence of signal inputs xt, xt+1, xt+2, . . . , each input xt comprising n signal components x1(t), x2(t-1), . . . , xn(t-(n-1)), and for generating an output signal that anticipates the behavior of said input signal for a number of time samples ahead, said method comprising the steps of:

(a) transforming a set of n signal inputs into a set of principal components having a saliency assigned to each of said principal components;

(b) pruning a number of said principal components that correspond to the smallest saliencies, where the number of said pruned principal components is limited by a sum of said saliencies of said pruned principal components to be less than or equal to a predefined threshold level, leaving a remaining set of principal components;

(c) computing said output signal using said remaining set of principal components; and

(d) updating the weights of the neural network adaptively based on an error between a target output and the output signal,

wherein said transforming step includes an estimation step for recursively estimating a current set of principal components from a set of principal components of a previously transformed set of n signal inputs.
-
14. The method of claim 11, wherein said estimating step estimates said current set of principal components by directly calculating a matrix Q_t and a matrix Λ_t, where Q_t is a matrix of eigenvectors and Λ_t is a matrix of eigenvalues.
-
15. The method of claim 11, wherein said saliencies are calculated in accordance with the formula

s_i(t) = [v_i x_i]^T [v_i x_i] = v_i^T v_i x_i^2

where x̃_t is the Karhunen-Loève expansion of x_t and v_i is a p×1 vector of W_t^T defined as W_t^T = [v_1, v_2, . . . , v_n].
-
16. The method of claim 11, wherein said saliencies are calculated in accordance with the formula

s_i(t) = v_i^T v_i x̄_i^2(t)

where x̃_t is the Karhunen-Loève expansion of x_t, v_i is a p×1 vector of W_t^T defined as W_t^T = [v_1, v_2, . . . , v_n], x̄_i^2 is defined as x̄_i^2(t) = μ x̄_i^2(t-1) + (1-μ) x_i^2(t), and μ is a forgetting factor.
17. The method of claim 11, wherein said saliencies are calculated in accordance with the formula

s_i = λ_i v_i^T v_i,

where λ_i is the i-th element on the diagonal of Λ_t and v_i is a p×1 vector of W_t^T defined as W_t^T = [v_1, v_2, . . . , v_n].
-
18. The method of claim 11, wherein said pruning step comprises the steps of:

computing an output in principal component space;

identifying said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level; and

computing a pruning vector from said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level.
-
19. The method of claim 18, wherein said updating step comprises the steps of:

identifying said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level; and

computing a weight matrix in regular space from said principal components that correspond to the smallest saliencies, where a sum of said smallest saliencies is less than a predefined threshold level.
-
20. A signal processing system having a neural network for receiving a sequence of signal inputs xt, xt+1, xt+2, . . . , each input xt comprising n signal components x1(t), x2(t-1), . . . , xn(t-(n-1)), and for generating an output signal that anticipates the behavior of said input signal for a number of time samples ahead, said neural network having a plurality of hierarchically connected nodes forming a plurality of layers, each of said layers consisting of at least one node, said nodes being interconnected with a plurality of weights, said signal processing system comprising:

transformation means for transforming a set of n signal inputs into a set of principal components having a saliency assigned to each of said principal components;

pruning means, coupled to said transformation means, for pruning a number of said principal components that correspond to the smallest saliencies, where the number of said pruned principal components is limited by a sum of said saliencies of said pruned principal components to be less than or equal to a predefined threshold level, leaving a remaining set of principal components;

computing means, coupled to said pruning means, for computing the output signal of a layer of the neural network using said set of remaining principal components; and

updating means, coupled to said computing means, for updating the weights of the neural network adaptively based on an error between a target output and the output signal.
-
Specification