Neural filter architecture for overcoming noise interference in a non-linear, adaptive manner
Abstract
The non-linear filter architecture according to the invention provides a neural network for modelling a non-linear transfer function, there being supplied to the neural network, on the input side, the filter input signals f(n), . . . , f(n-i), . . . , f(n-M), a time index signal i and the values p(n), . . . , p(n-i), . . . , p(n-M) of the parameter vector p. The neural network uses these values to calculate, at each time i, output values which are summed for the M+1 times i=0, . . . , M, as a result of which the filter output function g(n) is formed. The invention can be used for implementing a method for overcoming noise signals in digital signal processing, by using a circuit arrangement or a software system. Specifically, the invention can be used in a method for suppressing cardio-interference in magneto-encephalography. The invention can furthermore be used for overcoming motor noise.
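The computation described in the abstract can be sketched in a few lines of Python. This is a minimal sketch only: the patent does not fix the internal topology of the neural network, so the single-hidden-layer `neural_net_output` below, its tanh activation, and the weight layout are illustrative assumptions, not the patented network.

```python
import numpy as np

def neural_net_output(f_i, i, p_i, weights):
    # Stand-in for the network output function N[f(n-i), i, p(n-i)].
    # One hidden tanh layer; the patent leaves the topology open, so this
    # particular structure is an assumption for illustration.
    W1, b1, W2, b2 = weights
    x = np.concatenate(([f_i, float(i)], p_i))  # inputs: f(n-i), time index i, p(n-i)
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

def filter_output(f, p, n, M, weights):
    # Accumulator output: g(n) = sum over i = 0..M of N[f(n-i), i, p(n-i)].
    # Requires n >= M so that all delayed samples f(n-M), ..., f(n) exist.
    return sum(neural_net_output(f[n - i], i, p[n - i], weights)
               for i in range(M + 1))
```

Each of the M+1 network evaluations sees one delayed input sample, the time index i, and the parameter vector for that time; the accumulator then sums the M+1 network outputs into g(n).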
14 Citations
6 Claims
1. A non-linear filter architecture, comprising:

a) a memory means for storing values for input to a neural network;

b) supply means connected to said memory means for successively supplying said values from said memory means to said neural network;

c) a neural network having inputs connected to said supply means to receive, at time i, an input signal f(n-i) associated with time n-i, a time signal for time i and a parameter vector p(n-i) associated with time n-i;

d) an accumulator means connected to said neural network for summing signals from said neural network;

e) the neural network being constructed such that an output signal g(n) from said accumulator means at time n results from summation of M+1 output signals, which are associated with times n, . . . , n-M, in accordance with the formula

g(n) = Σ_{i=0}^{M} N[f(n-i), i, p(n-i)]

where N[f(n-i), i, p(n-i)] designates an output function of the neural network.
2. An adaptation method for a non-linear filter architecture, comprising the steps of:
a) inputting an input signal f(n-i), associated with time n-i, to a memory of the non-linear filter, and applying it, together with a time signal for a time i and a parameter vector p(n-i) associated with time n-i, at time i via said memory to input nodes of a neural network;

b) summing, in an accumulator connected to an output of the neural network, the M+1 output signals of the neural network, which are associated with the times n, . . . , n-M, in accordance with the formula

g(n) = Σ_{i=0}^{M} N[f(n-i), i, p(n-i)]

where N[f(n-i), i, p(n-i)] designates an output function of the neural network, to obtain an output signal g(n) of the non-linear filter at time n;

c) varying each weighting w of the neural network in such a manner that a predetermined error E, which represents a value of the deviation, determined over a predetermined time interval, of the output signals g(n) of the filter from required output signals gm(n), is minimized.
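The adaptation step of claim 2 can be sketched as follows. The claim specifies only that the error E, a measure of deviation of g(n) from the required outputs gm(n) over a time interval, is minimized by varying the weightings; the squared-error choice for E and the finite-difference gradient descent below are illustrative assumptions (in practice backpropagation through the network would be used):

```python
import numpy as np

def squared_error(g, gm):
    # One possible error E: squared deviation of the filter outputs g(n)
    # from the required outputs gm(n) over the chosen time interval.
    return float(np.sum((np.asarray(g) - np.asarray(gm)) ** 2))

def adapt(weights, error_fn, lr=1e-2, steps=100, eps=1e-5):
    # Minimize the error by varying each weighting w of the network.
    # Finite-difference gradients keep the sketch independent of any
    # particular network topology; this is illustrative, not the
    # adaptation algorithm of the patent.
    w = np.array(weights, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for k in range(w.size):
            w_plus = w.copy();  w_plus[k] += eps
            w_minus = w.copy(); w_minus[k] -= eps
            grad[k] = (error_fn(w_plus) - error_fn(w_minus)) / (2 * eps)
        w -= lr * grad
    return w
```

Here `error_fn` would evaluate the filter over the predetermined time interval with the candidate weights and return E; any weight vector that drives E toward its minimum satisfies step c) of the claim.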
3. A circuit arrangement for implementing a non-linear filter architecture, having:
a) a neural network having inputs connected to receive, at time i, an input signal f(n-i) associated with time n-i, a time signal for time i and a parameter vector p(n-i) associated with time n-i;

b) the neural network being constructed such that an output signal g(n) at time n results from summation of M+1 output signals, which are associated with times n, . . . , n-M, in accordance with the formula

g(n) = Σ_{i=0}^{M} N[f(n-i), i, p(n-i)]

where N[f(n-i), i, p(n-i)] designates an output function of the neural network;

the circuit arrangement comprising:

a) a memory having an input for storing an instantaneous filter input signal f(n), instantaneous values p(n) of the parameter vector, the last M filter input signals f(n-1), . . . , f(n-M) and parameter vectors p(n-1), . . . , p(n-M);

b) supply means connected to said memory for supplying, over M+1 successive times n-M, . . . , n, the memory contents associated with these times to the input nodes of the neural network;

c) summing means connected to an output of the neural network for summation of the output values of the neural network over the times n-M, . . . , n.
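The memory, supply means and summing means of claim 3 map naturally onto a bounded ring buffer. The sketch below models only the data flow; the class name, the use of a deque, and the toy `net` callable are hypothetical choices, not the claimed circuit:

```python
from collections import deque

class FilterMemory:
    # Models the claimed memory: holds the instantaneous pair (f(n), p(n))
    # plus the last M pairs, discarding anything older than M steps.
    def __init__(self, M):
        self.M = M
        self.buf = deque(maxlen=M + 1)  # stores (f(n-i), p(n-i)) pairs

    def store(self, f_n, p_n):
        self.buf.appendleft((f_n, p_n))  # newest sample ends up at index 0

    def supply(self):
        # Supply means: yield (f(n-i), i, p(n-i)) for i = 0..M, i.e. the
        # memory contents for the M+1 successive times, one at a time.
        for i, (f_i, p_i) in enumerate(self.buf):
            yield f_i, i, p_i

def accumulate(memory, net):
    # Summing means: g(n) as the sum of the net outputs over the
    # M+1 supplied input triples.
    return sum(net(f_i, i, p_i) for f_i, i, p_i in memory.supply())
```

On each new sample, `store` shifts the delay line by one position and `accumulate` re-evaluates the network M+1 times, which is exactly the time-multiplexed operation the claim describes.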
4. A method for overcoming noise signals in digital signal processing, comprising the steps of:
a) inputting an input signal f(n-i), associated with time n-i, to a memory for sequential application via a supply means, together with a time signal for a time i and a parameter vector p(n-i) associated with time n-i, at time i, to input nodes of a neural network of the non-linear filter;

b) summing, in an accumulator means connected to an output of the neural network, the M+1 output signals of the neural network, which are associated with the times n, . . . , n-M, in accordance with the formula

g(n) = Σ_{i=0}^{M} N[f(n-i), i, p(n-i)]

where N[f(n-i), i, p(n-i)] designates an output function of the neural network, to obtain an output signal g(n) of the non-linear filter at time n;

c) varying each weighting w of the neural network in such a manner that a predetermined error E, which represents a value of the deviation, determined over a predetermined time interval, of the output signals g(n) of the filter from required output signals gm(n), is minimized. (Dependent claims: 5, 6)
Specification