ARCHITECTURE, SYSTEM AND METHOD FOR ARTIFICIAL NEURAL NETWORK IMPLEMENTATION
First Claim
1. A system comprising:
an input port for receiving an input vector;
a scalable artificial neural network, wherein the input vector is fed forward through the scalable artificial neural network to provide an output vector and wherein the input vector is subject to synchronization within the scalable artificial neural network based on a predetermined degree of parallelization; and
an output port for outputting the output vector.
Abstract
Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements.
Claims (18)
1. A system comprising:
an input port for receiving an input vector;
a scalable artificial neural network, wherein the input vector is fed forward through the scalable artificial neural network to provide an output vector and wherein the input vector is subject to synchronization within the scalable artificial neural network based on a predetermined degree of parallelization; and
an output port for outputting the output vector.
(Dependent claims: 2, 3, 4, 5)
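The feed-forward with parallelization-based synchronization in claim 1 can be illustrated with a minimal software sketch, not the patented hardware implementation: each layer's neurons are evaluated in groups of at most `degree_p` at a time (time-multiplexing a fixed number of parallel units), and the partial results are synchronized into the full layer output before the next layer consumes them. The function name, the tanh activation, and the example topology are assumptions for illustration only.

```python
import numpy as np

def feed_forward(x, weights, degree_p):
    """Feed `x` through the network, evaluating at most `degree_p`
    neurons of each layer per step, then synchronizing the partial
    results into the full layer output (illustrative sketch)."""
    activation = x
    for w in weights:                      # w: (n_out, n_in) weight matrix
        partials = []
        for start in range(0, w.shape[0], degree_p):
            # One "hardware pass": degree_p neurons evaluated in parallel.
            partials.append(np.tanh(w[start:start + degree_p] @ activation))
        # Synchronization point: gather the partial outputs into one
        # output vector before the next layer may consume it.
        activation = np.concatenate(partials)
    return activation

# Hypothetical 4-3-2 topology with a degree of parallelization of 2.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
x_in = rng.standard_normal(4)
print(feed_forward(x_in, weights, degree_p=2).shape)   # → (2,)
```

Because the grouping only reorders independent dot products, the result is identical to evaluating each full layer at once; only the schedule changes with `degree_p`.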
6. A method for designing a hardware configuration of an artificial neural network, the method comprising:
receiving information relating to hardware resources available for at least one hardware device;
receiving a desired network topology;
determining a plurality of degrees of parallelism for the desired network topology;
for each degree of parallelism of the plurality of degrees of parallelism, estimating at least one of:
a hardware resource estimate to implement the network topology with the degree of parallelism; and
a performance estimate for the network topology with the degree of parallelism;
selecting a degree of parallelism based on the hardware resources available and at least one of the hardware resource estimates and the performance estimates; and
generating a hardware configuration based on the degree of parallelism.
(Dependent claims: 7, 8, 9, 10, 11, 12, 13, 14, 15)
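Claim 6's design loop, estimating hardware cost and performance for each candidate degree of parallelism and then selecting one that fits the available resources, can be sketched as follows. The cost model (P multiply-accumulate units sized for the widest fan-in) and the latency model (ceil(neurons / P) cycles per layer) are simplifying assumptions for illustration, not the patent's estimators.

```python
def select_parallelism(topology, available_mults, candidate_degrees):
    """For each candidate degree of parallelism P, roughly estimate
    (a) hardware cost: P multiply-accumulate units, each wide enough
        for the largest fan-in in the topology, and
    (b) latency: ceil(neurons / P) cycles per layer,
    then pick the fastest P whose cost fits the available resources.
    Illustrative models only."""
    best = None
    for p in candidate_degrees:
        mults = p * max(topology[:-1])                   # resource estimate
        cycles = sum(-(-n // p) for n in topology[1:])   # ceil division
        if mults <= available_mults and (best is None or cycles < best[2]):
            best = (p, mults, cycles)
    return best

# Hypothetical 8-16-4 topology on a device with 64 multipliers.
print(select_parallelism([8, 16, 4], 64, [1, 2, 4, 8, 16]))   # → (4, 64, 5)
```

Here P = 8 and P = 16 are rejected because their estimated multiplier counts exceed the device's 64, and P = 4 wins on the latency estimate among the degrees that fit.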
16. A method for training a scalable artificial neural network involving multi-layer perceptrons and error back propagation, the method comprising:
feed-forwarding an input vector through the scalable network, wherein the input vector is subject to synchronization to provide a synchronized output vector; and
back-propagating an error gradient vector through the scalable network, wherein the error gradient vector is calculated using the synchronized output vector and a target vector, which has been subject to synchronization, such that the error gradient vector is provided in a synchronized format based on the degree of parallelization.
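A software analogue of claim 16's training step might look like the sketch below, assuming standard gradient-descent error back-propagation with tanh activations and a squared-error loss, and applying the weight updates in chunks of `degree_p` rows to mirror the claimed degree of parallelization. The chunking and all names are illustrative assumptions, not the patent's hardware method.

```python
import numpy as np

def train_step(x, target, weights, degree_p, lr=0.1):
    """One feed-forward / back-propagation step in which the weight
    updates are applied in synchronized chunks of `degree_p` rows
    (illustrative sketch of claim 16's parallelized training)."""
    # Feed-forward, recording each layer's (synchronized) activation.
    acts = [x]
    for w in weights:
        acts.append(np.tanh(w @ acts[-1]))
    # Error gradient from the synchronized output and target vectors.
    delta = (acts[-1] - target) * (1.0 - acts[-1] ** 2)
    # Back-propagate layer by layer.
    for i in range(len(weights) - 1, -1, -1):
        grad = np.outer(delta, acts[i])
        if i > 0:
            # Gradient for the previous layer, using pre-update weights.
            delta = (weights[i].T @ delta) * (1.0 - acts[i] ** 2)
        # Apply the update degree_p rows at a time (one pass per group
        # of parallel units), mirroring the degree of parallelization.
        for start in range(0, weights[i].shape[0], degree_p):
            weights[i][start:start + degree_p] -= lr * grad[start:start + degree_p]
    return weights
```

Repeated calls on a fixed input/target pair should drive the squared error down, regardless of the `degree_p` chosen, since the chunking changes only the update schedule and not the computed gradients.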
17. A method for operating a scalable artificial neural network involving multi-layer perceptrons, the method comprising:
feed-forwarding an input vector through the scalable network, wherein the input vector is subject to synchronization within the scalable network to provide a synchronized output vector.
(Dependent claims: 18)
Specification