METHOD AND SYSTEM FOR TRAINING OF NEURAL NETS

Abstract
A method and system for training a neural network are described. The method includes providing at least one continuously differentiable model of the neural network. The at least one continuously differentiable model is specific to hardware of the neural network. The method also includes iteratively training the neural network using the at least one continuously differentiable model to provide at least one output for the neural network. Each iteration uses at least one output of a previous iteration and a current continuously differentiable model of the at least one continuously differentiable model.
20 Claims
1. A method for training a neural network comprising:
providing at least one continuously differentiable model of the neural network, the at least one continuously differentiable model being specific to hardware of the neural network;
iteratively training the neural network using the at least one continuously differentiable model to provide at least one output for the neural network, each iteration using at least one output of a previous iteration and a current continuously differentiable model of the at least one continuously differentiable model.
(Dependent claims 2-17 not shown.)
18. A method for training a neural network comprising:
providing at least one continuously differentiable model of the neural network, the at least one continuously differentiable model being specific to hardware of the neural network, the neural network using a plurality of discrete weights;
iteratively training the neural network using the at least one continuously differentiable model to provide at least one output for the neural network, each iteration using at least one output of a previous iteration and a current continuously differentiable model of the at least one continuously differentiable model, the current continuously differentiable model providing a closer approximation to the hardware of the neural network than a previous continuously differentiable model of the at least one continuously differentiable model, the iteratively training step further including using a software model for the neural network as a first continuously differentiable model in a first iteration;
performing back propagation using at least one output of each iteration to obtain at least one weight for each iteration;
for each iteration, applying to the at least one weight from a previous iteration a function (f) of the current continuously differentiable model multiplied by at least one input and added to a bias (f(g(ω, X, α)X + b)), where g is the current continuously differentiable model, ω is the at least one weight for the current iteration, X is the at least one input, b is a bias, and α is a realism parameter indicating the closeness to the hardware of the neural network.
19. A neural network training system implemented using at least one computing device, the at least one computing device including at least one processor and memory, the training system comprising:
at least one continuously differentiable model of the neural network, each continuously differentiable model being specific to hardware of the neural network;
a training subsystem iteratively using the at least one continuously differentiable model of the neural network and at least one input, the training subsystem configured such that each iteration uses at least one output of a previous iteration and a current continuously differentiable model of the at least one continuously differentiable model closer to the hardware of the neural network than a previous continuously differentiable model of the at least one continuously differentiable model.
(Dependent claim 20 not shown.)
Specification
This application claims the benefit of provisional Patent Application Ser. No. 62/664,142, filed Apr. 28, 2018, entitled “A HARDWARE-AWARE ALGORITHM FOR OFFLINE TRAINING OF NEURAL NETS”, and provisional Patent Application Ser. No. 62/664,102, filed Apr. 28, 2018, entitled “A HARDWARE-AWARE ALGORITHM FOR OFFLINE TRAINING OF NEURAL NETS”, assigned to the assignee of the present application, and incorporated herein by reference.
Applications involving Deep-Learning Neural Networks (NNs) or neuromorphic computing, such as image recognition, natural language processing and, more generally, various pattern-matching or classification tasks, are quickly becoming as important as general-purpose computing. The essential computational element of the NN, or neuron, includes multiple inputs and an output. Associated with each input is a number, or weight. The activation of the neuron is computed by performing a weighted sum of the inputs (using the weights), which is then processed by the activation function. The activation function is typically a thresholding function. Thus, the neuron generally performs a vector-matrix product, or multiply-accumulate (MAC) operation, which is then thresholded.
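For illustration only, the neuron computation described above can be sketched as follows; the function name and values are illustrative, not part of the claimed system:

```python
# Illustrative sketch (not the claimed system) of the neuron computation
# described above: a multiply-accumulate (MAC) over the inputs, followed
# by a thresholding activation function.

def neuron_activation(inputs, weights, bias=0.0, threshold=0.0):
    """Return 1 if the weighted sum of inputs plus bias exceeds the threshold."""
    # Multiply-accumulate: weighted sum of the inputs.
    mac = sum(w * x for w, x in zip(weights, inputs))
    # Thresholding activation function.
    return 1 if mac + bias > threshold else 0
```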
The weights defined by the mathematical description of a NN are real numbers, and thus continuous. However, many hardware implementations of NNs use or propose to use lower-precision, discrete approximations to the real values for the weights. For example, some recent NNs are XNOR or gated XNOR (GXNOR) networks that would use only two (binary) or three (ternary) discrete levels. Such NNs may use −1 and 1 (binary), or −1, 0, and 1 (ternary) weights. Other hardware implementations might use a different number of discrete weights. While such reduced-precision weights are attractive from a hardware perspective, there is a potential penalty in the achievable inference accuracy. This is particularly true in the case of off-chip training, in which training is performed on a different system than is actually used for inference.
The degree of loss in inference accuracy depends on the details of the weights and on the training algorithm used. The straightforward approach to quantization is to simply perform standard training using floating-point weights offline, and then choose discrete “bins” into which the mathematical weights are placed. A refinement of this algorithm treats the size of the bins as a hyperparameter, to be optimized on validation data for best accuracy. However, even with this refinement, NNs using lower-precision weights may suffer appreciable inference accuracy losses.
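The straightforward binning approach described above can be sketched, for illustration only, for the ternary weight set mentioned earlier; the bin-width parameter delta stands in for the hyperparameter the text describes:

```python
# Illustrative sketch of the straightforward quantization described above:
# weights are trained offline in floating point and then snapped into
# discrete bins. Here delta (the width of the zero bin) plays the role of
# the hyperparameter mentioned in the text, and the levels -1, 0, 1 follow
# the ternary example above.

def quantize_ternary(weight, delta):
    """Snap a floating-point weight to -1, 0, or 1 using a zero bin of width delta."""
    if weight > delta / 2.0:
        return 1
    if weight < -delta / 2.0:
        return -1
    return 0
```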
What is desired is improved inference accuracy for NNs that use lower precision weights even if such NNs are trained offline.
The exemplary embodiments relate to training neural networks and may be employed in a variety of fields including but not limited to machine learning, artificial intelligence, neuromorphic computing and neural networks. The method and system may be extended to other applications in which logic devices are used. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the exemplary embodiments and the generic principles and features described herein will be readily apparent. The exemplary embodiments are mainly described in terms of particular methods and systems provided in particular implementations. However, the methods and systems will operate effectively in other implementations.
Phrases such as “exemplary embodiment”, “one embodiment” and “another embodiment” may refer to the same or different embodiments as well as to multiple embodiments. The embodiments will be described with respect to systems and/or devices having certain components. However, the systems and/or devices may include more or fewer components than those shown, and variations in the arrangement and type of the components may be made without departing from the scope of the invention. The exemplary embodiments will also be described in the context of particular methods having certain steps. However, the method and system operate effectively for other methods having different and/or additional steps and steps in different orders that are not inconsistent with the exemplary embodiments. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein.
The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It is noted that the use of any and all examples, or exemplary terms provided herein is intended merely to better illuminate the invention and is not a limitation on the scope of the invention unless otherwise specified. Further, unless defined otherwise, all terms defined in generally used dictionaries may not be overly interpreted.
A method and system for training a neural network are described. The method includes providing at least one continuously differentiable model of the neural network. The at least one continuously differentiable model is specific to hardware of the neural network. The method also includes iteratively training the neural network using the at least one continuously differentiable model to provide at least one output for the neural network. Each iteration uses at least one output of a previous iteration and a current continuously differentiable model of the at least one continuously differentiable model.
The training system 100 contains training data 130 and a training engine 110 that includes training algorithm(s) 114 and continuously differentiable models 112-1, 112-2 and 112-3 (collectively, continuously differentiable models 112). Although three continuously differentiable models 112 are shown, fewer or more models might be used. The training system 100 also includes processor(s) 150, a data store 160, and input/output (I/O) device(s). The data store 160 may store components 112, 114 and 130. The processors 150 may execute the training algorithms 114 and continuously differentiable models 112.
The training system 100 utilizes one or more of the continuously differentiable models 112 in performing training for the NN 120. The continuously differentiable models 112 approximate the behavior of the NN 120. Thus, the differentiable models 112 take into account the hardware of the NN 120. For example, the NN 120 may be a binary network (−1 and 1 weights), ternary network (−1, 0 and 1 weights), two-bit weight network (−2, −1, 0, 1 and 2 weights) or other NN that uses discrete weights. In such embodiments, the continuously differentiable models 112 may approximate the transitions between the weights (e.g. step functions) while maintaining differentiability. This allows for the calculation of gradients during training. Similarly, the NN 120 may use discrete activations. The continuously differentiable models 112 may provide an approximation of the activations analogous to that provided for weights. Thus, the continuously differentiable models 112 more accurately represent the NN 120 while maintaining differentiability.
Conventional off-chip training does not use the continuously differentiable models 112 and may result in poorer inference accuracy for a NN 120. It is believed that the loss in accuracy for off-chip training occurs because the conventional training methods provide a poor approximation of the hardware for the NN 120. For example, a discrete NN (e.g. one that uses discrete weights or activations) has vanishing gradients at all points. Standard backpropagation is not possible for such NNs. Conventional training systems (not shown) may thus use direct discretization of floating-point weights to obtain binary or ternary (i.e. discrete) weights. Such a method is subject to uncertainty in weights near the discretization boundaries. For the floating-point-trained network, a small uncertainty in the weights is generally of little consequence because of the partial cancellation of errors that occurs in the neuron. However, after discretization, the weights near the boundaries snap to one or the other side of the boundary. This can result in error amplification and may cause a large loss of inference accuracy. Stated differently, inference accuracy may be dramatically reduced because the training was not performed with the knowledge that weights would be discretized or otherwise altered by hardware imperfections.
In contrast, the continuously differentiable models 112 provide an approximation of the behavior of the hardware for the NN 120 while maintaining calculable gradients. Through the use of the continuously differentiable models, the training system 100 is made aware of the hardware for the NN 120 for which training is performed. Because such models are differentiable, techniques such as back propagation that use gradients can be employed. The training may, therefore, take into account the discrete nature of the hardware for the NN 120 while employing accepted training techniques in the training algorithm 114. Thus, hardwareaware training may be provided by the training system 100 and inference accuracy of the NN 120 improved.
One or more continuously differentiable models 112 are provided for the NN 120 to be trained, via step 202. The continuously differentiable model(s) 112 are specific to hardware of the neural network 120. In some embodiments, the continuously differentiable models 112 include a software model for floating-point weights, which does not take into account aspects of the hardware for the NN 120, such as discrete weights. Such a software model may be used for the first iteration (e.g. continuously differentiable model 112-1). However, subsequent iterations use other continuously differentiable model(s) 112 that provide closer approximations of the hardware (e.g. continuously differentiable models 112-2 and/or 112-3). Alternatively, all of the continuously differentiable models 112 approximate the hardware of the NN 120 being trained.
Iterative training for the NN 120 is performed using the continuously differentiable model(s) 112, via step 204. Each training iteration provides output(s) for the NN 120. Each iteration uses the output(s) of a previous iteration and a current continuously differentiable model to provide new outputs. As discussed above, in some embodiments, the first iteration is performed using a conventional floating-point/continuous model that does not incorporate discretization or other hardware aspects of the NN 120. Such an iteration provides a first approximation of the inputs (e.g. weights) for subsequent hardware-aware iterations. In other embodiments, all iterations use continuously differentiable models 112 that are hardware-specific. In such embodiments, a first iteration may use inputs from a floating-point model or inputs obtained in another manner.
Iteratively training in step 204 may include performing back propagation using the output to obtain the weight(s) for each neuron. The weight so obtained is used in conjunction with the next continuously differentiable model 112 for a next iteration. Iterative training continues until the desired results are achieved. For example, training may terminate in response to weight(s) obtained from the current iteration being within a threshold from the weight(s) from a previous iteration.
For example, step 204 may use the continuously differentiable model 112-1, training data 130 and predetermined weights or activations for a first iteration. The output of the first iteration may undergo back propagation or other processing to obtain a second set of weights for the second iteration. The second iteration may use training data 130, the continuously differentiable model 112-2 and the second set of weights for training. The continuously differentiable model 112-2 may be a better approximation of the hardware for the NN 120 than the first continuously differentiable model 112-1. For example, transitions between discrete weights may be sharper. Based on the output of the second iteration, a third set of weights may be calculated. The third iteration may use training data 130, the continuously differentiable model 112-3 and the third set of weights for training. The continuously differentiable model 112-3 may be a better approximation of the hardware for the NN 120 than the second continuously differentiable model 112-2. This iterative training in step 204 may continue until some condition is reached. In some embodiments, the condition is that the new weights calculated are within a particular threshold of the previous weights. In such a case, not all of the continuously differentiable models 112 may be used because the weights may converge more quickly to a final value. In other embodiments, other conditions may be used. For example, it may be required that all available continuously differentiable models 112 are used. Once the training is completed, the results (e.g. weights) may be provided to the NN 120 for use.
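The iterative schedule described above can be sketched as follows, for illustration only. The training update itself is stubbed out: train_one_iteration is a hypothetical stand-in for the backpropagation-based update, and the realism values merely represent successively more hardware-accurate models.

```python
# Illustrative sketch of the iterative schedule described above. Each
# iteration trains with a progressively more hardware-accurate model
# (represented here only by its realism value alpha), and training stops
# early once the weights change by less than a threshold, in which case
# not all available models are used.

def iterative_training(initial_weights, realism_schedule,
                       train_one_iteration, threshold=1e-3):
    weights = list(initial_weights)
    for alpha in realism_schedule:
        new_weights = train_one_iteration(weights, alpha)
        # Terminate when the largest weight change falls below the threshold,
        # even if sharper models remain in the schedule.
        if max(abs(n - o) for n, o in zip(new_weights, weights)) < threshold:
            return new_weights
        weights = new_weights
    return weights
```

A caller would supply the per-iteration training step; once the returned weights have converged they would be provided to the hardware network.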
Using the method 200, a significant improvement in the inference accuracy may be achieved for the NN 120. Because the continuously differentiable models 112 provide an approximation of the behavior of the hardware for the NN 120, the training system 100 is made aware of the hardware for the NN 120 for which training is performed. Because such models 112 are differentiable, techniques that use gradients can be employed during training. The training may take into account the discrete nature of the hardware for the NN 120 while employing accepted training techniques. Thus, inference accuracy of the NN 120 is enhanced. Because off-chip training may be employed, the NN 120 may be smaller and more efficient. In some applications, only a small number of iterations need be employed. For example, acceptable accuracy may be achieved with only one or a few iterations. Thus, the benefits described herein may be achieved with a modest increase in the training CPU time.
One or more continuously differentiable models 112 are provided for the NN 120 to be trained, via step 212. Thus, step 212 is analogous to step 202. The continuously differentiable model(s) 112 are specific to hardware of the neural network 120. In some embodiments, the continuously differentiable models 112 include a software model that may be used for the first iteration. However, subsequent iterations use other continuously differentiable model(s) 112 that provide closer approximations of the hardware (e.g. continuously differentiable models 112-2 and/or 112-3). Alternatively, all of the continuously differentiable models 112 approximate the hardware of the NN 120 being trained.
For example, the continuously differentiable models 112 may be given by g(ω, X, α), where ω are the weights, X are the inputs from the training data 130 and α is a realism parameter. In some embodiments, each continuously differentiable model, g^{n}, where n is the number of discrete levels, is given by:
where ω is a weight, Δ is a discretization step for the at least one discrete weight, ω_{sc} is a transition scale between steps, ε_{n} is an offset such that each of the at least one continuously differentiable model passes through the origin, and σ is a scaled sigmoid. In some embodiments, the weight set is characterized by n, the number of nonnegative discrete weights in the set minus one (0 for binary, 1 for ternary, etc.). By decreasing the weight parameter ω_{sc}, the weight function becomes an increasingly accurate approximation to the step function. The scaled sigmoid may be given by
σ = 1/(1 + e^{−ω/ω_{sc}})
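For illustration only, the scaled sigmoid above can be sketched as follows; as the transition scale ω_sc shrinks, the function approaches a step while remaining differentiable, which is the property the models 112 rely on:

```python
import math

# Illustrative sketch (not the claimed implementation) of the scaled
# sigmoid above. As the transition scale w_sc decreases, sigma approaches
# a step function, so sums of such sigmoids can approximate discrete
# weight levels while remaining differentiable.

def scaled_sigmoid(w, w_sc):
    """sigma = 1 / (1 + exp(-w / w_sc))."""
    return 1.0 / (1.0 + math.exp(-w / w_sc))
```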
The above continuously differentiable models 112 may be used for discrete NNs. In another embodiment, other hardware models may be used to reflect other aspects of the NN 120. For example, if the NN 120 is a pruned, analog NN, the continuously differentiable model 112 may be given by
g_{prune}(ω,Δ,ω_{sc})=2ωΔ·[σ(ω,Δ,ω_{sc})+σ(ω,Δ,−ω_{sc})]
where (−Δ, Δ) defines a zero-window. The range of the allowable weights may also be set in step 212. This range may be selected so that the initial value is equivalent to purely floating-point-based training, while the final value is a good approximation to the hardware of the NN 120.
Training is performed such that a first continuously differentiable model 112-1 is incorporated, via step 214. For example, the standard activation for a software training method (e.g. training function 114) may be a = f(ωX + b), where f is the activation function, ω are the (floating-point) weights, X are the inputs and b is the bias. An analog of this function that incorporates the continuously differentiable models may be used in the method 210. For the method 210, therefore, the activation may be given by the activation function using:
a = f(g^{n}(ω, X, α_{i})X + b)
where g^{n} is the continuously differentiable model 112 discussed above, α_{i} is the realism parameter for the current continuously differentiable model, i is the iteration number and b is the bias. The bias, b, along with the weights may be iteratively determined. Thus, the bias for a next iteration may be determined using the current continuously differentiable model. Alternatively, the bias may be determined using another mechanism, including a continuously differentiable model (e.g. a previous or final one) different from the current continuously differentiable model.
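The hardware-aware activation above can be sketched, for illustration only, as follows. The smooth ternary model g_ternary and the hard-threshold f used here are hypothetical stand-ins for the patent's g^n and f, with alpha playing the role of the realism parameter (larger alpha means closer to the discrete hardware):

```python
import math

# Illustrative sketch of the hardware-aware activation a = f(g(w, X, a_i)X + b)
# described above. g_ternary smoothly maps a continuous weight toward the
# ternary levels -1, 0, 1, with transitions near +/-0.5 that sharpen as
# alpha (the realism parameter) grows; f is a simple thresholding function.

def g_ternary(w, alpha):
    """Smoothly map a continuous weight toward the ternary levels -1, 0, 1."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Two smoothed steps, one near +0.5 and one near -0.5.
    return sig(alpha * (w - 0.5)) - sig(alpha * (-w - 0.5))

def hardware_aware_activation(w, x, alpha, b=0.0):
    """a = f(g(w, alpha) * x + b), with f a hard threshold at zero."""
    return 1 if g_ternary(w, alpha) * x + b > 0 else 0
```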
For the first iteration in step 214, the standard (unmodified) activation a = f(ωX + b) may be used. In other embodiments, the output of the standard activation may be used to calculate the weights for the first iteration in step 214. For the method 210, it is assumed that the first iteration uses weights from the above activation as initial weights. Consequently, the first iteration in step 214 may use a = f(g(ω, X, α_{1})X + b), where g(ω, X, α_{1}) is the continuously differentiable model 112-1. Step 214 may also include determining the weights, for example using back propagation or an analogous method that employs gradients. Because the continuously differentiable model 112-1 has a gradient, such methods can be used.
The results of the first iteration may optionally be validated, via step 216. Validation may be performed using the weights determined in step 214 and the final continuously differentiable model 112-3. In some embodiments, a model of noise in the training process may also be applied as part of the validation step. In other embodiments, other validation mechanisms might be used and/or validation may be performed at the end of the method 210.
Steps 214 and (optionally) 216 may be iteratively repeated, via step 218. In each iteration, the weights from the previous iteration are used along with the next continuously differentiable model (112-2, then 112-3). Each continuously differentiable model 112-2 and 112-3 may be a more accurate approximation of the hardware for the NN 120 than the previous continuously differentiable model 112-1 and 112-2, respectively. Thus, the second iteration may use the weights determined in the first iteration and a = f(g(ω, X, α_{2})X + b), where g(ω, X, α_{2}) is the continuously differentiable model 112-2. The third iteration may use the weights determined in the second iteration and a = f(g(ω, X, α_{3})X + b), where g(ω, X, α_{3}) is the continuously differentiable model 112-3.
The training is terminated when acceptable results are achieved, via step 220. In some embodiments, this occurs when fewer iterations have been performed than there are continuously differentiable models 112. For example, for the system 100, one, two or three iterations may be performed, depending upon how rapidly acceptable results are achieved. In some embodiments, training is terminated when the weights determined in a particular iteration differ by less than the threshold(s) from the weights determined in the immediately previous iteration. In other embodiments, other conditions for termination may be used.
Once acceptable weights have been determined, the results are provided to the NN 120, via step 222. Consequently, off-chip training for the NN 120 may be completed.
Using the method 210, a significant improvement in the inference accuracy may be achieved for the NN 120. Because the continuously differentiable models 112 provide an approximation of the behavior of the hardware for the NN 120, the training system 100 is made aware of the hardware for the NN 120 for which training is performed. For example, the training incorporates the discrete nature of the weights used in the NN 120. Because such models 112 are differentiable, techniques that use gradients can be employed. Thus, inference accuracy of the NN 120 is enhanced. Because off-chip training may be employed, the NN 120 may be smaller and more efficient. In some applications, only a small number of iterations need be employed.
Thus, using the methods 200 and 210, hardware-aware training may be performed for the NN 120. As a result, improved inference accuracy may be achieved. The method and system have been described in accordance with the exemplary embodiments shown, and one of ordinary skill in the art will readily recognize that there could be variations to the embodiments, and any variations would be within the spirit and scope of the method and system. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.