OBJECT RECOGNITION WITH REDUCED NEURAL NETWORK WEIGHT PRECISION
Abstract
A client device configured with a neural network includes a processor, a memory, a user interface, a communications interface, a power supply and an input device, wherein the memory includes a trained neural network received from a server system that has trained and configured the neural network for the client device. A server system and a method of training a neural network are disclosed.
101 Citations

Claims (20)
1. A client device configured with a neural network, the client device comprising:
a processor, a memory, a user interface, a communications interface, a power supply and an input device;
the memory comprising a trained neural network received from a server system that has trained and configured the neural network for the client device.
(Dependent claims 2-11 not shown.)
12. A system for providing object recognition with a client device, the system comprising:
a server system configured for training a neural network to perform object recognition and exporting the neural network to the client device.
(Dependent claims 13-19 not shown.)
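The server-side flow in claim 12, combined with the reduced weight precision named in the title, could be sketched as a quantize-then-export step. The uniform, per-tensor quantization scheme and the function names below are illustrative assumptions on my part; the patent does not specify a particular quantization method here.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Uniformly quantize float weights to signed num_bits integers
    plus a per-tensor scale (assumes num_bits <= 8 for int8 storage)."""
    qmax = 2 ** (num_bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(w))) / qmax, 1e-8)  # guard against all-zero w
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the client device."""
    return q.astype(np.float32) * scale

# A server would export (q, scale) pairs instead of full-precision floats,
# shrinking the network sent to the client roughly 4x versus float32.
w = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale = quantize_weights(w)
w_approx = dequantize(q, scale)
```

Each reconstructed weight differs from the original by at most half a quantization step (`scale / 2`), which is what bounds the accuracy loss of this kind of export.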
20. A server system, comprising:
an input device configured to receive a training image;
a neural network that includes at least two layer pairs, each layer pair comprising a convolutional layer and a subsampling layer; and
a multilayer perceptron (MLP) classifier;
wherein the neural network is configured to perform quantization of interim weights in the convolutional layers, and to generate in the subsampling layer an interim feature map in response to an input applied to the convolutional layer; and
wherein the neural network is configured to perform quantization of weights in the MLP classifier, and to generate in the MLP classifier a classification output in response to the feature map being applied to the quantized MLP weights.
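The architecture in claim 20 can be sketched as a forward pass: two convolution/subsampling layer pairs produce feature maps, which a quantized-weight MLP classifier maps to a classification output. All layer sizes, the 16x16 input, the tanh activation, and the "fake quantization" (quantize then dequantize, a common way to simulate reduced precision) are my illustrative assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=8):
    # Per-tensor uniform quantization, returned as dequantized floats
    # ("fake quantization") so the rest of the math stays in float.
    qmax = 2 ** (bits - 1) - 1
    scale = max(float(np.max(np.abs(w))) / qmax, 1e-8)
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

def conv2d(x, k):
    # Naive single-channel valid-mode 2D convolution.
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def subsample(x):
    # 2x2 average pooling (the "subsampling layer" of each layer pair).
    h, w = x.shape
    x = x[: h - h % 2, : w - w % 2]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4

# Hypothetical shapes for a 16x16 input:
# 16x16 -> conv 14x14 -> pool 7x7 -> conv 5x5 -> pool 2x2 -> flatten 4
k1 = quantize(rng.normal(size=(3, 3)))   # quantized conv weights, pair 1
k2 = quantize(rng.normal(size=(3, 3)))   # quantized conv weights, pair 2
W = quantize(rng.normal(size=(4, 10)))   # quantized MLP classifier weights
b = np.zeros(10)

def forward(img):
    f = subsample(np.tanh(conv2d(img, k1)))  # layer pair 1: interim feature map
    f = subsample(np.tanh(conv2d(f, k2)))    # layer pair 2
    logits = f.reshape(-1) @ W + b           # MLP classifier on the feature map
    e = np.exp(logits - logits.max())
    return e / e.sum()                       # softmax classification output

p = forward(rng.normal(size=(16, 16)))
```

The quantization here is applied once to fixed weights; during training, the claims' "interim weights" suggest quantization would instead be re-applied to the evolving weights at each step.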
Specification