Method and apparatus for fractal computation
First Claim
1. A computerized accelerated learning method for image-based decision system applications, such as machine vision, non-contact gauging, inspection, robot guidance, and medical imaging, to accelerate learning maturity and enhance learning outcome, comprises the following steps:
(a) input learning sample images;
(b) perform object of interest implantation on images using the learning sample images to generate simulated learning samples containing simulated objects of interest in the images;
(c) perform computerized algorithm learning using the input learning sample images and the simulated learning sample images.
Abstract
Fractal computers are neural network architectures that exploit the characteristics of fractal attractors to perform general computation. This disclosure explains neural network implementations for each of the critical components of computation: composition, minimalization, and recursion. It then describes the creation of fractal attractors within these implementations by means of selective amplification or inhibition of input signals, and it describes how to estimate critical parameters for each implementation by using results from studies of fractal percolation. These implementations provide standardizable implicit alternatives to traditional neural network designs. Consequently, fractal computers permit the exploitation of alternative technologies for computation based on dynamic systems with underlying fractal attractors.
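The three critical components named above come from classical computability theory; of these, minimalization (the μ-operator) is the one the later claims lean on most heavily. As a point of reference, a minimal Python sketch of classical unbounded minimalization (the function names are illustrative, not from the disclosure):

```python
def minimalize(f):
    """Classical unbounded minimalization (the mu-operator):
    given f, return mu(y...) = the least x with f(x, y...) == 0.
    Diverges (loops forever) when no such x exists."""
    def mu(*args):
        x = 0
        while f(x, *args) != 0:
            x += 1
        return x
    return mu

# Example: integer square root expressed via minimalization.
# isqrt(n) = least x such that (x + 1) ** 2 > n.
isqrt = minimalize(lambda x, n: 0 if (x + 1) ** 2 > n else 1)
```

The apparatus claims below replace this exhaustive search with a selection step driven by re-entrant feedback, but the underlying notion of "least value satisfying a criterion" is the same.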
91 Claims
1. A computerized accelerated learning method for image-based decision system applications, such as machine vision, non-contact gauging, inspection, robot guidance, and medical imaging, to accelerate learning maturity and enhance learning outcome, comprises the following steps:
(a) input learning sample images;
(b) perform object of interest implantation on images using the learning sample images to generate simulated learning samples containing simulated objects of interest in the images;
(c) perform computerized algorithm learning using the input learning sample images and the simulated learning sample images.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 26)
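Steps (a) through (c) describe a data-augmentation loop: simulated samples are synthesized by pasting objects of interest into real learning images, and the algorithm then learns from both sets. A hedged sketch of steps (a) and (b), assuming 2-D grayscale images stored as nested lists (all function and parameter names are ours, not the patent's):

```python
import random

def implant_object(image, obj, top, left):
    """Paste a small object-of-interest patch into a copy of an image
    (step (b)); returns a new simulated learning sample."""
    out = [row[:] for row in image]
    for r, row in enumerate(obj):
        for c, px in enumerate(row):
            out[top + r][left + c] = px
    return out

def augment(samples, objects, n_simulated, seed=0):
    """Generate simulated samples by random implantation, then return
    the combined set on which step (c) would perform learning."""
    rng = random.Random(seed)
    simulated = []
    for _ in range(n_simulated):
        img = rng.choice(samples)
        obj = rng.choice(objects)
        top = rng.randrange(len(img) - len(obj) + 1)
        left = rng.randrange(len(img[0]) - len(obj[0]) + 1)
        simulated.append(implant_object(img, obj, top, left))
    return samples + simulated
```

Step (c) is left abstract here, as the claim covers any computerized learning algorithm applied to the combined set.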
9. An accelerated computerized algorithm training method for image-based decision system applications, such as machine vision, non-contact gauging, inspection, robot guidance, and medical imaging, to accelerate learning maturity and enhance learning outcome, comprises the following steps:
(a) input learning sample images;
(b) perform object of interest implantation on images using the learning samples to generate simulated learning samples containing simulated objects of interest in the images;
(c) perform computerized algorithm training using the learning sample images and the simulated learning sample images.
- View Dependent Claims (10, 11, 12, 13, 14, 15, 16, 17, 25)
18. A computerized accelerated start-up learning method for image-based decision system applications, such as machine vision, non-contact gauging, inspection, robot guidance, and medical imaging, to accelerate learning maturity and enhance learning outcome, comprising:
(a) input start-up learning sample images;
(b) perform object of interest implantation on images using the start-up learning sample images to generate simulated learning samples containing simulated objects of interest in the images;
(c) perform computerized start-up learning on a general computerized algorithm using the input start-up learning sample images and the simulated learning sample images.
- View Dependent Claims (19, 20, 21, 22, 23, 24)
27. An apparatus for implicit digital computation comprising:
a neural network architecture means having a plurality of layer means, each layer means comprising a plurality of computational nodes, each of the plurality of computational nodes being implemented as a software process on a general purpose computer or by a digital or analog hardware device, the plurality of layer means comprising:
a processing layer means including:
at least one input processing layer means which is capable of rendering a stable transformable digital representation of input signals,
at least one central processing layer means, and
at least one output processing layer means;
feedforward input channel means;
full lateral and feedback connection means within the processing layer means;
output channel means;
re-entrant feedback means from the output channel means to the processing layer means;
means for updating each of the plurality of computational node means using local update processes; and
means for using re-entrant feedback from the output channel means to perform minimalization for general computation, such that said stable transformable digital representations of input signals are distributed to the plurality of computational nodes, which combine the stable transformable digital representations of input signals according to the interconnectivity of the at least one input processing layer and the plurality of computational layers, and which perform minimalization steps on the plurality of combinations to match at least one specified success criterion, whereby the said minimalization step selects from an inventory of at least one output value based on the said plurality of combinations at the moment of the selection step to minimize the difference between the said plurality of stable transformable combinations and the said at least one success criterion, whereby the selection weight for said selection is decreased when the plurality of subsequent input stable transformable combinations diverges from the said at least one success criterion, and the selection weight for said selection is increased when the plurality of subsequent input stable transformable combinations converges with said at least one success criterion.
- View Dependent Claims (28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50)
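The minimalization clause of claim 27 amounts to a reinforcement rule: an output is selected to minimize the distance to the success criterion, and its selection weight is raised or lowered as subsequent combinations converge on, or diverge from, that criterion. A sketch under our own assumptions (the weight-discounted score and the multiplicative update factor are not specified in the claim):

```python
def select_output(weights, errors):
    """Select the output index minimizing the (weight-discounted)
    difference from the success criterion."""
    return min(range(len(weights)), key=lambda i: errors[i] / weights[i])

def update_weight(weights, chosen, prev_error, new_error, factor=1.1):
    """Re-entrant feedback: increase the selection weight when
    subsequent combinations converge with the criterion, and
    decrease it when they diverge."""
    if new_error < prev_error:       # converging
        weights[chosen] *= factor
    elif new_error > prev_error:     # diverging
        weights[chosen] /= factor
    return weights
```

Any monotone update with the same sign behavior would satisfy the claim language equally well; the multiplicative form is only one convenient choice.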
51. A method for computation using a neural network architecture in a computing device comprising the steps of:
organizing a plurality of computational nodes into a neural computing device, each of the plurality of computational nodes being implemented as a software process on a general purpose computer or by a digital or analog hardware device, wherein the architecture of the neural computing device comprises:
using at least one stable transformable digital representation as an input to receive data to be processed from an environment;
using a locally connected subset of the plurality of computational nodes for fractal percolation using the at least one stable transformable digital representation;
using a minimalization step for computation; and
using at least one output to output processed data that can be used by a human or as an input to a machine.
- View Dependent Claims (52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69)
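The "fractal percolation" step of claim 51 can be pictured with the standard Mandelbrot percolation construction: repeatedly subdivide a cell into b × b subcells and retain each with probability p; the surviving cells stand in for the locally connected nodes that remain active. A sketch under that reading (the parameters b and p and the grid representation are our assumptions, not the patent's):

```python
import random

def fractal_percolation(depth, b=2, p=0.7, seed=0):
    """Mandelbrot-style fractal percolation on a grid of nodes.

    Recursively divide the unit square into b x b cells, keeping each
    with probability p. Returns the surviving depth-level cells as
    (row, col) indices on a b**depth by b**depth grid."""
    rng = random.Random(seed)
    cells = [(0, 0)]
    for _ in range(depth):
        nxt = []
        for r, c in cells:
            for dr in range(b):
                for dc in range(b):
                    if rng.random() < p:
                        nxt.append((r * b + dr, c * b + dc))
        cells = nxt
    return cells
```

In percolation theory, whether the surviving cells form a connected spanning cluster depends critically on p; the disclosure's reference to "studies of fractal percolation" presumably concerns estimating such critical parameters.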
70. An apparatus for implicit computation comprising:
a neural network architecture means including:
an input means from an environment capable of rendering a stable transformable digital representation of the environment;
an output means; and
a plurality of locally connected computation nodes, each of the plurality of computation nodes being implemented as a software process on a general purpose computer or by a digital or analog hardware device, wherein the plurality of locally connected computation nodes is organized to perform fractal percolation using said stable transformable digital representations, wherein a minimalization step is used for computation.
- View Dependent Claims (71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90)
91. A system comprising:
a plurality of computational nodes, wherein each computational node is implemented as a software process on a general purpose computer or by a digital or analog hardware device, each computational node comprising a local update process which transforms data received by the computational node, wherein:
a first subset of the plurality of computational nodes is organized into at least one feedforward input channel operatively connected to an input digital or analog data source;
a second subset of the plurality of computational nodes is organized into a plurality of processing layers having full lateral and feedback connections between the plurality of processing layers, at least one of the second subset of the plurality of computational nodes being operatively connected to at least one of the first subset of the plurality of computational nodes;
a third subset of the plurality of computational nodes is organized into at least one output channel operatively connected to a data storage device or network, at least one of the third subset of the plurality of computational nodes being operatively connected to at least one of the second subset of the plurality of computational nodes comprising at least one re-entrant feedback channel,
wherein the feedforward input channel receives data from the external data source, transforms the data to a digital format, and distributes the data in the digital format to at least one of the second subset of the plurality of computational nodes,
wherein the data in the digital format is processed by the plurality of processing layers using the local update processes of the nodes comprising the plurality of processing layers, using the full lateral and feedback connections within the processing layers, and using re-entrant feedback from the re-entrant feedback channel such that the data in the digital format is combined and minimalized, and
wherein the combined data is output by the at least one output channel to the data storage device or network.
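Claim 91 describes a three-stage data path: the input channel digitizes and distributes the data, the processing layers combine it through local updates with lateral and re-entrant feedback, and the output channel emits the result. A toy sketch of that flow (the specific update functions are illustrative stand-ins, not the patent's):

```python
def run_system(raw, digitize, combine, emit, passes=3):
    """Toy data path for the three node subsets: feedforward input
    channel -> processing layers (with feedback passes) -> output
    channel."""
    state = [digitize(x) for x in raw]             # input channel: to digital format
    for _ in range(passes):                        # re-entrant feedback iterations
        state = [combine(v, state) for v in state] # lateral/feedback combination
    return emit(state)                             # output channel

def identity_combine(v, state):
    """Trivial local update process (illustrative placeholder)."""
    return v
```

For example, `run_system([1.2, 3.7, 2.1], round, identity_combine, sum)` digitizes to `[1, 4, 2]` and emits their sum; the claim's substance lies in replacing `identity_combine` with local update processes that combine and minimalize.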
Specification