
Method for pseudo-recurrent processing of data using a feedforward neural network architecture

  • US 10,152,673 B2
  • Filed: 06/21/2013
  • Issued: 12/11/2018
  • Est. Priority Date: 06/21/2013
  • Status: Active Grant
First Claim

1. A computer-implemented method for recurrent data processing, comprising the steps of:

  • computing activity of multiple layers of hidden layer nodes in a feedforward neural network, given an input data instance;

  • forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing;

  • finding memories that are closest to a presented test data instance according to a class decision of the feedforward network, and imputing the test data hidden layer activity with computed closest memories in an iterative fashion;

    wherein the step of forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing further comprises the substeps of:

    computing hidden layer activities of every training data instance, then low-pass filtering and stacking the hidden layer activities in a data structure;

    keeping a first and second hidden layer activity memory, indexed by class label;

    forming both class-specific and class-independent cluster centers as quantized memories of the training data's second hidden layer activity, via k-means clustering, using each class's data separately or using all the data together depending on a choice of class specificity;

    keeping quantized second hidden layer memories, indexed by class labels or non-indexed, depending on the class specificity choice;

    training a cascade of classifiers for enabling multiple hypotheses generation of a network, via utilizing a subset of the input data as the training data; and

    keeping a classifier memory, indexed with the set of data used during training;
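The training-phase substeps above (computing hidden activities, low-pass filtering, stacking them indexed by class, and quantizing the second layer with k-means) can be sketched roughly as follows. Everything here is an illustrative stand-in, not the patented implementation: the toy two-layer network, the exponential-smoothing "low-pass" filter, the layer sizes, and the numpy-only k-means are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward net (weights and sizes are illustrative stand-ins)
W1 = rng.standard_normal((8, 5))   # input(5)  -> hidden1(8)
W2 = rng.standard_normal((4, 8))   # hidden1(8) -> hidden2(4)

def hidden_activities(x):
    """First and second hidden layer activities for one input instance."""
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    return h1, h2

def low_pass(h, alpha=0.5):
    """Exponential smoothing as a stand-in for the claim's
    (unspecified) low-pass filtering of hidden activities."""
    out = np.empty_like(h)
    acc = h[0]
    for i, v in enumerate(h):
        acc = alpha * v + (1 - alpha) * acc
        out[i] = acc
    return out

def kmeans(X, k, iters=20):
    """Minimal k-means; the returned centers are the quantized memories."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Training phase: stack filtered activities into memories indexed by class
X_train = rng.standard_normal((30, 5))
y_train = rng.integers(0, 2, size=30)

h1_memory, h2_memory, h2_quantized = {}, {}, {}
for c in np.unique(y_train):
    acts = [hidden_activities(x) for x in X_train[y_train == c]]
    h1_memory[c] = np.stack([low_pass(h1) for h1, _ in acts])
    h2_memory[c] = np.stack([low_pass(h2) for _, h2 in acts])
    h2_quantized[c] = kmeans(h2_memory[c], k=3)       # class-specific centers

# Class-independent centers: the other class-specificity choice in the claim
h2_quantized_all = kmeans(np.concatenate(list(h2_memory.values())), k=3)
```

The class-indexed dictionaries play the role of the first and second hidden layer activity memories kept in the substeps above; the cascade of classifiers for multiple-hypothesis generation is omitted from this sketch.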

    wherein the step of finding memories that are closest to the presented test data instance according to the class decision of the feedforward network, and imputing the test data hidden layer activity with computed closest memories in an iterative fashion, further comprises the substeps of:

    determining first, second and third class label choices of the neural network as multiple hypotheses, via a cascaded procedure utilizing a sequence of classifier decisions;

    computing a set of candidate samples for the second layer, which are the closest Euclidean-distance hidden layer memories to the test data's second hidden layer activity, using the multiple-hypotheses class decisions of the network and a corresponding memory database, then assigning the second hidden layer sample as one of the candidate hidden layer memories, via max or averaging operations depending on a choice of multi-hypotheses competition;

    merging the second hidden layer sample with the test data's second hidden layer activity via weighted averaging operation, creating an updated second hidden layer activity;

    using the updated second hidden layer activity to compute the closest Euclidean-distance first hidden layer memory and assigning it as the first hidden layer sample, then merging the first hidden layer sample with the test data's first hidden layer activity via weighted averaging operation, creating an updated first hidden layer activity;

    computing the feedforward second hidden layer activity from the updated first hidden layer activity, and merging this feedforward second hidden layer activity with the updated second hidden layer activity, via weighted averaging operation; and

    repeating these steps for multiple iterations, starting again from the step of determining the first, second and third class label choices of the neural network as multiple hypotheses via a cascaded procedure utilizing a sequence of classifier decisions, with the merged second hidden layer activity produced by the preceding substep serving as the input at the beginning of the next iteration.
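The iterative imputation loop in the test-phase substeps can be sketched as below. This is a minimal stand-in under loud assumptions: a single nearest memory replaces the cascaded multi-hypothesis candidate set, paired memory arrays replace the class-indexed memory databases, and the merge weight `w` is a free choice, not a value from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network and paired hidden-layer memories (illustrative stand-ins;
# in the claim these come from the training phase).
W1 = rng.standard_normal((8, 5))
W2 = rng.standard_normal((4, 8))
h1_mem = np.tanh(rng.standard_normal((10, 8)))   # first-layer memories
h2_mem = np.tanh(W2 @ h1_mem.T).T                # paired second-layer memories

def impute(x, iters=3, w=0.5):
    """Iteratively impute the test instance's hidden activities
    with its closest stored memories (Euclidean distance)."""
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    for _ in range(iters):
        # Closest second-layer memory: stands in for the claim's
        # multi-hypothesis candidate set and max/averaging competition.
        idx = np.linalg.norm(h2_mem - h2, axis=1).argmin()
        h2 = w * h2_mem[idx] + (1 - w) * h2     # merge second layer
        # Paired first-layer memory, merged by the same weighted average
        h1 = w * h1_mem[idx] + (1 - w) * h1
        # Feedforward pass from updated h1, merged back into updated h2;
        # this merged h2 seeds the next iteration, as in the final substep.
        h2 = w * np.tanh(W2 @ h1) + (1 - w) * h2
    return h1, h2

h1_new, h2_new = impute(rng.standard_normal(5))
```

Each loop iteration mirrors one pass of the claimed substeps: retrieve, merge the second layer, merge the first layer, recompute the feedforward second layer, and carry the merged result into the next iteration.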
