TRAINING MULTIPLE NEURAL NETWORKS WITH DIFFERENT ACCURACY

  • US 20150340032A1
  • Filed: 05/23/2014
  • Published: 11/26/2015
  • Est. Priority Date: 05/23/2014
  • Status: Active Grant
First Claim

1. A method comprising:

  • receiving a digital representation of speech;

  • generating a plurality of feature vectors that each model a different portion of an audio waveform from the digital representation of speech during a different period of time, the plurality of feature vectors including a first feature vector and subsequent feature vectors;

  • generating a first posterior probability vector for the first feature vector using a first neural network, the first posterior probability vector comprising one score for each key word or key phrase which the first neural network is trained to identify;

  • determining whether one of the scores in the first posterior probability vector satisfies a first threshold value using a first posterior handling module; and

  • in response to determining that one of the scores in the first posterior probability vector satisfies the first threshold value, and for each of the feature vectors:

      • generating a second posterior probability vector for the respective feature vector using a second neural network, wherein the second neural network is trained to identify the same key words and key phrases as the first neural network, and comprises more inner layer nodes than the first neural network, and the second posterior probability vector comprises one score for each key word or key phrase which the second neural network is trained to identify; and

      • determining whether one of the scores in the second posterior probability vector satisfies a second threshold value using a second posterior handling module, the second threshold value being more restrictive than the first threshold value.

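Below is a minimal Python sketch of the two-stage cascade recited in claim 1: a small network with a permissive first threshold gates a larger network (more inner-layer nodes) that is checked against a more restrictive second threshold. The keyword list, threshold values, feature extraction, and network classes here are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of the claimed two-stage keyword-spotting cascade.
# All names, sizes, and thresholds are assumptions made for this example.
import numpy as np

KEYWORDS = ["ok computer", "stop", "next"]   # hypothetical keyword/key-phrase set
FIRST_THRESHOLD = 0.4    # permissive threshold used by the first posterior handling module
SECOND_THRESHOLD = 0.8   # more restrictive threshold used by the second posterior handling module


def frame_features(waveform: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Split the digital speech signal into overlapping frames, each covering a
    different period of time, and compute a simple per-frame feature vector
    (log sub-band energies stand in for the filterbank features a real system would use)."""
    frames = []
    for start in range(0, len(waveform) - frame_len + 1, hop):
        spectrum = np.abs(np.fft.rfft(waveform[start:start + frame_len])) + 1e-8
        bands = np.array_split(spectrum, 13)              # 13 coarse sub-bands
        frames.append(np.log([b.mean() for b in bands]))
    return np.stack(frames)


class TinySoftmaxNet:
    """Stand-in for a trained neural network: one hidden layer plus a softmax over
    the keyword set. `hidden` is the number of inner-layer nodes, so the
    second-stage network is simply a wider instance of the same class."""

    def __init__(self, input_dim: int, hidden: int, n_outputs: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(input_dim, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, n_outputs))

    def posteriors(self, feature_vector: np.ndarray) -> np.ndarray:
        h = np.tanh(feature_vector @ self.w1)
        logits = h @ self.w2
        e = np.exp(logits - logits.max())
        return e / e.sum()        # one score per keyword or key phrase


def detect_keyword(waveform: np.ndarray) -> str | None:
    feats = frame_features(waveform)
    small = TinySoftmaxNet(feats.shape[1], hidden=32, n_outputs=len(KEYWORDS))
    large = TinySoftmaxNet(feats.shape[1], hidden=256, n_outputs=len(KEYWORDS))  # more inner-layer nodes

    # First stage: cheap network + permissive threshold (first posterior handling module).
    if not any(small.posteriors(f).max() >= FIRST_THRESHOLD for f in feats):
        return None

    # Second stage: only reached after a coarse hit. Run the larger network over
    # each feature vector and apply the stricter second threshold.
    for f in feats:
        scores = large.posteriors(f)
        if scores.max() >= SECOND_THRESHOLD:
            return KEYWORDS[int(scores.argmax())]
    return None


if __name__ == "__main__":
    audio = np.random.default_rng(1).normal(size=16000)   # 1 s of synthetic 16 kHz audio
    print(detect_keyword(audio))
```

The design intent suggested by the claim is that the small, always-on network keeps continuous scoring cheap, while the larger, more accurate network is invoked only after the first threshold is satisfied and must clear a stricter threshold before a detection is reported.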