Applying neural network language models to weighted finite state transducers for automatic speech recognition

  • US 10,049,668 B2
  • Filed: 05/16/2016
  • Issued: 08/14/2018
  • Est. Priority Date: 12/02/2015
  • Status: Active Grant
First Claim

1. A non-transitory computer-readable medium having instructions stored thereon, the instructions, when executed by one or more processors, cause the one or more processors to:

  • receive speech input;

  • traverse, based on the speech input, a sequence of states and arcs of a weighted finite state transducer (WFST), wherein:

    • the sequence of states and arcs represents one or more history candidate words and a current candidate word; and

    • a first probability of the candidate word given the one or more history candidate words is determined by traversing the sequence of states and arcs of the WFST;

  • traverse a negating finite state transducer (FST), wherein traversing the negating FST negates the first probability of the candidate word given the one or more history candidate words;

  • compose a virtual FST using a neural network language model and based on the sequence of states and arcs of the WFST, wherein one or more virtual states of the virtual FST represent the current candidate word;

  • traverse the one or more virtual states of the virtual FST, wherein a second probability of the candidate word given the one or more history candidate words is determined by traversing the one or more virtual states of the virtual FST;

  • determine, based on the second probability of the candidate word given the one or more history candidate words, text corresponding to the speech input;

  • based on the determined text, perform one or more tasks to obtain a result; and

  • cause the result to be presented in spoken or visual form.
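
Read outside the claim language, the recited decoding flow amounts to log-domain score bookkeeping: the first (n-gram) probability picked up while traversing the WFST is cancelled by the negating FST, and the second probability from the virtual FST composed on the fly from the neural network language model takes its place. The Python sketch below illustrates only that arithmetic under simplified assumptions; `Hypothesis`, `rescore_step`, `ngram_log_prob`, and `nnlm_log_prob` are hypothetical names introduced for illustration, not terms or APIs from the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Hypothesis:
    """A partial decoding hypothesis: candidate words so far plus a log score."""
    words: List[str] = field(default_factory=list)
    score: float = 0.0


def rescore_step(hyp: Hypothesis,
                 candidate: str,
                 acoustic_log_prob: float,
                 ngram_log_prob: Callable[[List[str], str], float],
                 nnlm_log_prob: Callable[[List[str], str], float]) -> Hypothesis:
    """Extend `hyp` with `candidate`, swapping the first-pass n-gram LM score
    for a neural-network LM score over the same history (log domain)."""
    history = hyp.words
    # First probability: what the WFST's language-model arcs contribute while
    # traversing states and arcs for this candidate word.
    first = ngram_log_prob(history, candidate)
    # Negating FST: contributes the negated first probability, cancelling it.
    negation = -first
    # Virtual FST composed from the neural network language model: second
    # probability of the candidate word given the same history.
    second = nnlm_log_prob(history, candidate)
    new_score = hyp.score + acoustic_log_prob + first + negation + second
    return Hypothesis(words=history + [candidate], score=new_score)


if __name__ == "__main__":
    import math

    # Toy stand-in models (values are made up): a uniform 10k-word n-gram LM
    # and a neural LM that prefers "world" after "hello".
    uniform = lambda hist, w: math.log(1.0 / 10_000)

    def toy_nnlm(hist, w):
        if hist[-1:] == ["hello"] and w == "world":
            return math.log(0.4)
        return math.log(1.0 / 10_000)

    hyp = rescore_step(Hypothesis(), "hello", -2.3, uniform, toy_nnlm)
    hyp = rescore_step(hyp, "world", -1.7, uniform, toy_nnlm)
    print(hyp.words, round(hyp.score, 3))
```

Because the negation exactly cancels the first probability, the final score depends on the neural LM rather than the n-gram LM, while the WFST's states and arcs still drive which candidate words are considered.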
