Machine learning contextual approach to word determination for text input via reduced keypad keys
First Claim
1. A method for determining a word entered using a reduced keypad, where each of one or more keys of the reduced keypad is mapped to a plurality of letters, the method comprising:
- receiving key input corresponding to the entered word and at least one of a left context and a right context;
- determining a list of possible words corresponding to the key input for the entered word, wherein each listed word is in a vocabulary or previously entered into a cache;
- using a language model comprising probability values corresponding to sequences of word N-grams of a natural language to rank the listed words based on at least one of the left context and the right context of the key input; and
- updating the language model with additional training using words entered into the cache.
Abstract
Determination of a word input on a reduced keypad, such as a numeric keypad, by entering a key sequence that ambiguously corresponds to the word, taking into account the context of the word via a machine learning approach, is disclosed. Either the left context, the right context, or the double-sided context of the number sequence can be used to determine the intended word. The machine learning approach can use a statistical language model, such as an n-gram language model. The compression of a language model for use on small devices, such as mobile phones, is also disclosed.
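The approach the abstract describes can be made concrete with a minimal sketch: a numeric key sequence maps ambiguously to several candidate words, and a left-context bigram model selects the most likely one. The keypad mapping is the standard telephone layout; the vocabulary and probability values below are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of context-based word determination on a reduced keypad.
# VOCAB and BIGRAMS are illustrative assumptions for this example.

KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

VOCAB = ["good", "gone", "home", "hood", "hone"]

def key_sequence(word):
    """Map a word to the digit sequence that would type it."""
    return "".join(d for ch in word for d, letters in KEYPAD.items()
                   if ch in letters)

def candidates(digits, vocab=VOCAB):
    """All vocabulary words consistent with the ambiguous key input."""
    return [w for w in vocab if key_sequence(w) == digits]

# Illustrative bigram probabilities P(word | left_context).
BIGRAMS = {("go", "home"): 0.6, ("go", "hone"): 0.05,
           ("go", "good"): 0.1, ("go", "gone"): 0.1, ("go", "hood"): 0.05}

def best_word(digits, left_context):
    """Rank the candidates by the left-context bigram probability."""
    return max(candidates(digits),
               key=lambda w: BIGRAMS.get((left_context, w), 1e-6))
```

All five words in this toy vocabulary share the key sequence 4663, so only the left context "go" disambiguates them; here `best_word("4663", "go")` selects "home".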
44 Claims
1. A method for determining a word entered using a reduced keypad, where each of one or more keys of the reduced keypad is mapped to a plurality of letters, the method comprising:
- receiving key input corresponding to the entered word and at least one of a left context and a right context;
- determining a list of possible words corresponding to the key input for the entered word, wherein each listed word is in a vocabulary or previously entered into a cache;
- using a language model comprising probability values corresponding to sequences of word N-grams of a natural language to rank the listed words based on at least one of the left context and the right context of the key input; and
- updating the language model with additional training using words entered into the cache.

Dependent Claims: 2-20
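Claim 1 combines a static vocabulary with a cache of previously entered words and updates the model from that cache. The steps above can be sketched as a small predictor that interpolates the static bigram probability with a unigram cache component; the class name, the interpolation scheme, and the `cache_weight` parameter are all assumptions for illustration, not the patent's specified method.

```python
from collections import Counter

class CachedPredictor:
    """Illustrative sketch of claim 1: rank candidate words by context
    using a static bigram model topped up with a user-word cache."""

    def __init__(self, vocab, bigram_probs, cache_weight=0.2):
        self.vocab = set(vocab)
        self.bigram_probs = dict(bigram_probs)  # P(word | left context)
        self.cache = Counter()                  # words the user entered
        self.cache_weight = cache_weight

    def rank(self, candidates, left):
        """Rank candidates by an interpolation of the static bigram
        probability and a unigram estimate from the cache."""
        total = sum(self.cache.values()) or 1
        def score(w):
            static = self.bigram_probs.get((left, w), 1e-6)
            cached = self.cache[w] / total
            return (1 - self.cache_weight) * static + self.cache_weight * cached
        return sorted(candidates, key=score, reverse=True)

    def commit(self, word):
        """Record an entered word; the cache provides the 'additional
        training' that updates the model over time."""
        self.cache[word] += 1
```

With a large enough `cache_weight`, a word the user enters repeatedly can overtake a candidate the static model prefers, which is the behavior the cache-update step is after.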
21. A computer-readable medium having instructions stored thereon for execution by a processor to perform a method for determining a word entered using a reduced keypad, where each of one or more input keys of the reduced keypad is mapped to a plurality of letters, the method comprising:
- receiving key input corresponding to the word and a left context;
- for each word in a vocabulary that is consistent with the key input, determining an n-gram probability of the word given the left context, and adding the word and its n-gram probability to an array of word-probability pairs, wherein the n-gram probabilities are stored in a language model trained at least in part on words entered in a cache, the language model comprising n-gram probabilities corresponding to sequences of words in a natural language;
- determining the word corresponding to the key input as the word of the word-probability pair, within the array of word-probability pairs, having the greatest probability; and
- updating the language model based on words previously entered into the cache.

Dependent Claims: 22-26
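The procedure in claim 21 is essentially an argmax over an array of word-probability pairs. A literal sketch follows; `ngram_prob` and `consistent` stand in for the trained language model and the keypad-consistency test, and are assumptions supplied by the caller here.

```python
def determine_word(key_input, left_context, vocab, ngram_prob, consistent):
    """Literal sketch of the claim-21 steps: build an array of
    (word, probability) pairs for every vocabulary word consistent
    with the key input, then return the word whose pair has the
    greatest probability."""
    pairs = []
    for w in vocab:
        if consistent(w, key_input):
            pairs.append((w, ngram_prob(w, left_context)))
    return max(pairs, key=lambda pair: pair[1])[0]
```

Keeping the whole array (rather than only the running maximum) matters in practice: the ranked list can also back a "next candidate" key that cycles through alternatives.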
27. A method for determining a word entered using a reduced keypad, wherein each of one or more keys of the reduced keypad is mapped to a plurality of letters, the method comprising:
- receiving key input corresponding to the word and at least one of a left context and a right context;
- determining the word corresponding to the key input by using a compressed language model based on one or more of the at least one of the left context and the right context of the key input, wherein the language model comprises probabilities corresponding to N-gram word sequences of a natural language;
- updating the language model with additional training using at least words previously entered in a cache; and
- compressing the language model by performing the steps of: smoothing the language model; and pruning the language model to yield the compressed language model.

Dependent Claims: 28-33
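Claim 27's compression step is smoothing followed by pruning. A hedged sketch under stated assumptions: add-k smoothing of a bigram count table (the claim leaves the smoothing method open), then pruning bigrams whose log-probability gain over the unigram backoff falls below a threshold, a crude relative-entropy criterion. The add-k constant and `threshold` value are illustrative, not from the patent.

```python
import math

def smooth(bigram_counts, unigram_counts, vocab_size, k=0.5):
    """Add-k smoothed conditional probabilities P(w2 | w1)."""
    probs = {}
    for (w1, w2), c in bigram_counts.items():
        probs[(w1, w2)] = (c + k) / (unigram_counts[w1] + k * vocab_size)
    return probs

def prune(bigram_probs, unigram_probs, threshold=0.1):
    """Keep only bigrams whose log-probability gain over the unigram
    backoff estimate meets the threshold; the rest fall back to the
    unigram model, shrinking the stored table."""
    kept = {}
    for (w1, w2), p in bigram_probs.items():
        gain = math.log(p / unigram_probs[w2])
        if gain >= threshold:
            kept[(w1, w2)] = p
    return kept
```

Bigrams the unigram backoff already predicts well are dropped, which is what makes the compressed model practical on the memory-constrained phones the abstract targets.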
34. An apparatus comprising:
- a plurality of keys, each of one or more of the keys mapped to a plurality of letters, the plurality of keys used to enter key input corresponding to a word and at least one of a left context and a right context; and
- a word-determining logic designed to construct a list of possible words corresponding to the entered word and to rank the listed words to determine the word corresponding to the key input by using a language model based on one or more of the at least one of the left context and the right context of the key input, wherein the language model comprises N-gram probability values corresponding to sequences of words in a natural language, and wherein the language model is updated based on words previously entered into a cache by a user.

Dependent Claims: 35-44
Specification