Predictive text input
1 Assignment
0 Petitions
Abstract
Systems and processes for predictive text input are provided. In one example process, a text input can be received. The text input can be associated with an input context. A frequency of occurrence of an m-gram with respect to a subset of a corpus can be determined using a language model. The subset can be associated with a context. A weighting factor can be determined based on a degree of similarity between the input context and the context. A weighted probability of a predicted text given the text input can be determined based on the frequency of occurrence of the m-gram and the weighting factor. The m-gram can include at least one word in the text input and at least one word in the predicted text.
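The process in the abstract can be sketched in code. The sketch below is an illustrative reconstruction, not the patented implementation: the corpus subsets, the tag-set contexts, and the Jaccard similarity measure are all invented for the example. It shows the core idea of counting an m-gram per context-tagged subset of a corpus, weighting each count by context similarity, and forming a weighted probability of a predicted word.

```python
# Illustrative sketch of the abstract's process: per-subset m-gram
# frequencies are weighted by the similarity between the input context
# and each subset's context, then combined into a prediction probability.

def jaccard_similarity(a, b):
    """Degree of similarity between two contexts, modeled as tag sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical corpus: each subset is tagged with the context it came from.
corpus_subsets = [
    {"context": {"messaging", "informal"},
     "text": "see you soon see you later talk to you soon"},
    {"context": {"email", "formal"},
     "text": "i look forward to seeing you soon regards"},
]

def weighted_mgram_frequency(mgram, input_context):
    """Sum per-subset frequencies of the m-gram, each scaled by a
    weighting factor derived from context similarity."""
    total = 0.0
    for subset in corpus_subsets:
        words = subset["text"].split()
        n = len(mgram)
        count = sum(1 for i in range(len(words) - n + 1)
                    if tuple(words[i:i + n]) == mgram)
        weight = jaccard_similarity(input_context, subset["context"])
        total += weight * count
    return total

def weighted_probability(predicted_word, text_input, input_context):
    """Weighted probability of the predicted word given the text input:
    weighted count of the (input + prediction) m-gram divided by the
    weighted count of the input history alone."""
    history = tuple(text_input.split())
    numerator = weighted_mgram_frequency(history + (predicted_word,),
                                         input_context)
    denominator = weighted_mgram_frequency(history, input_context)
    return numerator / denominator if denominator else 0.0

# "you soon" occurs twice in the messaging subset; "you" occurs three
# times there, so a messaging context yields a weighted probability 2/3.
print(weighted_probability("soon", "you", {"messaging", "informal"}))
```

With an email-like input context instead, only the formal subset carries weight, and the same query returns a different probability, which is the point of the context-dependent weighting.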
3382 Citations
43 Claims
1. A method for text prediction comprising:
at an electronic device:
receiving a text input, the text input associated with an input context;
determining, using a first language model, a first frequency of occurrence of an m-gram with respect to a first subset of a corpus, wherein the first subset is associated with a first context, and wherein the m-gram includes at least one word in the text input;
determining, based on a degree of similarity between the input context and the first context, a first weighting factor, wherein a weighted first frequency of occurrence of the m-gram is obtained by applying the first weighting factor to the first frequency of occurrence of the m-gram;
determining, using the first language model, a second frequency of occurrence of the m-gram with respect to a second subset of the corpus, wherein the second subset is associated with a second context;
determining, based on a degree of similarity between the input context and the second context, a second weighting factor, wherein a weighted second frequency of occurrence of the m-gram is obtained by applying the second weighting factor to the second frequency of occurrence of the m-gram; and
determining, based on the weighted first frequency of occurrence of the m-gram and the weighted second frequency of occurrence of the m-gram, a first weighted probability of a first predicted text given the text input, wherein the m-gram includes at least one word in the first predicted text.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21)
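The computation recited in claim 1 can be summarized in a single formula. This is an interpretive rendering for readability, not language from the patent; the symbols are chosen here for illustration.

```latex
% Interpretive summary of claim 1: each frequency f_i of the m-gram in
% corpus subset i is scaled by a weighting factor lambda_i derived from
% the degree of similarity between the input context c_in and the subset
% context c_i; the weighted frequencies are then combined into the
% weighted probability of the predicted text w given the text input t.
\[
  \lambda_i = \mathrm{sim}(c_{\mathrm{in}}, c_i), \qquad
  P(w \mid t) \;\propto\; \lambda_1 f_1 + \lambda_2 f_2
\]
```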
22. A method for text prediction comprising:
at an electronic device:
receiving a first text input, the first text input associated with a first input context;
determining, using a language model and based on the first input context, a first weighted probability of a predicted text given the first text input;
receiving a second text input, the second text input associated with a second input context, wherein the second text input is received after the first text input is received, wherein the first text input is identical to the second text input, and wherein the first input context is different from the second input context; and
determining, using the language model and based on the second input context, a second weighted probability of the predicted text given the second text input, wherein the first weighted probability is different from the second weighted probability.
- View Dependent Claims (23, 24, 25, 26)
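Claim 22's distinguishing property is that an identical text input, received under two different input contexts, yields two different weighted probabilities from the same language model. The self-contained snippet below demonstrates that property; the per-context probabilities, the tag-set contexts, and the similarity measure are invented for the example.

```python
# Sketch of claim 22's property: same text input, different input
# contexts, different weighted probabilities from one language model.

def similarity(a, b):
    """Degree of similarity between contexts, as Jaccard tag overlap."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical language model: per-context conditional probabilities
# P(next word = "soon" | text input "see you"), one per corpus context.
per_context_prob = {
    frozenset({"messaging"}): 0.40,
    frozenset({"email"}): 0.10,
}

def weighted_probability(input_context):
    """Blend the per-context probabilities, weighting each by its
    similarity to the input context (normalized over all weights)."""
    weights = {c: similarity(input_context, c) for c in per_context_prob}
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(w * per_context_prob[c] for c, w in weights.items()) / total

# Identical text input ("see you"), received with different contexts:
p1 = weighted_probability({"messaging", "informal"})
p2 = weighted_probability({"email", "formal"})
print(p1, p2)
assert p1 != p2  # the two weighted probabilities differ, as claimed
```

Here the messaging-like context weights the messaging estimate fully and the email estimate not at all, and vice versa, so the two calls return 0.4 and 0.1 respectively.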
27. A non-transitory computer-readable storage medium comprising instructions for causing one or more processors of an electronic device to:
receive a text input, the text input associated with an input context;
determine, using a first language model, a first frequency of occurrence of an m-gram with respect to a first subset of a corpus, wherein the first subset is associated with a first context, and wherein the m-gram includes at least one word in the text input;
determine, based on a degree of similarity between the input context and the first context, a first weighting factor, wherein a weighted first frequency of occurrence of the m-gram is obtained by applying the first weighting factor to the first frequency of occurrence of the m-gram;
determine, using the first language model, a second frequency of occurrence of the m-gram with respect to a second subset of the corpus, wherein the second subset is associated with a second context;
determine, based on a degree of similarity between the input context and the second context, a second weighting factor, wherein a weighted second frequency of occurrence of the m-gram is obtained by applying the second weighting factor to the second frequency of occurrence of the m-gram; and
determine, based on the weighted first frequency of occurrence of the m-gram and the weighted second frequency of occurrence of the m-gram, a first weighted probability of a first predicted text given the text input, wherein the m-gram includes at least one word in the first predicted text.
- View Dependent Claims (28, 29, 30, 31, 32, 33)
34. A system comprising:
one or more processors;
memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a text input, the text input associated with an input context;
determining, using a first language model, a first frequency of occurrence of an m-gram with respect to a first subset of a corpus, wherein the first subset is associated with a first context, and wherein the m-gram includes at least one word in the text input;
determining, based on a degree of similarity between the input context and the first context, a first weighting factor, wherein a weighted first frequency of occurrence of the m-gram is obtained by applying the first weighting factor to the first frequency of occurrence of the m-gram;
determining, using the first language model, a second frequency of occurrence of the m-gram with respect to a second subset of the corpus, wherein the second subset is associated with a second context;
determining, based on a degree of similarity between the input context and the second context, a second weighting factor, wherein a weighted second frequency of occurrence of the m-gram is obtained by applying the second weighting factor to the second frequency of occurrence of the m-gram; and
determining, based on the weighted first frequency of occurrence of the m-gram and the weighted second frequency of occurrence of the m-gram, a first weighted probability of a first predicted text given the text input, wherein the m-gram includes at least one word in the first predicted text.
- View Dependent Claims (35, 36, 37, 38)
39. A non-transitory computer-readable storage medium comprising instructions for causing one or more processors of an electronic device to:
receive a first text input, the first text input associated with a first input context;
determine, using a language model and based on the first input context, a first weighted probability of a predicted text given the first text input;
receive a second text input, the second text input associated with a second input context, wherein the second text input is received after the first text input is received, wherein the first text input is identical to the second text input, and wherein the first input context is different from the second input context; and
determine, using the language model and based on the second input context, a second weighted probability of the predicted text given the second text input, wherein the first weighted probability is different from the second weighted probability.
- View Dependent Claims (40, 41, 42, 43)
Specification