Speech and gesture recognition enhancement
First Claim
1. A computer-implemented process for enhancing the recognition of user input to a voice-enabled and touch-enabled computing device, comprising:
- the computing device receiving the user input which is either speech comprising one or more words which are spoken by the user, or handwriting data comprising a series of characters which are handwritten by the user making screen-contacting gestures;
- the computing device using a user-specific supplementary data context to narrow a vocabulary of a user input recognition subsystem and reduce the size of the vocabulary, wherein the user input recognition subsystem is a speech recognition subsystem whenever the user input is speech, and the user input recognition subsystem is a handwriting recognition subsystem whenever the user input is handwriting data; and
- the computing device using the user input recognition subsystem and said narrowed vocabulary to translate the user input into recognizable text that forms either a word or word sequence which is predicted by the user input recognition subsystem to correspond to the user input, wherein said narrowed vocabulary serves to maximize the accuracy of said translation.
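The claimed process can be illustrated with a minimal sketch. All names, the context terms, and the candidate scores below are illustrative assumptions, not from the patent: a user-specific supplementary data context (e.g. contact names or calendar terms) prunes the recognizer's full vocabulary before decoding, so the recognition subsystem only scores candidates against the smaller, more relevant set.

```python
# Illustrative sketch of the claimed vocabulary-narrowing step.
# FULL_VOCABULARY, the scores, and the context are hypothetical examples.

FULL_VOCABULARY = {"meeting", "mating", "melting", "alice", "malice", "bob", "blob"}

def narrow_vocabulary(full_vocab, user_context):
    """Keep only words that appear in the user's supplementary data context."""
    context_terms = {term.lower() for term in user_context}
    narrowed = full_vocab & context_terms
    # Fall back to the full vocabulary if the context prunes everything away.
    return narrowed or full_vocab

def recognize(candidate_scores, vocabulary):
    """Pick the highest-scoring candidate that survives the narrowed vocabulary."""
    in_vocab = {w: s for w, s in candidate_scores.items() if w in vocabulary}
    return max(in_vocab, key=in_vocab.get)

# Acoustic/handwriting model scores for an ambiguous input (made-up numbers).
scores = {"mating": 0.41, "meeting": 0.39, "melting": 0.20}

# Without narrowing, "mating" would win; the user's calendar context flips it.
user_context = ["Meeting", "Alice", "Bob"]
vocab = narrow_vocabulary(FULL_VOCABULARY, user_context)
print(recognize(scores, vocab))  # -> meeting
```

The fallback in `narrow_vocabulary` is a design choice of this sketch: narrowing should bias recognition toward the user's context, not make out-of-context input unrecognizable.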
Abstract
The recognition of user input to a computing device is enhanced. The user input is either speech, or handwriting data input by the user making screen-contacting gestures, or a combination of one or more prescribed words that are spoken by the user and one or more prescribed screen-contacting gestures that are made by the user, or a combination of one or more prescribed words that are spoken by the user and one or more prescribed non-screen-contacting gestures that are made by the user.
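The abstract enumerates four input categories, which map onto distinct recognition subsystems. A minimal dispatcher sketch, with subsystem names that are illustrative assumptions rather than terms from the patent:

```python
# Hypothetical dispatcher for the four input categories named in the abstract.
# The returned subsystem labels are illustrative, not from the patent.

def select_recognizer(has_speech, has_screen_gesture, has_air_gesture=False):
    if has_speech and has_screen_gesture:
        return "combined speech + screen-contacting gesture recognizer"
    if has_speech and has_air_gesture:
        return "combined speech + non-screen-contacting gesture recognizer"
    if has_speech:
        return "speech recognition subsystem"
    if has_screen_gesture:
        return "handwriting recognition subsystem"
    raise ValueError("no recognizable input modality")
```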
11 Claims
Specification