Automatically adapting user interfaces for hands-free interaction
First Claim
1. A computer-implemented method for adapting a user interface on a computing device having at least one processor, comprising:
performing, at the computing device, a plurality of steps including:
detecting whether a hands-free context is active;
prompting a user for an input;
receiving a user input comprising natural language information;
interpreting the received user input to derive a representation of a user intent, wherein the interpreting of the received user input comprises:
generating a plurality of candidate interpretations based on the received user input, determining the representation of the user intent based on the plurality of candidate interpretations;
identifying at least one task and at least one parameter for the task, based at least in part on the derived representation of the user intent;
executing the at least one task using the at least one parameter, to derive a result;
in accordance with the derived result, paraphrasing at least a portion of the user input in a spoken form;
generating speech using a plurality of voices to differentiate paraphrased user input from other spoken output; and
providing an audio output of the generated speech;
wherein, responsive to a detection that the computing device is in the hands-free context, the user interface is adapted to display a subset of user-interaction mechanisms displayed with the hands-free context being inactive, the subset including at least one user-interaction mechanism.
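The sequence of steps recited above can be illustrated with a short sketch. This is a minimal toy implementation under stated assumptions, not the patented implementation: every name here (`interpret`, `run_turn`, the intent labels, the UI element names) is a hypothetical stand-in, and the keyword matcher merely stands in for a real natural-language interpreter.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    intent: str
    params: dict
    score: float

def interpret(text: str) -> list:
    """Generate a plurality of candidate interpretations of the
    natural-language input (a toy keyword matcher, not real NLU)."""
    candidates = []
    words = text.split()
    if "call" in words:
        candidates.append(Interpretation("place_call", {"contact": words[-1]}, 0.9))
    if "text" in words or "message" in words:
        candidates.append(Interpretation("send_message", {"contact": words[-1]}, 0.6))
    return candidates

def run_turn(text: str, hands_free: bool) -> dict:
    # Determine the representation of user intent from the candidates.
    best = max(interpret(text), key=lambda c: c.score)
    # Execute the identified task with its parameter (stubbed).
    if best.intent == "place_call":
        result = "Calling " + best.params["contact"]
    else:
        result = "Message queued for " + best.params["contact"]
    # Paraphrase a portion of the user input in spoken form, using a
    # distinct voice to differentiate it from the other spoken output.
    speech = [("voice_echo", 'You said: "%s"' % text),
              ("voice_assistant", result)]
    # Responsive to the hands-free context, display only a subset of the
    # user-interaction mechanisms shown when hands-free is inactive.
    full_ui = ["keyboard", "buttons", "list_view", "cancel_button"]
    ui = ["cancel_button"] if hands_free else full_ui
    return {"speech": speech, "ui": ui}
```

The two-voice `speech` list and the UI subset are the two claim elements the sketch foregrounds: paraphrased input and assistant output are tagged with different voices, and hands-free mode retains at least one interaction mechanism from the full set.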
Abstract
A user interface for a system such as a virtual assistant is automatically adapted for hands-free use. A hands-free context is detected via automatic or manual means, and the system adapts various stages of a complex interactive system to modify the user experience to reflect the particular limitations of such a context. The system of the present invention thus allows for a single implementation of a complex system such as a virtual assistant to dynamically offer user interface elements and alter user interface behavior to allow hands-free use without compromising the user experience of the same system for hands-on use.
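The abstract's detection "via automatic or manual means" might be sketched as follows. The specific signals (a paired Bluetooth headset, vehicle speed) and the 5 m/s threshold are illustrative assumptions, not taken from the patent:

```python
from typing import Optional

def hands_free_active(manual_setting: Optional[bool],
                      bluetooth_headset: bool,
                      speed_mps: float) -> bool:
    """Return True when a hands-free context should be treated as active."""
    if manual_setting is not None:   # manual means: an explicit user setting wins
        return manual_setting
    if bluetooth_headset:            # automatic means: a paired hands-free audio device
        return True
    return speed_mps > 5.0           # automatic means: device appears to be in a moving vehicle
```

A design point the abstract implies: the manual setting takes precedence over automatic signals, so a user can force the hands-free adaptation on or off regardless of what the sensors report.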
3461 Citations
39 Claims
1. A computer-implemented method for adapting a user interface on a computing device having at least one processor, comprising the steps recited in full as the First Claim above.
Dependent claims: 2–17.
18. A computer program product for interpreting a user input to perform a task on a computing device having at least one processor, comprising:
a non-transitory computer-readable storage medium; and
computer program code, encoded on the medium, configured to cause the at least one processor of the computing device to perform a plurality of steps including:
detecting whether a hands-free context is active;
prompting a user for an input;
receiving the user input comprising natural language information;
interpreting the received user input to derive a representation of a user intent, wherein the interpreting of the received user input comprises:
generating a plurality of candidate interpretations based on the received user input, determining the representation of the user intent based on the plurality of candidate interpretations;
identifying at least one task and at least one parameter for the task, based at least in part on the derived representation of user intent;
executing the at least one task using the at least one parameter, to derive a result;
in accordance with the derived result, paraphrasing at least a portion of the user input in a spoken form;
generating speech using a plurality of voices to differentiate paraphrased user input from other spoken output; and
providing an audio output of the generated speech;
wherein, responsive to a detection that the computing device is in the hands-free context, the computer program code is configured to cause the at least one processor to adapt the user interface to display a subset of user-interaction mechanisms displayed with the hands-free context being inactive, the subset including at least one user-interaction mechanism.
Dependent claims: 19–27.
28. A system for interpreting a user input to perform a task on a computing device, comprising:
an output device, configured to prompt a user for an input;
an input device, configured to receive the user input comprising natural language information;
at least one processor, communicatively coupled to the output device and to the input device, configured to perform a plurality of steps including:
detecting whether a hands-free context is active;
interpreting the received user input to derive a representation of a user intent, wherein the interpreting of the received user input comprises:
generating a plurality of candidate interpretations based on the received user input, determining the representation of the user intent based on the plurality of candidate interpretations;
identifying at least one task and at least one parameter for the task, based at least in part on the derived representation of the user intent;
executing the at least one task using the at least one parameter, to derive a result;
in accordance with the derived result, paraphrasing at least a portion of the user input in a spoken form;
generating speech using a plurality of voices to differentiate paraphrased user input from other spoken output;
wherein the output device is further configured to provide an audio output of the generated speech; and
wherein, responsive to a detection that the computing device is in the hands-free context, the user interface is adapted to display a subset of user-interaction mechanisms displayed with the hands-free context being inactive, the subset including at least one user-interaction mechanism.
Dependent claims: 29–39.
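The system arrangement of claim 28 — an output device that prompts, an input device that receives natural-language input, and a processor coupled to both — can be sketched as a minimal wiring diagram in code. All class and method names are hypothetical illustrations, not the patented design:

```python
class OutputDevice:
    """Prompts the user and provides audio output of generated speech."""
    def __init__(self):
        self.spoken = []
    def prompt(self):
        self.spoken.append(("voice_assistant", "How can I help?"))
    def speak(self, voice, text):
        self.spoken.append((voice, text))

class InputDevice:
    """Receives user input comprising natural-language information."""
    def __init__(self, queued):
        self.queued = list(queued)
    def receive(self):
        return self.queued.pop(0)

class AssistantSystem:
    """Processor logic communicatively coupled to both devices."""
    def __init__(self, out_dev, in_dev):
        self.out, self.inp = out_dev, in_dev
    def handle_turn(self, hands_free):
        self.out.prompt()
        text = self.inp.receive()
        # One voice paraphrases (echoes) the user input; a second voice
        # carries the assistant's own spoken output.
        self.out.speak("voice_echo", 'You said: "%s"' % text)
        self.out.speak("voice_assistant", "Done.")
        # Hands-free context -> subset of the normal interaction mechanisms.
        full_ui = ["keyboard", "buttons", "cancel_button"]
        return ["cancel_button"] if hands_free else full_ui
```

Separating the devices from the processor logic mirrors the claim's structure: the prompt and the audio output belong to the output device, while intent handling and UI adaptation run on the processor.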
Specification