Automatically Adapting User Interfaces For Hands-Free Interaction
First Claim
1. A computer-implemented method for interpreting user input to perform a task on a computing device having at least one processor, comprising:
at a processor, detecting whether or not a hands-free context is active;
at an output device, prompting a user for input;
at an input device, receiving user input;
at the processor, interpreting the received user input to derive a representation of user intent;
at the processor, identifying at least one task and at least one parameter for the task, based at least in part on the derived representation of user intent;
at the processor, executing the at least one task using the at least one parameter, to derive a result;
at the processor, generating a dialog response based on the derived result; and
at the output device, outputting the generated dialog response;
wherein, responsive to detection that the device is in a hands-free context, at least one of the steps of prompting the user for input, receiving user input, interpreting the received user input, identifying the at least one task and at least one parameter for the task, and generating the dialog response is performed in a manner consistent with limitations associated with the hands-free context.
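The claimed method can be read as a pipeline: detect context, prompt, receive, interpret, identify a task and parameters, execute, then render a response adapted to the context. A minimal sketch of that flow follows; every name here (`Intent`, `detect_hands_free_context`, `run_dialog`, the sensor keys) is an illustrative assumption, not anything defined in the patent itself.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Hypothetical representation of derived user intent."""
    task: str
    parameters: dict

def detect_hands_free_context(sensors: dict) -> bool:
    # Illustrative automatic detection: a paired car kit or headset
    # suggests the user cannot look at or touch the screen.
    return bool(sensors.get("bluetooth_car_kit") or sensors.get("headset"))

def interpret(user_input: str) -> Intent:
    # Stand-in for natural-language interpretation: treat the first
    # word as the task and the remainder as its parameter.
    verb, _, rest = user_input.partition(" ")
    return Intent(task=verb, parameters={"text": rest})

def run_dialog(sensors: dict, user_input: str) -> str:
    hands_free = detect_hands_free_context(sensors)
    intent = interpret(user_input)
    # Execute the identified task with its parameter to derive a result.
    result = f"executed {intent.task}({intent.parameters['text']!r})"
    # Responsive to hands-free detection, the dialog response is adapted,
    # e.g. fully spoken rather than shown on screen.
    if hands_free:
        return f"[spoken] {result}"
    return f"[displayed] {result}"
```

The point of the sketch is the final branch: the same pipeline produces a response, and only its delivery is constrained by the detected context.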
Abstract
A user interface for a system such as a virtual assistant is automatically adapted for hands-free use. A hands-free context is detected via automatic or manual means, and the system adapts various stages of a complex interactive system to modify the user experience to reflect the particular limitations of such a context. The system of the present invention thus allows for a single implementation of a complex system such as a virtual assistant to dynamically offer user interface elements and alter user interface behavior to allow hands-free use without compromising the user experience of the same system for hands-on use.
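The abstract notes that the hands-free context is detected "via automatic or manual means." One way to sketch that precedence, assuming a hypothetical `hands_free_active` helper with illustrative signals and threshold:

```python
from typing import Optional

def hands_free_active(manual_setting: Optional[bool],
                      vehicle_speed_kmh: float,
                      paired_with_car_kit: bool) -> bool:
    # Manual means: an explicit user setting, when present,
    # overrides any automatic inference.
    if manual_setting is not None:
        return manual_setting
    # Automatic means: infer from environment signals such as a
    # Bluetooth car-kit pairing or motion (threshold is illustrative).
    return paired_with_car_kit or vehicle_speed_kmh > 10.0
```

The design choice sketched here is that explicit user preference always wins, so automatic heuristics only apply when the user has expressed none.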
809 Citations
43 Claims
1. A computer-implemented method for interpreting user input to perform a task on a computing device having at least one processor, comprising:
at a processor, detecting whether or not a hands-free context is active;
at an output device, prompting a user for input;
at an input device, receiving user input;
at the processor, interpreting the received user input to derive a representation of user intent;
at the processor, identifying at least one task and at least one parameter for the task, based at least in part on the derived representation of user intent;
at the processor, executing the at least one task using the at least one parameter, to derive a result;
at the processor, generating a dialog response based on the derived result; and
at the output device, outputting the generated dialog response;
wherein, responsive to detection that the device is in a hands-free context, at least one of the steps of prompting the user for input, receiving user input, interpreting the received user input, identifying the at least one task and at least one parameter for the task, and generating the dialog response is performed in a manner consistent with limitations associated with the hands-free context.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
20. A computer program product for interpreting user input to perform a task on a computing device having at least one processor, comprising:
a nontransitory computer-readable storage medium; and
computer program code, encoded on the medium, configured to cause at least one processor to perform the steps of:
detecting whether or not a hands-free context is active;
causing an output device to prompt a user for input;
receiving user input via an input device;
interpreting the received user input to derive a representation of user intent;
identifying at least one task and at least one parameter for the task, based at least in part on the derived representation of user intent;
executing the at least one task using the at least one parameter, to derive a result;
generating a dialog response based on the derived result; and
causing the output device to output the generated dialog response;
wherein, responsive to detection that the device is in a hands-free context, the computer program code is configured to cause at least one processor to perform at least one of the steps of prompting the user for input, receiving user input, interpreting the received user input, identifying the at least one task and at least one parameter for the task, and generating the dialog response in a manner consistent with limitations associated with the hands-free context.
- View Dependent Claims (21, 22, 23, 24, 25, 26, 27, 28, 29, 30)
31. A system for interpreting user input to perform a task on a computing device, comprising:
an output device, configured to prompt a user for input;
an input device, configured to receive user input; and
at least one processor, communicatively coupled to the output device and to the input device, configured to perform the steps of:
detecting whether or not a hands-free context is active;
interpreting the received user input to derive a representation of user intent;
identifying at least one task and at least one parameter for the task, based at least in part on the derived representation of user intent;
executing the at least one task using the at least one parameter, to derive a result; and
generating a dialog response based on the derived result;
wherein the output device is further configured to output the generated dialog response; and
wherein, responsive to detection that the device is in a hands-free context, at least one of prompting the user for input, receiving user input, interpreting the received user input, identifying the at least one task and at least one parameter for the task, and generating the dialog response is performed in a manner consistent with limitations associated with the hands-free context.
- View Dependent Claims (32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43)
Specification