Contextual user interface based on environment
First Claim
1. A home assistant device, comprising:
a display screen;
a microphone;
one or more processors; and
memory storing instructions, wherein the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to:
determine that first speech has been spoken in an environment of the home assistant device using the microphone;
determine a first context of the environment of the home assistant device, the first context of the environment including characteristics of the first speech and one or more of a location of a user providing the first speech, a time of the first speech, a user identity corresponding to the user providing the first speech, a skill level with interacting with the home assistant device of the user providing the first speech, or a schedule of the user providing the first speech;
determine a first distance from a source of the first speech to the home assistant device;
display a first graphical user interface (GUI) for the home assistant device on the display screen to provide a response regarding the first speech, the first GUI based on the first context of the environment including the characteristics of the first speech, the first distance, and content of the first speech;
determine that second speech has been spoken in the environment of the home assistant device using the microphone, the first speech and the second speech including the same content;
determine a second context of the environment of the home assistant device, the second context of the environment including characteristics of the second speech and one or more of a location of a user providing the second speech, a time of the second speech, a user identity corresponding to the user providing the second speech, a skill level with interacting with the home assistant device of the user providing the second speech, or a schedule of the user providing the second speech, the first context and the second context being different, wherein the characteristics of the first speech are different than the characteristics of the second speech; and
determine a second distance from a source of the second speech to the home assistant device, the second distance being farther than the first distance;
display a second GUI for the home assistant device on the display screen to provide a response regarding the second speech, the second GUI based on the second context of the environment including the characteristics of the second speech, the second distance, and content of the second speech, the first GUI and the second GUI providing different content, the second GUI including content also included in the first GUI, the content in the second GUI being a different size than the content in the first GUI based on the second distance being farther than the first distance, wherein the content on the second GUI is updated at a different speed than the content on the first GUI based on the first context and the second context being different.
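Claim 1 ties the size of GUI content to the speaker's distance and the content update speed to the context (for example, the user's skill level). A minimal sketch of that selection logic in Python; all names, sizes, and thresholds here are illustrative assumptions, not values from the patent:

```python
from dataclasses import dataclass


@dataclass
class SpeechContext:
    """Hypothetical context record; field names are illustrative only."""
    distance_m: float   # estimated distance from the speaker to the device
    skill_level: str    # "novice" or "expert"


def gui_params(ctx: SpeechContext) -> dict:
    """Pick GUI parameters from the context of the environment."""
    # A farther speaker gets larger content so the display stays legible.
    font_px = 16 if ctx.distance_m < 2.0 else 32
    # A novice user gets slower content updates, allowing more reading time.
    update_hz = 1.0 if ctx.skill_level == "novice" else 4.0
    return {"font_px": font_px, "update_hz": update_hz}


near = gui_params(SpeechContext(distance_m=1.0, skill_level="expert"))
far = gui_params(SpeechContext(distance_m=5.0, skill_level="novice"))
```

With these placeholder thresholds, the "far" interface shows the same content larger and refreshes it more slowly than the "near" one, matching the claim's two distinct GUIs.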
Abstract
A contextual user interface based on environment is described. An assistant device can determine that speech has been spoken and determine the context of an environment of that speech. A user interface can then be generated based on the context of the environment and the content of the speech. Different context can result in different user interfaces being generated.
155 Citations
23 Claims
2. A method for providing a contextual user interface on an assistant device, comprising:
determining, by a processor, that a first speech has been spoken;
determining, by the processor, a first context of an environment corresponding to the first speech, the first context of the environment including how the first speech was spoken;
determining a first distance from a source of the first speech to the assistant device;
providing, by the processor, a first user interface based on the first context of the environment including how the first speech was spoken, the first distance, and content of the first speech;
determining, by the processor, that a second speech has been spoken, the second speech spoken at a different time than the first speech;
determining, by the processor, a second context of the environment corresponding to the second speech, the second context of the environment including how the second speech was spoken, the first context and the second context being different, wherein how the first speech was spoken is different than how the second speech was spoken;
determining a second distance from a source of the second speech to the assistant device, the second distance being farther than the first distance; and
providing, by the processor, a second user interface based on the second context of the environment including how the second speech was spoken, the second distance, and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different, the second user interface including content also included in the first user interface, the content in the second user interface being a different size than the content in the first user interface based on the second distance being farther than the first distance, wherein the content on the second user interface is updated at a different speed than the content on the first user interface based on the first context and the second context being different.
- View Dependent Claims (3, 4, 5, 6, 7, 8)
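The method claims require determining the distance from the speech source to the assistant device, without specifying how. One hedged sketch, assuming free-field 1/r amplitude falloff and a hypothetical calibration constant (a real device would more likely use microphone-array techniques such as time-difference-of-arrival):

```python
def estimate_distance(rms_amplitude: float,
                      ref_amplitude: float = 0.5,
                      ref_distance_m: float = 1.0) -> float:
    """Estimate speaker distance from received speech level.

    Assumes amplitude falls off as 1/r in a free field. ref_amplitude is
    a hypothetical calibration constant: the RMS level observed when a
    typical speaker stands ref_distance_m from the device.
    """
    if rms_amplitude <= 0:
        raise ValueError("amplitude must be positive")
    return ref_distance_m * ref_amplitude / rms_amplitude
```

Under this assumption, halving the received amplitude doubles the estimated distance, which is the quantity the second user interface is scaled against.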
9. An electronic device, comprising:
-
one or more processors; and memory storing instructions, wherein the processor is configured to execute the instructions such that the processor and memory are configured to; determine that a first speech has been spoken; determine a first context of an environment corresponding to the first speech, the first context of the environment including a volume of the first speech; determine a first distance from a source of the first speech to an assistant device; generate a first user interface based on the first context of the environment including the volume of the first speech, the first distance, and content of the first speech; determine that a second speech has been spoken, the second speech spoken at a different time than the first speech; determine a second context of the environment corresponding to the second speech, the second context of the environment including a volume of the second speech, the first context and the second context being different, wherein the volume of the first speech is different than the volume of the second speech; determine a second distance from a source of the second speech to the assistant device, the second distance being farther than the first distance; and generate a second user interface based on the second context of the environment including the volume of the second speech, the second distance, and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different, the second user interface including content also included in the first user interface, the content in the second user interface being a different size than the content in the first user interface based on the second distance being farther than the first distance, wherein the content on the second user interface is updated at a different speed than the content on the first user interface based on the first context and the second context being different. 
- View Dependent Claims (10, 11, 12, 13, 14, 15)
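Claim 9 keys the interface on the volume of the speech. A small illustrative sketch of measuring a block's RMS level and bucketing it; the thresholds are placeholder assumptions, not values from the patent:

```python
import math


def rms_volume(samples) -> float:
    """Root-mean-square level of a block of audio samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))


def classify_volume(rms: float,
                    quiet_threshold: float = 0.1,
                    loud_threshold: float = 0.5) -> str:
    """Bucket an RMS level into a coarse volume category.

    The two thresholds are illustrative placeholders; the claim only
    requires that differing volumes can yield differing user interfaces.
    """
    if rms < quiet_threshold:
        return "quiet"
    if rms < loud_threshold:
        return "normal"
    return "loud"
```

A whispered request and a shouted one would land in different buckets, giving the device a basis for generating two different user interfaces from the same spoken content.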
16. A computer program product, comprising one or more non-transitory computer-readable media having computer program instructions stored therein, the computer program instructions being configured such that, when executed by one or more computing devices, the computer program instructions cause the one or more computing devices to:
determine that a first speech has been spoken;
determine a first context of an environment corresponding to the first speech, the first context of the environment including an indication that the first speech was spoken at a first speed;
determine a first distance from a source of the first speech to an assistant device;
generate a first user interface based on the first context of the environment including the indication that the first speech was spoken at the first speed, the first distance, and content of the first speech;
determine that a second speech has been spoken, the second speech spoken at a different time than the first speech;
determine a second context of the environment corresponding to the second speech, the second context of the environment including an indication that the second speech was spoken at a second speed, the first context and the second context being different, wherein the first speed and the second speed are different;
determine a second distance from a source of the second speech to the assistant device, the second distance being farther than the first distance; and
generate a second user interface based on the second context of the environment including the indication that the second speech was spoken at the second speed, the second distance, and content of the second speech, the content of the first speech and the second speech being similar, the first user interface and the second user interface being different, the second user interface including content also included in the first user interface, the content in the second user interface being a different size than the content in the first user interface based on the second distance being farther than the first distance, wherein the content on the second user interface is updated at a different speed than the content on the first user interface based on the first context and the second context being different.
- View Dependent Claims (17, 18, 19, 20, 21, 22, 23)
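Claim 16 keys the interface on how fast the speech was spoken. An illustrative sketch that estimates words per minute from a transcript and picks a content refresh interval; the threshold and both function names are assumptions for illustration, not from the patent:

```python
def speech_rate_wpm(transcript: str, duration_s: float) -> float:
    """Words-per-minute estimate for a transcribed utterance."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return len(transcript.split()) * 60.0 / duration_s


def update_interval_s(wpm: float, fast_threshold_wpm: float = 160.0) -> float:
    """Pick a UI refresh interval from speech rate.

    A faster talker gets faster-cycling content; the 160 wpm cutoff is
    an illustrative placeholder.
    """
    return 0.5 if wpm >= fast_threshold_wpm else 2.0
```

Here the same request spoken quickly or slowly yields different refresh intervals, which is the "updated at a different speed" limitation in the claim.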
Specification