VPA WITH INTEGRATED OBJECT RECOGNITION AND FACIAL EXPRESSION RECOGNITION
First Claim
1. A method, comprising:
receiving, by an integrated circuit in a computing device, sensory input, wherein the sensory input includes at least two different types of information;
determining semantic information from the sensory input, wherein the semantic information provides an interpretation of the sensory input;
identifying a context-specific framework, wherein the context-specific framework includes a cumulative sequence of one or more previous intents;
determining a current intent, wherein determining the current intent includes using the semantic information and the context-specific framework;
determining a current input state, wherein determining the current input state includes using the semantic information and one or more behavioral models, and wherein the behavioral models include one or more interpretations of previously-provided semantic information; and
determining an action, wherein determining the action includes using the current intent and the current input state.
Abstract
Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information. The virtual personal assistant can further be configured to determine an action using the current intent and the current input state.
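The claimed pipeline can be read as a sequence of determinations: multimodal sensory input is interpreted into semantic information, a context-specific framework accumulates previous intents, a behavioral model maps semantic cues to a current input state, and the action depends on both the current intent and that state. The following is a minimal, hypothetical sketch of that flow; every class, function, and value here is an illustrative assumption, not taken from the patent's specification.

```python
# Hypothetical sketch of the claimed VPA pipeline. All names and data
# structures are illustrative assumptions, not from the patent itself.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensoryInput:
    # "at least two different types of information", e.g. speech and video
    speech_text: str
    facial_expression: str

@dataclass
class ContextFramework:
    # "a cumulative sequence of one or more previous intents"
    previous_intents: List[str] = field(default_factory=list)

def determine_semantics(inp: SensoryInput) -> Dict[str, str]:
    # Interpret each modality of the raw sensory input
    return {"utterance": inp.speech_text.lower(),
            "expression": inp.facial_expression}

def determine_intent(semantics: Dict[str, str], ctx: ContextFramework) -> str:
    # Combine semantic information with the context-specific framework
    intent = "check_balance" if "balance" in semantics["utterance"] else "unknown"
    ctx.previous_intents.append(intent)  # the framework accumulates intents
    return intent

def determine_input_state(semantics: Dict[str, str],
                          behavioral_model: Dict[str, str]) -> str:
    # A behavioral model maps previously interpreted cues to a user state
    return behavioral_model.get(semantics["expression"], "neutral")

def determine_action(intent: str, input_state: str) -> str:
    # The action uses both the current intent and the current input state
    if intent == "check_balance":
        return "speak_slowly" if input_state == "confused" else "report_balance"
    return "ask_clarification"

ctx = ContextFramework()
model = {"frown": "confused", "smile": "satisfied"}
inp = SensoryInput("What is my balance?", "frown")
sem = determine_semantics(inp)
intent = determine_intent(sem, ctx)
state = determine_input_state(sem, model)
action = determine_action(intent, state)  # a confused user gets a slower reply
```

In this toy run, the frowning user asking about a balance yields the intent `check_balance` and the input state `confused`, so the selected action is `speak_slowly` rather than a plain balance report, illustrating how the same intent can map to different actions depending on the input state.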
20 Claims
1. A method, comprising:

receiving, by an integrated circuit in a computing device, sensory input, wherein the sensory input includes at least two different types of information;
determining semantic information from the sensory input, wherein the semantic information provides an interpretation of the sensory input;
identifying a context-specific framework, wherein the context-specific framework includes a cumulative sequence of one or more previous intents;
determining a current intent, wherein determining the current intent includes using the semantic information and the context-specific framework;
determining a current input state, wherein determining the current input state includes using the semantic information and one or more behavioral models, and wherein the behavioral models include one or more interpretations of previously-provided semantic information; and
determining an action, wherein determining the action includes using the current intent and the current input state.

Dependent claims: 2, 3, 4, 5, 6 and 7.
8. A virtual personal assistant device, comprising:

one or more processors; and
a non-transitory computer-readable medium including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations including:
receiving sensory input, wherein the sensory input includes at least two different types of information;
determining semantic information from the sensory input, wherein the semantic information provides an interpretation of the sensory input;
identifying a context-specific framework, wherein the context-specific framework includes a cumulative sequence of one or more previous intents;
determining a current intent, wherein determining the current intent includes using the semantic information and the context-specific framework;
determining a current input state, wherein determining the current input state includes using the semantic information and one or more behavioral models, and wherein the behavioral models include one or more interpretations of previously-provided semantic information; and
determining an action, wherein determining the action includes using the current intent and the current input state.

Dependent claims: 9, 10, 11, 12, 13 and 14.
15. A computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions that, when executed by one or more processors, cause the one or more processors to:

receive, by an integrated circuit in a computing device, sensory input, wherein the sensory input includes at least two different types of information;
determine semantic information from the sensory input, wherein the semantic information provides an interpretation of the sensory input;
identify a context-specific framework, wherein the context-specific framework includes a cumulative sequence of one or more previous intents;
determine a current intent, wherein determining the current intent includes using the semantic information and the context-specific framework;
determine a current input state, wherein determining the current input state includes using the semantic information and one or more behavioral models, and wherein the behavioral models include one or more interpretations of previously-provided semantic information; and
determine an action, wherein determining the action includes using the current intent and the current input state.

Dependent claims: 16, 17, 18, 19 and 20.
Specification