Extensible context-aware natural language interactions for virtual personal assistants
First Claim
1. A computing device for automatically querying a database for contextual natural language processing, the computing device comprising:
a plurality of context source modules;
a plurality of language models, wherein each language model is associated with a context source module of the plurality of context source modules;
a metadata interpretation module to index the plurality of language models to determine a plurality of important words of each of the plurality of language models that are important for the corresponding context source module; and
a request interpretation module to:
determine, for each of the plurality of language models, a relevance measure of a plurality of words of a textual representation of a user request based on the plurality of important words of the corresponding language model;
generate a ranking of the determined relevance measures corresponding to the plurality of language models;
generate, based on the ranking of the determined relevance measures, a semantic representation of the textual representation of the user request;
generate a database query as a function of the semantic representation using a database query mapping of a first context source module of the plurality of context source modules, the first context source module associated with a word of the textual representation; and
apply the database query generated as a function of the semantic representation.
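As an illustration only, the indexing, scoring, and ranking steps recited in claim 1 can be sketched as follows. The scoring function and all identifiers here are assumptions for readability, not the patented implementation.

```python
# Hypothetical sketch of the claimed pipeline: index each language model's
# important words, score a user request against each model, and rank the
# models by relevance. All names and the scoring function are illustrative
# assumptions, not the patent's method.

def index_important_words(language_models):
    """Metadata interpretation: reduce each model to its set of important words."""
    return {name: set(words) for name, words in language_models.items()}

def relevance_measure(request_words, important_words):
    """Relevance: fraction of request words that the model marks important."""
    return sum(w in important_words for w in request_words) / len(request_words)

def rank_language_models(request, index):
    """Score the request against every model and rank models by relevance."""
    words = request.lower().split()
    scores = {name: relevance_measure(words, iw) for name, iw in index.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

models = {
    "calendar": ["meeting", "schedule", "tomorrow", "appointment"],
    "weather": ["rain", "forecast", "temperature", "tomorrow"],
}
ranking = rank_language_models("schedule a meeting tomorrow",
                               index_important_words(models))
best_model, best_score = ranking[0]
```

Under this toy scoring, the "calendar" model outranks "weather" for the example request, and the top-ranked model would supply the database query mapping in the later claim steps.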
1 Assignment
0 Petitions
Abstract
Technologies for extensible, context-aware natural language interactions include a computing device having a number of context source modules. Context source modules may be developed or installed after deployment of the computing device to a user. Each context source module includes a context capture module, a language model, one or more database query mappings, and may include one or more user interface element mappings. The context capture module interprets, generates, and stores context data. A virtual personal assistant (VPA) of the computing device indexes the language models and generates a semantic representation of a user request that associates each word of the request to a language model. The VPA translates the user request into a database query, and may generate a user interface element for the request. The VPA may execute locally on the computing device or remotely on a cloud server. Other embodiments are described and claimed.
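The post-deployment extensibility the abstract describes can be pictured as a runtime registry of context source modules. This is a minimal sketch under assumed names and structures, not the patented design.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Minimal sketch of an extensible context source module registry as described
# in the abstract: each module bundles a language model (here, a word list)
# and a database query mapping, and can be installed after deployment.
# All names and structures are illustrative assumptions.

@dataclass
class ContextSourceModule:
    name: str
    language_model: List[str]             # important words for this context
    query_mapping: Callable[[dict], str]  # semantic representation -> query

registry: Dict[str, ContextSourceModule] = {}

def install_module(module: ContextSourceModule) -> None:
    """Register a context source module at runtime (post-deployment install)."""
    registry[module.name] = module

install_module(ContextSourceModule(
    name="calendar",
    language_model=["meeting", "schedule", "tomorrow"],
    query_mapping=lambda sem: f"SELECT * FROM events WHERE topic = '{sem['topic']}'",
))
```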
35 Citations
20 Claims
1. A computing device for automatically querying a database for contextual natural language processing, the computing device comprising:
a plurality of context source modules;
a plurality of language models, wherein each language model is associated with a context source module of the plurality of context source modules;
a metadata interpretation module to index the plurality of language models to determine a plurality of important words of each of the plurality of language models that are important for the corresponding context source module; and
a request interpretation module to:
determine, for each of the plurality of language models, a relevance measure of a plurality of words of a textual representation of a user request based on the plurality of important words of the corresponding language model;
generate a ranking of the determined relevance measures corresponding to the plurality of language models;
generate, based on the ranking of the determined relevance measures, a semantic representation of the textual representation of the user request;
generate a database query as a function of the semantic representation using a database query mapping of a first context source module of the plurality of context source modules, the first context source module associated with a word of the textual representation; and
apply the database query generated as a function of the semantic representation.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A virtual personal assistant (VPA) server for automatically querying a database for contextual natural language processing, the VPA server comprising:
a speech recognition engine to (i) receive, from a computing device, audio input data representing a request spoken by a user of the computing device and (ii) produce a textual representation of the user request based on the audio input data, the textual representation including a plurality of words;
a metadata interpretation module to:
receive a plurality of language models and associated database query mappings from the computing device; and
index the plurality of language models to determine a plurality of important words of each of the plurality of language models; and
a request interpretation module to:
determine, for each of the plurality of language models, a relevance measure of a plurality of words of the textual representation of the user request based on the plurality of important words of the corresponding language model;
generate a ranking of the determined relevance measures corresponding to the plurality of language models;
generate, based on the ranking of the determined relevance measures, a semantic representation of the textual representation;
generate a database query as a function of the semantic representation using a database query mapping associated with a first language model of the plurality of language models, the first language model associated with a word of the textual representation;
apply the database query to a context database to generate query results; and
transmit the query results from the VPA server to the computing device.
View Dependent Claims (10, 11, 12, 13)
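One way to picture claim 9's final steps, the semantic representation and the database query mapping, is sketched below. The slot-tagging scheme, the SQL shape, and all identifiers are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical sketch: tag each request word against the top-ranked model's
# important words to form a semantic representation, then translate the tagged
# words into a parameterized context-database query. The slot scheme and
# query shape are illustrative assumptions.

def semantic_representation(words, important_words):
    """Associate each word with 'slot' if important to the model, else 'filler'."""
    return [(w, "slot" if w in important_words else "filler") for w in words]

def map_to_query(semantics, table):
    """Database query mapping: build a parameterized query from slot words."""
    params = [w for w, kind in semantics if kind == "slot"]
    where = " OR ".join("keyword = ?" for _ in params)
    return f"SELECT * FROM {table} WHERE {where}", params

sem = semantic_representation(["forecast", "for", "tomorrow"],
                              {"forecast", "tomorrow", "rain"})
query, params = map_to_query(sem, "weather_context")
```

Parameterized placeholders keep the user's words out of the query text itself; the query results would then be transmitted back to the computing device as in the final claim step.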
14. One or more non-transitory computer-readable storage media for automatically querying a database for contextual natural language processing comprising a plurality of instructions that in response to being executed cause a computing device to:
index a plurality of language models to determine a plurality of important words of each of the plurality of language models that are important for the corresponding context source module, wherein each language model is associated with a context source module of a plurality of context source modules;
determine, for each of the plurality of language models, a relevance measure of a plurality of words of a textual representation of a user request based on the plurality of important words of the corresponding language model;
generate a ranking of the determined relevance measures corresponding to the plurality of language models;
generate a semantic representation of the textual representation of the user request;
generate a database query as a function of the semantic representation using a database query mapping of a first context source module of the plurality of context source modules, the first context source module associated with a word of the textual representation; and
apply the database query generated as a function of the semantic representation.
View Dependent Claims (15, 16, 17)
18. One or more non-transitory computer-readable storage media for automatically querying a database for contextual natural language processing comprising a plurality of instructions that in response to being executed cause a virtual personal assistant (VPA) server to:
receive, from a computing device, audio input data representing a request spoken by a user of the computing device;
produce a textual representation of the user request based on the audio input data, the textual representation including a plurality of words;
receive a plurality of language models and associated database query mappings from the computing device;
index the plurality of language models to determine a plurality of important words of each of the plurality of language models;
determine, for each of the plurality of language models, a relevance measure of a plurality of words of the textual representation of the user request based on the plurality of important words of the corresponding language model;
generate a ranking of the determined relevance measures corresponding to the plurality of language models;
generate, based on the ranking of the determined relevance measures, a semantic representation of the textual representation;
generate a database query as a function of the semantic representation using a database query mapping associated with a first language model of the plurality of language models, the first language model associated with a word of the textual representation;
apply the database query to a context database to generate query results; and
transmit the query results from the VPA server to the computing device.
View Dependent Claims (19, 20)
Specification