Location-Based Conversational Understanding
First Claim
1. A method for providing location-based conversational understanding, the method comprising:
receiving a query from a user;
generating an environmental context associated with the query;
interpreting the query according to the environmental context;
executing the interpreted query; and
providing at least one result of the query to the user.
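The five claimed steps can be read as a simple pipeline. Below is a minimal sketch of that pipeline in Python; the helper logic (keyword-based context generation and location-biased interpretation) is hypothetical, illustrating the claim's structure rather than any implementation disclosed in the specification.

```python
# Hypothetical sketch of the claim 1 pipeline:
# receive -> generate context -> interpret -> execute -> provide result.

def generate_environmental_context(query: str, location: str) -> dict:
    """Generate an environmental context associated with the query (stand-in)."""
    return {"location": location, "noisy": location in {"street", "stadium"}}

def interpret_query(query: str, context: dict) -> str:
    """Interpret the query according to the environmental context (stand-in)."""
    # Hypothetical disambiguation: bias the query toward the user's location.
    return f"{query} near {context['location']}"

def execute_query(interpreted: str) -> list:
    """Execute the interpreted query against a stand-in result source."""
    return [f"result for: {interpreted}"]

def handle_query(query: str, location: str) -> list:
    """Run the full claimed sequence and return results for the user."""
    context = generate_environmental_context(query, location)
    interpreted = interpret_query(query, context)
    return execute_query(interpreted)

print(handle_query("coffee shops", "airport"))
```

The point of the sketch is the ordering of steps: the context is produced before interpretation, so interpretation (and later execution) can be conditioned on it.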
Abstract
Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user.
20 Claims
1. A method for providing location-based conversational understanding, the method comprising:
receiving a query from a user;
generating an environmental context associated with the query;
interpreting the query according to the environmental context;
executing the interpreted query; and
providing at least one result of the query to the user.
(Dependent claims 2–11 not shown.)
12. A computer-readable medium which stores a set of instructions which when executed performs a method for providing location-based conversational understanding, the method executed by the set of instructions comprising:
receiving a speech-based query from a user at a location;
loading an environmental context associated with the location;
converting the speech-based query to text according to the environmental context;
executing the converted query according to the environmental context; and
providing at least one result associated with the executed query to the user.
(Dependent claims 13–19 not shown.)
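Claim 12 differs from claim 1 in that the context is loaded per location and applied during speech-to-text conversion. The sketch below models this with a hypothetical per-location context cache and a stand-in converter that, given ambiguous word candidates, prefers words matching the context's subjects; none of these helpers come from the specification.

```python
# Hypothetical sketch of claim 12: load a location's environmental context,
# then bias speech-to-text conversion toward that context.

# Stand-in cache: location -> environmental context.
CONTEXT_CACHE = {
    "train station": {"acoustic_interference": "crowd noise",
                      "subjects": ["schedules", "tickets"]},
}

def load_environmental_context(location: str) -> dict:
    """Load the environmental context associated with the location."""
    return CONTEXT_CACHE.get(location, {"acoustic_interference": None, "subjects": []})

def convert_speech_to_text(speech: list, context: dict) -> str:
    """Stand-in conversion: each element of `speech` is a list of candidate
    transcriptions for one word; prefer candidates named in the context."""
    words = []
    for candidates in speech:
        preferred = [w for w in candidates if w in context["subjects"]]
        words.append(preferred[0] if preferred else candidates[0])
    return " ".join(words)

def handle_speech_query(speech: list, location: str) -> list:
    """Run the claimed sequence: load context, convert, execute, return results."""
    context = load_environmental_context(location)
    text = convert_speech_to_text(speech, context)
    return [f"result for: {text} ({location})"]

# The ambiguous word ("stickers" vs. "tickets") resolves per the station context.
print(handle_speech_query([["show"], ["stickers", "tickets"]], "train station"))
```

The toy candidate-preference rule stands in for reweighting a recognizer's hypotheses with a location-specific semantic model.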
20. A system for providing location-based conversational understanding, the system comprising:
a memory storage; and
a processing unit coupled to the memory storage, wherein the processing unit is operative to:
receive a speech-based query from a user at a location,
determine whether an environmental context associated with the location exists in the memory storage,
in response to determining that the environmental context does not exist:
identify at least one acoustic interference in the speech-based query;
identify at least one subject associated with the speech-based query; and
create a new environmental context associated with the location for storing in the memory storage, wherein the at least one acoustic interference is associated with an acoustic model and wherein the at least one identified subject is associated with a semantic model,
in response to determining that the environmental context does exist, load the environmental context,
convert the speech-based query to a text-based query according to the environmental context, wherein being operative to convert the speech-based query to a text-based query according to the environmental context comprises being operative to adapt the query according to at least one acoustic interference associated with the environmental context,
execute the text-based query according to the environmental context, wherein being operative to execute the text-based query according to the environmental context comprises being operative to execute the query according to the semantic model associated with the environmental context, and
provide at least one result of the executed text-based query to the user.
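The system claim adds a create-or-reuse branch: on first contact with a location the system builds a context (recording acoustic interference for the acoustic model and a subject for the semantic model) and stores it; thereafter it loads and applies the stored context. A hypothetical sketch of that control flow, with stand-in detectors in place of real acoustic/semantic models:

```python
# Hypothetical sketch of claim 20's control flow: create an environmental
# context for a location on first use, reuse it afterward.

memory_storage = {}  # stand-in memory storage: location -> environmental context

def identify_acoustic_interference(speech: str) -> str:
    """Stand-in detector: treats a '[noise]' marker as interference."""
    return "background noise" if "[noise]" in speech else "none"

def identify_subject(speech: str) -> str:
    """Stand-in detector: takes the first word as the query's subject."""
    return speech.replace("[noise]", "").strip().split()[0]

def process(speech: str, location: str) -> str:
    """Determine whether a context exists, create it if not, then convert
    (adapting for interference) and execute per the semantic model."""
    context = memory_storage.get(location)
    if context is None:
        # Create a new environmental context associated with the location.
        context = {
            "acoustic_model": identify_acoustic_interference(speech),
            "semantic_model": identify_subject(speech),
        }
        memory_storage[location] = context
    # Adapt the query per the acoustic model (strip the interference marker),
    # then "execute" it per the stored semantic model.
    text = speech.replace("[noise]", "").strip()
    return f"{text} (topic: {context['semantic_model']})"

print(process("[noise] flights to Denver", "airport"))  # creates the context
print(process("departure times", "airport"))            # reuses it
```

The second call demonstrates the claimed reuse: the earlier query's subject persists in the stored context and biases execution of the later query.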
Specification