Method, apparatus, and system for facilitating cross-application searching and retrieval of content using a contextual user model
First Claim
1. A content discovery module to retrieve content in an automated fashion at a computing device, comprising executable instructions embodied in one or more non-transitory machine-readable media, configured to cause a computing system comprising one or more computing devices to:
- access a contextual user model, the contextual user model comprising:
gaze-tracking data comprising one or more real-time sensor inputs indicative of user gaze in relation to on-screen locations of a display of the computing device; and
semantic descriptions of user interface elements displayed at the on-screen locations corresponding to the gaze-tracking data, the semantic descriptions comprising information about the user interface elements;
generate an inference relating to a user-specific current interaction context based on the contextual user model;
formulate a search query including one or more aspects of the user-specific current interaction context;
execute the search query across one or more software applications; and
present one or more results of the search query at the computing device.
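The claimed flow pairs gaze-tracking data with semantic descriptions of the on-screen elements, infers the current interaction context, and turns that inference into a query. A minimal sketch of that pipeline follows; all class and function names (`GazeSample`, `UIElement`, `ContextualUserModel`, `infer_context`, `formulate_query`) are hypothetical illustrations, not from the patent, and the "most recently attended element" inference is a deliberately trivial stand-in for whatever inference engine an implementation would use.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    # One real-time sensor input: screen coordinates the user looked at.
    x: int
    y: int
    timestamp: float

@dataclass
class UIElement:
    # A displayed UI element: its on-screen bounds plus a semantic description.
    left: int
    top: int
    right: int
    bottom: int
    description: str  # information about the element, e.g. its label or content

    def contains(self, x: int, y: int) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom

class ContextualUserModel:
    # Pairs gaze-tracking data with semantic descriptions of the elements
    # displayed at the gazed-at on-screen locations.
    def __init__(self, elements):
        self.elements = elements
        self.samples = []

    def record_gaze(self, sample: GazeSample) -> None:
        self.samples.append(sample)

    def attended_descriptions(self) -> list:
        # Descriptions of elements whose bounds contain a gaze sample,
        # in first-attended order, without duplicates.
        seen = []
        for s in self.samples:
            for e in self.elements:
                if e.contains(s.x, s.y) and e.description not in seen:
                    seen.append(e.description)
        return seen

def infer_context(model: ContextualUserModel):
    # Trivial stand-in inference: the most recently attended description
    # represents the user-specific current interaction context.
    descs = model.attended_descriptions()
    return descs[-1] if descs else None

def formulate_query(context: str) -> dict:
    # A search query carrying aspects of the current interaction context.
    return {"terms": context, "scope": "cross-application"}

elements = [
    UIElement(0, 0, 200, 40, "email subject: Q3 budget review"),
    UIElement(0, 50, 200, 90, "attachment: budget.xlsx"),
]
model = ContextualUserModel(elements)
model.record_gaze(GazeSample(100, 20, 0.0))   # gaze lands on the subject line
model.record_gaze(GazeSample(100, 70, 1.5))   # then on the attachment
query = formulate_query(infer_context(model))
print(query["terms"])  # attachment: budget.xlsx
```

The sketch keeps the two halves of the claimed model explicit: the gaze samples are the raw sensor inputs, and the hit-testing against element bounds is what links them to semantic descriptions before any inference is drawn.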
Abstract
A method, apparatus, and system for facilitating cross-application searching and retrieval of computer-stored content using a contextual user model includes using passive and/or active interaction data to formulate a user-specific search query. In some embodiments, inferences relating to the user's current interaction context may be used to automatically retrieve relevant information for the user.
32 Claims
1. A content discovery module to retrieve content in an automated fashion at a computing device, comprising executable instructions embodied in one or more non-transitory machine-readable media, configured to cause a computing system comprising one or more computing devices to:
access a contextual user model, the contextual user model comprising:
gaze-tracking data comprising one or more real-time sensor inputs indicative of user gaze in relation to on-screen locations of a display of the computing device; and
semantic descriptions of user interface elements displayed at the on-screen locations corresponding to the gaze-tracking data, the semantic descriptions comprising information about the user interface elements;
generate an inference relating to a user-specific current interaction context based on the contextual user model;
formulate a search query including one or more aspects of the user-specific current interaction context;
execute the search query across one or more software applications; and
present one or more results of the search query at the computing device.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16.
17. A method for retrieving content in an automated fashion at a computing device, comprising:
maintaining a contextual user model, the contextual user model comprising:
gaze-tracking data comprising one or more real-time sensor inputs indicative of user gaze in relation to on-screen locations of a display of the computing device; and
semantic descriptions of user interface elements displayed at the on-screen locations corresponding to the gaze-tracking data, the semantic descriptions comprising information about the user interface elements;
generating an inference relating to a user-specific current interaction context based on the contextual user model;
formulating a search query including one or more aspects of the user-specific current interaction context;
executing the search query across one or more software applications; and
presenting one or more results of the search query at the computing device.
Dependent claims: 18.
19. A cognitive contextual search module to retrieve content in response to user input at a computing device, comprising executable instructions embodied in one or more non-transitory machine-readable media, configured to cause a computing system comprising one or more computing devices to:
receive a search request comprising a data characteristic and a user-specific interaction characteristic;
access a contextual user model, the contextual user model comprising:
gaze-tracking data comprising one or more real-time sensor inputs indicative of a person's gaze in relation to on-screen locations of a display of the computing device; and
semantic descriptions of user interface elements displayed at the on-screen locations corresponding to the gaze-tracking data, the semantic descriptions comprising information about the user interface elements;
perform a search based on the search request and based on the contextual user model to obtain one or more search results; and
present the search results at the computing device.
Dependent claims: 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30.
31. A method for retrieving content in response to user input at a computing device, comprising:
receiving a search request comprising a data characteristic and a user-specific interaction characteristic;
maintaining a contextual user model, the contextual user model comprising:
gaze-tracking data comprising one or more real-time sensor inputs indicative of a person's gaze in relation to on-screen locations of a display of the computing device; and
semantic descriptions of user interface elements displayed at the on-screen locations corresponding to the gaze-tracking data, the semantic descriptions comprising information about the user interface elements;
interpreting the search request based on the contextual user model;
developing a search query based on the interpreted search request;
executing the search query across one or more software applications to obtain one or more search results; and
presenting the search results at the computing device.
Dependent claims: 32.
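Claims 19 and 31 add a second ingredient to the query flow: the incoming search request itself carries both a data characteristic (what kind of content) and a user-specific interaction characteristic (how the user previously engaged with it), and the contextual user model is what resolves the latter. The sketch below illustrates that interpretation step under stated assumptions: the `history` records, `interpret_request`, and `execute_across_apps` are hypothetical names, and the per-application "indexes" are simple lists standing in for real application search APIs.

```python
# A toy interaction history standing in for the contextual user model:
# each entry records a content item, its kind, and how the user engaged with it.
history = [
    {"id": "doc-1", "kind": "spreadsheet", "interactions": {"viewed"}},
    {"id": "doc-2", "kind": "spreadsheet", "interactions": {"edited"}},
    {"id": "doc-3", "kind": "email", "interactions": {"viewed"}},
]

def interpret_request(request: dict, history: list) -> dict:
    # Resolve the user-specific interaction characteristic (e.g. "viewed")
    # against the model to narrow the data characteristic (e.g. "spreadsheet")
    # down to concrete candidate items.
    candidates = [
        entry["id"] for entry in history
        if entry["kind"] == request["data_characteristic"]
        and request["interaction_characteristic"] in entry["interactions"]
    ]
    return {"terms": request["data_characteristic"], "restrict_to": candidates}

def execute_across_apps(query: dict, app_indexes: dict) -> list:
    # Run the developed query over each application's index and merge results,
    # tagging each hit with the application it came from.
    results = []
    for app, ids in app_indexes.items():
        results += [(app, i) for i in ids if i in query["restrict_to"]]
    return results

# "The spreadsheet I was looking at" -> data characteristic + interaction characteristic.
request = {"data_characteristic": "spreadsheet", "interaction_characteristic": "viewed"}
query = interpret_request(request, history)
hits = execute_across_apps(query, {"files": ["doc-1", "doc-2"], "mail": ["doc-3"]})
print(hits)  # [('files', 'doc-1')]
```

Only `doc-1` matches both characteristics: `doc-2` is a spreadsheet but was edited rather than viewed, which is exactly the disambiguation the interaction characteristic is claimed to provide.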
Specification