Filtering document search results using contextual metadata
First Claim
1. A computer-implemented method for filtering document search results, the computer-implemented method comprising:
- receiving contextual data comprising: (i) a facial movement associated with an active document, and (ii) one or more images of a face of a user viewing the active document;
- detecting an emotional response associated with the active document based on the received contextual data, wherein the emotional response is detected based, at least in part, on respective shapes of a mouth, nose, eyebrows, and forehead lines of the user, and on respective positions, in relation to one another, of the mouth, nose, eyebrows, and forehead lines of the user;
- responsive to detecting a change in location of a device displaying the active document, generating a contextual metadata tag, wherein generating the contextual metadata tag includes generating a movement tag indicating movement of the device when the user is accessing the active document over a period of time, and wherein access is determined by an eye movement tracker that analyzes eye movements of the user related to specific text within the active document, captured images of the face of the user while reading the active document, and extracted actions that the user was performing while reading the active document;
- receiving a query comprising a contextual keyword corresponding to the contextual metadata tag;
- filtering search results received in response to the query based on the contextual metadata tag; and
- displaying a list of the filtered search results and a list of filters applied to the search results.
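The tagging portion of the claimed method can be sketched as follows. This is a minimal, hypothetical illustration, not the patent's implementation: the names (`ContextualData`, `detect_emotional_response`, `tag_on_location_change`) and the trivial facial-movement-to-emotion mapping are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class ContextualData:
    facial_movement: str   # e.g. "smile", "frown" (hypothetical labels)
    face_images: list      # captured images of the user's face

@dataclass
class Document:
    doc_id: str
    tags: set = field(default_factory=set)  # contextual metadata tags

def detect_emotional_response(data: ContextualData) -> str:
    # Stand-in for the claimed analysis of the shapes and relative
    # positions of the mouth, nose, eyebrows, and forehead lines;
    # here reduced to a trivial lookup for illustration.
    return {"smile": "positive", "frown": "negative"}.get(
        data.facial_movement, "neutral")

def tag_on_location_change(doc: Document, device_moved: bool,
                           user_is_reading: bool) -> None:
    # Per the claim: responsive to a change in device location while
    # the user is accessing the active document, generate a movement tag.
    if device_moved and user_is_reading:
        doc.tags.add("movement")

doc = Document("d1")
tag_on_location_change(doc, device_moved=True, user_is_reading=True)
doc.tags.add(detect_emotional_response(ContextualData("smile", [])))
print(sorted(doc.tags))  # → ['movement', 'positive']
```

The sketch keeps the two tag sources of the claim distinct: an emotion tag derived from facial data, and a movement tag gated on both device relocation and active reading.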
Abstract
Receiving contextual data including a facial movement associated with an active document. An emotional response associated with the active document is detected based on the received contextual data. A contextual metadata tag is generated based on the detected response to the active document. A query is received comprising a contextual keyword corresponding to the contextual metadata tag. Search results received in response to the query are filtered based on the contextual metadata tag.
28 Citations
20 Claims
1. A computer-implemented method for filtering document search results, as set forth in full above. View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A computer program product for filtering document search results, the computer program product comprising a computer readable storage medium having stored thereon:
- program instructions programmed to receive contextual data comprising: (i) a facial movement associated with an active document, and (ii) one or more images of a face of a user viewing the active document;
- program instructions programmed to detect an emotional response associated with the active document based on the received contextual data, wherein the emotional response is detected based, at least in part, on respective shapes of a mouth, nose, eyebrows, and forehead lines of the user, and on respective positions, in relation to one another, of the mouth, nose, eyebrows, and forehead lines of the user;
- program instructions programmed to, responsive to detecting a change in location of a device displaying the active document, generate a contextual metadata tag, wherein generating the contextual metadata tag includes generating a movement tag indicating movement of the device when the user is accessing the active document over a period of time, and wherein access is determined by an eye movement tracker that analyzes eye movements of the user related to specific text within the active document, captured images of the face of the user while reading the active document, and extracted actions that the user was performing while reading the active document;
- program instructions programmed to receive a query comprising a contextual keyword corresponding to the contextual metadata tag;
- program instructions programmed to filter search results received in response to the query based on the contextual metadata tag; and
- program instructions programmed to display a list of the filtered search results and a list of filters applied to the search results.
View Dependent Claims (9, 10, 11, 12, 13)
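The claims repeatedly condition tagging on whether the user "is accessing" the document, as determined by an eye movement tracker. A hypothetical sketch of such an access check, assuming the tracker yields a time series of gaze-on-text samples (the function name and threshold are illustrative, not from the patent):

```python
def user_is_accessing(gaze_samples, min_on_text=0.5):
    # gaze_samples: booleans over a period of time, True when the
    # tracker places the user's gaze on text within the document.
    # Treat the user as "accessing" the document when at least the
    # given fraction of samples land on the text (assumed threshold).
    if not gaze_samples:
        return False
    return sum(gaze_samples) / len(gaze_samples) >= min_on_text

print(user_is_accessing([True, True, False, True]))  # → True
print(user_is_accessing([False, False, True]))       # → False
```

A real tracker would also fold in the claims' other signals (captured face images and extracted user actions while reading); this sketch isolates only the gaze-fraction idea.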
14. A computer system for filtering document search results, the computer system comprising:
- a processor(s) set; and
- a computer readable storage medium;
wherein:
- the processor set is structured, located, connected and/or programmed to run program instructions stored on the computer readable storage medium; and
- the program instructions include:
- program instructions programmed to receive contextual data comprising: (i) a facial movement associated with an active document, and (ii) one or more images of a face of a user viewing the active document;
- program instructions programmed to detect an emotional response associated with the active document based on the received contextual data, wherein the emotional response is detected based, at least in part, on respective shapes of a mouth, nose, eyebrows, and forehead lines of the user, and on respective positions, in relation to one another, of the mouth, nose, eyebrows, and forehead lines of the user;
- program instructions programmed to, responsive to detecting a change in location of a device displaying the active document, generate a contextual metadata tag, wherein generating the contextual metadata tag includes generating a movement tag indicating movement of the device when the user is accessing the active document over a period of time, and wherein access is determined by an eye movement tracker that analyzes eye movements of the user related to specific text within the active document, captured images of the face of the user while reading the active document, and extracted actions that the user was performing while reading the active document;
- program instructions programmed to receive a query comprising a contextual keyword corresponding to the contextual metadata tag;
- program instructions programmed to filter search results received in response to the query based on the contextual metadata tag; and
- program instructions programmed to display a list of the filtered search results and a list of filters applied to the search results.
View Dependent Claims (15, 16, 17, 18, 19, 20)
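The query-time side of all three independent claims (receive a query with a contextual keyword, filter results by matching contextual metadata tags, and report the applied filters alongside the results) could be sketched as below. The function name `filter_results` and the `tag_index` mapping are assumptions for illustration only.

```python
def filter_results(results, query_keywords, tag_index):
    # results: candidate document ids returned for the query.
    # tag_index: doc_id -> set of contextual metadata tags.
    # A query keyword acts as a filter when it matches some known
    # contextual metadata tag (the claimed keyword/tag correspondence).
    applied = [k for k in query_keywords
               if any(k in tags for tags in tag_index.values())]
    # Keep only results whose tags satisfy every applied filter.
    filtered = [r for r in results
                if all(k in tag_index.get(r, set()) for k in applied)]
    # Return both lists, mirroring the claimed display of filtered
    # results together with the list of filters applied.
    return filtered, applied

tag_index = {"d1": {"movement", "positive"}, "d2": {"negative"}}
print(filter_results(["d1", "d2"], ["movement"], tag_index))
# → (['d1'], ['movement'])
```

When no query keyword corresponds to a known tag, `applied` is empty and the results pass through unfiltered, which matches the claims' framing of tag-based filtering as conditional on the keyword/tag correspondence.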
Specification