METHOD, APPARATUS, AND SYSTEM FOR MODELING PASSIVE AND ACTIVE USER INTERACTIONS WITH A COMPUTER SYSTEM
First Claim
1. A method for modeling user activity with regard to a plurality of different software applications on a computing device, the method comprising, with the computing device:
receiving gaze-tracking data comprising one or more real-time sensor inputs indicative of a user's gaze in relation to a display of the computing device;
identifying a user interface element displayed at a location on the display corresponding to the gaze-tracking data;
obtaining a semantic description of the user interface element, the semantic description comprising information about the user interface element;
associating the semantic description with the gaze-tracking data; and
using the association of the semantic description with the gaze-tracking data to model user activity at the computing device.
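The claimed steps (receive gaze data, identify the element under the gaze point, attach its semantic description, update a model) can be illustrated with a minimal sketch. All class and function names here are hypothetical and not taken from the patent.

```python
# Minimal sketch of the claim-1 pipeline; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class UIElement:
    name: str
    bounds: tuple                 # (x, y, width, height) on the display
    semantic_description: dict    # e.g. {"type": "button", "label": "Send"}

    def contains(self, x, y):
        bx, by, bw, bh = self.bounds
        return bx <= x < bx + bw and by <= y < by + bh

@dataclass
class ActivityModel:
    associations: list = field(default_factory=list)

    def record(self, gaze_sample, element):
        # Associate the element's semantic description with the gaze data.
        self.associations.append({"gaze": gaze_sample,
                                  "semantics": element.semantic_description})

def process_gaze_sample(gaze_sample, elements, model):
    """Identify the element at the gazed-at location and update the model."""
    x, y = gaze_sample["x"], gaze_sample["y"]
    for element in elements:
        if element.contains(x, y):
            model.record(gaze_sample, element)
            return element
    return None

# Usage: one gaze sample falling on a hypothetical "Send" button.
send_button = UIElement("send", (100, 200, 80, 30),
                        {"type": "button", "label": "Send", "app": "mail"})
model = ActivityModel()
hit = process_gaze_sample({"x": 120, "y": 210, "t": 0.0}, [send_button], model)
```

The association stored in the model carries user-level meaning ("looked at the Send button in mail") rather than raw screen coordinates, which is the core of the claimed modeling step.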
1 Assignment
0 Petitions
Abstract
A method, apparatus, and system for modeling user interactions with a computer system associates semantic descriptions of passive and active user interactions, which are meaningful at a user level, with application events and user interaction data as a user interacts with one or more software applications on a computing device, and uses those associations to build and maintain a user-specific contextual model. In some embodiments, the contextual models of multiple users are leveraged to form one or more collective contextual user models. Such models are useful in many different applications.
38 Citations
27 Claims
1. A method for modeling user activity with regard to a plurality of different software applications on a computing device, the method comprising, with the computing device:
receiving gaze-tracking data comprising one or more real-time sensor inputs indicative of a user's gaze in relation to a display of the computing device;
identifying a user interface element displayed at a location on the display corresponding to the gaze-tracking data;
obtaining a semantic description of the user interface element, the semantic description comprising information about the user interface element;
associating the semantic description with the gaze-tracking data; and
using the association of the semantic description with the gaze-tracking data to model user activity at the computing device.
Dependent claims: 2–19.
20. A computing system to develop a semantic model of user attention to user interface elements of a plurality of software applications, the computing system comprising:
a display;
a sensor subsystem to obtain gaze-tracking data, the gaze-tracking data being indicative of a user's gaze in relation to the display;
a framework embodied in one or more machine-accessible media, the framework configured to, over time: determine locations on the display corresponding to the gaze-tracking data; identify user interface elements displayed by the software applications at each of the locations corresponding to the gaze-tracking data; obtain a semantic description of each of the user interface elements, each of the semantic descriptions comprising information about the corresponding user interface element; and associate the semantic descriptions with the gaze-tracking data; and
a model embodied in one or more machine-accessible media, the model configured to store data relating to associations of the semantic descriptions with the gaze-tracking data.
Dependent claims: 21–22.
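The model component of claim 20 stores associations of semantic descriptions with gaze data accumulated over time. One plausible (purely illustrative, not from the patent) realization is an aggregate of gaze dwell time keyed by semantic description:

```python
# Hypothetical sketch of the claimed model component: it accumulates
# gaze dwell time per semantically described user interface element.
from collections import defaultdict

class AttentionModel:
    def __init__(self):
        self.dwell = defaultdict(float)  # (app, label) -> seconds of gaze

    def associate(self, semantic_description, gaze_duration):
        # Key the association by user-level semantics, not pixel coordinates.
        key = (semantic_description["app"], semantic_description["label"])
        self.dwell[key] += gaze_duration

    def most_attended(self):
        # Query the model: which element has received the most attention?
        return max(self.dwell, key=self.dwell.get) if self.dwell else None

# Usage: gaze samples from two different applications feed one model,
# matching the claim's "plurality of software applications".
m = AttentionModel()
m.associate({"app": "mail", "label": "Inbox"}, 1.5)
m.associate({"app": "browser", "label": "Search"}, 0.4)
m.associate({"app": "mail", "label": "Inbox"}, 2.0)
```

Keying by semantics lets attention to the same element accumulate across sessions even if the element moves on screen.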
23. A system for modeling user attention to user interface elements displayed by a plurality of software applications on a computing device, the system embodied in one or more machine-accessible storage media, the system comprising:
a contextual model comprising data relating to: a plurality of real-time sensor inputs received at a computing device, the real-time sensor inputs being indicative of a user's gaze in relation to a display of the computing device; locations on the display corresponding to the real-time sensor inputs; and user interface elements displayed at the locations corresponding to the real-time sensor inputs; and
a framework configured to: derive gaze-tracking data from the real-time sensor inputs, the gaze-tracking data indicating an aspect of user attention to the user interface elements; determine semantic descriptions of the user interface elements, each of the semantic descriptions comprising information about the corresponding user interface element; associate the semantic descriptions with the gaze-tracking data; and store data relating to the associations of the semantic descriptions with the gaze-tracking data in the contextual model.
Dependent claims: 24–26.
27. A method for modeling user activity with regard to a plurality of different software applications on a computing device, the method comprising, with the computing device:
receiving passive interaction data comprising one or more real-time sensor inputs indicative of a passive user interaction with the computing device, the passive user interaction being an interaction that does not result in an application event;
receiving active interaction data indicative of an active user interaction, the active user interaction being an interaction that results in an application event;
identifying user interface elements displayed at on-screen locations of the display corresponding to the passive and active interaction data;
obtaining semantic descriptions of the user interface elements, the semantic descriptions comprising information about the user interface elements;
associating the semantic descriptions with the corresponding passive and active interaction data; and
using the associations of semantic descriptions with the passive and active interaction data to model user activity at the computing device.
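Claim 27 hinges on the distinction between passive interactions (which raise no application event, such as gaze) and active ones (which do, such as clicks), with both kinds associated with semantic descriptions. A minimal illustrative sketch, with all names hypothetical:

```python
# Hypothetical sketch of claim 27: classify interactions as passive
# (no application event) or active (an application event results),
# then associate each with the semantics of its on-screen target.
def classify_interaction(event):
    """Passive interactions carry no application event; active ones do."""
    return "active" if event.get("application_event") else "passive"

def associate(events, semantics_lookup):
    """Pair each interaction with the semantic description of its target."""
    model = {"passive": [], "active": []}
    for event in events:
        kind = classify_interaction(event)
        model[kind].append({
            "interaction": event,
            "semantics": semantics_lookup(event["target"]),
        })
    return model

# Usage: a gaze fixation (passive) followed by a click (active).
descriptions = {"send": {"type": "button", "label": "Send"},
                "body": {"type": "text_area", "label": "Message body"}}
events = [
    {"target": "body", "application_event": None},     # gaze: passive
    {"target": "send", "application_event": "click"},  # click: active
]
model = associate(events, descriptions.get)
```

Recording both streams in one model lets it capture what the user looked at as well as what the user did, which is the distinction the claim draws.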
Specification