METHOD OF INFERRING NAVIGATIONAL INTENT IN GESTURAL INPUT SYSTEMS
Abstract
In a processing system having a touch screen display, a method of inferring navigational intent by a user in a gestural input system of the processing system is disclosed. A graphical user interface may receive current gestural input data for an application of the processing system from the touch screen display. The graphical user interface may generate an output action based at least in part on an analysis of one or more of the current gestural input data, past gestural input data for the application, and current and past context information of usage of the processing system. The graphical user interface may cause performance of the output action.
16 Claims
1. In a processing system having a touch screen display, a method of inferring navigational intent by a user in a gestural input system of the processing system comprising:

receiving current gestural input data for an application of the processing system from the touch screen display;

generating an output action based at least in part on an analysis of one or more of the current gestural input data, past gestural input data for the application, and current and past context information of usage of the processing system; and

performing the output action.

(Dependent claims: 2)
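The method of claim 1 can be sketched in a few lines: take the current gesture, score the actions it has historically produced for this application, weight entries from a matching usage context more heavily, and emit the best-scoring action. All names, the scoring rule, and the 2x context weight below are illustrative assumptions, not claim language.

```python
def infer_output_action(current, past, context):
    """Infer an output action from the current gesture, past gestural
    input data for the application, and context information.

    A minimal sketch: this is not the claimed analysis itself, only
    one plausible instance of it."""
    scores = {}
    for g in past:
        if g["gesture"] != current["gesture"]:
            continue
        # A matching usage context counts double (illustrative weighting).
        w = 2 if g.get("context") == context else 1
        scores[g["action"]] = scores.get(g["action"], 0) + w
    if scores:
        # Pick the action the user most often intended for this gesture.
        return max(scores, key=scores.get)
    # No history: fall back to a literal reading of the gesture.
    return current.get("default_action", "scroll")

past = [
    {"gesture": "swipe_left", "action": "next_page", "context": "reading"},
    {"gesture": "swipe_left", "action": "next_page", "context": "reading"},
    {"gesture": "swipe_left", "action": "back", "context": "browsing"},
]
action = infer_output_action({"gesture": "swipe_left"}, past, context="reading")
```

Here the user's "reading"-context history dominates, so the swipe is resolved to `next_page` even though `back` also appears in the history.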
3. A machine-readable medium comprising one or more instructions that, when executed on a processor of a processing system including a touch screen display, cause the processor to perform one or more operations to receive current gestural input data for an application of the processing system from the touch screen display, and to generate an output action based at least in part on an analysis of one or more of the current gestural input data, past gestural input data for the application, and current and past context information of usage of the processing system.
4. The machine-readable medium of claim 3, wherein the current and past context information comprises at least one of current time of day, current time zone, geographic location of the processing system, other applications active on the processing system, and current status of the user in a calendar application.
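The context information enumerated in claim 4 maps naturally onto a small record type. The container below is a minimal sketch; the field names and types are assumptions, not claim language.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UsageContext:
    """Illustrative container for the context information of claim 4:
    time of day, time zone, geographic location, other active
    applications, and the user's calendar status."""
    time_of_day: str                    # e.g. "09:30"
    time_zone: str                      # e.g. "UTC-8"
    location: Optional[str] = None      # geographic location of the system
    active_applications: List[str] = field(default_factory=list)
    calendar_status: str = "free"       # user's status in a calendar app

ctx = UsageContext("09:30", "UTC-8", "office", ["mail", "browser"], "in_meeting")
```

A predictor could, for example, bias toward terse navigation actions when `calendar_status` is `"in_meeting"`.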
5. A processing system comprising:

a touch screen display; and

a graphical user interface to infer navigational intent by a user when providing gestural input data to the touch screen display, the graphical user interface adapted to receive current gestural input data for an application of the processing system from the touch screen display, and to generate an output action based at least in part on an analysis of one or more of the current gestural input data, past gestural input data for the application, and current and past context information of usage of the processing system.

(Dependent claims: 6)
7. In a processing system having a touch screen display, a method of inferring navigational intent by a user in a gestural input system of the processing system comprising:

receiving current gestural input data for an application of the processing system from the touch screen display;

sending the current gestural input data to at least one aggregator component;

at least one of creating and updating an application specific usage model by the at least one aggregator component based at least in part on the current gestural input data, past gestural input data, and the application;

at least one of creating and updating a context usage model based at least in part on a current context of the processing system;

predicting modifications to the current gestural input data based at least in part on one or more of the current gestural input data, the current context, the application specific usage model, and the context usage model; and

modifying the current gestural input data based at least in part on the predicted modifications.

(Dependent claims: 8, 9)
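The aggregator-and-predict steps of claim 7 can be sketched as a per-application usage model that accumulates observations and then predicts a modification to the raw input. The running-average model and the halfway-pull correction below are assumptions chosen for brevity; the claim does not prescribe any particular model.

```python
class Aggregator:
    """Maintains an application specific usage model (claim 7) as a
    per-application history of observed scroll distances. A minimal
    sketch, not the claimed implementation."""

    def __init__(self):
        self.models = {}  # app name -> list of observed scroll distances

    def update(self, app, gesture):
        # Create or update the application specific usage model.
        self.models.setdefault(app, []).append(gesture["distance"])

    def predict_modification(self, app, gesture):
        """Predict a corrected distance from this app's history."""
        history = self.models.get(app)
        if not history:
            return gesture["distance"]
        # Pull the raw distance halfway toward the historical mean.
        mean = sum(history) / len(history)
        return (gesture["distance"] + mean) / 2

agg = Aggregator()
for d in (100, 120, 140):
    agg.update("reader", {"distance": d})

raw = {"distance": 40}  # an atypically short swipe
modified = dict(raw, distance=agg.predict_modification("reader", raw))
```

With a history averaging 120, the atypically short 40-unit swipe is corrected to 80, i.e. the input is modified toward what the usage model suggests the user intended.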
10. A processing system comprising:

a touch screen display;

at least one reporting component to receive current gestural input data by a user from the touch screen display for use by an application;

at least one aggregator component to receive current gestural input data from the at least one reporting component, to analyze the current gestural input data in relation to past gestural input data, and to at least one of create and update an application specific usage model;

a context trainer component to at least one of create and update a context usage model based at least in part on a current context of the processing system;

an application specific predictor component to predict the user's current navigational intent for gestural input based at least in part on the current gestural input data and the application specific usage model;

a context predictor component to predict the user's current navigational intent for gestural input based at least in part on the current gestural input data and the context usage model; and

a modifying component to modify the current gestural input data based at least in part on the predicted values from at least one of the application specific predictor component and the context predictor component.

(Dependent claims: 11, 12, 13, 14, 15, 16)
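The modifying component of claim 10 consumes predicted values from at least one of the two predictors. A minimal sketch of that combination step follows; averaging the available predictions is an assumption made for illustration, and the field names are hypothetical.

```python
def modify_gesture(gesture, app_prediction, context_prediction):
    """Combine predicted values from the application specific predictor
    and the context predictor (claim 10's modifying component).

    Either prediction may be None if that predictor produced nothing;
    with no predictions, the gesture passes through unmodified."""
    predictions = [p for p in (app_prediction, context_prediction)
                   if p is not None]
    if not predictions:
        return gesture
    target = sum(predictions) / len(predictions)
    return dict(gesture, distance=target)

g = modify_gesture({"gesture": "swipe", "distance": 50},
                   app_prediction=90, context_prediction=70)
```

Averaging treats the two predictors as equally reliable; a real system might instead weight them by per-model confidence.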
Specification