CROSS-PLATFORM INTERACTIVITY ARCHITECTURE
Abstract
Maintaining an application for deployment across platforms requires significant engineering effort and an expanded code base, especially when dealing with different input types from the host platforms. To reduce computational expense and improve standardization of the user interface, systems and methods are provided to recognize gestures in a platform-agnostic code architecture. The architecture may be deployed on multiple platforms with different input and output methodologies to provide a consistent user experience, with a smaller code base to install on those platforms and to maintain. Individual platforms may tailor the size of the architecture to meet their deployment needs, and the architecture may be updated independently of the client logic of various applications.
20 Claims
1. A method for handling user commands via an extensible environment when manipulating visualized data in a user interface, comprising:
receiving raw pointer input from a client;
determining an object selected by the raw pointer input in the user interface based on a position in the user interface of the raw pointer input and a position of the object;
recognizing a gesture based on the raw pointer input, a pointer status, and locations of the raw pointer input relative to the object;
determining an interactivity associated with the object and the gesture; and
transmitting the interactivity to the client to update the user interface.
Dependent claims: 2-11.
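The method of claim 1 can be illustrated with a minimal sketch: hit-test the raw pointer input against object bounds, classify a gesture from the pointer samples, and look up the interactivity associated with the (object, gesture) pair. All names here (PointerSample, hitTest, recognizeGesture, the interactivity table) are illustrative assumptions, not the patent's actual implementation.

```typescript
// Raw pointer input: position plus pointer status, as recited in the claim.
interface PointerSample {
  x: number;
  y: number;
  status: "down" | "move" | "up";
}

interface UiObject {
  id: string;
  x: number; y: number; width: number; height: number;
}

// Determine the object selected by the pointer, based on the pointer's
// position and the object's position (a simple bounding-box hit test).
function hitTest(sample: PointerSample, objects: UiObject[]): UiObject | null {
  for (const obj of objects) {
    if (sample.x >= obj.x && sample.x < obj.x + obj.width &&
        sample.y >= obj.y && sample.y < obj.y + obj.height) {
      return obj;
    }
  }
  return null;
}

// Recognize a gesture from the sample sequence (down...up) and how far
// the pointer moved; a small movement threshold separates tap from drag.
function recognizeGesture(samples: PointerSample[]): "tap" | "drag" | "none" {
  if (samples.length < 2) return "none";
  const first = samples[0];
  const last = samples[samples.length - 1];
  if (first.status !== "down" || last.status !== "up") return "none";
  const moved = Math.hypot(last.x - first.x, last.y - first.y);
  return moved < 4 ? "tap" : "drag";
}

// Interactivities keyed by (object id, gesture); illustrative entries only.
const interactivities: Record<string, string> = {
  "bar:tap": "select-datapoint",
  "bar:drag": "resize-datapoint",
};

// End-to-end: hit test, recognize, determine the interactivity, and return
// it (in the claim, it would be transmitted to the client to update the UI).
function handlePointerInput(samples: PointerSample[], objects: UiObject[]): string | null {
  const target = hitTest(samples[0], objects);
  if (!target) return null;
  const gesture = recognizeGesture(samples);
  if (gesture === "none") return null;
  return interactivities[`${target.id}:${gesture}`] ?? null;
}
```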
12. A system having an architecture for handling user commands via an extensible environment when manipulating visualized data in a user interface, the architecture comprising:
a gesture recognizer; and
an interactivity library;
wherein the gesture recognizer is operable to receive raw pointer input and an object identity from the user interface, wherein the raw pointer input indicates a pointer status and a location in the user interface;
wherein the gesture recognizer is further operable to select an interactivity from the interactivity library based on the raw pointer input and the object identity; and
wherein the interactivity library is operable to transmit the interactivity to update the user interface.
Dependent claims: 13-15.
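The two components recited in claim 12 can be sketched as separate classes: a gesture recognizer that receives raw pointer input together with an object identity, and an interactivity library from which it selects an interactivity that is then transmitted to update the user interface. The class and method names below are assumptions for illustration.

```typescript
interface RawPointerInput {
  status: "down" | "up"; // pointer status
  x: number;             // location in the user interface
  y: number;
}

// Interactivity library: holds (object, gesture) -> interactivity mappings
// and transmits a selected interactivity to update the user interface.
class InteractivityLibrary {
  private entries = new Map<string, string>();
  register(objectId: string, gesture: string, interactivity: string): void {
    this.entries.set(`${objectId}:${gesture}`, interactivity);
  }
  lookup(objectId: string, gesture: string): string | undefined {
    return this.entries.get(`${objectId}:${gesture}`);
  }
  transmit(interactivity: string, updateUi: (i: string) => void): void {
    updateUi(interactivity);
  }
}

// Gesture recognizer: receives raw pointer input and an object identity
// from the UI, classifies a gesture, and selects an interactivity from
// the library based on that input and identity.
class GestureRecognizer {
  private pending: RawPointerInput | null = null;
  constructor(private library: InteractivityLibrary) {}

  receive(input: RawPointerInput, objectId: string): string | undefined {
    if (input.status === "down") {
      this.pending = input;        // start of a gesture
      return undefined;
    }
    if (input.status === "up" && this.pending) {
      const moved = Math.hypot(input.x - this.pending.x, input.y - this.pending.y);
      const gesture = moved < 4 ? "tap" : "drag"; // small threshold: tap vs drag
      this.pending = null;
      return this.library.lookup(objectId, gesture);
    }
    return undefined;
  }
}
```

Keeping recognition and the interactivity catalog as distinct components matches the claimed architecture's goal: platforms supply raw input, while the shared library can be updated independently of client logic.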
16. A method for handling user commands via an extensible environment when manipulating visualized data in a user interface, comprising:
transmitting, from an input dispatcher of a client logic, raw pointer input and hit test results to a gesture recognizer hosted in an architecture of the extensible environment, wherein the hit test results identify an object in a document model displayed in the user interface;
receiving, at an interactivity handler hosted in the client logic from the gesture recognizer, an identified gesture based on the raw pointer input and the hit test results;
receiving, at the interactivity handler from an interactivities library hosted in the architecture, an identified action related to the identified gesture to perform on the object;
passing the identified action to a command stack hosted by the client logic; and
executing the identified action passed to the command stack to update the document model and the user interface according to the identified action.
Dependent claims: 17-20.
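Claim 16 splits responsibilities between client logic (input dispatcher, interactivity handler, command stack) and the hosted architecture (gesture recognizer, interactivities library). A minimal end-to-end sketch of that flow, under assumed names and a toy "toggle selection" action:

```typescript
interface HitTestResult { objectId: string; } // identifies an object in the document model
interface RawInput { status: "down" | "up"; x: number; y: number; }
interface Action { name: string; objectId: string; }

// --- Hosted architecture side (assumed implementations) ---

// Gesture recognizer: identifies a gesture from raw pointer input.
function gestureRecognizer(down: RawInput, up: RawInput): string {
  const moved = Math.hypot(up.x - down.x, up.y - down.y);
  return moved < 4 ? "tap" : "drag";
}

// Interactivities library: maps an identified gesture to an action.
const interactivitiesLibrary: Record<string, string> = {
  "tap": "toggle-selection",
};

// --- Client logic side ---

// Command stack: receives identified actions and executes them
// against the document model (here, a map of object id -> selected).
class CommandStack {
  private actions: Action[] = [];
  push(action: Action): void { this.actions.push(action); }
  execute(model: Map<string, boolean>): void {
    const action = this.actions[this.actions.length - 1];
    if (action?.name === "toggle-selection") {
      model.set(action.objectId, !model.get(action.objectId));
    }
  }
}

// Input dispatcher + interactivity handler, condensed into one function:
// transmit raw input and hit test results to the recognizer, receive the
// identified gesture and action, pass the action to the command stack,
// and execute it to update the document model.
function handlePointerSequence(
  down: RawInput, up: RawInput, hit: HitTestResult,
  stack: CommandStack, model: Map<string, boolean>,
): void {
  const gesture = gestureRecognizer(down, up);
  const actionName = interactivitiesLibrary[gesture];
  if (!actionName) return;
  stack.push({ name: actionName, objectId: hit.objectId });
  stack.execute(model);
}
```

Routing every identified action through a command stack, as the claim does, also gives the client a natural place to implement undo/redo over the document model.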