GESTURE BASED MODELING SYSTEM AND METHOD
Abstract
Described is a method and system for creating model components, such as business model components, using gestures that are input to a computer system. In an exemplary embodiment, the gestures are input to a computer system with a mouse device, but in general the gestures can be input via any suitable information input device. The gestures have at least three attributes. First, the gesture is orientation sensitive. This requires that the meaning of the gesture depends on the direction in which the gesture is made. Second, the gesture is context sensitive. This requires that the meaning of the gesture depends on the starting point and the ending point of the gesture. Third, the gesture is coincident input sensitive. This requires that the meaning of the gesture depends on the state of additional input from the user.
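The three attributes described in the abstract can be illustrated with a minimal sketch. All class and function names below are hypothetical, invented for illustration; the patent does not specify any implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GestureEvent:
    # Orientation sensitivity: direction in which the gesture is made.
    direction: str
    # Context sensitivity: objects under the gesture's start and end points.
    start_target: str
    end_target: str
    # Coincident input sensitivity: state of additional input (e.g. held keys).
    modifier_keys: frozenset

def interpret(g: GestureEvent) -> str:
    """Resolve a gesture's meaning from its three attributes (toy rules)."""
    if "shift" in g.modifier_keys:      # coincident input changes the meaning
        return f"copy {g.start_target} to {g.end_target}"
    if g.start_target != g.end_target:  # context: gesture spans two objects
        return f"connect {g.start_target} to {g.end_target}"
    # orientation: same object, so direction determines the result
    return f"create element {g.direction} of {g.start_target}"
```

The toy rules show how the same stroke can yield different results depending on direction, start/end context, and coincident input, which is the behavior the abstract attributes describe.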
20 Claims
1. A method of using a gesture to create a model presented in a display area, wherein the gesture is computer readable, comprising:

performing the gesture such that two or more characteristics associated with the gesture are input to a computer along with the gesture, wherein at least one of the characteristics includes a context of the gesture with respect to objects within the display area, and at least one of the characteristics includes an orientation of the gesture with respect to the display area;

mapping, by the computer, the gesture and the two or more characteristics to one or more model elements;

creating the model by accumulating the one or more model elements, wherein the model conforms to a meta-model; and

presenting the model in the display area.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 12
11. A system for creating a model from a gesture performed by a user, and presenting the model in a display area, wherein the gesture is computer readable, comprising:
a computing device having at least a processor, a display, and a memory device;

an input device with which the user performs the gesture, wherein the input device provides two or more characteristics associated with the gesture to the computing device along with the gesture, at least one of the characteristics includes a context of the gesture with respect to objects within the display area, and at least one of the characteristics includes an orientation of the gesture with respect to the display area;

wherein the computing device: (i) maps the gesture and the two or more characteristics to one or more model elements; (ii) creates the model by accumulating the one or more model elements, wherein the model conforms to a meta-model; and (iii) presents the model in the display area.

Dependent claims: 13, 14, 15, 16, 17, 18, 19, 20
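The claimed pipeline, map the gesture to model elements, accumulate the elements into a model, and check that the model conforms to a meta-model, can be sketched as follows. The element types, gesture names, and the toy meta-model are assumptions for illustration only, not taken from the patent.

```python
# Toy meta-model: the set of element types a conforming model may contain.
ALLOWED_TYPES = {"task", "decision", "flow"}

def map_gesture(gesture: str, orientation: str, context: str) -> list:
    """Map a gesture plus its characteristics to zero or more model elements."""
    if gesture == "line" and context == "between-objects":
        # A line drawn between two objects becomes a directed flow element.
        return [{"type": "flow", "label": f"{orientation}-flow"}]
    if gesture == "circle":
        # A circle gesture becomes a new task element.
        return [{"type": "task", "label": "new-task"}]
    return []

def accumulate(model: list, elements: list) -> list:
    """Add elements to the model, enforcing meta-model conformance."""
    for e in elements:
        if e["type"] not in ALLOWED_TYPES:
            raise ValueError(f"element type {e['type']!r} not in meta-model")
        model.append(e)
    return model
```

A caller would invoke `map_gesture` once per recognized gesture and fold the results into the model with `accumulate`; presentation in the display area (the final claimed step) is omitted here since it is purely a rendering concern.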
Specification