DISAMBIGUATION OF MULTITOUCH GESTURE RECOGNITION FOR 3D INTERACTION
First Claim
1. A method for operating an electronic device, the method comprising:
displaying an image of a 3D region;
detecting an initial gesture on a touch-sensitive surface associated with the electronic device, the initial gesture having characteristics including one or more contact areas and an initial motion of at least one of the one or more contact areas;
selecting, based on one or more of the characteristics of the initial gesture, a manipulation mode, wherein the manipulation mode is selected from a plurality of modes including at least one single-control mode and at least one multi-control mode; and
modifying the image of the 3D region based on the detected initial gesture and the selected manipulation mode.
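The four claimed steps (display, detect, select, modify) can be sketched as a simple driver. This is an illustrative outline only, not the patent's implementation; every callable here is a hypothetical stand-in supplied by the device software.

```python
# Minimal sketch of the claimed method's four steps. All names are
# hypothetical; the callables are placeholders for device-specific code.

def run_gesture_method(render, detect_gesture, select_mode, apply_mode):
    """Wire the claimed steps together.

    render          -- returns the displayed image of a 3D region
    detect_gesture  -- returns the initial gesture's characteristics
    select_mode     -- maps those characteristics to a manipulation mode
    apply_mode      -- modifies the image per the gesture and mode
    """
    image = render()                         # display an image of a 3D region
    gesture = detect_gesture()               # detect the initial gesture
    mode = select_mode(gesture)              # select a manipulation mode
    return apply_mode(image, gesture, mode)  # modify the image
```

A caller might pass, for example, a `select_mode` that returns a single-control mode for one contact area and a multi-control mode otherwise.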
Abstract
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
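The disambiguation the abstract describes — choosing among single-control modes (pan, zoom) and a multi-control pan/zoom/rotate mode from the initial gesture — might look like the following. This is a sketch under assumed heuristics: the contact representation, the 10-pixel pinch threshold, and the mode names are illustrative, not taken from the patent.

```python
import math

# Illustrative disambiguation heuristic for the modes named in the
# abstract. Contact areas are modeled as (start_xy, current_xy) pairs;
# the 10.0-pixel threshold is an assumption.

def select_manipulation_mode(contacts):
    """Pick a manipulation mode from the initial gesture's characteristics."""
    if len(contacts) == 1:
        return "pan"                          # single-control mode
    # Two or more contacts: compare the inter-contact distance at the
    # start of the gesture with the distance after the initial motion.
    (s0, c0), (s1, c1) = contacts[0], contacts[1]
    d_start = math.dist(s0, s1)
    d_now = math.dist(c0, c1)
    if abs(d_now - d_start) > 10.0:
        return "zoom"                         # pinch/spread dominates
    return "pan/zoom/rotate"                  # ambiguous -> multi-control mode
```

For example, one dragging finger selects pan; two fingers pinching select zoom; two fingers translating together fall through to the multi-control mode, where several parameters may then be modified simultaneously.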
23 Claims
1. A method for operating an electronic device, the method comprising:
displaying an image of a 3D region;
detecting an initial gesture on a touch-sensitive surface associated with the electronic device, the initial gesture having characteristics including one or more contact areas and an initial motion of at least one of the one or more contact areas;
selecting, based on one or more of the characteristics of the initial gesture, a manipulation mode, wherein the manipulation mode is selected from a plurality of modes including at least one single-control mode and at least one multi-control mode; and
modifying the image of the 3D region based on the detected initial gesture and the selected manipulation mode.
Dependent claims: 2–9.
10. A computer-implemented system, comprising:
one or more data processors; and
one or more non-transitory computer-readable storage media including instructions configured to cause the one or more data processors to perform operations including:
displaying an image of a 3D region;
detecting an initial gesture on a touch-sensitive surface associated with the system, the initial gesture having characteristics including one or more contact areas and an initial motion of at least one of the one or more contact areas;
selecting, based on one or more of the characteristics of the initial gesture, a manipulation mode, wherein the manipulation mode is selected from a plurality of modes including at least one single-control mode and at least one multi-control mode; and
modifying the image of the 3D region based on the detected initial gesture and the selected manipulation mode.
Dependent claims: 11–16.
17. A computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause a data processing apparatus to:
display an image of a 3D region;
detect an initial gesture on a touch-sensitive surface associated with an electronic device, the initial gesture having characteristics including one or more contact areas and an initial motion of at least one of the one or more contact areas;
select, based on one or more of the characteristics of the initial gesture, a manipulation mode, wherein the manipulation mode is selected from a plurality of modes including at least one single-control mode and at least one multi-control mode; and
modify the image of the 3D region based on the detected initial gesture and the selected manipulation mode.
Dependent claims: 18–23.
Specification