METHOD AND SYSTEM FOR PERFORMING DRAG AND DROP OPERATIONS ON A DEVICE VIA USER GESTURES
Abstract
A multi-screen user device and methods for performing a drag and drop operation using finger gestures are disclosed. A first finger gesture is used to select a display area from which data is to be copied. Subsequently, a drag finger gesture is used to identify where the data is to be pasted. The drag may extend across a non-display boundary between first and second display screens of the multi-screen device.
20 Claims
1. A method for performing a drag and drop operation using user finger gesture input to a device having a plurality of touch sensitive displays, comprising:

receiving an input of a first finger gesture to a first of the touch sensitive displays, wherein the first finger gesture input is for identifying a source area of the first touch sensitive display, wherein the source area includes data to be copied;

receiving an input of a finger drag gesture for identifying a target area of a second of the touch sensitive displays into which the data from the source area is to be copied, wherein the finger drag gesture extends across a boundary between the first touch sensitive display and the second touch sensitive display, wherein the first and second touch sensitive displays are foldable relative to one another along the boundary;

wherein the target area corresponds to a location on the second touch sensitive display where the drag gesture is last detected before it ceases to be detected;

changing a display of the target area for identifying the target area to a user as able to receive the data from the source area; and

copying the data into the target area.

(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9)
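The sequence of limitations in claim 1 can be read as an event-handling flow. The following is a minimal sketch of that flow; the class and method names, the event signatures, and the highlighting step are all illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the claimed drag-and-drop flow across two
# foldable touch displays. All names here are illustrative assumptions.

class DualDisplayDragDrop:
    def __init__(self):
        self.source_area = None   # area on display 1 holding data to copy
        self.target_area = None   # last drag location seen on display 2
        self.clipboard = None     # data selected by the first gesture

    def on_first_gesture(self, display_id, region, data):
        """First finger gesture: identify the source area on display 1."""
        if display_id == 1:
            self.source_area = region
            self.clipboard = data

    def on_drag_move(self, display_id, location):
        """Track the drag; it may cross the fold boundary onto display 2."""
        if display_id == 2:
            # The target is wherever the drag is *last* detected on
            # display 2 before it ceases to be detected.
            self.target_area = location

    def on_drag_end(self):
        """Drag ceased: highlight the target area and copy the data."""
        if self.source_area is not None and self.target_area is not None:
            self.highlight(self.target_area)
            return {"target": self.target_area, "data": self.clipboard}
        return None

    def highlight(self, area):
        # Placeholder for "changing a display of the target area",
        # e.g. an inverted-color or outlined region.
        pass
```

Note that only the last location reported on the second display matters; intermediate drag positions on the first display are tracked by the gesture system but do not set the target.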
10. A computer readable medium for performing a drag and drop operation using user finger gesture inputs to first and second display screens of a device, wherein the device includes a folding mechanism attached to each of the first and second display screens for providing the first and second display screens in a folded configuration wherein the first and second display screens face in opposite directions, and in an unfolded configuration wherein the first and second display screens face in a substantially same direction, the medium comprising machine instructions for performing the following steps:

determining that the first and second display screens are in the unfolded configuration;

receiving an input of a first finger gesture to the first display screen, wherein the first finger gesture input is for identifying a source area of the first display screen, wherein the source area includes data to be copied;

receiving an input of a finger drag gesture for identifying a target area of the second display screen into which the data from the source area is to be copied, wherein the finger drag gesture extends across a boundary between the first display screen and the second display screen, wherein the first and second display screens are foldable relative to one another along the boundary;

wherein the target area corresponds to a location on the second display screen where the drag gesture is last detected before it ceases to be detected;

changing a display of the target area for identifying the target area to a user as able to receive the data from the source area; and

copying the data into the target area.

(Dependent claims: 11, 12, 13, 14)
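Claim 10 adds a gating step absent from claim 1: the cross-screen drag is only processed after determining that the two screens are unfolded, i.e. facing substantially the same direction. A sketch of that gate is below; the hinge-angle representation and the angle thresholds are assumptions for illustration only, as the patent does not specify how the configuration is sensed.

```python
# Illustrative sketch (names and thresholds assumed) of claim 10's
# first step: determine the folded/unfolded configuration before
# honoring a drag that crosses the screen boundary.

FOLDED, UNFOLDED = "folded", "unfolded"

def configuration(hinge_angle_deg):
    """Map a hinge angle to a configuration. By assumption here,
    near 180 degrees the screens face substantially the same
    direction (unfolded); otherwise they are treated as folded,
    e.g. folded back-to-back so the screens face opposite ways."""
    return UNFOLDED if 150 <= hinge_angle_deg <= 210 else FOLDED

def accept_cross_screen_drag(hinge_angle_deg):
    """Gate for the cross-boundary drag gesture: only proceed when
    the first and second display screens are unfolded."""
    return configuration(hinge_angle_deg) == UNFOLDED
```

In the folded configuration the second screen faces away from the user, so a drag terminating there would be meaningless; the gate makes the remaining steps of the claim reachable only when both screens are usable together.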
15. A hand-held device for performing a drag and drop operation using user gestures as inputs, comprising:

first and second display screens of the device, wherein the device includes a folding mechanism attached to each of the first and second display screens for providing the first and second display screens in a folded configuration wherein the first and second display screens face in opposite directions, and in an unfolded configuration wherein the first and second display screens face in a substantially same direction;

a display manager for determining whether the first and second display screens are in an unfolded configuration or a folded configuration;

a gesture interpreter for interpreting an input of a first gesture to the first display screen for identifying a first area, and subsequently interpreting an input of a second gesture for selecting a second area, wherein the first area includes the second area; and

a window manager for receiving: (i) first data from the gesture interpreter, wherein the first data includes location data indicative of a location on the first display screen of the first gesture input, and (ii) second data from the gesture interpreter, the second data for identifying the second area;

wherein the window manager uses the location data for determining the first area, and changes a display presentation of the first area for identifying it to a user;

wherein after the change in display presentation of the first area, the gesture interpreter receives the second gesture input, and the window manager uses the second data to change a display presentation of the second area for identifying the second area to the user as a source area for obtaining data for the drag and drop operation;

wherein when the window manager receives input from the gesture interpreter of a drag gesture being input for performing a drag from the source area, a corresponding image to be dragged is generated;

wherein the window manager uses a display screen location from the drag gesture for identifying a target area of the second display screen into which data from the source area is to be copied, wherein the target area corresponds to a location of the second display screen where the drag gesture is last detected by the gesture interpreter before the drag gesture ceases to be detected by the gesture interpreter; and

wherein the data from the source area is copied into the target area.

(Dependent claims: 16, 17, 18, 19, 20)
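Claim 15 names three cooperating components: a display manager (fold state), a gesture interpreter (raw touches to gestures), and a window manager (drag-and-drop state). One possible decomposition is sketched below; the class names mirror the claim language, but the event format, gesture labels, and return values are invented for illustration.

```python
# Hypothetical decomposition of claim 15's three components.
# Everything beyond the three component names is an assumption.

class DisplayManager:
    """Reports whether the two screens are unfolded."""
    def __init__(self, unfolded=True):
        self.unfolded = unfolded

class GestureInterpreter:
    """Turns a raw touch event into a (gesture, display, location) tuple."""
    def interpret(self, event):
        return event["gesture"], event["display"], event["location"]

class WindowManager:
    """Consumes interpreted gestures and drives the drag and drop."""
    def __init__(self, interpreter):
        self.interpreter = interpreter
        self.first_area = None    # area identified by the first gesture
        self.source_area = None   # sub-area selected by the second gesture
        self.target = None        # last drag location on display 2

    def handle(self, event):
        gesture, display, location = self.interpreter.interpret(event)
        if gesture == "first":
            self.first_area = location    # then highlight the first area
        elif gesture == "second":
            self.source_area = location   # then highlight the source area
        elif gesture == "drag" and display == 2:
            self.target = location        # candidate target area
        elif gesture == "release":
            # Drag ceased: copy from source area into target area.
            return {"source": self.source_area, "target": self.target}
        return None
```

The two-stage selection (first gesture picks an enclosing area, second gesture picks the source sub-area within it) matches the claim's requirement that "the first area includes the second area".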
Specification