MULTI-MODAL GESTURE BASED INTERACTIVE SYSTEM AND METHOD USING ONE SINGLE SENSING SYSTEM
2 Assignments
0 Petitions
Abstract
Described herein are a method and a system for providing efficient and complementary natural multi-modal gesture based interaction with a computerized system which displays visual feedback information on a graphical user interface on an interaction surface. The interaction surface is within the frustum of an imaging device comprising a single sensing system. The system uses the single sensing system for detecting both touch gesture interactions with the interaction surface (120) and three-dimensional touch-less gesture interactions, performed by the hands of a user, in areas or volumes above the interaction surface. Both types of interaction are associated contextually with an interaction command controlling the computerized system when the gesture has been detected. The system preferably comprises a projection system for displaying the graphical user interface and visual feedback on the interaction surface, the projection system being locatable on the same side of the interaction surface as the sensing system or on the opposite side.
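The core of the abstract is that one depth sensor distinguishes touch gestures (contact with the interactive area) from touch-less gestures (in a volume along the surface normal above that area). A minimal sketch of that classification, assuming a known interaction-surface plane; all names, units and thresholds below are illustrative assumptions, not values from the patent:

```python
# Illustrative constants (assumed, not from the patent):
AREA_W, AREA_H = 0.30, 0.20    # interactive area on the surface, metres
TOUCH_MAX = 0.005              # within 5 mm of the surface counts as contact
VOLUME_MAX = 0.25              # interactive volume extends 25 cm above the area

def classify_point(x, y, z):
    """Classify a tracked 3D point from the single sensing system.
    x, y lie in the surface plane; z is the height along the surface normal.
    Returns 'touch', 'touchless', or None."""
    if not (0.0 <= x <= AREA_W and 0.0 <= y <= AREA_H):
        return None                      # outside the predetermined interactive area
    if 0.0 <= z <= TOUCH_MAX:
        return "touch"                   # contact with the interaction surface
    if TOUCH_MAX < z <= VOLUME_MAX:
        return "touchless"               # inside the volume above the area
    return None
```

Because both modes derive from the same 3D coordinates, no second (e.g. capacitive) sensor is needed, which is the point of the "single sensing system" limitation.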
17 Citations
18 Claims
1. A method for multi-modal touch and touch-less interaction with a computerized system in which said multi-modal touch and touch-less interaction is performed using data information from a single sensing system, the single sensing system being a three-dimensional imaging device, the method comprising the steps of:
a) detecting and tracking at least one portion of at least one object within a frustum of the three-dimensional imaging device;
b) initiating the interaction by determining if said at least one portion of said at least one object being tracked is performing at least one of: a predetermined touch gesture on a predetermined interactive area of an interaction surface and a predetermined touch-less three-dimensional gesture in a predetermined interactive volume on the vector axis normal to a predetermined interactive area;
c) interacting with the computerized system by detecting and recognizing the gestures performed by said at least one portion of said at least one object within the frustum of the three-dimensional imaging device, the detected and recognized gestures being at least one of: a predetermined touch gesture on a predetermined interactive area of the interaction surface and a predetermined touch-less three-dimensional gesture in a predetermined interactive volume on the vector axis normal to a predetermined interactive area. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 17, 18)
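Steps a) through c) can be read as a per-frame loop in which the first recognized gesture initiates the interaction (step b) and subsequent gestures drive commands (step c). A minimal sketch under that reading; the per-frame gesture labels are assumed to come from tracking and recognition as in claim 1, and everything else is a hypothetical placeholder:

```python
def run_interaction(gesture_stream):
    """Fold a stream of per-frame gesture labels ('touch', 'touchless',
    or None when nothing is recognized) into a list of dispatched commands.
    The first recognized gesture only initiates the session (step b);
    later gestures are dispatched as interaction commands (step c)."""
    commands = []
    session_active = False
    for gesture in gesture_stream:       # one label per tracked frame (step a)
        if gesture is None:
            continue                     # tracked object performed no gesture
        if not session_active:
            session_active = True        # step b): initiation gesture detected
        else:
            commands.append(gesture)     # step c): contextual interaction command
    return commands
```

Either gesture modality can initiate or command, which matches the claim's "at least one of" phrasing for both steps b) and c).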
14. A system for interacting with a graphical user interface, the system comprising:
a display system for displaying the graphical user interface onto an interaction surface;
a three-dimensional imaging system being operated for at least tracking at least one portion of at least one hand of a user within its frustum; and
a computer system being configured for controlling the display system and the three-dimensional imaging device, and for determining gesture based interaction controls using data output from the three-dimensional imaging device;
the system being characterized in that the display surface for displaying the graphical user interface comprises at least a portion of the frustum of the three-dimensional imaging device and being substantially aligned therewith. - View Dependent Claims (15, 16)
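The characterizing clause of claim 14 requires the displayed surface to fall within, and be substantially aligned with, the imaging device's frustum. A hedged sketch of one way to verify the containment part, assuming a simple pinhole-camera model with illustrative field-of-view values (the intrinsics and function names are assumptions, not from the patent):

```python
import math

H_FOV = math.radians(70.0)   # assumed horizontal field of view of the imager
V_FOV = math.radians(55.0)   # assumed vertical field of view of the imager

def in_frustum(point):
    """point = (x, y, z) in the camera frame, z pointing away from the camera.
    True if the point projects inside the assumed pinhole frustum."""
    x, y, z = point
    if z <= 0.0:
        return False                     # behind the camera
    return (abs(x) <= z * math.tan(H_FOV / 2.0) and
            abs(y) <= z * math.tan(V_FOV / 2.0))

def surface_in_frustum(corners):
    """True if every corner of the rectangular interaction surface is imaged,
    i.e. the whole surface lies within the sensing frustum."""
    return all(in_frustum(c) for c in corners)
```

Checking only the four corners suffices for a planar rectangular surface, since the frustum is convex.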
Specification