Virtual mouse driving apparatus and method using two-handed gestures
Abstract
A virtual mouse driving apparatus and method for processing a variety of gesture commands as equivalent mouse commands, based on two-handed gesture information obtained with a video camera, are provided. The virtual mouse driving method includes: keeping track of an input gesture captured with the video camera; removing a background portion of the input gesture image and extracting a left-hand region and a right-hand region from the input gesture image whose background portion has been removed; recognizing a left-hand gesture and a right-hand gesture from the extracted left-hand region and the extracted right-hand region, respectively, and recognizing gesture commands corresponding to the recognition results; and executing the recognized gesture commands.
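The abstract's pipeline (capture the gesture, remove the background by skin color, split out left- and right-hand regions, then recognize and execute) is not tied to any particular implementation. As a purely illustrative sketch, assuming HSV skin-color thresholding and a fixed midline split between hands (the bounds and function names are hypothetical, not from the patent):

```python
import numpy as np

# Hypothetical HSV skin-color bounds; a real system would calibrate
# these per user and per lighting condition.
SKIN_LO = np.array([0, 40, 60])
SKIN_HI = np.array([25, 180, 255])

def remove_background(hsv_image):
    """Keep only pixels whose HSV values fall inside the skin-color range."""
    return np.all((hsv_image >= SKIN_LO) & (hsv_image <= SKIN_HI), axis=-1)

def extract_hand_regions(mask):
    """Split the skin mask into left- and right-hand regions at the image
    midline. (A real extractor would use connected components rather than
    a fixed split.)"""
    mid = mask.shape[1] // 2
    return mask[:, :mid], mask[:, mid:]
```

The midline split stands in for the region-extraction step; everything downstream (contour and feature analysis) would operate on the two returned masks.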
8 Claims
1. A virtual mouse driving method using a video camera, comprising:

keeping track of an input gesture captured with the video camera;

removing a background portion of the input gesture image with reference to information of the input gesture image, and extracting both a left-hand region and a right-hand region from the input gesture image having the background portion thereof removed, wherein the information of the input gesture image includes information regarding a color of skin of a user;

recognizing a left-hand gesture from the extracted left-hand region, recognizing a right-hand gesture from the extracted right-hand region, and recognizing gesture commands corresponding to a combination of both the left-hand recognition results and the right-hand recognition results, wherein the left-hand gesture and the right-hand gesture are recognized on the basis of the contours and features of a right hand and a left hand of the user; and

executing the recognized gesture commands;

wherein one of the left-hand gesture and the right-hand gesture is selected as a gesture corresponding to a first hand for selecting a command function among a plurality of command functions, wherein the plurality of command functions correspond to a plurality of command icons in a menu which is displayed on a screen of a display device, and the remaining one of the left-hand gesture and the right-hand gesture is selected as a gesture corresponding to a second hand for controlling an action on the display device,

wherein the recognizing of the gesture commands comprises:

detecting the gesture corresponding to the first hand, displaying the menu on the screen in response to the detection of the gesture corresponding to the first hand, and recognizing one of a plurality of command icons of the displayed menu currently pointed at or selected by a cursor according to the detected gesture corresponding to the first hand; and

detecting the gesture corresponding to the second hand and recognizing an object currently pointed at or selected by the cursor according to the detected gesture corresponding to the second hand.

Dependent claims: 2, 3, 4, 5, 6.
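Claim 1 recognizes each hand's gesture "on the basis of the contours and features" of the hand, without prescribing a classifier. One simple way to picture this step is nearest-prototype matching over a small feature vector; the features and prototype values below are entirely hypothetical, chosen only to make the idea concrete:

```python
# Toy contour-feature classifier. Each gesture is summarized by a
# hypothetical feature pair: (extended_fingers, contour_aspect_ratio).
PROTOTYPES = {
    "pointing":  (1, 2.0),   # one extended finger, elongated contour
    "clicking":  (0, 1.2),   # closed fist, near-square contour
    "selection": (5, 1.5),   # open hand
}

def recognize(features):
    """Return the gesture whose prototype is nearest in squared L2 distance."""
    def sq_dist(proto):
        return sum((a - b) ** 2 for a, b in zip(features, proto))
    return min(PROTOTYPES, key=lambda g: sq_dist(PROTOTYPES[g]))
```

A production recognizer would extract such features from the hand contours found in the segmented regions; here they are passed in directly for clarity.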
7. A virtual mouse driving apparatus comprising:

a camera control unit keeping track of a gesture and capturing an input gesture image with a video camera;

a gesture region extraction unit removing a background portion of the input gesture image with reference to information of the input gesture image, and extracting both a left-hand region and a right-hand region from the input gesture image having the background portion thereof removed, wherein the information of the input gesture image includes information regarding a color of skin of a user;

a gesture recognition unit sensing left-hand gestures based on contours and features of the left hand extracted from the extracted left-hand region by the gesture region extraction unit, sensing right-hand gestures based on contours and features of the right hand extracted from the extracted right-hand region by the gesture region extraction unit, and recognizing gesture commands represented by both the sensed left-hand and right-hand gestures; and

a command processing unit processing the gesture commands recognized by the gesture recognition unit as equivalent mouse commands,

wherein one of the left-hand gestures and the right-hand gestures is selected as a gesture corresponding to a first hand for selecting a command function among a plurality of command functions, wherein the plurality of command functions correspond to a plurality of command icons in a menu displayed on a screen of a display device, and the remaining one of the left-hand gestures and the right-hand gestures is selected as a gesture corresponding to a second hand for controlling an action on the display device,

wherein the gesture recognition unit comprises:

a first-hand gesture processor displaying the menu on the screen when the gesture corresponding to the first hand is detected, wherein the first-hand gesture processor highlights one of a plurality of command icons of the displayed menu pointed at by a cursor when the detected gesture corresponding to the first hand is determined to be a pointing gesture, and wherein the first-hand gesture processor selects and executes a command corresponding to the highlighted command icon when another gesture corresponding to the first hand is further detected and is determined to be a clicking gesture; and

a second-hand gesture processor determining whether the gesture corresponding to the second hand is the pointing gesture or a selection gesture, wherein the second-hand gesture processor varies the location of the cursor by keeping track of a location designated by the detected second-hand gesture when the detected gesture corresponding to the second hand is determined to be a pointing gesture, and wherein the second-hand gesture processor selects an object currently pointed at by the cursor and varies the location of the object when the detected gesture corresponding to the second hand is determined to be a selection gesture.
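Claim 7's two gesture processors can be pictured as a pair of small state machines: the first hand highlights and then executes a menu icon, while the second hand moves the cursor or drags an object. A minimal sketch, assuming string-labelled gesture classes (all names and return shapes are hypothetical):

```python
class FirstHandProcessor:
    """Menu hand: a pointing gesture highlights an icon; a clicking
    gesture executes the currently highlighted icon's command."""
    def __init__(self, icons):
        self.icons = icons
        self.highlighted = None

    def handle(self, gesture, pointed_index=None):
        if gesture == "pointing":
            self.highlighted = self.icons[pointed_index]
            return ("highlight", self.highlighted)
        if gesture == "clicking" and self.highlighted is not None:
            return ("execute", self.highlighted)
        return ("ignore", None)

class SecondHandProcessor:
    """Cursor hand: a pointing gesture moves the cursor; a selection
    gesture selects and drags the object under the cursor."""
    def __init__(self):
        self.cursor = (0, 0)

    def handle(self, gesture, location=None):
        if gesture == "pointing":
            self.cursor = location
            return ("move", location)
        if gesture == "selection":
            return ("drag", location)
        return ("ignore", None)
```

The split mirrors the claim's division of labor: menu state lives only in the first-hand processor, cursor state only in the second-hand processor.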
8. A virtual mouse driving method using a video camera, comprising:

keeping track of an input gesture captured with the video camera;

removing a background portion of the input gesture image with reference to information of the input gesture image, and extracting both a first region and a second region from the input gesture image having the background portion thereof removed, wherein the information of the input gesture image includes information regarding a color of skin of a user;

recognizing a first-hand gesture from the extracted first region, recognizing a second-hand gesture from the extracted second region, and recognizing gesture commands corresponding to a combination of both the first-hand recognition results and the second-hand recognition results, the gesture commands consisting of a pointing gesture and a selecting gesture, wherein the first-hand gesture and the second-hand gesture are recognized on the basis of the contours and features of a right hand and a left hand of the user; and

executing the recognized gesture commands;

wherein, when the first-hand gesture is the pointing gesture, a graphic user interface-based menu comprising a plurality of command icons corresponding to mouse commands is navigated, the mouse commands comprising: a left-click command; a right-click command; a double-click command; a scroll-up command; a scroll-down command; a forward-click command; and a back-click command;

when the first-hand gesture is the selecting gesture, selecting a command icon among the plurality of command icons so as to execute the corresponding mouse command;

when the second-hand gesture is the pointing gesture, controlling movement of a cursor on a display device; and

when the second-hand gesture is the selecting gesture, selecting and dragging an object displayed on the display device.
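Claim 8 reduces the gesture vocabulary to two classes per hand (pointing, selecting) and enumerates the seven mouse commands reachable through the menu. That mapping can be sketched as a small dispatch table; the function name and action labels are hypothetical, only the seven command names come from the claim:

```python
# The seven mouse commands enumerated in claim 8.
MOUSE_COMMANDS = ("left-click", "right-click", "double-click",
                  "scroll-up", "scroll-down", "forward-click", "back-click")

def dispatch(hand, gesture, arg):
    """Map a recognized (hand, gesture) pair to the action claim 8
    assigns it: first hand navigates/executes the menu, second hand
    moves the cursor or drags an object."""
    if hand == "first":
        if gesture == "pointing":
            return ("navigate", MOUSE_COMMANDS[arg])   # arg: icon index
        return ("execute", MOUSE_COMMANDS[arg])         # selecting gesture
    if gesture == "pointing":
        return ("move-cursor", arg)                     # arg: (x, y) location
    return ("drag", arg)                                # selecting gesture
```

Because each hand contributes only one of two gesture classes, four (hand, gesture) combinations cover the whole claimed behavior.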
Specification