Apparatus and method for inputting command using gesture
First Claim
1. An input apparatus using a gesture comprising:
at least one camera photographing a gesture image of a user's hands introduced into a predetermined detection region near the input apparatus;
a controller displaying content, a virtual image of a device to which a gesture of the user is applied, and a user gesture image that has been photographed in a gesture recognition mode, and executing a command according to the gesture of the user;
a storage unit storing a command code used to run an operation of the controller;
a microphone receiving a voice of the user; and
a display outputting the images;
wherein, when a keyboard input mode is executed according to the voice received through the microphone, a keyboard input window is displayed, wherein a keyboard virtual image and input values are displayed on the keyboard input window;
wherein, in the keyboard input mode, the controller is configured to:
recognize a finger position on the user gesture image based on a position of a user's finger in the detection region;
compare the finger position with a position of the keyboard virtual image to recognize the finger position on the keyboard virtual image;
display a feedback image corresponding to the finger position on the keyboard virtual image;
set points to positions of finger joints of the finger position;
detect a key input gesture position on the keyboard virtual image when a key input gesture is detected by recognizing a movement of each of the points; and
display a key image corresponding to the key input gesture position on the keyboard virtual image;
wherein the user's finger does not touch any real input device or the virtual image of the device;
wherein, when a mouse input mode is executed according to the voice received through the microphone, a virtual mouse image is displayed;
wherein, when the user makes a gesture to grip the virtual mouse image in the mouse input mode, a hand image corresponding to the user's hand grips the virtual mouse image, and, in this state, when the user moves the user's hand, the virtual mouse image moves accordingly;
wherein, when a phone mode is executed according to the voice received through the microphone, an input window to display a key pad and phone numbers, which have been input, is displayed; and
wherein all of the key pad and the input window are displayed as semi-transparent images.
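The keyboard-mode steps recited in the claim (recognize a finger position, compare it against the virtual keyboard image, set points at the finger joints, and detect a key-input gesture from the movement of those points) can be sketched roughly as follows. Every identifier, the key layout, and the simple downward-travel threshold here are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch of the claimed keyboard-mode pipeline: map a tracked
# fingertip onto a virtual keyboard image and register a key press when all
# finger-joint points move downward past a threshold. The layout, sizes, and
# threshold rule are assumptions for illustration only.
from dataclasses import dataclass

KEY_WIDTH, KEY_HEIGHT = 40, 40                    # pixel size of one virtual key
LAYOUT = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]   # rows of the virtual keyboard
PRESS_THRESHOLD = 12                              # downward joint travel (px) counted as a press

@dataclass
class Finger:
    tip: tuple      # (x, y) fingertip position in the keyboard image plane
    joints: list    # (x, y) positions of the finger joints (the "points" in the claim)

def key_at(tip):
    """Compare the finger position with the keyboard virtual image (claim step)."""
    x, y = tip
    row, col = y // KEY_HEIGHT, x // KEY_WIDTH
    if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[row]):
        return LAYOUT[row][col]
    return None  # finger is outside the virtual keyboard

def detect_press(prev: Finger, cur: Finger):
    """Recognize a key-input gesture from the movement of each joint point."""
    moved_down = all(c[1] - p[1] > PRESS_THRESHOLD
                     for p, c in zip(prev.joints, cur.joints))
    return key_at(cur.tip) if moved_down else None

prev = Finger(tip=(85, 10), joints=[(85, 5), (85, 15)])
cur = Finger(tip=(85, 30), joints=[(85, 25), (85, 35)])
print(detect_press(prev, cur))  # → e
```

A feedback image (the claim's highlighted key under the fingertip) would be driven by `key_at` on every frame, while `detect_press` fires only on the downward key-input gesture.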
Abstract
Disclosed is a method of inputting commands into displays such as TVs or image processing devices. The user's hands are photographed by a camera and their motion is recognized, so that commands are input according to the motion of the user's hands instead of through conventional input devices such as a mouse and a keyboard.
17 Citations
17 Claims
10. An input method using a gesture of a user, the input method comprising:
photographing a gesture of a user introduced into a predetermined detection region;
simultaneously displaying content and a virtual image of a device to which the gesture of the user is applied in a gesture recognition mode;
displaying a photographed image of the gesture of the user;
displaying a feedback image according to the gesture of the user;
displaying a key image according to the gesture of the user; and
executing a command according to the gesture of the user;
wherein, when a keyboard input mode is executed according to a voice received through a microphone, a keyboard input window is displayed, wherein a keyboard virtual image and input values are displayed on the keyboard input window;
wherein, in the keyboard input mode, displaying the feedback image comprises:
recognizing a finger position on the photographed image based on a position of a user's finger in the detection region;
comparing the finger position with a position of the keyboard virtual image to recognize the finger position on the keyboard virtual image;
displaying the feedback image corresponding to the finger position on the keyboard virtual image; and
setting points to positions of finger joints of the finger position;
wherein displaying the key image comprises:
detecting a key input gesture position on the keyboard virtual image when a key input gesture is detected by recognizing a movement of each of the points; and
displaying the key image corresponding to the key input gesture position on the keyboard virtual image;
wherein the user's finger does not touch any real input device or the virtual image of the device;
wherein, when a mouse input mode is executed according to the voice received through the microphone, a virtual mouse image is displayed;
wherein, when the user makes a gesture to grip the virtual mouse image in the mouse input mode, a hand image corresponding to the user's hand grips the virtual mouse image, and, in this state, when the user moves the user's hand, the virtual mouse image moves accordingly;
wherein, when a phone mode is executed according to the voice received through the microphone, an input window to display a key pad and phone numbers, which have been input, is displayed; and
wherein all of the key pad and the input window are displayed as semi-transparent images.
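Both independent claims select the active input mode (keyboard, mouse, or phone) from a voice command received through the microphone, and each mode displays its own semi-transparent input window. A minimal dispatch for that step might look like the following; the command strings, window descriptions, and dispatch table are assumptions for illustration, not language from the patent:

```python
# Illustrative voice-driven mode switch: a recognized spoken command selects
# one of the claimed input modes, each of which presents a semi-transparent
# input window. The mode keywords and window labels are hypothetical.
MODES = {
    "keyboard": "keyboard input window (keyboard virtual image + input values)",
    "mouse": "virtual mouse image",
    "phone": "key pad and phone-number input window",
}

def execute_mode(voice_command: str) -> str:
    """Select an input mode from the voice received through the microphone."""
    for mode, window in MODES.items():
        if mode in voice_command.lower():
            return f"display {window} as a semi-transparent image"
    return "no matching input mode"

print(execute_mode("open keyboard"))
```

In practice the spoken phrase would come from a speech recognizer rather than a raw string, but the claim only requires that the received voice determine which mode, and therefore which virtual input window, is executed.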
Specification