Input method applied in electronic devices
First Claim
1. An input method applicable for inputting into an electronic device having an image capturing unit, a processing module, a lip-reading analyzing unit, a lip motion code database, a display module, a facial expression analyzing unit and a facial expression code database, and the input method comprising the steps of:
capturing a lip motion of a person through the image capturing unit;
receiving an image of the lip motion from the image capturing unit;
encoding the lip motion image through the lip-reading analyzing unit to obtain a lip motion code;
the processing module comparing the lip motion code with a plurality of standard lip motion codes stored in the lip motion code database, to obtain a first text result matching the lip motion code;
displaying the first text result through the display module if the first text result is obtained;
activating an auxiliary analyzing mode, if the first text result is not obtained, wherein the auxiliary analyzing mode is a facial expression analyzing mode;
capturing a facial expression of the person through the image capturing unit;
receiving an image of the facial expression from the image capturing unit;
encoding the facial expression image through the facial expression analyzing unit to obtain a facial expression code;
the processing module comparing the facial expression code with a plurality of standard facial expression codes stored in the facial expression code database, and comparing the lip motion code with the plurality of standard lip motion codes, to obtain a second text result matching the facial expression code and the lip motion code; and
displaying the second text result through the display module if the second text result is obtained.
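The claimed flow can be sketched in Python as follows. This is an illustrative sketch only: every function, variable, and database entry below is a hypothetical stand-in for the claimed units (lip-reading analyzing unit, facial expression analyzing unit, code databases), not part of the claim itself.

```python
# Hypothetical stand-ins for the claimed code databases.
LIP_MOTION_CODE_DB = {"L01": "hello", "L02": "yes"}       # standard lip motion codes -> text
FACIAL_EXPRESSION_CODE_DB = {("L03", "F01"): "no"}        # (lip code, expression code) -> text

def encode_lip_motion(image):
    # Stand-in for the lip-reading analyzing unit's encoding step.
    return image.get("lip_code")

def encode_facial_expression(image):
    # Stand-in for the facial expression analyzing unit's encoding step.
    return image.get("face_code")

def input_method(lip_image, face_image):
    lip_code = encode_lip_motion(lip_image)
    # Compare the lip motion code with the standard lip motion codes.
    first_result = LIP_MOTION_CODE_DB.get(lip_code)
    if first_result is not None:
        return first_result                               # first text result is displayed
    # First result not obtained: activate the auxiliary (facial expression) mode.
    face_code = encode_facial_expression(face_image)
    # Compare both codes against the facial expression code database.
    return FACIAL_EXPRESSION_CODE_DB.get((lip_code, face_code))

print(input_method({"lip_code": "L01"}, {}))                       # matched directly
print(input_method({"lip_code": "L03"}, {"face_code": "F01"}))     # matched via auxiliary mode
```

The key structural point of the claim is captured in the branch: the facial expression code is only consulted when lip-motion matching alone fails, and the second comparison uses both codes together.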
Abstract
An input method applicable for inputting into an electronic device, which includes the steps of capturing a lip motion of a person; receiving an image of the lip motion; encoding the lip motion image to obtain a lip motion code; comparing the lip motion code with a plurality of standard lip motion codes to obtain a first text result matching the lip motion code; and displaying the first text result on the electronic device if the first text result is obtained. If the first text result is not obtained, the method may further include activating an auxiliary analyzing mode for the electronic device for recognizing a facial expression, a hand gesture, or an audio signal to be inputted. The input method can diversify input methods for the electronic device.
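The abstract broadens the auxiliary analyzing mode beyond facial expressions to hand gestures and audio signals. That fallback can be sketched as a simple dispatcher; again, all names here are hypothetical illustrations, not terms from the patent:

```python
# Hypothetical analyzers for the auxiliary modes named in the abstract.
def analyze_facial_expression(data):
    return data.get("expression_text")

def analyze_hand_gesture(data):
    return data.get("gesture_text")

def analyze_audio(data):
    return data.get("audio_text")

AUXILIARY_MODES = {
    "facial_expression": analyze_facial_expression,
    "hand_gesture": analyze_hand_gesture,
    "audio": analyze_audio,
}

def activate_auxiliary_mode(mode, data):
    # Dispatch to whichever auxiliary analyzer the device activates
    # when lip-motion matching alone yields no text result.
    analyzer = AUXILIARY_MODES.get(mode)
    if analyzer is None:
        raise ValueError(f"unknown auxiliary mode: {mode}")
    return analyzer(data)

print(activate_auxiliary_mode("hand_gesture", {"gesture_text": "ok"}))
```

The table-driven dispatch mirrors how the abstract treats the three auxiliary inputs as interchangeable fallbacks for the same step.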
Specification