Automatic speech recognition (ASR) feedback for head-mounted displays (HMDs)
First Claim
1. A method of utilizing a user input from a user, comprising:
receiving, by a headset computer, the user input comprising a combination of one or more of (i) a voice command and a head movement and (ii) a voice command and a hand gesture;
interpreting the head movement to generate an interpreted head movement command;
interpreting the hand gesture to generate an interpreted hand gesture command;
interpreting the voice command using an automatic speech recognition system to generate an interpreted voice command, and providing visual feedback to the user on a display such that the visual feedback is presented to the user one of (a) within 500 ms of when the voice command is interpreted and (b) within two cycles of a frame rate of the display of when the voice command is interpreted; and
combining the interpreted voice command and at least one of the interpreted head movement command and the interpreted hand gesture command to generate a host command.
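The timing limitation above can be sketched in code. This is a minimal illustration only, reading the claim's "one of (a) and (b)" as satisfying either bound; the function names and the idea of computing a single effective deadline are assumptions, not anything stated in the patent:

```python
# Illustrative check of the claimed feedback-timing limitation: visual
# feedback appears either (a) within 500 ms of interpretation or (b) within
# two cycles of the display's frame rate. All names here are hypothetical.

def feedback_deadline_s(frame_rate_hz: float) -> float:
    """Latest acceptable feedback delay in seconds: since meeting either
    alternative bound suffices, the effective deadline is the looser one."""
    bound_a = 0.500                  # (a) 500 ms after interpretation
    bound_b = 2.0 / frame_rate_hz    # (b) two display frame cycles
    return max(bound_a, bound_b)

def feedback_on_time(delay_s: float, frame_rate_hz: float) -> bool:
    """True if the measured feedback delay satisfies at least one bound."""
    return delay_s <= feedback_deadline_s(frame_rate_hz)
```

For a typical 60 Hz HMD display, two frame cycles is about 33 ms, so the 500 ms bound is the looser of the two alternatives.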
Abstract
Feedback mechanisms for the user of a Head-Mounted Display (HMD) are provided. Feedback that speech has been recognized should reach the user as soon as possible after a voice command is uttered. The HMD displays and/or audibly renders an ASR acknowledgment in a manner that assures the user that the HMD has received and understood the voiced command.
16 Claims
1. A method of utilizing a user input from a user, comprising:
receiving, by a headset computer, the user input comprising a combination of one or more of (i) a voice command and a head movement and (ii) a voice command and a hand gesture;
interpreting the head movement to generate an interpreted head movement command;
interpreting the hand gesture to generate an interpreted hand gesture command;
interpreting the voice command using an automatic speech recognition system to generate an interpreted voice command, and providing visual feedback to the user on a display such that the visual feedback is presented to the user one of (a) within 500 ms of when the voice command is interpreted and (b) within two cycles of a frame rate of the display of when the voice command is interpreted; and
combining the interpreted voice command and at least one of the interpreted head movement command and the interpreted hand gesture command to generate a host command.
(Dependent claims 2-8.)
9. An apparatus for utilizing a user input from a user, comprising:
a headset computer, including a processor, configured to:
receive the user input comprising a combination of one or more of (i) a voice command and a head movement and (ii) a voice command and a hand gesture;
interpret the head movement to generate an interpreted head movement command;
interpret the hand gesture to generate an interpreted hand gesture command;
interpret the voice command using an automatic speech recognition system to generate an interpreted voice command, and provide visual feedback to the user on a display such that the visual feedback is presented to the user one of (a) within 500 ms of when the voice command is interpreted and (b) within two cycles of a frame rate of the display of when the voice command is interpreted; and
combine the interpreted voice command and at least one of the interpreted head movement command and the interpreted hand gesture command to generate a host command.
(Dependent claims 10-16.)
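The combining step named in the claims can be sketched as follows. This is a hypothetical illustration only: the data structure, field names, and command strings are assumptions; the patent does not specify how the interpreted commands are fused:

```python
# Illustrative fusion of an interpreted voice command with at least one of
# an interpreted head-movement command and an interpreted hand-gesture
# command, producing a single host command as the claims describe.

from dataclasses import dataclass
from typing import Optional

@dataclass
class HostCommand:
    action: str                # derived from the interpreted voice command
    modifier: Optional[str]    # derived from head movement or hand gesture

def combine(voice_cmd: str,
            head_cmd: Optional[str] = None,
            gesture_cmd: Optional[str] = None) -> HostCommand:
    """Combine the voice command with at least one other interpreted input."""
    modifier = head_cmd if head_cmd is not None else gesture_cmd
    if modifier is None:
        raise ValueError("at least one head or gesture command is required")
    return HostCommand(action=voice_cmd, modifier=modifier)
```

For example, a voice command to zoom paired with a head tilt could yield a single host command carrying both the action and its modifier.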
Specification