COORDINATED SPEECH AND GESTURE INPUT
First Claim
1. Enacted in a computer system operatively coupled to a vision system, a method to apply natural user input (NUI) to control the computer system, the method comprising:
detecting a gesture of a user of the computer system, the gesture characterized by a position of a hand with respect to a body of the user;
selecting, based on coordinates derived from the position of the hand, one of a plurality of user-interface (UI) objects displayed on a UI in sight of the user, the selected UI object supporting a plurality of actions;
detecting vocalization from the user;
decoding the vocalization to identify a selected action from among the plurality of actions supported by the selected UI object; and
executing the selected action on the selected UI object.
Abstract
A method to be enacted in a computer system operatively coupled to a vision system and to a listening system. The method applies natural user input to control the computer system. It includes the acts of detecting verbal and non-verbal touchless input from a user of the computer system, selecting one of a plurality of user-interface objects based on coordinates derived from the non-verbal, touchless input, decoding the verbal input to identify a selected action from among a plurality of actions supported by the selected object, and executing the selected action on the selected object.
20 Claims
1. Enacted in a computer system operatively coupled to a vision system, a method to apply natural user input (NUI) to control the computer system, the method comprising:
detecting a gesture of a user of the computer system, the gesture characterized by a position of a hand with respect to a body of the user;
selecting, based on coordinates derived from the position of the hand, one of a plurality of user-interface (UI) objects displayed on a UI in sight of the user, the selected UI object supporting a plurality of actions;
detecting vocalization from the user;
decoding the vocalization to identify a selected action from among the plurality of actions supported by the selected UI object; and
executing the selected action on the selected UI object.
Dependent claims: 2, 3, 4.
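The flow of claim 1 can be sketched in a few lines: a gesture-derived coordinate pair hit-tests against the displayed UI objects, and the decoded vocalization then selects one of the chosen object's supported actions. This is a minimal illustration only; the class names, rectangular hit-test geometry, and the reduction of speech decoding to a dictionary lookup are assumptions, not taken from the patent's specification.

```python
# Illustrative sketch of the claim-1 flow; all names and the hit-test
# geometry are assumptions, not the patent's implementation.
from dataclasses import dataclass, field

@dataclass
class UIObject:
    name: str
    bounds: tuple  # (x0, y0, x1, y1) in UI coordinates
    actions: dict = field(default_factory=dict)  # spoken word -> callable

def select_object(objects, coords):
    """Select the UI object whose bounds contain the gesture-derived coords."""
    x, y = coords
    for obj in objects:
        x0, y0, x1, y1 = obj.bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj
    return None

def apply_nui(objects, hand_coords, vocalization):
    """Gesture selects the object; the decoded vocalization selects the action."""
    obj = select_object(objects, hand_coords)
    if obj is None:
        return None
    action = obj.actions.get(vocalization)  # decoding reduced to a dict lookup
    return action(obj) if action else None
```

Note how the object is fixed before the vocalization is decoded, so only that object's action set is ever consulted.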
5. Enacted in a computer system operatively coupled to a vision system, a method to apply natural user input (NUI) to control the computer system, the method comprising:
detecting one of non-verbal, touchless input and verbal input as a first type of natural user input;
detecting a second type of natural user input, the second type being verbal input if the first type is non-verbal touchless input, the second type being non-verbal touchless input if the first type is verbal input;
using the first type of user input to constrain a return-parameter space of the second type of user input to reduce noise in the first type of input;
selecting a user-interface (UI) object based on the first type of user input;
determining a selected action for the selected UI object based on the second type of user input; and
executing the selected action on the selected UI object.
Dependent claims: 6, 7, 8, 9, 10, 11, 12, 13, 14, 15.
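The symmetric constraint in claim 5 (whichever modality arrives first narrows the return-parameter space of the other) can be sketched as below. The object catalog, object names, and action verbs here are hypothetical stand-ins for a real UI, assumed only for illustration.

```python
# Minimal sketch of claim 5's cross-modal constraint; the catalog and
# its verbs are invented, not from the patent.
UI_OBJECTS = {
    "photo": {"open", "share", "delete"},
    "song": {"play", "pause", "delete"},
}

def constrain(first_type, first_value):
    """Whichever input arrives first narrows the search space of the second."""
    if first_type == "gesture":
        # Gesture already picked an object: only its supported action
        # verbs remain valid outputs of the speech decoder.
        return UI_OBJECTS[first_value]
    # Speech already named an action: only objects supporting that verb
    # remain valid targets of the gesture hit-test.
    return {name for name, verbs in UI_OBJECTS.items() if first_value in verbs}
```

Shrinking the candidate set this way is what lets an ambiguous second input be resolved: a decoder choosing among three verbs, or a hit-test among one or two objects, tolerates far more noise than one choosing over the full UI.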
16. Enacted in a computer system operatively coupled to a vision system, a method to apply natural user input (NUI) to control the computer system, the method comprising:
detecting non-verbal, touchless input;
computing, based on the non-verbal, touchless user input, coordinates on a user interface (UI) arranged in sight of the user;
detecting a vocalization;
if the coordinates are within a first range, operating a speech-recognition engine of the computer system to interpret the vocalization using a first set of vocabulary; and
if the coordinates are within a second range, different than the first range, operating the speech-recognition engine to interpret the vocalization using a second set of vocabulary, which differs from the first set.
Dependent claims: 17, 18, 19, 20.
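Claim 16's range-dependent vocabulary selection amounts to a coordinate test that swaps the word list handed to the speech-recognition engine. In this sketch the vertical screen split and the two vocabularies are invented for illustration; the claim itself does not fix any particular ranges or word sets.

```python
# Sketch of claim 16's range-dependent vocabularies; the screen split
# and the two word lists are assumptions made for illustration.
MEDIA_VOCAB = {"play", "pause", "stop"}       # first range -> first vocabulary
DOCUMENT_VOCAB = {"open", "scroll", "close"}  # second range -> second vocabulary

def vocabulary_for(coords, split_x=50):
    """Pick the vocabulary based on which half of the UI the coords fall in."""
    x, _y = coords
    return MEDIA_VOCAB if x < split_x else DOCUMENT_VOCAB

def interpret(vocalization, coords):
    """Interpret the vocalization against the range-selected vocabulary."""
    return vocalization if vocalization in vocabulary_for(coords) else None
```

The same utterance can thus resolve differently, or not at all, depending on where the user's hand points when it is spoken.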
Specification