SYSTEM AND METHOD FOR ASSIGNING VOICE AND GESTURE COMMAND AREAS
First Claim
1. An apparatus for assigning voice and air-gesture command areas, said apparatus comprising:
a recognition module configured to receive data captured by at least one sensor related to a computing environment and at least one user within said computing environment, to identify one or more attributes of said user based on said captured data, and to establish user input based on said user attributes, wherein said user input includes at least one of a voice command and an air-gesture command and a corresponding one of a plurality of user input command areas in which said voice or air-gesture command occurred; and
an application control module configured to receive and analyze said user input, identify an application to be controlled by said user input based, at least in part, on said user input command area in which said user input occurred, and permit user interaction with and control of one or more parameters of said identified application based on said user input.
Abstract
A system and method for assigning user input command areas for receiving user voice and air-gesture commands and allowing user interaction and control of multiple applications of a computing device. The system includes a voice and air-gesture capturing system configured to allow a user to assign three-dimensional user input command areas within the computing environment for each of the multiple applications. The voice and air-gesture capturing system is configured to receive data captured by one or more sensors in the computing environment and identify user input based on the data, including user speech and/or air-gesture commands within one or more user input command areas. The voice and air-gesture capturing system is further configured to identify an application corresponding to the user input based on the identified user input command area and allow user interaction with the identified application based on the user input.
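The abstract describes binding three-dimensional regions of the environment to applications and routing each voice or air-gesture command to the application whose region it occurred in. A minimal sketch of that routing idea, assuming axis-aligned box regions and illustrative names (`CommandArea`, `route_input`) that are not from the patent:

```python
from dataclasses import dataclass

@dataclass
class CommandArea:
    """An axis-aligned 3-D region of the environment bound to one application."""
    name: str
    app: str
    min_xyz: tuple  # (x, y, z) lower corner, e.g. metres from the sensor
    max_xyz: tuple  # (x, y, z) upper corner

    def contains(self, point):
        # True when the point lies inside this box on every axis.
        return all(lo <= p <= hi
                   for p, lo, hi in zip(point, self.min_xyz, self.max_xyz))

def route_input(areas, gesture_point, command):
    """Return (app, command) for the first area containing the gesture point."""
    for area in areas:
        if area.contains(gesture_point):
            return area.app, command
    return None, command  # the gesture fell outside every assigned area

# Two user-assigned command areas, each bound to a different application.
areas = [
    CommandArea("left of keyboard", "media_player", (0.0, 0.0, 0.0), (0.3, 0.3, 0.3)),
    CommandArea("right of keyboard", "email_client", (0.5, 0.0, 0.0), (0.8, 0.3, 0.3)),
]
print(route_input(areas, (0.1, 0.1, 0.1), "swipe_up"))
```

The same lookup would apply to a voice command, using the speaker's tracked position to select the command area.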
20 Claims
1. An apparatus for assigning voice and air-gesture command areas, said apparatus comprising:
a recognition module configured to receive data captured by at least one sensor related to a computing environment and at least one user within said computing environment, to identify one or more attributes of said user based on said captured data, and to establish user input based on said user attributes, wherein said user input includes at least one of a voice command and an air-gesture command and a corresponding one of a plurality of user input command areas in which said voice or air-gesture command occurred; and
an application control module configured to receive and analyze said user input, identify an application to be controlled by said user input based, at least in part, on said user input command area in which said user input occurred, and permit user interaction with and control of one or more parameters of said identified application based on said user input.
Dependent claims: 2-10.
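Claim 1 decomposes the apparatus into two cooperating modules. A sketch of that decomposition, assuming the class and method names (`RecognitionModule`, `ApplicationControlModule`, `establish_user_input`, `handle`) as illustrative stand-ins rather than the patent's implementation:

```python
class RecognitionModule:
    """Turns captured sensor data into (command, command_area) user input."""
    def establish_user_input(self, frame):
        # A real system would run speech/gesture recognition on the captured
        # data and locate the command in 3-D space; here we assume the frame
        # already carries those identified attributes.
        return {"command": frame["command"], "area": frame["area"]}

class ApplicationControlModule:
    """Maps a user input's command area to the application it controls."""
    def __init__(self, area_to_app):
        self.area_to_app = area_to_app  # e.g. {"area_1": "media_player"}

    def handle(self, user_input):
        app = self.area_to_app.get(user_input["area"])
        if app is None:
            return None  # input occurred outside any assigned command area
        # Permit control of the identified application's parameters.
        return f"{app}: {user_input['command']}"

recog = RecognitionModule()
ctrl = ApplicationControlModule({"area_1": "media_player"})
user_input = recog.establish_user_input({"command": "volume_up", "area": "area_1"})
print(ctrl.handle(user_input))
```

Keeping recognition separate from application control mirrors the claim: the recognition module needs no knowledge of which applications exist, and the control module needs no access to raw sensor data.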
11. At least one computer accessible medium storing instructions which, when executed by a machine, cause the machine to perform operations for assigning voice and air-gesture command areas, said operations comprising:
monitoring a computing environment and at least one user within said computing environment attempting to interact with a user interface;
receiving data captured by at least one sensor within said computing environment;
identifying one or more attributes of said at least one user in said computing environment based on said captured data and establishing user input based on said user attributes, said user input including at least one of a voice command and an air-gesture command and a corresponding one of a plurality of user input command areas in which said voice or air-gesture command occurred; and
identifying an application to be controlled by said user input based, at least in part, on said corresponding user input command area.
Dependent claims: 12-15.
16. A method for assigning voice and air-gesture command areas, said method comprising:
monitoring a computing environment and at least one user within said computing environment attempting to interact with a user interface;
receiving data captured by at least one sensor within said computing environment;
identifying one or more attributes of said at least one user in said computing environment based on said captured data and establishing user input based on said user attributes, said user input including at least one of a voice command and an air-gesture command and a corresponding one of a plurality of user input command areas in which said voice or air-gesture command occurred; and
identifying an application to be controlled by said user input based, at least in part, on said corresponding user input command area.
Dependent claims: 17-20.
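The method steps above can be sketched as a small pipeline. All function names and the frame/area representations are assumptions for illustration; the claim does not prescribe an implementation:

```python
def capture_frames(sensor_data):
    """Steps 1-2: monitor the environment and receive captured sensor data."""
    yield from sensor_data

def identify_attributes(frame):
    """Step 3: derive user attributes (speech, hand position) from a frame."""
    return {"speech": frame.get("speech"), "hand_xyz": frame.get("hand_xyz")}

def establish_user_input(attrs, areas):
    """Step 3 (cont.): bind the command to the command area it occurred in."""
    for name, (lo, hi) in areas.items():
        if all(l <= v <= h for v, l, h in zip(attrs["hand_xyz"], lo, hi)):
            return {"command": attrs["speech"], "area": name}
    return None  # command occurred outside every assigned area

def identify_application(user_input, area_to_app):
    """Step 4: identify the application to be controlled from the area."""
    return area_to_app[user_input["area"]] if user_input else None

# One assigned command area bound to one application.
areas = {"area_1": ((0, 0, 0), (1, 1, 1))}
apps = {"area_1": "browser"}
for frame in capture_frames([{"speech": "scroll down", "hand_xyz": (0.5, 0.5, 0.5)}]):
    ui = establish_user_input(identify_attributes(frame), areas)
    print(identify_application(ui, apps))
```

The same pipeline reading covers claim 11, which recites these operations as instructions on a computer-accessible medium.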
Specification