Manipulating an object utilizing a pointing device
First Claim
1. A computer-readable storage device having stored thereon computer-executable instructions comprising:
identifying a position and an orientation of a pointing device in three-dimensional space;
determining that the pointing device is directed to an object based on the position and the orientation of the pointing device in three-dimensional space;
determining that an input sequence of sensor values output by the pointing device matches a matching prototype sequence from a set of stored prototype sequences, wherein each stored prototype sequence represents a sequence of said sensor values that are generated if a user performs a unique gesture representing a different control action using the pointing device, wherein the matching prototype sequence is determined at least by comparing the matching prototype sequence to the input sequence of sensor values and by comparing the input sequence of sensor values to one or more versions of the matching prototype sequence that are scaled up and down in amplitude and warped in time; and
controlling a computer system based on a command and referent pair, wherein the referent is determined based on the position and the orientation of the pointing device in three-dimensional space, and wherein the command is determined based on a gesture associated with the matching prototype sequence.
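The gesture-matching limitation above describes comparing an input sensor sequence against stored prototype sequences that are scaled up and down in amplitude and warped in time. A minimal sketch of one way such a comparison could work, using dynamic time warping over a few amplitude scales (the function names, scale set, and acceptance threshold are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow insertion, deletion, or match: this is the time warping.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def match_gesture(input_seq, prototypes, scales=(0.5, 0.75, 1.0, 1.25, 1.5),
                  threshold=5.0):
    """Return the name of the best-matching prototype sequence, or None.

    Each prototype is compared to the input at several amplitude scales;
    DTW supplies the warping in time. A match is accepted only when the
    best distance falls below the (assumed) threshold.
    """
    best_name, best_dist = None, np.inf
    for name, proto in prototypes.items():
        for s in scales:
            d = dtw_distance(input_seq, s * np.asarray(proto, dtype=float))
            if d < best_dist:
                best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```

A noisy input resembling a stored "flick" prototype would then be recognized as that gesture, while an unrelated sequence would match nothing.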
Abstract
The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, a gesture recognition subsystem employing a wireless pointing device, and a pointing analysis subsystem also employing the pointing device are combined to determine which component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and referent pair. The referent can be identified by using the pointing device to point at the component or at an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs.
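The abstract's decomposition of a desired action into a command and referent pair can be sketched as a simple fusion rule. The function below is a hypothetical illustration (the argument names and the precedence order among modalities are assumptions, not taken from the patent): it takes whichever modalities produced output and emits a pair only when both halves are resolved.

```python
def resolve_action(pointing_target=None, speech_referent=None,
                   button_command=None, gesture_command=None,
                   speech_command=None):
    """Fuse multimodal inputs into a (command, referent) pair.

    The referent may come from pointing, from speech, or both; the
    command may come from a button press, a gesture, or a speech event.
    Returns None when either half of the pair is missing.
    """
    referent = pointing_target or speech_referent
    command = button_command or gesture_command or speech_command
    return (command, referent) if command and referent else None
```

Pointing at a lamp while performing a recognized gesture would thus yield a complete pair, while a referent alone resolves to no action.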
435 Citations
14 Claims
1. A computer-readable storage device having stored thereon computer-executable instructions comprising:
identifying a position and an orientation of a pointing device in three-dimensional space;
determining that the pointing device is directed to an object based on the position and the orientation of the pointing device in three-dimensional space;
determining that an input sequence of sensor values output by the pointing device matches a matching prototype sequence from a set of stored prototype sequences, wherein each stored prototype sequence represents a sequence of said sensor values that are generated if a user performs a unique gesture representing a different control action using the pointing device, wherein the matching prototype sequence is determined at least by comparing the matching prototype sequence to the input sequence of sensor values and by comparing the input sequence of sensor values to one or more versions of the matching prototype sequence that are scaled up and down in amplitude and warped in time; and
controlling a computer system based on a command and referent pair, wherein the referent is determined based on the position and the orientation of the pointing device in three-dimensional space, and wherein the command is determined based on a gesture associated with the matching prototype sequence.
Dependent claims: 2, 3, 4, 5
6. A method for multimodal electronic component control comprising:
identifying a position and an orientation of a pointing device in three-dimensional space;
determining that the pointing device is directed to an object based on the position and the orientation of the pointing device in three-dimensional space;
determining that an input sequence of sensor values output by the pointing device matches a matching prototype sequence from a set of stored prototype sequences, wherein each stored prototype sequence represents a sequence of said sensor values that are generated if a user performs a unique gesture representing a different control action using the pointing device, wherein the matching prototype sequence is determined at least by comparing the matching prototype sequence to the input sequence of sensor values and by comparing the input sequence of sensor values to one or more versions of the matching prototype sequence that are scaled up and down in amplitude and warped in time; and
controlling a computer system based on a command and referent pair, wherein the referent is determined based on the position and the orientation of the pointing device in three-dimensional space, and wherein the command is determined based on a gesture associated with the matching prototype sequence.
Dependent claims: 7, 8, 9, 10
11. A system for multimodal electronic component control comprising:
one or more cameras that capture one or more images of a pointing device in a three-dimensional space;
a processing component configured to perform instructions comprising:
receiving data associated with the one or more images;
identifying a position and an orientation of the pointing device in the three-dimensional space based on the received data;
determining that the pointing device is directed to an object based on the position and the orientation of the pointing device in three-dimensional space;
determining that an input sequence of sensor values output by the pointing device matches a matching prototype sequence from a set of stored prototype sequences, wherein each stored prototype sequence represents a sequence of said sensor values that are generated if a user performs a unique gesture representing a different control action using the pointing device, wherein the matching prototype sequence is determined at least by comparing the matching prototype sequence to the input sequence of sensor values and by comparing the input sequence of sensor values to one or more versions of the matching prototype sequence that are scaled up and down in amplitude and warped in time; and
controlling the system based on a command and referent pair, wherein the referent is determined based on the position and the orientation of the pointing device in three-dimensional space, and wherein the command is determined based on a gesture associated with the matching prototype sequence.
Dependent claims: 12, 13, 14
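Claim 11's determination that the pointing device "is directed to an object" from its position and orientation can be illustrated with a simple ray test. The sketch below is an assumption-laden illustration (the object map, angular tolerance, and function name are not from the patent): it selects the object whose centre lies closest to the pointing ray, within a small angular window.

```python
import numpy as np

def pointed_object(position, direction, objects, max_angle_deg=5.0):
    """Return the name of the object the device is directed at, if any.

    `objects` maps names to 3-D centre points. The device is considered
    to point at the object whose centre makes the smallest angle with
    the pointing ray, provided that angle is within the tolerance.
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    best, best_angle = None, np.radians(max_angle_deg)
    for name, centre in objects.items():
        to_obj = np.asarray(centre, dtype=float) - np.asarray(position, dtype=float)
        dist = np.linalg.norm(to_obj)
        if dist == 0.0:
            continue  # object centre coincides with the device; skip it
        # Angle between the pointing ray and the direction to the object.
        angle = np.arccos(np.clip(np.dot(d, to_obj / dist), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best
```

Pointing roughly along the x-axis toward a television at (5, 0.1, 0) would select it; pointing at empty space selects nothing.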
Specification