System and method for gesture based control system
First Claim
1. A method of controlling a computer display comprising:
detecting a physical control gesture made by a user by dynamically detecting a position of at least one marker on the user, the detecting comprising using gesture data that is absolute three-space location data of an instantaneous state of the user at a point in time and space, and identifying the physical control gesture using only the gesture data;
translating the control gesture to an executable command;
updating the computer display in response to the executable command.
Abstract
The system provides a gestural interface to various visually presented elements, presented on a display screen or screens. A gestural vocabulary includes ‘instantaneous’ commands, in which forming one or both hands into the appropriate ‘pose’ results in an immediate, one-time action; and ‘spatial’ commands, in which the operator either refers directly to elements on the screen by way of literal ‘pointing’ gestures or performs navigational maneuvers by way of relative or “offset” gestures. The system contemplates the ability to identify the user's hands in the form of a glove or gloves with certain indicia provided thereon, or any suitable means for providing recognizable indicia on a user's hands or body parts. A system of cameras can detect the position, orientation, and movement of the user's hands and translate that information into executable commands.
107 Claims
1. A method of controlling a computer display comprising:
detecting a physical control gesture made by a user by dynamically detecting a position of at least one marker on the user, the detecting comprising using gesture data that is absolute three-space location data of an instantaneous state of the user at a point in time and space, and identifying the physical control gesture using only the gesture data;
translating the control gesture to an executable command;
updating the computer display in response to the executable command.
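Read as a pipeline, claim 1 names three steps: identify a gesture from absolute three-space marker data alone, translate it to an executable command, and update the display. A minimal sketch with invented gesture rules and command names (nothing here is the patented implementation):

```python
# Illustrative pipeline for the three claimed steps (hypothetical
# gesture rule and command table; not the patented implementation).

def detect_gesture(marker_positions):
    """Identify a gesture from absolute three-space marker positions only."""
    # Toy rule: a single marker above z = 1.0 is treated as a 'point-up' pose.
    x, y, z = marker_positions[0]
    return "point-up" if z > 1.0 else "rest"

def translate_to_command(gesture):
    """Translate the identified gesture to an executable command."""
    commands = {"point-up": "SCROLL_UP", "rest": "NOOP"}
    return commands[gesture]

def update_display(display_state, command):
    """Update the display state in response to the command."""
    if command == "SCROLL_UP":
        display_state["scroll"] -= 1
    return display_state

state = {"scroll": 10}
gesture = detect_gesture([(0.2, 0.5, 1.4)])   # absolute (x, y, z) in metres
state = update_display(state, translate_to_command(gesture))
print(state)   # -> {'scroll': 9}
```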
2. A method comprising:
detecting poses and motion of an object from gesture data received via a detector, the detecting comprising dynamically detecting a position of at least one marker on the object using the gesture data that is absolute three-space location data of an instantaneous state of the poses and motion at a point in time and space, the detecting comprising identifying the poses and motion using only the gesture data;
translating the poses and motion into a control signal using a gesture notation; and
controlling a computer application using the control signal.
3. A method comprising:
automatically detecting a gesture of a body from gesture data received via a detector, the detecting comprising detecting a position of at least one marker on the body using the gesture data that is absolute three-space location data of an instantaneous state of the body at a point in time and space, the detecting comprising identifying the gesture using only the gesture data;
translating the gesture to a gesture signal; and
controlling a component coupled to a computer in response to the gesture signal.
4. The method of claim 3, wherein the detecting includes detecting a location of the body.
5. The method of claim 3, wherein the detecting includes detecting an orientation of the body.
6. The method of claim 3, wherein the detecting includes detecting motion of the body.
7. The method of claim 3, wherein the detecting comprises identifying the gesture, wherein the identifying includes identifying a pose and an orientation of a portion of the body.
8. The method of claim 3, wherein the detecting includes detecting at least one of a first set of appendages and a second set of appendages of the body.
9. The method of claim 8, wherein the body is a human, wherein the first set of appendages include at least one hand, wherein the second set of appendages include at least one finger of the at least one hand.
10. The method of claim 3, wherein the detecting includes optically detecting motion of the body.
11. The method of claim 3, wherein the at least one marker includes a set of markers.
12. The method of claim 11, wherein the detecting includes detecting position of the set of markers coupled to a part of the body.
13. The method of claim 11, wherein the set of markers form a plurality of patterns on the body.
14. The method of claim 11, wherein the detecting includes detecting position of a plurality of appendages of the body using the set of markers coupled to each of the appendages.
15. The method of claim 14, wherein a first set of markers is coupled to a first appendage, the first set of markers forming a first pattern common to a plurality of components of the first appendage and a second pattern unique to each of the components of the first appendage.
16. The method of claim 14, wherein a second set of markers is coupled to a second appendage, the second set of markers forming a third pattern common to a plurality of components of the second appendage and a fourth pattern unique to each of the components of the second appendage.
17. The method of claim 11, wherein the detecting comprises assigning a position of each marker to a subset of markers that form a tag.
18. The method of claim 17, wherein the detecting comprises identifying the subset of markers as a particular tag and labeling the subset of markers as the particular tag.
19. The method of claim 18, wherein the detecting comprises recovering a three-space location of the particular tag.
20. The method of claim 19, wherein the detecting comprises recovering a three-space orientation of the particular tag.
21. The method of claim 11, comprising a set of tags that include the set of markers.
22. The method of claim 21, wherein the detecting includes detecting position of the set of tags coupled to a part of the body.
23. The method of claim 22, wherein each tag of the set of tags includes a pattern, wherein the pattern comprises the set of markers, wherein each pattern of each tag of the set of tags is different than any pattern of any remaining tag of the set of tags.
24. The method of claim 23, wherein each tag includes a first pattern and a second pattern, wherein the first pattern is common to any tag of the set of tags and the second pattern is different between at least two tags of the set of tags.
25. The method of claim 22, wherein the set of tags form a plurality of patterns on the body.
26. The method of claim 21, wherein the detecting includes detecting position of a plurality of appendages of the body using the set of tags coupled to each of the appendages.
27. The method of claim 26, wherein a first set of tags is coupled to a first appendage, the first set of tags including a first plurality of tags, wherein each tag includes a first pattern common to the tags of the first plurality of tags and a second pattern unique to each tag of the first plurality of tags.
28. The method of claim 27, wherein a second set of tags is coupled to a second appendage, the second set of tags including a second plurality of tags, wherein each tag includes a third pattern common to the tags of the second plurality of tags and a fourth pattern unique to each tag of the second plurality of tags.
29. The method of claim 21, wherein the set of tags comprise at least one of an active tag and a passive tag.
30. The method of claim 21, wherein the detecting comprises assigning a position of each tag to a subset of points that form a single tag.
31. The method of claim 30, wherein the detecting comprises identifying the subset of points as a particular tag and labeling the subset of points as the particular tag.
32. The method of claim 31, wherein the detecting comprises recovering a three-space location of the particular tag.
33. The method of claim 32, wherein the detecting comprises recovering a three-space orientation of the particular tag.
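Claims 17-20 and 30-33 describe grouping marker (or point) positions into a labeled tag and then recovering the tag's three-space location and orientation. A minimal sketch under assumed geometry, taking the location as the marker centroid and the orientation as the unit vector between the tag's first two markers (the claims do not specify this computation):

```python
import math

# Illustrative recovery of a tag's three-space location and orientation
# from its member markers (assumed geometry; not the patented algorithm).

def recover_tag(markers):
    """markers: list of (x, y, z) points belonging to one labeled tag."""
    n = len(markers)
    # Location: centroid of the tag's markers.
    location = tuple(sum(p[i] for p in markers) / n for i in range(3))
    # Orientation: unit vector from the first marker to the second.
    dx, dy, dz = (markers[1][i] - markers[0][i] for i in range(3))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    orientation = (dx / norm, dy / norm, dz / norm)
    return location, orientation

loc, ori = recover_tag([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 3.0, 0.0)])
print(loc)   # -> (1.0, 1.0, 0.0)
print(ori)   # -> (1.0, 0.0, 0.0)
```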
34. The method of claim 3, wherein the detecting includes detecting motion of the body using at least one of electromagnetic detection, magnetostatic detection, and detection of radio frequency identification information.
35. The method of claim 3, wherein the detecting comprises:
generating three-dimensional space point data representing the physical gesture;
labeling the space point data.
36. The method of claim 35, wherein the translating includes translating the space point data into commands appropriate to a configuration of the computer.
37. The method of claim 3, wherein the translating comprises translating information of the gesture to a gesture notation.
38. The method of claim 37, wherein the gesture notation represents a gesture vocabulary, and the gesture signal comprises communications of the gesture vocabulary.
39. The method of claim 38, wherein the gesture vocabulary represents in textual form instantaneous pose states of kinematic linkages of the body.
40. The method of claim 38, wherein the gesture vocabulary represents in textual form an orientation of kinematic linkages of the body.
41. The method of claim 38, wherein the gesture vocabulary represents in textual form a combination of orientations of kinematic linkages of the body.
42. The method of claim 38, wherein the gesture vocabulary includes a string of characters that represent a state of kinematic linkages of the body.
43. The method of claim 42, wherein the kinematic linkage is at least one first appendage of the body.
44. The method of claim 43, comprising assigning each position in the string to a second appendage, the second appendage connected to the first appendage.
45. The method of claim 44, comprising assigning characters of a plurality of characters to each of a plurality of positions of the second appendage.
46. The method of claim 45, wherein the plurality of positions is established relative to a coordinate origin.
47. The method of claim 46, comprising establishing the coordinate origin using a position selected from a group consisting of an absolute position and orientation in space, a fixed position and orientation relative to the body irrespective of an overall position and heading of the body, and interactively in response to an action of the body.
48. The method of claim 45, comprising assigning characters of the plurality of characters to each of a plurality of orientations of the first appendage.
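Claims 42-48 describe a gesture vocabulary in which a character string encodes the instantaneous state of a kinematic linkage, with each string position assigned to a connected appendage and characters assigned to its quantized positions. A toy encoding with an invented finger order and invented state characters (the actual character assignments are not given in the claims):

```python
# Toy textual gesture notation in the spirit of claims 42-48 (finger
# order and state characters are invented for illustration): each
# string position corresponds to one finger, and each character
# encodes that finger's quantized state.

FINGER_ORDER = ["thumb", "index", "middle", "ring", "pinky"]
STATE_CHARS = {"curled": "^", "flat": "-", "extended": "|"}

def encode_pose(finger_states):
    """Map {finger: state} to a five-character pose string."""
    return "".join(STATE_CHARS[finger_states[f]] for f in FINGER_ORDER)

# A 'pointing' pose: index extended, everything else curled.
pose = encode_pose({
    "thumb": "curled", "index": "extended",
    "middle": "curled", "ring": "curled", "pinky": "curled",
})
print(pose)   # -> "^|^^^"
```

Because the notation is plain text, pose comparison and vocabulary lookup reduce to string operations, which is one practical appeal of a textual gesture vocabulary.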
49. The method of claim 43, wherein controlling the component comprises controlling a three-space object in six degrees of freedom simultaneously by mapping the gesture of the first appendage to the three-space object.
50. The method of claim 43, wherein controlling the component comprises controlling a three-space object through three translational degrees of freedom and three rotational degrees of freedom.
51. The method of claim 50, wherein the controlling comprises a direct coupling between motion of the first appendage and the three-space object.
52. The method of claim 50, wherein the controlling includes an indirect coupling between motion of the first appendage and the three-space object.
53. The method of claim 50, wherein the three-space object is presented on a display device coupled to the computer.
54. The method of claim 50, wherein the three-space object is coupled to the computer.
55. The method of claim 50, comprising controlling movement of the three-space object by mapping a plurality of gestures of the first appendage to a plurality of object translations of the three-space object.
56. The method of claim 55, wherein the mapping includes a direct mapping between the plurality of gestures and the plurality of object translations.
57. The method of claim 55, wherein the mapping includes an indirect mapping between the plurality of gestures and the plurality of object translations.
58. The method of claim 55, wherein the mapping includes correlating positional offsets of the plurality of gestures to positional offsets of the object translations of the three-space object.
59. The method of claim 55, wherein the mapping includes correlating positional offsets of the first appendage to translational velocity of the object translations of the three-space object.
60. The method of claim 50, comprising controlling movement of the three-space object by mapping a linear gesture of the first appendage to a linear translation of the three-space object.
61. The method of claim 50, comprising controlling movement of the three-space object by mapping a rotational gesture of the first appendage to a rotational translation of the three-space object.
62. The method of claim 50, comprising controlling movement of the three-space object by mapping a linear gesture of the first appendage to a rotational translation of the three-space object.
63. The method of claim 50, comprising controlling movement of the three-space object by mapping a rotational gesture of the first appendage to a linear translation of the three-space object.
64. The method of claim 50, comprising controlling movement of the three-space object along an x-axis using left-right movement of the first appendage.
65. The method of claim 50, comprising controlling movement of the three-space object along a y-axis using up-down movement of the first appendage.
66. The method of claim 50, comprising controlling movement of the three-space object along a z-axis using forward-backward movement of the first appendage.
67. The method of claim 50, comprising controlling movement of the three-space object simultaneously along an x-axis and a y-axis using a first combination of left-right movement and up-down movement of the first appendage.
68. The method of claim 50, comprising controlling movement of the three-space object simultaneously along an x-axis and a z-axis using a second combination of left-right movement and forward-backward movement of the first appendage.
69. The method of claim 50, comprising controlling movement of the three-space object simultaneously along a y-axis and a z-axis using a third combination of up-down movement and forward-backward movement of the first appendage.
70. The method of claim 50, comprising controlling movement of the three-space object simultaneously along an x-axis, a y-axis, and a z-axis using a fourth combination of left-right movement, up-down movement, and forward-backward movement of the first appendage.
71. The method of claim 50, comprising controlling roll of the three-space object around a first axis using a rotational movement of the first appendage.
72. The method of claim 50, comprising controlling roll of the three-space object around a second axis using a rotational movement of the first appendage about a first of the second appendages.
73. The method of claim 50, comprising controlling roll of the three-space object around a third axis using a rotational movement of the first appendage about a second of the second appendages.
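Claims 49-73 map appendage motion onto a three-space object's degrees of freedom: left-right, up-down, and forward-backward hand movement drive the x, y, and z axes, and claim 59 couples positional offsets to translational velocity rather than position. A hedged sketch with assumed axis conventions, gain, and time step:

```python
# Illustrative mapping of hand displacement to a three-space object's
# translational degrees of freedom (claims 64-70), with the positional
# offset optionally driving velocity instead of position (claim 59).
# Axis conventions, gain, and time step are assumptions.

def map_offset_to_translation(hand_offset, velocity_mode=False,
                              gain=1.0, dt=0.1):
    """hand_offset: (left-right, up-down, forward-backward) in metres."""
    lr, ud, fb = hand_offset
    if velocity_mode:
        # Offset acts as a velocity command; integrate over one time step.
        return (gain * lr * dt, gain * ud * dt, gain * fb * dt)
    # Direct coupling: object translation mirrors the hand offset.
    return (gain * lr, gain * ud, gain * fb)

print(map_offset_to_translation((0.3, 0.0, -0.1)))        # -> (0.3, 0.0, -0.1)
print(map_offset_to_translation((0.3, 0.0, -0.1), True))  # small per-step move
```

Direct coupling gives one-to-one positional control; the velocity mode lets a sustained offset produce continuous travel, which suits the "navigational maneuvers" of the abstract.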
74. The method of claim 50, wherein the detecting comprises detecting when an extrapolated position of the object intersects virtual space, wherein the virtual space comprises space depicted on a display device coupled to the computer.
75. The method of claim 74, wherein controlling the component comprises controlling a virtual object in the virtual space when the extrapolated position intersects the virtual object.
76. The method of claim 75, wherein controlling the component comprises controlling a position of the virtual object in the virtual space in response to the extrapolated position in the virtual space.
77. The method of claim 75, wherein controlling the component comprises controlling attitude of the virtual object in the virtual space in response to the gesture.
78. The method of claim 3, comprising specifying the gesture at a plurality of levels.
79. The method of claim 78, comprising partially specifying the gesture using a portion of information of the gesture.
80. The method of claim 78, wherein the plurality of levels include a first level comprising a pose of a first appendage of the body.
81. The method of claim 80, comprising representing the pose as a string of relative orientations between at least one second appendage and a back portion of a first appendage of the body, wherein the second appendage is connected to the first appendage.
82. The method of claim 81, comprising quantizing the string of relative orientations into at least one discrete state.
83. The method of claim 78, wherein the plurality of levels include a second level comprising a combination of poses, the combination of poses comprising a first pose of a first appendage and a second pose of a second appendage of the body.
84. The method of claim 83, wherein the second level comprises a combination of positions, the combination of positions comprising a first position of the first appendage and a second position of the second appendage.
85. The method of claim 78, wherein the plurality of levels include a third level comprising a combination of poses and positions, the combination of poses comprising a third pose of at least one appendage of the body and a fourth pose of at least one appendage of a second body.
86. The method of claim 85, wherein the third level comprises a combination of positions, the combination of positions comprising a third position of the at least one appendage of the body and a fourth position of the at least one appendage of the second body.
87. The method of claim 78, wherein the plurality of levels include a fourth level comprising at least one sequence of gestures.
88. The method of claim 78, wherein the plurality of levels include a fifth level comprising a grapheme gesture, wherein the grapheme gesture comprises the body tracing a shape in free space.
89. The method of claim 78, comprising generating a registered gesture by registering the gesture as relevant to at least one application, wherein the application is coupled to the computer.
90. The method of claim 89, comprising:
parsing the registered gesture;
identifying the registered gesture; and
transferring to the at least one application an event corresponding to the registered gesture.
91. The method of claim 90, comprising prioritizing the registered gesture.
92. The method of claim 91, comprising assigning a state to the registered gesture.
93. The method of claim 92, wherein the state is selected from a group consisting of an entry state and a continuation state, wherein a priority of the continuation state is higher than a priority of the entry state.
94. The method of claim 90, wherein the parsing comprises:
marking missing data components of the gesture;
interpolating the missing data components into one of last known states and most likely states, wherein the interpolating depends on an amount and context of the missing data.
95. The method of claim 94, wherein the identifying comprises using the last known state of the missing data components for the identifying when the last known state is available for analysis.
96. The method of claim 94, wherein the identifying comprises using a best guess of the missing data components for the identifying when the last known state is unavailable for analysis.
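Claims 94-96 describe parsing that marks missing data components and fills them from the last known state when one is available, otherwise from a best guess of the most likely state. A simple sketch of that policy (the selection and interpolation details are assumptions):

```python
# Sketch of the missing-data handling of claims 94-96 (policy details
# are assumptions): absent components of a gesture frame are filled
# from the last known state when available, otherwise from a supplied
# best-guess default.

def fill_missing(frame, last_known, best_guess):
    """frame: {component: value or None}. Returns a completed frame."""
    completed = {}
    for component, value in frame.items():
        if value is not None:
            completed[component] = value
        elif component in last_known:
            completed[component] = last_known[component]   # last known state
        else:
            completed[component] = best_guess[component]   # most likely state
    return completed

frame = {"index_tip": None, "wrist": (0.1, 0.2, 0.9)}
out = fill_missing(frame,
                   last_known={"index_tip": (0.1, 0.3, 1.1)},
                   best_guess={"index_tip": (0.0, 0.0, 1.0)})
print(out["index_tip"])   # -> (0.1, 0.3, 1.1)
```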
97. The method of claim 3, comprising controlling scaling of the detecting and controlling to generate coincidence between virtual space and physical space, wherein the virtual space comprises space depicted on a display device coupled to the computer, wherein the physical space comprises space inhabited by the body.
98. The method of claim 97, comprising determining dimensions, orientations, and positions in the physical space of the display device.
99. The method of claim 98, comprising dynamically mapping the physical space in which the display device is located as a projection into the virtual space of at least one application coupled to the computer.
100. The method of claim 97, comprising translating scale, angle, depth, and dimension between the virtual space and the physical space as appropriate to at least one application coupled to the computer.
101. The method of claim 97, comprising controlling at least one virtual object in the virtual space in response to movement of at least one physical object in the physical space.
102. The method of claim 97, comprising automatically compensating for movement of the display device.
103. The method of claim 97, comprising controlling rendering of graphics on the display device in response to position of the body in physical space relative to position of the display device.
104. The method of claim 97, comprising generating on the display device a display including a virtual version of a physical object present in the physical space, wherein generating the display includes generating coincidence between a virtual position of the virtual version of the physical object and the position of the physical object in the physical space.
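Claims 97-99 describe scaling detection and control so that virtual space coincides with the physical space around the display, given the display's dimensions, orientation, and position. One way to illustrate the idea is a simple affine map from a physical point on the display plane to virtual pixel coordinates (the actual calibration procedure is not specified in the claims):

```python
# Illustrative physical-to-virtual coincidence mapping (claims 97-99):
# a physical (x, y) point on the display plane is mapped so that points
# on the screen surface land on the virtual pixels depicting them.
# The display geometry below is an invented example.

def physical_to_virtual(point, screen_origin, screen_size, resolution):
    """Map a physical (x, y) on the display plane to virtual pixel coords."""
    px = (point[0] - screen_origin[0]) / screen_size[0] * resolution[0]
    py = (point[1] - screen_origin[1]) / screen_size[1] * resolution[1]
    return (px, py)

# A 0.4 m-wide, 0.3 m-tall display at physical origin (1.0, 0.5), 800x600 px.
print(physical_to_virtual((1.2, 0.65), (1.0, 0.5), (0.4, 0.3), (800, 600)))
```

With such a map, an extrapolated pointing position in physical space resolves directly to the on-screen element it covers, which is what makes the literal "pointing" gestures of the abstract possible.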
105. The method of claim 3, comprising determining the gesture is valid.
106. The method of claim 3, wherein the controlling includes controlling a function of an application hosted on the computer.
107. The method of claim 3, wherein the controlling includes controlling a component displayed on the computer.
Specification