Adaptive tracking system for spatial input devices
Abstract
An adaptive tracking system for spatial input devices provides real-time tracking of spatial input devices for human-computer interaction in a Spatial Operating Environment (SOE). The components of an SOE include gestural input/output; network-based data representation, transit, and interchange; and spatially conformed display mesh. The SOE comprises a workspace occupied by one or more users, a set of screens which provide the users with visual feedback, and a gestural control system which translates user motions into command inputs. Users perform gestures with body parts and/or physical pointing devices, and the system translates those gestures into actions such as pointing, dragging, selecting, or other direct manipulations. The tracking system provides the requisite data for creating an immersive environment by maintaining a model of the spatial relationships between users, screens, pointing devices, and other physical objects within the workspace.
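The abstract's central idea is a tracking component that maintains a single coherent model of the spatial relationships among users, screens, pointing devices, and other objects in the workspace. A minimal sketch of such a model, with hypothetical names and structure not drawn from the patent itself: object poses are stored in one shared workspace frame, so any pairwise relationship can be derived on demand.

```python
# Illustrative sketch only (hypothetical names, not the patented implementation):
# a tracking component keeps per-object positions in one shared coordinate
# frame and derives pairwise spatial relationships on demand.
from dataclasses import dataclass
import math


@dataclass
class Pose:
    x: float
    y: float
    z: float


class CoherentModel:
    """Maps object IDs (wands, screens, sensors, users) to poses in a
    single workspace frame, so any pairwise relationship is derivable."""

    def __init__(self):
        self._poses = {}

    def update(self, obj_id: str, pose: Pose) -> None:
        # Feature data from a sensor resolves to an absolute
        # three-space location for the object at this instant.
        self._poses[obj_id] = pose

    def offset(self, a: str, b: str) -> tuple:
        # The spatial relationship between two tracked objects,
        # expressed as the vector from a to b.
        pa, pb = self._poses[a], self._poses[b]
        return (pb.x - pa.x, pb.y - pa.y, pb.z - pa.z)

    def distance(self, a: str, b: str) -> float:
        return math.sqrt(sum(d * d for d in self.offset(a, b)))


model = CoherentModel()
model.update("wand-1", Pose(0.0, 1.0, 2.0))
model.update("screen-1", Pose(3.0, 1.0, 2.0))
print(model.offset("wand-1", "screen-1"))    # vector from wand to screen
print(model.distance("wand-1", "screen-1"))  # straight-line separation
```

Keeping every pose in one frame is what makes the model "coherent": pointing, dragging, and selection can all be computed from the same store without per-pair calibration.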
Claims
1. A method comprising:

a tracking component of a gestural control system receiving feature data of a first object of a plurality of objects included in a spatial operating environment (SOE) from at least one sensor of a plurality of sensors of the SOE, wherein locations of each of the plurality of sensors define the SOE; and

the tracking component generating from the received feature data of the first object a coherent model of spatial relationships that includes a spatial relationship between the first object and a second object of the plurality of objects.
2. The method of claim 1, wherein each sensor is a camera.

3. The method of claim 1, wherein at least one sensor is a camera.

4. The method of claim 1, wherein the tracking component receives the feature data of the first object from one sensor of the plurality of sensors, and wherein the one sensor is a camera.

5. The method of claim 1, wherein the plurality of sensors includes two or more cameras, and wherein the tracking component receiving the feature data of the first object comprises the tracking component receiving the feature data of the first object from two or more cameras of the plurality of sensors.

6. The method of claim 1, wherein the first object is a wand.

7. The method of claim 1, wherein the first object is an input device.

8. The method of claim 1, wherein the first object is a display device.

9. The method of claim 1, wherein the first object is a mobile screen.

10. The method of claim 1, further comprising the tracking component receiving feature data of the second object of the plurality of objects included in the SOE from at least one sensor of the plurality of sensors of the SOE.

11. The method of claim 1, wherein the first object is a wand and the second object is a wand.

12. The method of claim 1, wherein the first object is an input device and the second object is an input device.

13. The method of claim 1, wherein the first object is a wand and the second object is a display device.

14. The method of claim 1, wherein the first object is an input device and the second object is a display device.

15. The method of claim 1, wherein the first object is a wand and the second object is a mobile screen.

16. The method of claim 1, wherein the feature data of the first object is absolute three-space location data of an instantaneous state of the first object at a point in time and space.

17. The method of claim 1, wherein the tracking component receives the feature data of the first object from at least a first sensor and a second sensor of the plurality of sensors.

18. The method of claim 17, wherein the coherent model of spatial relationships includes a spatial relationship between the first sensor and the second sensor.

19. The method of claim 18, wherein the feature data of the first object is absolute three-space location data of an instantaneous state of the first object at a point in time and space.

20. The method of claim 1, wherein the gestural control system includes the plurality of sensors.
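Claims 5 and 16 through 19 combine two-camera feature data with absolute three-space location data. One standard way such data can be recovered (a sketch under assumed calibration parameters, not a statement of the patented method) is stereo triangulation on a rectified camera pair, where depth follows from the disparity between the two images: Z = f * B / d.

```python
# Illustrative sketch only (hypothetical parameters, not the patented method):
# recovering an absolute three-space location for a tracked feature seen by
# two calibrated, rectified cameras, via classic stereo triangulation.

def triangulate(xl: float, xr: float, y: float,
                focal_px: float, baseline_m: float) -> tuple:
    """xl, xr: horizontal image coordinates of the same feature in the
    left/right cameras (pixels, principal point at 0); y: vertical image
    coordinate in the left camera; focal_px: focal length in pixels;
    baseline_m: camera separation in metres. Returns (X, Y, Z) in the
    left camera's frame, in metres."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    z = focal_px * baseline_m / disparity       # depth from disparity
    x = xl * z / focal_px                       # back-project to 3D
    y_m = y * z / focal_px
    return (x, y_m, z)


# A wand-tip feature with 100 px of disparity between two cameras 0.5 m
# apart, imaged with an 800 px focal length, sits 4 m from the cameras.
print(triangulate(120.0, 20.0, 40.0, 800.0, 0.5))
```

Because the claims also place the sensors themselves in the coherent model (claim 18), the baseline and camera poses used here would themselves be entries in that model rather than fixed constants.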
Specification