Systems and methods for gesture-based interaction
Abstract
Various of the disclosed embodiments present depth-based user interaction systems facilitating natural and immersive user interactions. Particularly, various embodiments integrate immersive visual presentations with natural and fluid gesture motions. This integration facilitates more rapid user adoption and more precise user interactions. Some embodiments may take advantage of the particular form factors disclosed herein to accommodate user interactions. For example, dual depth sensor arrangements in a housing atop the interface's display may facilitate depth fields of view accommodating more natural gesture recognition than may be otherwise possible. In some embodiments, these gestures may be organized into a framework for universal control of the interface by the user and for application-specific control of the interface by the user.
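The abstract's split between universal control and application-specific control of the interface can be pictured as a small dispatch layer sitting between gesture recognition and applications. The sketch below is illustrative only: the names (GestureRouter, bind_universal, bind_for_app, dispatch) are hypothetical, and the patent does not prescribe any particular implementation or precedence rule.

```python
# Hypothetical sketch of a gesture framework with universal and
# application-specific bindings, as described in the abstract.
# Nothing here is taken from the patent beyond the two-tier idea.
from typing import Callable

GestureHandler = Callable[[str], None]


class GestureRouter:
    def __init__(self) -> None:
        # Universal bindings apply regardless of the active application.
        self._universal: dict[str, GestureHandler] = {}
        # Application-specific bindings apply only to the focused application.
        self._app_specific: dict[str, GestureHandler] = {}

    def bind_universal(self, gesture: str, handler: GestureHandler) -> None:
        self._universal[gesture] = handler

    def bind_for_app(self, gesture: str, handler: GestureHandler) -> None:
        self._app_specific[gesture] = handler

    def dispatch(self, gesture: str) -> None:
        # Assumed design choice: universal gestures take precedence over
        # per-application ones, so system-level control always works.
        handler = self._universal.get(gesture) or self._app_specific.get(gesture)
        if handler is not None:
            handler(gesture)


router = GestureRouter()
router.bind_universal("wave", lambda g: print("interface: returning home"))
router.bind_for_app("swipe", lambda g: print("application: scrolling content"))
router.dispatch("swipe")  # prints "application: scrolling content"
```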
Claims
1. A user interface comprising:
one or more displays;
a depth sensor housing frame, the depth sensor housing frame comprising a first depth sensor associated with a first field of view passing through a first panel of the depth sensor housing frame, the first panel substantially opaque to visual frequencies, and a second depth sensor associated with a second field of view passing through a second panel of the depth sensor housing frame, the second panel substantially opaque to visual frequencies, the first depth sensor mechanically coupled above the second depth sensor, the depth sensor housing frame above the one or more displays;
at least one processor; and
at least one memory comprising instructions configured, in conjunction with the at least one processor, to cause the user interface to perform a method comprising:
acquiring a first frame of depth data using the first depth sensor;
acquiring a second frame of depth data using the second depth sensor;
associating at least a portion of the first frame of depth data and the second frame of depth data with a classification corresponding to a first portion of a user's body;
determining a gesture based, at least in part, upon the classification; and
notifying an application running on the user interface of the determined gesture.
Dependent claims: 2, 3, 4, 5, 6, 7.
8. A computer-implemented method running on a user interface, the method comprising:
acquiring a first frame of depth data from a first depth sensor;
acquiring a second frame of depth data from a second depth sensor, the first depth sensor and the second depth sensor within a depth sensor housing frame above one or more displays, the first depth sensor associated with a first field of view passing through a first panel of the depth sensor housing frame, the first panel substantially opaque to visual frequencies, and the second depth sensor associated with a second field of view passing through a second panel, the second panel substantially opaque to visual frequencies, the first depth sensor mechanically coupled above the second depth sensor;
associating at least a portion of the first depth frame and the second depth frame with a classification corresponding to a first portion of a user's body;
determining a gesture based, at least in part, upon the classification; and
notifying an application running on the user interface of the determined gesture.
Dependent claims: 9, 10, 11, 12, 18, 19.
13. A non-transitory computer-readable medium comprising instructions configured to cause a user interface to perform a method, the method comprising:
acquiring a first frame of depth data from a first depth sensor;
acquiring a second frame of depth data from a second depth sensor, the first depth sensor and the second depth sensor within a depth sensor housing frame above one or more displays, the first depth sensor associated with a first field of view passing through a first panel of the depth sensor housing frame, the first panel substantially opaque to visual frequencies, and the second depth sensor associated with a second field of view passing through a second panel, the second panel substantially opaque to visual frequencies, the first depth sensor mechanically coupled above the second depth sensor;
associating at least a portion of the first depth frame and the second depth frame with a classification corresponding to a first portion of a user's body;
determining a gesture based, at least in part, upon the classification; and
notifying an application running on the user interface of the determined gesture.
Dependent claims: 14, 15, 16, 17, 20, 21.
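All three independent claims recite the same five-step pipeline: acquire a depth frame from each of the two stacked sensors, associate a portion of the combined depth data with a body-part classification, determine a gesture from that classification, and notify the running application. The following is a minimal runnable sketch of that flow under stated assumptions: every name (DepthSensor, classify_body_parts, determine_gesture, run_pipeline) is hypothetical, and the classifier and gesture logic are toy stand-ins, not the patent's method.

```python
# Illustrative sketch of the pipeline recited in claims 1, 8, and 13.
# All names and the classification/gesture heuristics are hypothetical;
# the claims do not specify an implementation.
from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np


@dataclass
class DepthSensor:
    """Hypothetical stand-in for one depth sensor in the housing frame."""
    field_of_view_deg: float

    def acquire_frame(self) -> np.ndarray:
        # A real sensor SDK would return measured depths (meters); here we
        # fabricate a placeholder frame so the sketch is runnable.
        return np.random.uniform(0.5, 3.0, size=(240, 320)).astype(np.float32)


def classify_body_parts(frames: list[np.ndarray]) -> dict[str, np.ndarray]:
    """Associate depth pixels with body-part classes (e.g. 'hand').

    The claims only require that some portion of both frames receives a
    classification; a per-pixel classifier would be one plausible
    realization. Here a toy nearness rule stands in for it.
    """
    merged = np.concatenate(frames, axis=0)  # stack upper + lower fields of view
    hand_mask = merged < 1.0                 # toy rule: near pixels = hand
    return {"hand": merged * hand_mask}


def determine_gesture(classified: dict[str, np.ndarray]) -> Optional[str]:
    """Derive a gesture from the classified depth data (toy heuristic)."""
    hand = classified["hand"]
    if np.count_nonzero(hand) > 0.1 * hand.size:
        return "swipe"
    return None


def run_pipeline(upper: DepthSensor, lower: DepthSensor,
                 notify: Callable[[str], None]) -> None:
    first = upper.acquire_frame()    # claim step: first frame of depth data
    second = lower.acquire_frame()   # claim step: second frame of depth data
    classified = classify_body_parts([first, second])
    gesture = determine_gesture(classified)
    if gesture is not None:
        notify(gesture)              # claim step: notify the application


if __name__ == "__main__":
    run_pipeline(DepthSensor(60.0), DepthSensor(60.0),
                 notify=lambda g: print(f"application notified: {g!r}"))
```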
Specification