Gesture detection based on information from multiple types of sensors
Abstract
A method includes receiving a first output from a first sensor of an electronic device and receiving a second output from a second sensor of the electronic device. The first sensor has a first sensor type and the second sensor has a second sensor type that is different from the first sensor type. The method also includes detecting a gesture based on the first output and the second output according to a complementary voting scheme that is at least partially based on gesture complexity.
30 Claims
1. An apparatus comprising:
a first sensor configured to generate a first output;
a camera configured to generate a second output; and
a processor configured to:
    detect a gesture based on at least one of the first output and the second output when a lighting level is greater than or equal to a threshold; and
    deactivate at least a portion of the camera and perform gesture recognition based on the first output from the first sensor when the lighting level is less than the threshold.
Dependent claims: 2-7.
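The lighting-gated sensor selection that claim 1 recites can be sketched as follows. This is a hypothetical illustration only, not the patent's implementation: the function name, the sensor labels, and the example threshold value are all invented here.

```python
LIGHTING_THRESHOLD = 50  # arbitrary illustrative value, e.g. lux

def select_active_sensors(lighting_level, threshold=LIGHTING_THRESHOLD):
    """Return which sensor outputs gesture detection may draw on."""
    if lighting_level >= threshold:
        # Enough light: both the first sensor and the camera contribute.
        return {"first_sensor": True, "camera": True}
    # Low light: deactivate (at least a portion of) the camera and fall
    # back to gesture recognition from the first sensor alone.
    return {"first_sensor": True, "camera": False}
```

For example, `select_active_sensors(10)` would report the camera as deactivated, matching the claim's low-light branch.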
8. A method comprising:
receiving a first output from a first sensor of an electronic device, wherein the first sensor has a first sensor type;
receiving a second output from a second sensor of the electronic device, wherein the second sensor has a second sensor type that is different from the first sensor type, and wherein the second sensor type comprises a camera sensor type;
when a lighting level is greater than or equal to a threshold, detecting a gesture based on the first output and the second output according to a voting scheme that is at least partially based on gesture complexity; and
when the lighting level is less than the threshold, deactivating at least a portion of the second sensor and performing gesture recognition based on the first output from the first sensor.
Dependent claims: 9-15.
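Claim 8's voting scheme "at least partially based on gesture complexity" could take many forms; one minimal sketch, assuming per-sensor confidence scores and a complexity value in [0, 1] that shifts weight toward the camera for complex gestures, is shown below. Every name and the weighting rule itself are assumptions for illustration, not the patent's disclosed scheme.

```python
def detect_gesture(first_votes, second_votes, complexity):
    """Combine per-sensor gesture votes, weighted by gesture complexity.

    first_votes / second_votes map candidate gesture names to confidence
    scores from each sensor. 'complexity' in [0, 1] shifts weight toward
    the second (camera) sensor, on the assumption that the camera
    resolves fine-grained gestures better.
    """
    camera_weight = complexity        # complex gestures lean on the camera
    first_weight = 1.0 - complexity   # simple gestures lean on the first sensor
    candidates = set(first_votes) | set(second_votes)
    scored = {
        g: first_weight * first_votes.get(g, 0.0)
           + camera_weight * second_votes.get(g, 0.0)
        for g in candidates
    }
    # The highest combined score wins the vote.
    return max(scored, key=scored.get)
```

With a high complexity value the camera's vote dominates; with a low one, the first sensor's vote does.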
16. An apparatus, comprising:
first sensor means for generating a first output and having a first sensor type;
second sensor means for generating a second output and having a second sensor type that is different from the first sensor type;
means for detecting a lighting level;
means for selectively deactivating the second sensor means in response to determining that the lighting level is less than a threshold; and
means for detecting a gesture, the means for detecting the gesture configured to, when the lighting level is greater than or equal to the threshold, perform gesture recognition based on the first output and the second output according to a voting scheme that is at least partially based on gesture complexity, and configured to perform gesture recognition based on the first output from the first sensor means when the lighting level is less than the threshold.
Dependent claims: 17, 18.
19. An apparatus, comprising:
an ultrasound sensor configured to generate a first output in accordance with a common data model and to provide the first output to an ultrasound processing path;
a camera configured to generate a second output in accordance with the common data model and to provide the second output to an image processing path;
a processor; and
a gesture detection module executable by the processor to detect a gesture based on at least one of the first output and the second output,
wherein the ultrasound processing path and the image processing path are configured to exchange data in accordance with the common data model, wherein the data includes a position of an object relative to the ultrasound sensor or the camera, and wherein the camera is configured to identify an area of interest based on the position of the object.
Dependent claims: 20-23.
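One way to picture claim 19's common data model is a single position record that both processing paths understand, which the camera then maps onto a region of interest. The record fields, the normalized-coordinate convention, and the fixed-size window below are all hypothetical choices for this sketch, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class ObjectPosition:
    """Hypothetical common-data-model record exchanged between paths."""
    x: float        # normalized [0, 1] horizontal position
    y: float        # normalized [0, 1] vertical position
    depth_cm: float # distance from the sensor

def area_of_interest(pos, frame_w, frame_h, half_size=0.1):
    """Map an exchanged ObjectPosition onto camera pixel bounds.

    Returns (left, top, right, bottom), clamped to the frame, so the
    camera can restrict processing to where the ultrasound path last
    reported the object.
    """
    left = max(0, int((pos.x - half_size) * frame_w))
    top = max(0, int((pos.y - half_size) * frame_h))
    right = min(frame_w, int((pos.x + half_size) * frame_w))
    bottom = min(frame_h, int((pos.y + half_size) * frame_h))
    return left, top, right, bottom
```

Because both paths emit and consume the same record type, either sensor's position estimate can steer the other's processing.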
24. An apparatus comprising:
an ultrasound sensor configured to provide a first output to an ultrasound processing path;
a camera configured to provide a second output to an image processing path;
a processor; and
a gesture detection module executable by the processor to detect a gesture based on at least one of the first output and the second output,
wherein the ultrasound sensor and the camera are each configured to self-adjust, independent of the processor, based on data exchanged between the ultrasound processing path and the image processing path.
Dependent claims: 25-27.
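Claim 24's self-adjustment, where each sensor tunes its own parameters from the exchanged data without involving the host processor, might look like the sketch below. The parameter names, the exchanged keys, and the adjustment rules are invented for illustration; the patent does not specify them here.

```python
class UltrasoundSensor:
    def __init__(self):
        self.range_cm = 100  # active sensing range

    def self_adjust(self, exchanged):
        # Narrow the range gate around the camera path's depth estimate.
        depth = exchanged.get("depth_cm")
        if depth is not None:
            self.range_cm = int(depth * 1.5)

class Camera:
    def __init__(self):
        self.fps = 30  # frame rate

    def self_adjust(self, exchanged):
        # Raise the frame rate when the ultrasound path reports fast motion.
        if exchanged.get("speed_cm_s", 0) > 50:
            self.fps = 60
```

The point of the sketch is the data flow: each `self_adjust` consumes only the cross-path record, so no call from the host processor is needed.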
28. A non-transitory processor-readable medium comprising instructions that, when executed by a processor, cause the processor to:
receive a first output from a first sensor of an electronic device, wherein the first sensor has a first sensor type;
receive a second output from a second sensor of the electronic device, wherein the second sensor has a second sensor type that is different from the first sensor type, and wherein the second sensor type comprises a camera sensor type;
when a lighting level is greater than or equal to a threshold, detect a gesture based on the first output and the second output according to a voting scheme that is at least partially based on gesture complexity; and
when the lighting level is less than the threshold, deactivate at least a portion of the second sensor and perform gesture recognition based on the first output from the first sensor.
Dependent claims: 29, 30.
Specification