Systems and methods for proximity sensor and image sensor based gesture detection
1 Assignment
0 Petitions
Abstract
Systems, methods, and non-transitory computer-readable media are disclosed. For example, a dual sensor control device is disclosed that includes at least one processor for receiving information from a proximity sensor and an image sensor. The processor may be configured to receive first data from the proximity sensor while the image sensor is in a first state, and to determine, using the first data, a presence of an object in proximity to the proximity sensor. The processor may also be configured to output, based on the determined presence of the object, a signal to the image sensor to cause it to enter a second state different from the first state. The processor may also be configured to receive second data from the image sensor in the second state, and to output at least one of a message and a command associated with the second data.
10 Citations
35 Claims
1. A dual sensor control device, comprising:

at least one processor for receiving information from a proximity sensor and an image sensor, the at least one processor being configured to:
- receive first data from the proximity sensor while the image sensor is in a first state;
- detect, using the received first data, a presence of an object in proximity to the proximity sensor;
- output, based on the detected presence of the object, a signal to cause the image sensor to enter a second state different from the first state before the object reaches a field of view of the image sensor;
- receive, from the image sensor in the second state, second data reflective of images of the object;
- determine, based on a combination of the first data and the second data, a gesture performed by the object; and
- output at least one of a message and a command associated with the determined gesture.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16)
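The control flow recited in claim 1 can be sketched as a simple two-state loop. Everything below (the state names, the presence threshold, and the toy classifier) is illustrative and not part of the claim language:

```python
from enum import Enum

class SensorState(Enum):
    STANDBY = "standby"   # first state: image sensor idle / low power
    ACTIVE = "active"     # second state: image sensor capturing

# Hypothetical threshold above which an object counts as "present"
PRESENCE_THRESHOLD = 0.5

def classify_gesture(proximity_reading, frames):
    # Placeholder classifier: a real device would fuse both data streams
    return "swipe" if len(frames) > 1 else "tap"

def control_loop(proximity_reading, image_sensor_state, frames):
    """One pass of the claimed flow: first data from the proximity sensor
    gates the image sensor's state; second data (frames) plus the first
    data yield a gesture determination."""
    detected = proximity_reading > PRESENCE_THRESHOLD
    if detected and image_sensor_state == SensorState.STANDBY:
        # Signal output to the image sensor before the object reaches its field of view
        image_sensor_state = SensorState.ACTIVE
    gesture = None
    if image_sensor_state == SensorState.ACTIVE and frames:
        gesture = classify_gesture(proximity_reading, frames)
    return image_sensor_state, gesture
```

Note that the state transition happens on proximity data alone, which is what lets the image sensor stay in the low-power first state until an object approaches.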
17. A non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, cause the at least one processor to perform operations including:

- receiving first data from a proximity sensor while an image sensor is in a first state;
- detecting, using the received first data, a presence of an object in proximity to the proximity sensor;
- outputting, based on the detected presence of the object, a signal to cause the image sensor to enter a second state different from the first state before the object reaches a field of view of the image sensor;
- receiving, from the image sensor in the second state, second data reflective of images of the object;
- determining, based on a combination of the first data and the second data, a gesture performed by the object; and
- outputting at least one of a message and a command associated with the determined gesture.
18. A three-dimensional control device, comprising:

at least one processor for receiving information from a proximity sensor and an image sensor, the at least one processor being configured to:
- receive first data, associated with a detected object, from the proximity sensor while the image sensor operates at a first level of power consumption, wherein the first data is reflective of at least a one-dimensional position of the object relative to the proximity sensor;
- output, based on the detected object, a signal to cause the image sensor to enter a second level of power consumption different from the first level of power consumption before the object reaches a field of view of the image sensor;
- receive second data associated with the detected object from the image sensor while the image sensor operates at the second level of power consumption greater than the first level, wherein the second data is reflective of at least a two-dimensional position of the object relative to the image sensor;
- coordinate the first data and the second data to obtain three-dimensional information associated with the detected object; and
- determine, based on the obtained three-dimensional information, a gesture performed by the object.

View Dependent Claims (19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31)
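The coordination step of claim 18 pairs the proximity sensor's one-dimensional depth readings with the image sensor's two-dimensional positions to obtain three-dimensional information. A minimal sketch, with a hypothetical gesture rule standing in for the claimed determination:

```python
def coordinate_3d(depth_samples, pixel_positions):
    """Pair each 1-D proximity (depth) sample with the 2-D image-plane
    position captured at the same step to form a 3-D track."""
    return [(x, y, z) for (x, y), z in zip(pixel_positions, depth_samples)]

def detect_gesture(track):
    """Toy rule (not from the claims): a 'push' if depth decreases
    monotonically while (x, y) stays roughly fixed; a 'swipe' if the
    x position sweeps laterally; otherwise no gesture."""
    zs = [p[2] for p in track]
    xs = [p[0] for p in track]
    approaching = all(a > b for a, b in zip(zs, zs[1:]))
    lateral = max(xs) - min(xs) > 10  # hypothetical pixel threshold
    if approaching and not lateral:
        return "push"
    if lateral:
        return "swipe"
    return None
```

The design point the claim captures is that neither sensor alone provides three dimensions: the proximity sensor contributes depth, the image sensor contributes the image-plane position, and only their combination yields the 3-D track.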
32. A non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, cause the at least one processor to perform operations including:

- receiving first data, associated with a detected object, from a proximity sensor while an image sensor operates at a first level of power consumption, wherein the first data is reflective of at least a one-dimensional position of the object relative to the proximity sensor;
- outputting, based on the detected object, a signal to cause the image sensor to enter a second level of power consumption different from the first level of power consumption before the object reaches a field of view of the image sensor;
- receiving second data associated with the detected object from the image sensor while the image sensor operates at the second level of power consumption greater than the first level, wherein the second data is reflective of at least a two-dimensional position of the object relative to the image sensor;
- coordinating the first data and the second data to obtain three-dimensional information associated with the detected object; and
- determining, based on the obtained three-dimensional information, a gesture performed by the object.

View Dependent Claims (33, 34, 35)
Specification