Multi-sensor based user interface
First Claim
1. An apparatus for gesture detection and recognition, the apparatus comprising:
a processing element;
a radar sensor;
a depth sensor; and
an optical sensor, wherein the radar sensor, the depth sensor, and the optical sensor are coupled to the processing element, and wherein the radar sensor, the depth sensor, and the optical sensor are configured for short range gesture detection and the processing element is configured to identify a type of hand gesture by combining data acquired with the radar sensor, data acquired with the depth sensor, and data acquired with the optical sensor, wherein the data acquired with the radar sensor is registered to the data acquired with the depth sensor, wherein registering the data acquired with the radar sensor to the data acquired with the depth sensor comprises transforming three-dimensional (3D) coordinates of the data acquired with the radar sensor to the depth sensor's coordinate frame, wherein said registering further comprises:
observing 3D coordinates of a spherical volume concurrently with both the radar sensor and the depth sensor, determining a best-fit transformation function between the 3D coordinates of the spherical volume observed by both the radar sensor and the depth sensor, and using the transformation function to transform the 3D coordinates of the data acquired with the radar sensor to the depth sensor's coordinate frame.
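The registration recited above (observing a common spherical target with both sensors, determining a best-fit transformation, and applying it to the radar data) can be sketched in code. The claim does not name a fitting method; the sketch below assumes corresponding 3D point pairs are available from the calibration target and uses the Kabsch least-squares algorithm to recover a rigid transform. All function names are hypothetical, not from the patent.

```python
import numpy as np

def best_fit_transform(radar_pts, depth_pts):
    """Least-squares rigid transform mapping radar_pts onto depth_pts.

    Both arguments are (N, 3) arrays of corresponding 3D coordinates,
    e.g. points on a spherical calibration volume observed concurrently
    by both sensors. Returns (R, t) with depth ~= radar @ R.T + t.
    """
    c_r = radar_pts.mean(axis=0)          # radar-frame centroid
    c_d = depth_pts.mean(axis=0)          # depth-frame centroid
    # 3x3 cross-covariance of the centered point sets (Kabsch).
    H = (radar_pts - c_r).T @ (depth_pts - c_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_d - R @ c_r
    return R, t

def register(radar_pts, R, t):
    """Transform radar-frame 3D coordinates into the depth sensor's frame."""
    return radar_pts @ R.T + t
```

Once `(R, t)` is estimated from the calibration target, `register` is applied to all subsequent radar detections so that radar and depth data share one coordinate frame before fusion.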
Abstract
An apparatus and method for gesture detection and recognition. The apparatus includes a processing element, a radar sensor, a depth sensor, and an optical sensor. The radar sensor, the depth sensor, and the optical sensor are coupled to the processing element, and the radar sensor, the depth sensor, and the optical sensor are configured for short range gesture detection and recognition. The processing element is further configured to detect and recognize a hand gesture based on data acquired with the radar sensor, the depth sensor, and the optical sensor.
17 Claims
1. An apparatus for gesture detection and recognition, the apparatus comprising:
a processing element; a radar sensor; a depth sensor; and an optical sensor, wherein the radar sensor, the depth sensor, and the optical sensor are coupled to the processing element, and wherein the radar sensor, the depth sensor, and the optical sensor are configured for short range gesture detection and the processing element is configured to identify a type of hand gesture by combining data acquired with the radar sensor, data acquired with the depth sensor, and data acquired with the optical sensor, wherein the data acquired with the radar sensor is registered to the data acquired with the depth sensor, wherein registering the data acquired with the radar sensor to the data acquired with the depth sensor comprises transforming three-dimensional (3D) coordinates of the data acquired with the radar sensor to the depth sensor's coordinate frame, wherein said registering further comprises:
observing 3D coordinates of a spherical volume concurrently with both the radar sensor and the depth sensor, determining a best-fit transformation function between the 3D coordinates of the spherical volume observed by both the radar sensor and the depth sensor, and using the transformation function to transform the 3D coordinates of the data acquired with the radar sensor to the depth sensor's coordinate frame. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 17)
10. A system for hand gesture detection, the system comprising:
a processor; a first sensor comprising a radar; a second sensor comprising a depth sensor; and a third sensor comprising an optical sensor, wherein the first sensor, the second sensor, and the third sensor are coupled to the processor, and wherein the first sensor, the second sensor, and the third sensor are configured for short range gesture detection and recognition and wherein further the processor is configured to identify a type of hand gesture by combining data acquired with the first sensor, data acquired with the second sensor, and data acquired with the third sensor, wherein the data acquired with the radar is registered to the data acquired with the depth sensor, wherein registering the data acquired with the radar to the data acquired with the depth sensor comprises transforming three-dimensional (3D) coordinates of the data acquired with the radar to the depth sensor's coordinate frame, wherein said registering further comprises:
observing 3D coordinates of a spherical volume concurrently with both the radar and the depth sensor, determining a best-fit transformation function between the 3D coordinates of the spherical volume observed by both the radar and the depth sensor, and using the transformation function to transform the 3D coordinates of the data acquired with the radar to the depth sensor's coordinate frame. - View Dependent Claims (11, 12, 13)
14. A mobile apparatus comprising:
a processing element; a radar sensor; a depth sensor; and an optical sensor, wherein the radar sensor, the depth sensor, and the optical sensor are coupled to the processing element, and wherein the radar sensor, the depth sensor, and the optical sensor are configured for short range gesture detection and recognition and wherein further the processing element is configured to identify a type of hand gesture of a driver by combining data received from the radar sensor, data received from the depth sensor, and data received from the optical sensor, and wherein the processing element is configured to automatically determine the type of the hand gesture performed and a command associated with the hand gesture, wherein the data acquired with the radar sensor is registered to the data acquired with the depth sensor, wherein registering the data acquired with the radar sensor to the data acquired with the depth sensor comprises transforming three-dimensional (3D) coordinates of the data acquired with the radar sensor to the depth sensor's coordinate frame, wherein said registering further comprises:
observing 3D coordinates of a spherical volume concurrently with both the radar sensor and the depth sensor, determining a best-fit transformation function between the 3D coordinates of the spherical volume observed by both the radar sensor and the depth sensor, and using the transformation function to transform the 3D coordinates of the data acquired with the radar sensor to the depth sensor's coordinate frame. - View Dependent Claims (15, 16)
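The claims state only that the processing element identifies the gesture type by combining data from the three sensors and determines a command associated with the gesture; they do not specify how the combination is performed. One minimal realization is late fusion: concatenate per-sensor feature vectors and match the result against stored gesture templates, then look up the associated command. The feature extraction, gesture set, template matching, and command table below are all illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Hypothetical gesture-to-command table (not from the patent).
COMMANDS = {
    "swipe_left": "previous_track",
    "swipe_right": "next_track",
}

def fuse_features(radar_feat, depth_feat, optical_feat):
    """Late fusion: concatenate per-sensor feature vectors into one descriptor."""
    return np.concatenate([radar_feat, depth_feat, optical_feat])

def identify_gesture(descriptor, templates):
    """Nearest-centroid match; templates maps gesture name -> stored descriptor."""
    names = list(templates)
    dists = [np.linalg.norm(descriptor - templates[n]) for n in names]
    return names[int(np.argmin(dists))]

def gesture_command(radar_feat, depth_feat, optical_feat, templates):
    """Determine the gesture type and the command associated with it."""
    gesture = identify_gesture(fuse_features(radar_feat, depth_feat, optical_feat),
                               templates)
    return gesture, COMMANDS.get(gesture)
```

A real system would replace the nearest-centroid step with a trained classifier, but the data flow — per-sensor features, fusion, gesture label, command lookup — mirrors the structure the claims describe.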
Specification