NON-TACTILE INTERFACE SYSTEMS AND METHODS
Abstract
Methods and systems for processing an input are disclosed that detect a portion of a hand and/or other detectable object in a region of space monitored by a 3D sensor. The method further includes determining a zone corresponding to the region of space in which the portion of the hand or other detectable object was detected. Also, the method can include determining from the zone a correct way to interpret inputs made by a position, shape or a motion of the portion of the hand or other detectable object.
20 Claims
1. A machine-implemented method for processing an input, the method including:

- detecting a portion of a hand or other detectable object in a 3D sensor space;
- determining a zone from among multiple zones within the 3D sensor space in which the portion of the hand or other detectable object was detected; and
- determining from the zone a correct way to interpret inputs detected by the 3D sensor as a position or a motion of the portion of the hand or other detectable object.

(Dependent claims 2-18.)
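The zone-based interpretation of claim 1 can be sketched in a few lines: detect a 3D point, find which of several zones contains it, and pick an interpretation rule from that zone. The axis-aligned `Zone` boxes, zone names, and interpretation labels below are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of claim 1: zone lookup in a 3D sensor space,
# then zone-dependent interpretation. Zone names/bounds are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Zone:
    name: str     # e.g. "command", "content", "modifier", "hover"
    bounds: tuple # ((xmin, xmax), (ymin, ymax), (zmin, zmax))

    def contains(self, point):
        # True when the point lies inside this axis-aligned box
        return all(lo <= p <= hi for p, (lo, hi) in zip(point, self.bounds))

def determine_zone(point, zones):
    """Find which zone of the 3D sensor space contains the detected point."""
    for zone in zones:
        if zone.contains(point):
            return zone
    return None

def interpret(point, zones):
    """Choose how to interpret the input based on the zone it falls in."""
    zone = determine_zone(point, zones)
    if zone is None:
        return "ignored"
    return {"command": "command input",
            "content": "content input",
            "modifier": "modifier input",
            "hover": "ready to input"}.get(zone.name, "unknown")

zones = [Zone("command", ((0, 1), (0, 1), (0.0, 0.2))),
         Zone("hover",   ((0, 1), (0, 1), (0.2, 0.5)))]
print(interpret((0.5, 0.5, 0.1), zones))  # command input
print(interpret((0.5, 0.5, 0.3), zones))  # ready to input
```

The first zone whose box contains the point wins; a real system would likely also track motion over time rather than a single point.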
19. A machine-implemented method to process an input, including:

- detecting a portion of a hand or other detectable object in a 3D sensor space;
- determining a zone corresponding to the 3D sensor space in which the portion of the hand or other detectable object was detected, by:
  - capturing an image using an imaging analysis system;
  - analyzing the captured image to detect one or more edges of the object based on changes in at least one image parameter including brightness, by:
    - comparing brightness of at least two pixels to a threshold; and
    - detecting a transition in brightness from a low level to a high level across adjacent pixels; and
  - determining position and/or motion of the object based upon the one or more edges, by:
    - selecting a zone to test for presence of the object;
    - determining whether the object is within the selected zone; and
    - adding the zone to a set of zones in which the object can be found when the object is determined to be within the selected zone; and
- determining from the zone a correct way to interpret inputs made by a position or a motion of the portion of the hand or other detectable object, by:
  - interpreting a position or a motion as a command input to an active program when determined to be corresponding to a command input zone;
  - interpreting a position or a motion as a content input to an active program when determined to be corresponding to a content input zone;
  - interpreting a position or a motion as a modifier input modifying a concurrent input to an active program when determined to be corresponding to a modifier input zone; and
  - interpreting a position or a motion as being ready to make an input to an active program when determined to be corresponding to a hover zone.
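The edge-detection step recited in claim 19 (comparing pixel brightness to a threshold and finding a low-to-high transition across adjacent pixels) can be sketched on a single scan line. The threshold value, the one-row representation, and the function name are assumptions for illustration, not the patent's implementation.

```python
# Illustrative sketch of claim 19's edge detection: a brightness
# transition from below a threshold to at-or-above it between two
# adjacent pixels marks an edge. Threshold value is assumed.
def detect_edges(row, threshold=128):
    """Return indices where brightness transitions from low to high
    across adjacent pixels in one scan line."""
    edges = []
    for i in range(1, len(row)):
        low = row[i - 1] < threshold    # previous pixel below threshold
        high = row[i] >= threshold      # current pixel at/above threshold
        if low and high:
            edges.append(i)
    return edges

scan_line = [10, 12, 200, 210, 30, 25, 180]
print(detect_edges(scan_line))  # [2, 6]
```

A full system would run this over many rows (and the high-to-low direction) to trace object silhouettes, then test candidate zones against the recovered position as the claim goes on to recite.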
20. A system, including:

- an image-capture device including at least one camera; and
- an image analyzer coupled to the camera that:
  - detects a portion of a hand or other detectable object in a 3D sensor space;
  - determines a zone corresponding to the 3D sensor space in which the portion of the hand or other detectable object was detected; and
  - determines from the zone a correct way to interpret inputs made by a position or a motion of the portion of the hand or other detectable object.
Specification