METHOD AND SYSTEM FOR DETERMINING USER INPUT BASED ON GESTURE
Abstract
A waveguide apparatus includes a planar waveguide and at least one diffractive optical element (DOE) that provides a plurality of optical paths between an exterior and interior of the planar waveguide. A phase profile of the DOE may combine a linear diffraction grating with a circular lens, to shape a wave front and produce beams with a desired focus. Waveguide apparatuses may be assembled to create multiple focal planes. The DOE may have a low diffraction efficiency, and planar waveguides may be transparent when viewed normally, allowing passage of light from an ambient environment (e.g., the real world), which is useful in AR systems. Light may be returned for temporally sequential passes through the planar waveguide. The DOE(s) may be fixed or may have dynamically adjustable characteristics. An optical coupler system may couple images to the waveguide apparatus from a projector, for instance a biaxially scanning cantilevered optical fiber tip.
165 Citations
18 Claims
1. A method for determining user input, comprising:

capturing an image of a field of view of a user, the image comprising a gesture created by the user;
analyzing the captured image to identify a set of points associated with the gesture;
comparing the set of identified points to a set of points associated with a database of predetermined gestures;
generating a scoring value for the set of identified points based on the comparison;
recognizing the gesture when the scoring value exceeds a threshold value; and
determining a user input based on the recognized gesture.

View Dependent Claims (2, 3, 4, 5, 6)
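The scoring-and-threshold steps of claim 1 could be sketched, purely as illustration, in the following Python fragment. The scoring function (mean inverse point distance), the threshold value of 0.8, and the dictionary-of-templates gesture database are all assumptions for the sketch, not details taken from the patent.

```python
import math

def score_match(points, template):
    # Illustrative scoring value: mean inverse distance between each
    # identified point and the corresponding template point. A closer
    # match yields a higher score; 1.0 is a perfect overlap.
    total = 0.0
    for (x, y), (tx, ty) in zip(points, template):
        total += 1.0 / (1.0 + math.hypot(x - tx, y - ty))
    return total / len(template)

def determine_user_input(points, gesture_db, threshold=0.8):
    # Compare the identified points against every predetermined gesture,
    # keep the best score, and recognize the gesture only when that
    # score exceeds the threshold value.
    best_name, best_score = None, 0.0
    for name, template in gesture_db.items():
        score = score_match(points, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score > threshold else None
```

For example, with a database {"swipe": [(0, 0), (1, 0), (2, 0)]}, a point set lying on those template points scores 1.0 and is recognized as "swipe", while a distant point set scores low and no gesture is recognized.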
7. A method of identifying a gesture, comprising:

capturing a plurality of images of respective fields of view of a user;
analyzing the captured plurality of images using a rejection cascade comprising:
an earlier stage configured to remove easier candidates; and
a later stage configured to analyze harder data.
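A rejection cascade of the kind claim 7 describes can be sketched minimally as an ordered chain of filters. The stage predicates below (an area check, then an aspect-ratio check) are hypothetical placeholders, not features named in the patent.

```python
def rejection_cascade(candidates, stages):
    # Each stage is a predicate returning True to keep a candidate.
    # Earlier stages are cheap and discard easy non-gestures, so the
    # later, more expensive stages only analyze the harder remainder.
    survivors = candidates
    for stage in stages:
        survivors = [c for c in survivors if stage(c)]
    return survivors

# Hypothetical stages: a cheap size check first, then a costlier shape check.
earlier_stage = lambda c: c["area"] > 100
later_stage = lambda c: abs(c["aspect"] - 1.0) < 0.5
```

Running a candidate list through [earlier_stage, later_stage] drops small regions before the shape test ever runs on them, which is the point of ordering the cascade from cheap to expensive.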
8. A method of identifying a gesture, comprising:

capturing a plurality of images of respective fields of view of a user;
generating a plurality of gesture candidates from the captured plurality of images;
generating analysis data values corresponding to each of the plurality of gesture candidates;
sorting the gesture candidates based on the respective analysis data values; and
eliminating gesture candidates with analysis data values less than a minimum threshold.
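The sort-and-eliminate steps of claim 8 reduce to a few lines. The analysis function and the confidence-style values below are assumptions made for the sketch; the patent does not specify how the analysis data values are computed.

```python
def sort_and_eliminate(candidates, analyze, min_threshold):
    # Generate an analysis data value for each gesture candidate, sort
    # the candidates by descending value, and eliminate those whose
    # value falls below the minimum threshold.
    scored = [(analyze(c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for value, c in scored if value >= min_threshold]
```

With a hypothetical analyze function that reads a per-candidate confidence, only the candidates at or above the threshold survive, highest-scoring first.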
9. A method for classifying a gesture, comprising:

capturing an image of a field of view of a user;
performing depth segmentation on the captured image to generate a depth map;
analyzing the depth map using a classifier mechanism to identify a part of a hand corresponding to a point in the depth map;
skeletonizing the depth map based on the identification of the part of the hand; and
classifying the image as a gesture based on the skeletonized depth map.

View Dependent Claims (10, 11, 12, 13, 14, 15, 16, 17, 18)
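The claim 9 pipeline (depth map, per-point part classification, skeletonization, gesture classification) could be sketched as below. Representing the depth map as a point-to-depth dictionary, using part centroids as skeleton joints, and the example part and gesture labels are all assumptions for illustration, not the claimed implementation.

```python
def classify_gesture(depth_map, part_classifier, skeleton_to_gesture):
    # depth_map: {(x, y): depth}. The classifier mechanism labels each
    # point with a hand part; the centroid of each part's points stands
    # in for a skeleton joint; the resulting skeleton is then mapped to
    # a gesture label.
    parts = {}
    for (x, y), depth in depth_map.items():
        part = part_classifier(x, y, depth)
        parts.setdefault(part, []).append((x, y))
    skeleton = {
        part: (sum(x for x, _ in pts) / len(pts),
               sum(y for _, y in pts) / len(pts))
        for part, pts in parts.items()
    }
    return skeleton_to_gesture(skeleton)
```

A usage sketch: a classifier that labels near points "palm" and far points "fingertip", plus a rule that any skeleton containing a fingertip is a "point" gesture, yields a classification from raw depth samples.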
Specification