
Wearable device with intelligent user-input interface

  • US 9,857,919 B2
  • Filed: 07/12/2016
  • Issued: 01/02/2018
  • Est. Priority Date: 05/17/2012
  • Status: Active Grant
First Claim

1. A method for receiving user inputs by a wearable device, the wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, the camera having an image sensor and having an optical center located at a distance of a focal length from the image sensor, the method comprising:

  • when a finger-like object having a tip is detected within a field of view (FOV) of the camera, determining a three-dimensional (3D) location of the tip so that one or more of the user inputs are determinable according to the 3D location of the tip;

    wherein the determining of the 3D location of the tip comprises:

    capturing a first image containing at least the tip in a presence of the plain sheet of light illuminating the object;

    from the first image, determining an on-sensor location of the tip, and an on-sensor length of a width of the object;

    estimating a nearest physical location and a farthest physical location of the tip according to a pre-determined lower bound and a pre-determined upper bound of the object's physical width, respectively, and further according to the on-sensor location of the tip, the on-sensor length of the object's width, and the focal length, whereby the tip is estimated to be physically located within a region-of-presence between the nearest and farthest physical locations;

    projecting the structured-light pattern to at least the region-of-presence such that a part of the object around the tip is illuminated with a first portion of the structured-light pattern while the region-of-presence receives a second portion of the structured-light pattern, wherein the light source configures the structured-light pattern to have the second portion not containing any repeated sub-pattern, enabling unique determination of the 3D location of the tip by uniquely identifying the first portion of the structured-light pattern inside the second portion of the structured-light pattern;

    capturing a second image containing at least the part of the object around the tip when the structured-light pattern is projected to the region-of-presence; and

    determining the 3D location of the tip by identifying, from the second image, the first portion of the structured-light pattern.
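
The "estimating" step above is an application of the pinhole-camera model: for a fixed on-sensor width, an object's depth scales linearly with its physical width, so lower and upper bounds on finger width confine the tip to a segment of the viewing ray. Two illustrative sketches follow, one for this step and one for the "projecting" step. First, a minimal Python sketch of the region-of-presence estimate; the focal length, the 10-25 mm finger-width bounds, and all function names are assumptions made for illustration, not values from the patent.

    def depth_from_width(focal_length_mm, physical_width_mm, on_sensor_width_mm):
        # Pinhole model, similar triangles:
        #   on_sensor_width / focal_length = physical_width / depth
        return focal_length_mm * physical_width_mm / on_sensor_width_mm

    def region_of_presence(focal_length_mm, tip_on_sensor_mm, on_sensor_width_mm,
                           width_lower_mm=10.0, width_upper_mm=25.0):
        """Back-project the tip's on-sensor location to the nearest and
        farthest 3D points consistent with the assumed finger-width bounds;
        the tip lies on the viewing ray between them (the region-of-presence)."""
        x, y = tip_on_sensor_mm
        points = []
        for w in (width_lower_mm, width_upper_mm):
            z = depth_from_width(focal_length_mm, w, on_sensor_width_mm)
            # On-sensor offsets scale by z / focal_length in physical space.
            points.append((x * z / focal_length_mm, y * z / focal_length_mm, z))
        nearest, farthest = points
        return nearest, farthest

    # Example: 4 mm focal length, tip imaged 0.5 mm right and 0.3 mm above
    # the sensor center, finger width spanning 0.08 mm on the sensor.
    near, far = region_of_presence(4.0, (0.5, 0.3), 0.08)
    # near ≈ (62.5, 37.5, 500.0) mm, far ≈ (156.25, 93.75, 1250.0) mm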
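
The "projecting" step requires the second portion of the structured-light pattern to contain no repeated sub-pattern, so that the patch illuminating the fingertip can be located uniquely within it. The claim does not name a coding scheme; a de Bruijn-coded stripe sequence is one conventional way to obtain this window-uniqueness property, sketched here purely as an assumption.

    def de_bruijn(k, n):
        """De Bruijn sequence B(k, n): over a k-symbol alphabet, every
        length-n window occurs exactly once (cyclically), so observing any
        n consecutive stripes pins down their position in the pattern."""
        a = [0] * (k * n)
        seq = []

        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return seq

    # Example: 3 stripe colors, windows of 3 stripes -> a 27-stripe pattern
    # in which every 3-stripe window is distinct.
    pattern = de_bruijn(3, 3)
    windows = {tuple(pattern[i:i + 3]) for i in range(len(pattern) - 2)}
    assert len(windows) == len(pattern) - 2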
