Wearable device with intelligent user-input interface
First Claim
1. A method for receiving user inputs by a wearable device, the wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, the camera having an image sensor and having an optical center located at a distance of a focal length from the image sensor, the method comprising:
- when a finger-like object having a tip is detected within a field of view (FOV) of the camera, determining a three-dimensional (3D) location of the tip so that one or more of the user inputs are determinable according to the 3D location of the tip;
wherein the determining of the 3D location of the tip comprises:
capturing a first image containing at least the tip in the presence of the plain sheet of light illuminating the object;
from the first image, determining an on-sensor location of the tip, and an on-sensor length of a width of the object;
estimating a nearest physical location and a farthest physical location of the tip according to a pre-determined lower bound and a pre-determined upper bound of the object's physical width, respectively, and further according to the on-sensor location of the tip, the on-sensor length of the object's width, and the focal length, whereby the tip is estimated to be physically located within a region-of-presence between the nearest and farthest physical locations;
projecting the structured-light pattern to at least the region-of-presence such that a part of the object around the tip is illuminated with a first portion of the structured-light pattern while the region-of-presence receives a second portion of the structured-light pattern, wherein the light source configures the structured-light pattern to have the second portion not containing any repeated sub-pattern, enabling unique determination of the 3D location of the tip by uniquely identifying the first portion of the structured-light pattern inside the second portion of the structured-light pattern;
capturing a second image containing at least the part of the object around the tip when the structured-light pattern is projected to the region-of-presence; and
determining the 3D location of the tip by identifying, from the second image, the first portion of the structured-light pattern.
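The nearest/farthest estimation step in the claim follows the standard pinhole-camera similar-triangles relation: an object of physical width W whose image spans an on-sensor width w lies at depth z = f·W / w, so the two pre-determined width bounds yield the two ends of the region-of-presence. The following is an illustrative sketch, not the patent's implementation; all names and the metric units are assumptions.

```python
# Illustrative sketch of the region-of-presence estimate (not from the
# patent text).  Assumed symbols: f = focal length, (u, v) = on-sensor
# tip location measured from the optical axis, w = on-sensor length of
# the object's width, w_min / w_max = pre-determined lower and upper
# bounds of the object's physical width.  All quantities in metres.

def region_of_presence(f, u, v, w, w_min, w_max):
    """Return (nearest, farthest) candidate 3D tip locations.

    By similar triangles, a physical width W imaged with on-sensor
    width w lies at depth z = f * W / w; the tip's 3D position is the
    back-projection of (u, v) scaled to that depth.
    """
    def back_project(width):
        z = f * width / w            # depth along the optical axis
        return (u * z / f, v * z / f, z)

    nearest = back_project(w_min)    # smallest plausible width -> closest tip
    farthest = back_project(w_max)   # largest plausible width -> farthest tip
    return nearest, farthest

# Example: 4 mm focal length, 0.2 mm on-sensor width, physical width
# bounded between 10 mm and 20 mm -> tip lies between 0.2 m and 0.4 m.
nearest, farthest = region_of_presence(0.004, 0.0005, 0.0, 0.0002, 0.01, 0.02)
```

The structured-light pattern then needs to cover only this bounded interval, which is what makes a locally unique (non-repeating) pattern portion feasible.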
Abstract
A wearable device, having a camera and a light source, receives user inputs by a first method of measuring a height of a finger-like object's tip above a reference surface if this surface is present, or a second method of estimating a 3D location of the tip. In the first method, a plain sheet of light is projected to the object, casting a shadow on the surface. A camera-observed shadow length is used to compute the tip's height above the surface. In the second method, the nearest and farthest locations of the tip are estimated according to pre-determined lower and upper bounds of the object's physical width. The object is then illuminated with a structured-light pattern configured such that an area between the nearest and farthest locations receives a portion of the pattern where this portion does not contain a repeated sub-pattern, enabling unique determination of the tip's 3D location.
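The requirement that the projected portion contain no repeated sub-pattern is, in one dimension, the defining property of a De Bruijn sequence: every window of n consecutive symbols occurs exactly once, so observing any such window identifies its position in the pattern uniquely. The patent does not name this construction; the sketch below shows it only as one standard way to realize the stated property, e.g. for a coded stripe pattern.

```python
# One standard construction (not named in the patent) of a pattern with
# no repeated sub-pattern: a De Bruijn sequence B(k, n), in which every
# length-n string over k symbols appears exactly once.  A stripe pattern
# whose k colors follow B(k, n) lets any run of n stripes seen on the
# object be located uniquely within the projected pattern.

def de_bruijn(k, n):
    """Return a De Bruijn sequence B(k, n) as a list of ints in [0, k)."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# B(2, 3) has length 2**3 = 8, and every linear window of 3 symbols
# within it is distinct, so a camera seeing any 3 consecutive stripes
# can recover where on the pattern (and hence where in depth) it looks.
pattern = de_bruijn(2, 3)
```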
22 Claims
Specification