Wearable Device with Intelligent User-Input Interface
First Claim
1. A method for receiving user inputs in a wearable device, the wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, the method comprising:
- when a finger-like object having a tip is detected within a field of view (FOV) of the camera, and when a reference surface is also detected present within the FOV, determining a height of the tip above the reference surface so that one or more of the user inputs are determinable according to the tip's height;
wherein the determining of the tip's height above the reference surface comprises:
determining parameters of a surface-plane equation for geometrically characterizing the reference surface;
determining a height of the camera and a height of the light source above the reference surface according to the parameters of the surface-plane equation;
projecting the plain sheet of light to an area that substantially covers at least a region-of-interest (ROI) comprising an area surrounding and including the tip, such that the object around the tip is illuminated to form a shadow on the reference surface unless the object is substantially close to the reference surface;
estimating a camera-observed shadow length from a ROI-highlighted image, wherein the ROI-highlighted image is captured by the camera after the plain sheet of light is projected, and wherein the camera-observed shadow length is a length of a part of the shadow formed on the reference surface along a topographical surface line and observed by the camera;
estimating a shadow-light source distance from the ROI-highlighted image; and
estimating the tip's height above the reference surface based on a set of data including the surface-plane equation, the camera-observed shadow length, the shadow-light source distance, a distance measured in a direction parallel to the reference surface between the light source and the camera, the height of the camera above the reference surface, and the height of the light source above the reference surface.
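Under a flat-reference-surface simplification (ignoring the topographical surface-line correction recited above), the final estimation step reduces to intersecting two rays: the light ray grazing the tip ends at the shadow tip, and the camera ray grazing the tip ends where the visible shadow begins. The sketch below is an illustrative reconstruction using exactly the claimed set of data, not the patented implementation; all symbol names are assumptions:

```python
def tip_height(s_obs, d_sl, baseline, h_cam, h_light):
    """Estimate the tip's height above a flat reference surface.

    s_obs    -- camera-observed shadow length on the surface
    d_sl     -- horizontal shadow-to-light-source distance
    baseline -- horizontal light-source/camera separation (parallel to the surface)
    h_cam    -- camera height above the surface
    h_light  -- light-source height above the surface

    Model (light source over the origin): the shadow tip sits at
    x = d_sl, and the camera first sees the shadow at x = d_sl - s_obs.
    Writing both grazing rays through the tip (x_t, h) and eliminating
    x_t gives the closed form below.
    """
    x_vis = d_sl - s_obs  # where the camera's line of sight past the tip lands
    return (h_cam * s_obs) / ((h_cam / h_light) * d_sl - (x_vis - baseline))
```

For example, with the light source 100 units and the camera 90 units above the surface at a 20-unit baseline, a tip 10 units high reproduces its own height from the observed shadow geometry.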
Abstract
A wearable device, having a camera and a light source, receives user inputs by a first method of measuring a height of a finger-like object's tip above a reference surface if this surface is present, or a second method of estimating a 3D location of the tip. In the first method, a plain sheet of light is projected to the object, casting a shadow on the surface. A camera-observed shadow length is used to compute the tip's height above the surface. In the second method, the nearest and farthest locations of the tip are estimated according to pre-determined lower and upper bounds of the object's physical width. The object is then illuminated with a structured-light pattern configured such that an area between the nearest and farthest locations receives a portion of the pattern where this portion does not contain a repeated sub-pattern, enabling unique determination of the tip's 3D location.
22 Claims
1. A method for receiving user inputs in a wearable device, the wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, the method comprising:
when a finger-like object having a tip is detected within a field of view (FOV) of the camera, and when a reference surface is also detected present within the FOV, determining a height of the tip above the reference surface so that one or more of the user inputs are determinable according to the tip's height;
wherein the determining of the tip's height above the reference surface comprises:
determining parameters of a surface-plane equation for geometrically characterizing the reference surface;
determining a height of the camera and a height of the light source above the reference surface according to the parameters of the surface-plane equation;
projecting the plain sheet of light to an area that substantially covers at least a region-of-interest (ROI) comprising an area surrounding and including the tip, such that the object around the tip is illuminated to form a shadow on the reference surface unless the object is substantially close to the reference surface;
estimating a camera-observed shadow length from a ROI-highlighted image, wherein the ROI-highlighted image is captured by the camera after the plain sheet of light is projected, and wherein the camera-observed shadow length is a length of a part of the shadow formed on the reference surface along a topographical surface line and observed by the camera;
estimating a shadow-light source distance from the ROI-highlighted image; and
estimating the tip's height above the reference surface based on a set of data including the surface-plane equation, the camera-observed shadow length, the shadow-light source distance, a distance measured in a direction parallel to the reference surface between the light source and the camera, the height of the camera above the reference surface, and the height of the light source above the reference surface.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10)
wherein the detecting of whether the reference surface is also present within the FOV comprises:
in an absence of the light source illuminating the object, capturing the FOV by the camera to give a first image;
in a presence of the light source illuminating the object with the plain sheet of light, capturing a second image containing the object by the camera;
from each of the first and second images, determining an intensity level of a surrounding area outside and adjacent a boundary of the object; and
determining that the reference surface is present if the intensity level determined from the first image is substantially different from the intensity level determined from the second image.
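The presence test above compares the brightness just outside the object's boundary with the light source off versus on: a surface close behind the object scatters the projected sheet of light back toward the camera, while an empty background does not. A minimal sketch, where the function name, threshold value, and nested-list pixel representation are all assumptions:

```python
def surface_present(img_off, img_on, ring_pixels, threshold=30.0):
    """Decide whether a reference surface lies behind the object.

    img_off     -- grayscale image captured with the light source off
    img_on      -- grayscale image captured with the plain sheet of light on
    ring_pixels -- (row, col) coordinates just outside the object's boundary
    threshold   -- minimum mean-intensity change indicating a surface

    A nearby surface reflects the projected light, so the surrounding
    ring brightens substantially between the two captures.
    """
    mean_off = sum(img_off[r][c] for r, c in ring_pixels) / len(ring_pixels)
    mean_on = sum(img_on[r][c] for r, c in ring_pixels) / len(ring_pixels)
    return abs(mean_on - mean_off) > threshold
```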
4. The method of claim 1, wherein the estimating of the tip'"'"'s height above the reference surface comprises computing
5. The method of claim 1, further comprising:
obtaining a surface profile of the reference surface, and a surface map configured to map any point on an image captured by the camera to a corresponding physical location on the reference surface;
wherein:
the camera-observed shadow length and the shadow-light source distance are estimated from the ROI-highlighted image by using the surface map; and
the set of data further includes the surface profile.
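For an ideal pinhole camera and a planar reference surface, a surface map of the kind recited in claim 5 can be built by intersecting each pixel's viewing ray with the surface-plane equation n·P = d from claim 1. The sketch below is a hedged reconstruction; the camera-frame conventions and all names are assumptions:

```python
def pixel_to_surface(x_px, y_px, focal, normal, d):
    """Map an on-sensor point to its physical location on the reference plane.

    x_px, y_px -- on-sensor coordinates of the point (camera units)
    focal      -- camera focal length (same units)
    normal     -- (nx, ny, nz) of the surface plane n . P = d
    d          -- plane offset, with the camera's optical center at the origin

    The viewing ray is P(t) = t * (x_px, y_px, focal); substituting into
    n . P = d gives t = d / (n . dir), hence the 3D surface point.
    """
    direction = (x_px, y_px, focal)
    denom = sum(n * c for n, c in zip(normal, direction))
    if denom == 0:
        raise ValueError("ray is parallel to the reference plane")
    t = d / denom
    return tuple(t * c for c in direction)
```

The same plane parameters also yield the heights used in claim 1: the camera's height above the surface is the point-to-plane distance |n·C − d|/|n| evaluated at the optical center C.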
6. A wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, wherein the wearable device is configured to execute a process for receiving user inputs according to the method of claim 1.
7. A wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, wherein the wearable device is configured to execute a process for receiving user inputs according to the method of claim 2.
8. A wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, wherein the wearable device is configured to execute a process for receiving user inputs according to the method of claim 3.
9. A wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, wherein the wearable device is configured to execute a process for receiving user inputs according to the method of claim 4.
10. A wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, wherein the wearable device is configured to execute a process for receiving user inputs according to the method of claim 5.
11. A method for receiving user inputs in a wearable device, the wearable device comprising one or more processors, a camera and a light source, the light source being configured to generate a plain sheet of light and a structured-light pattern, the camera having an image sensor and having an optical center located at a distance of a focal length from the image sensor, the method comprising:
when a finger-like object having a tip is detected within a field of view (FOV) of the camera, determining a three-dimensional (3D) location of the tip so that one or more of the user inputs are determinable according to the 3D location of the tip;
wherein the determining of the 3D location of the tip comprises:
capturing a first image containing at least the tip in a presence of the plain sheet of light illuminating the object;
from the first image, determining an on-sensor location of the tip, and an on-sensor length of a width of the object;
estimating a nearest physical location and a farthest physical location of the tip according to a pre-determined lower bound and a pre-determined upper bound of the object's physical width, respectively, and further according to the on-sensor location of the tip, the on-sensor length of the object's width, and the focal length, whereby the tip is estimated to be physically located within a region-of-presence between the nearest and farthest physical locations;
projecting the structured-light pattern to at least the region-of-presence such that a part of the object around the tip is illuminated with a first portion of the structured-light pattern, wherein the light source configures the structured-light pattern such that the remaining portion of the structured-light pattern received by the region-of-presence does not contain any repeated sub-pattern to thereby enable unique determination of the 3D location of the tip by identifying the first portion of the structured-light pattern illuminating the object;
capturing a second image containing at least the part of the object around the tip when the structured-light pattern is projected to the region-of-presence; and
determining the 3D location of the tip by identifying, from the second image, the first portion of the structured-light pattern.
(Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22)
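The nearest/farthest estimation in claim 11 follows from the pinhole projection relation w_obs = focal · W / Z: for a fixed on-sensor width, a wider assumed physical width places the object farther from the camera. An illustrative sketch, where the names and units are assumptions rather than the patent's notation:

```python
def region_of_presence(x_s, y_s, w_obs, focal, w_min, w_max):
    """Bound the tip's 3D location from width limits of the finger-like object.

    x_s, y_s -- on-sensor location of the tip (optical axis at the origin)
    w_obs    -- on-sensor length of the object's width
    focal    -- distance from the optical center to the image sensor
    w_min    -- pre-determined lower bound of the object's physical width
    w_max    -- pre-determined upper bound of the object's physical width

    Pinhole model: w_obs = focal * W / Z, so Z = focal * W / w_obs.
    The tip lies on the ray through (x_s, y_s, focal), between the depths
    implied by w_min (nearest) and w_max (farthest).
    """
    def point_at(width):
        z = focal * width / w_obs
        return (x_s * z / focal, y_s * z / focal, z)

    return point_at(w_min), point_at(w_max)  # (nearest, farthest)
```

The segment between the two returned points is the region-of-presence that the structured-light pattern must cover without any repeated sub-pattern, so that the portion seen on the object identifies depth uniquely.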
Specification