Method and system for ergonomic touch-free interface
First Claim
1. A human interface device, comprising:
a stereo camera configured to receive image information for user action;
a processor;
a display; and
a memory configured to store a software application;
wherein the software application directs the processor to:
generate depth maps from the image information received by the stereo camera;
utilize a color-based tracking algorithm to provide real-time skin segmentation;
localize at least one active part from the image information, wherein:
the active part is a body part of a user; and
the body part of the user includes fingertips;
utilize a model-based tracking algorithm to develop a three-dimensional model of the active parts of a user's body;
localize the fingertips of the user using a plurality of methods including:
an optical flow algorithm;
shape and appearance descriptor identification;
a sliding window detector;
or a voting-based identification algorithm;
monitor the active part for a predetermined initiation gesture, wherein the predetermined initiation gesture is the simultaneous touching of the user's left thumb to left index finger and the user's right thumb to right index finger;
activate additional options upon the use of the predetermined initiation gesture;
receive a first predetermined range of motion for the active part;
determine a second range of motion for the at least one active part, wherein the second range of motion is substantially less than the first predetermined range of motion of the at least one active part;
allow for a manner by which the user may input the second range of motion, wherein the second range of motion is chosen to allow use of the at least one active part to provide input to the device; and
generate a three-dimensional virtual workspace on the display, wherein the three-dimensional virtual workspace:
represents a space including the second range of motion;
includes at least one virtual surface;
provides visual cues defining the location of the at least one virtual surface;
maps the active part onto the virtual workspace as a three-dimensional cursor containing anthropomorphic detail; and
provides a first type of response if the cursor is within the second range of motion and a second type of response if the cursor is outside the second range of motion, wherein the second type of response is less responsive than the first type of response.
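The claimed initiation gesture, a simultaneous pinch of thumb to index finger on both hands, can be sketched as a distance check between tracked fingertip positions. This is a minimal illustration, not the patent's implementation: the function names and the activation threshold are assumptions, and fingertip coordinates are taken as already-localized 3-D points.

```python
import math

# Assumed activation distance in millimeters; the claim does not specify one.
PINCH_THRESHOLD_MM = 15.0

def pinch_detected(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_MM):
    """Return True when a thumb tip and index fingertip touch.

    Both arguments are (x, y, z) points from the fingertip localizer.
    """
    return math.dist(thumb_tip, index_tip) < threshold

def initiation_gesture(left_thumb, left_index, right_thumb, right_index):
    """The claimed gesture: both hands pinch at the same time."""
    return (pinch_detected(left_thumb, left_index)
            and pinch_detected(right_thumb, right_index))
```

In a real pipeline these points would come from the depth maps and fingertip-localization steps recited above, and the check would be debounced over several frames to reject tracking jitter.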
Abstract
With the advent of touch-free interfaces such as described in the present disclosure, it is no longer necessary for computer interfaces to be in predefined locations (e.g., desktops) or configurations (e.g., a rectangular keyboard). The present invention makes use of touch-free interfaces to encourage users to interface with a computer in an ergonomically sound manner. Among other things, the present invention implements a system for localizing human body parts such as hands, arms, shoulders, or even the full body, with a processing device such as a computer, along with a computer display to provide visual feedback that encourages a user to maintain an ergonomically preferred position with ergonomically preferred motions. For example, the present invention encourages a user to keep his motions within an ergonomically preferred range without having to reach out excessively or repetitively.
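The claimed dual-response behavior, full cursor responsiveness inside the ergonomic (second) range of motion and reduced responsiveness outside it, can be sketched along a single axis. This is a simplified, hypothetical model: the gain values, function name, and one-dimensional treatment are all assumptions made for illustration.

```python
def cursor_response(position, range_center, range_radius,
                    inner_gain=1.0, outer_gain=0.25):
    """Map a tracked body-part position to a cursor position.

    Inside the ergonomic range the cursor follows at full gain; outside,
    the excess displacement is attenuated, so reaching farther yields
    diminishing returns and the user is nudged back into the range.
    """
    dx = position - range_center
    if abs(dx) <= range_radius:
        return range_center + inner_gain * dx
    # Outside the range: in-range portion at full gain, excess attenuated.
    sign = 1.0 if dx > 0 else -1.0
    excess = abs(dx) - range_radius
    return range_center + sign * (range_radius * inner_gain + excess * outer_gain)
```

A full implementation would apply this per axis of the three-dimensional virtual workspace and pair it with the visual cues the claim recites, so the user can see where the responsive region ends.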
13 Claims
Specification