System and method of real-time interactive operation of user interface
First Claim
1. A method of recognizing gestures for real-time interaction, comprising:
capturing three-dimensional (3D) data on a subject;
detecting a pointing action by the subject from the 3D data;
computing an initial estimate of a target region from the pointing action, the initial estimate of the target region having a defined radius around a center point;
tracking the pointing action of the subject and performing a series of iterations wherein the defined radius of the target region changes based on the detected pointing action; and
providing feedback to the subject by highlighting a portion around the initial estimate and continuously modifying the highlighted portion until the highlighted portion shrinks to a single point indicating that a desired location has been reached.
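The claimed method is an iterative refinement loop: estimate where the subject is pointing on the screen, highlight a region of a defined radius around that point, then re-track and shrink the region each iteration until it collapses to a single point. A minimal sketch in Python, assuming a flat screen plane at z = 0 and per-frame (origin, direction) pointing estimates derived from the 3D data; the function names, the fixed shrink factor, and the plane model are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def intersect_screen(origin, direction, screen_z=0.0):
    """Intersect a pointing ray with a screen plane at z = screen_z.

    `origin` is a 3D point on the pointing arm (e.g. the fingertip) and
    `direction` is the unit vector of the pointing action, both taken
    from the captured 3D data.
    """
    t = (screen_z - origin[2]) / direction[2]
    return origin[:2] + t * direction[:2]   # (x, y) on the screen plane

def refine_target(pointing_samples, initial_radius=100.0, shrink=0.8,
                  min_radius=1.0):
    """Iteratively refine the target region around the pointed location.

    Each iteration re-centres the region on the latest intersection
    point and shrinks the highlighted radius, until the region has
    effectively collapsed to a single point.
    """
    center = None
    radius = initial_radius
    for origin, direction in pointing_samples:
        center = intersect_screen(origin, direction)
        # Feedback step: highlight a disc of `radius` around `center`.
        radius *= shrink                    # region shrinks each iteration
        if radius <= min_radius:
            break                           # desired location reached
    return center, radius
```

In a real system the per-iteration radius change would be driven by the detected pointing action (e.g. its stability), rather than by a constant factor as assumed here.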
Abstract
A method, a system, and a non-transitory computer readable medium are disclosed for real-time interaction with a user interface by recognizing a gesture. The method includes capturing three-dimensional (3D) data on a subject; detecting a pointing action by the subject from the 3D data; computing an initial estimate of a target region from the pointing action, the initial estimate of the target region having a defined radius around a center point; and tracking the pointing action of the subject and performing a series of iterations wherein the defined radius of the target region changes based on the detected pointing action.
23 Claims
1. A method of recognizing gestures for real-time interaction, comprising:
capturing three-dimensional (3D) data on a subject;
detecting a pointing action by the subject from the 3D data;
computing an initial estimate of a target region from the pointing action, the initial estimate of the target region having a defined radius around a center point;
tracking the pointing action of the subject and performing a series of iterations wherein the defined radius of the target region changes based on the detected pointing action; and
providing feedback to the subject by highlighting a portion around the initial estimate and continuously modifying the highlighted portion until the highlighted portion shrinks to a single point indicating that a desired location has been reached.
- View Dependent Claims (2, 3, 4, 5, 21)
6. A system for recognizing gestures for real-time interaction, comprising:
a motion and depth sensor configured to capture three-dimensional (3D) data on a subject; and
a processor configured to:
capture three-dimensional (3D) data on a subject;
detect a pointing action by the subject from the 3D data;
compute an initial estimate of a target region from the pointing action, the initial estimate of the target region having a defined radius around a center point;
track the pointing action of the subject and perform a series of iterations wherein the defined radius of the target region changes based on the detected pointing action; and
provide feedback to the subject by highlighting a portion around the initial estimate and continuously modifying the highlighted portion until the highlighted portion shrinks to a single point indicating that a desired location has been reached.
- View Dependent Claims (7, 8, 9, 10, 22)
11. A non-transitory computer readable medium containing a computer program storing computer readable code for recognizing gestures for real-time interaction, the program being executable by a computer to cause the computer to perform a process comprising:
capturing three-dimensional (3D) data on a subject;
detecting a pointing action by the subject from the 3D data;
computing an initial estimate of a target region from the pointing action, the initial estimate of the target region having a defined radius around a center point;
tracking the pointing action of the subject and performing a series of iterations wherein the defined radius of the target region changes based on the detected pointing action; and
providing feedback to the subject by highlighting a portion around the initial estimate and continuously modifying the highlighted portion until the highlighted portion shrinks to a single point indicating that a desired location has been reached.
- View Dependent Claims (12, 13, 14, 15, 23)
16. A method of recognizing gestures for real-time interaction, comprising:
capturing three-dimensional (3D) data on a subject;
detecting a pointing action by the subject from the 3D data to begin a pointing operation;
determining a point of intersection of the pointing action on an actual screen;
determining if one or more targets on the actual screen are within a defined radius around a computed point on the actual screen;
if at least one target is present, determining if a number of targets is equal to one or greater than one, and wherein if the number of targets is equal to one, selecting a target, and if the number of targets is greater than one, reducing the defined radius to reduce the number of targets within the defined radius until a single target remains; and
providing feedback to the subject by highlighting a portion of the actual screen around the initial estimate and continuously modifying the highlighted portion until the highlighted portion shrinks to a single point indicating that a desired location has been reached.
- View Dependent Claims (17, 18, 19, 20)
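Claim 16 adds a disambiguation step: when several on-screen targets fall inside the defined radius around the computed intersection point, the radius is reduced until only one remains. A minimal sketch of that selection logic, where the function name, the shrink factor, and the `min_radius` cutoff are illustrative assumptions rather than the patented implementation:

```python
from math import hypot

def select_target(point, targets, radius, shrink=0.9, min_radius=1e-3):
    """Disambiguate on-screen targets around the pointed location.

    `point` is the computed intersection of the pointing action with
    the actual screen, and `targets` maps target names to (x, y)
    screen positions. The defined radius is reduced until a single
    target remains inside it.
    """
    while radius > min_radius:
        hits = [name for name, pos in targets.items()
                if hypot(pos[0] - point[0], pos[1] - point[1]) <= radius]
        if not hits:
            return None               # no target within the region
        if len(hits) == 1:
            return hits[0]            # exactly one target: select it
        radius *= shrink              # several targets: shrink the region
    return None                       # still ambiguous at minimum radius
```

For example, with two targets at (0, 0) and (50, 0) and an intersection point at (5, 0), both targets initially fall inside a radius of 100, and repeated shrinking eliminates the farther target before the nearer one, so the nearer target is selected.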
Specification