Gaze detection in a 3D mapping environment
Abstract
A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user (22) of a computerized system, and receiving a two dimensional (2D) image of the user, the image including an eye (34) of the user. 3D coordinates of a head (32) of the user are extracted from the 3D map and the 2D image, and a direction of a gaze performed by the user is identified based on the 3D coordinates of the head and the image of the eye.
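The abstract describes a pipeline in which head pose is estimated from the 3D map, eye appearance from the 2D image, and the two are combined into a gaze direction. A minimal sketch of that combination, assuming the head pose is reduced to yaw/pitch angles (the parameter names are illustrative, not quantities named in the patent):

```python
import math

def gaze_direction(head_yaw, head_pitch, eye_yaw, eye_pitch):
    """Combine head-pose angles (estimated from the 3D map) with
    eye-in-head offsets (estimated from the 2D eye image) into a
    unit gaze vector in the sensor's coordinate frame.

    All angles are in radians; +z points from the user toward the display.
    """
    yaw = head_yaw + eye_yaw        # total horizontal rotation
    pitch = head_pitch + eye_pitch  # total vertical rotation
    return (
        math.cos(pitch) * math.sin(yaw),  # x: left/right
        math.sin(pitch),                  # y: up/down
        math.cos(pitch) * math.cos(yaw),  # z: toward the display
    )
```

Looking straight ahead (all angles zero) yields the vector (0, 0, 1), i.e. directly toward the display.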
254 Citations
45 Claims
1. A method, comprising:
receiving a three-dimensional (3D) map of at least a part of a body of a user of a computerized system;
receiving a two dimensional (2D) image of the user, the image including an eye of the user;
extracting, from the 3D map and the 2D image, 3D coordinates of a head of the user;
identifying, based on the 3D coordinates of the head and the image of the eye, a direction of a gaze performed by the user; and
controlling a function of the computerized system responsively to the direction of the gaze by performing an action associated with an interactive item presented in the direction of the gaze on a display coupled to the computerized system upon receiving a first sequence of three-dimensional (3D) maps indicating a motion of a limb toward the display, receiving a second sequence of 3D maps indicating a deceleration of the motion of the limb toward the display, and receiving a third sequence of 3D maps indicating a motion of the limb away from the display.
Dependent claims: 2-22.
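The controlling step's three-sequence condition describes a "push" select gesture: the limb approaches the display, decelerates, then retreats. Assuming each 3D map in the sequences is reduced to a single limb-to-display distance per frame (an assumption for illustration; the claim does not specify this reduction), the condition can be sketched as:

```python
def detect_select_gesture(distances):
    """Check a per-frame series of limb-to-display distances (one value
    per 3D map) for the claimed pattern: motion toward the display,
    deceleration of that motion, then motion away from the display."""
    if len(distances) < 4:
        return False
    turn = distances.index(min(distances))  # frame of closest approach
    approach = distances[:turn + 1]
    retreat = distances[turn:]
    if len(approach) < 3 or len(retreat) < 2:
        return False
    moved_toward = all(b < a for a, b in zip(approach, approach[1:]))
    moved_away = all(b > a for a, b in zip(retreat, retreat[1:]))
    # per-frame approach speeds; deceleration = final step slower than first
    speeds = [a - b for a, b in zip(approach, approach[1:])]
    decelerated = speeds[-1] < speeds[0]
    return moved_toward and decelerated and moved_away
```

A series such as [0.50, 0.35, 0.25, 0.20, 0.30, 0.45] (metres) satisfies all three phases; a limb that only approaches, or that approaches while speeding up, does not.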
23. An apparatus, comprising:
a sensing device configured to receive a three dimensional (3D) map of at least a part of a body of a user and an image of an eye of the user, and to receive a two dimensional (2D) image of the user, the 2D image including an eye of the user; and
a computer coupled to the sensing device and configured to extract, from the 3D map and the 2D image, 3D coordinates of a head of the user and to identify, based on the 3D coordinates of the head and the image of the eye, a direction of a gaze performed by the user,
wherein the computer is configured to control a function of the apparatus responsively to the direction of the gaze by performing an action associated with an interactive item presented in the direction of the gaze on a display coupled to the apparatus, upon receiving a first sequence of three-dimensional (3D) maps indicating a motion of a limb toward the display, receiving a second sequence of 3D maps indicating a deceleration of the motion of the limb toward the display, and receiving a third sequence of 3D maps indicating a motion of the limb away from the display.
Dependent claims: 24-44.
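Both independent claims tie the action to "an interactive item presented in the direction of the gaze" on the display. Finding that item amounts to intersecting the gaze ray with the display plane. A minimal sketch, assuming the display lies in the plane z = display_z of the sensor frame with +z pointing from the user toward the display (this coordinate convention is an assumption, not stated in the claims):

```python
def gaze_point_on_display(eye_pos, gaze_dir, display_z=0.0):
    """Intersect the gaze ray (origin eye_pos, direction gaze_dir)
    with the display plane z = display_z; returns the (x, y) point
    on the display, or None if the ray never reaches it."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None              # gaze parallel to the display plane
    t = (display_z - ez) / dz    # ray parameter at the plane
    if t <= 0:
        return None              # display is behind the gaze direction
    return (ex + t * dx, ey + t * dy)
```

The returned (x, y) point would then be hit-tested against the bounding boxes of the on-screen interactive items to select the one gazed at.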
45. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive a three-dimensional (3D) map of at least a part of a body of a user of the computer, to receive a two dimensional (2D) image of the user, the image including an eye of the user, to extract, from the 3D map and the 2D image, 3D coordinates of a head of the user, and to identify, based on the 3D coordinates of the head and the image of the eye, a direction of a gaze performed by the user,
wherein the instructions cause the computer to control a function of the computer responsively to the direction of the gaze by performing an action associated with an interactive item presented in the direction of the gaze on a display coupled to the computerized system, upon receiving a first sequence of three-dimensional (3D) maps indicating a motion of a limb toward the display, receiving a second sequence of 3D maps indicating a deceleration of the motion of the limb toward the display, and receiving a third sequence of 3D maps indicating a motion of the limb away from the display.
Specification