Gaze-enhanced virtual touchscreen
First Claim
1. A method, comprising:
presenting, by a computer, multiple interactive items on a display coupled to the computer;
projecting a light toward a scene that includes a user of the computer;
capturing and processing the projected light returned from the scene so as to reconstruct an initial three-dimensional (3D) map containing at least a head of the user of the computer;
capturing and processing a two dimensional (2D) image containing reflections of the projected light from a fundus and a cornea of an eye of the user;
extracting, from the initial 3D map, 3D coordinates of the head;
identifying, based on the 3D coordinates of the head and the reflections of the projected light from the fundus and the cornea of the eye, a direction of a gaze of the user;
selecting, in response to the gaze direction, one of the multiple interactive items;
subsequent to selecting the one of the interactive items, receiving a sequence of three-dimensional (3D) maps containing at least a hand of the user;
analyzing the 3D maps to detect a gesture performed by the user; and
performing an operation on the selected interactive item in response to the gesture.
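The claim identifies gaze direction from the head's 3D coordinates together with corneal and fundus reflections of the projected light. One common approach in the eye-tracking literature is pupil-center/corneal-reflection tracking: the offset between the bright-pupil (fundus) reflection and the corneal glint gives an eye-in-head angle, which is then combined with head orientation. A rough sketch under that assumption; the gain constant and helper names are illustrative, not taken from the patent:

```python
import math

def eye_in_head_angles(pupil_px, glint_px, gain=0.05):
    """Map the 2D pixel offset between the bright-pupil (fundus)
    reflection and the corneal glint to yaw/pitch of the eye relative
    to the head, in radians. `gain` is an illustrative calibration
    constant that a real tracker would fit per user."""
    dx = pupil_px[0] - glint_px[0]
    dy = pupil_px[1] - glint_px[1]
    return gain * dx, gain * dy

def gaze_direction(head_yaw, head_pitch, pupil_px, glint_px):
    """Combine head orientation (from the 3D map) with eye-in-head
    angles (from the 2D reflection image) into a gaze unit vector."""
    eye_yaw, eye_pitch = eye_in_head_angles(pupil_px, glint_px)
    yaw = head_yaw + eye_yaw
    pitch = head_pitch + eye_pitch
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))
```

With the pupil reflection and glint coincident and the head facing the display, the sketch returns a gaze vector straight along the optical axis, (0, 0, 1).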
Abstract
A method, including presenting, by a computer, multiple interactive items on a display coupled to the computer, and receiving an input indicating a direction of a gaze of a user of the computer. In response to the gaze direction, one of the multiple interactive items is selected, and subsequent to the one of the interactive items being selected, a sequence of three-dimensional (3D) maps is received containing at least a hand of the user. The 3D maps are analyzed to detect a gesture performed by the user, and an operation is performed on the selected interactive item in response to the gesture.
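The abstract describes a two-stage interaction: an item is first selected by where the user is looking, and a subsequent hand gesture triggers an operation on that item. A minimal sketch of that control flow, assuming hypothetical item records with on-screen `bounds` and a caller-supplied gesture detector:

```python
# Sketch of the gaze-then-gesture interaction loop described in the
# abstract. All sensor-facing inputs are hypothetical stand-ins.

def select_item(items, gaze_point):
    """Stage 1: select the interactive item whose on-screen bounding
    box contains the gazed-at point (None if no item is hit)."""
    for item in items:
        x0, y0, x1, y1 = item["bounds"]
        if x0 <= gaze_point[0] <= x1 and y0 <= gaze_point[1] <= y1:
            return item
    return None

def run_interaction(items, gaze_point, gesture_frames, detect_gesture):
    """Stage 2: analyze a sequence of 3D maps for a gesture and apply
    the gesture's operation to the item selected in stage 1."""
    selected = select_item(items, gaze_point)
    if selected is None:
        return None
    gesture = detect_gesture(gesture_frames)  # e.g. "tap", "swipe"
    if gesture is not None:
        return (gesture, selected["name"])
    return None
```

For example, with two side-by-side items and a gaze point over the second, a detected "tap" would be reported against that second item only.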
30 Claims
1. A method, comprising the steps recited in the First Claim above. Dependent claims: 2–15.
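The method's first capture step reconstructs a 3D map from projected light returned from the scene. In structured-light depth sensing, one common way to do this, depth follows by triangulation from the disparity between where a projected feature was emitted and where the camera observes it. A toy sketch under that assumption; the baseline and focal-length values are illustrative, not from the patent:

```python
def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Triangulate depth in meters for one projected feature from the
    pixel disparity between its projected and observed positions,
    using the standard structured-light relation z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def depth_map(disparities):
    """Build a small depth map (pixel -> depth in meters) from
    per-pixel disparities, skipping pixels with no measurable
    disparity (e.g. shadowed or out-of-range regions)."""
    return {px: depth_from_disparity(d)
            for px, d in disparities.items() if d > 0}
```

A full 3D map would back-project each (pixel, depth) pair through the camera intrinsics to (x, y, z) coordinates; the sketch stops at per-pixel depth.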
16. An apparatus, comprising:
a sensing device, comprising:
an illumination subassembly, which is configured to project a light toward a scene that includes a user of a computer;
an imaging subassembly, which is configured to capture the projected light returned from the scene, including reflections of the projected light from a fundus and a cornea of an eye of the user; and
a processor, which is configured to generate, based on the captured light, three-dimensional (3D) maps containing at least a head and a hand of a user, and a two-dimensional (2D) image containing reflections of the projected light from a fundus and a cornea of an eye of the user, to extract 3D coordinates of the head from the initial 3D map, and to identify, based on the 3D coordinates of the head and the reflections of the projected light from the fundus and the cornea of the eye, a direction of a gaze of the user;
a display; and
the computer, which is coupled to the sensing device and the display, and configured to present, on the display, multiple interactive items, to receive an input indicating a direction of a gaze performed by a user of the computer, to select, in response to the gaze direction, one of the multiple interactive items, to receive, subsequent to selecting the one of the interactive items, a sequence of the three-dimensional (3D) maps containing at least the hand of the user, to analyze the 3D maps to detect a gesture performed by the user, and to perform an operation on the selected one of the interactive items in response to the gesture.
Dependent claims: 17–30.
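Both the method and apparatus claims analyze a sequence of 3D maps containing the user's hand to detect a gesture. A minimal illustration of the idea, assuming each map has already been segmented down to a hand centroid (x, y, z); the thresholds and gesture labels are hypothetical, not from the patent:

```python
def detect_gesture(hand_centroids, move_thresh=0.15, push_thresh=0.10):
    """Classify a gesture from per-frame hand centroids (x, y, z in
    meters, z = distance from the sensor). Returns "swipe_left",
    "swipe_right", "push", or None when no gesture is recognized."""
    if len(hand_centroids) < 2:
        return None
    x0, _, z0 = hand_centroids[0]
    x1, _, z1 = hand_centroids[-1]
    if z0 - z1 > push_thresh:      # hand moved toward the sensor
        return "push"
    if x1 - x0 > move_thresh:      # net lateral motion to the right
        return "swipe_right"
    if x0 - x1 > move_thresh:
        return "swipe_left"
    return None
```

A production detector would look at the full trajectory (velocity, curvature, hand pose) rather than only the endpoints, but the endpoint comparison is enough to show how a 3D-map sequence maps to a discrete gesture.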
Specification