Multi-modal touch screen emulator
First Claim
1. A method comprising:
receiving a first image from a first front facing camera of a device having a front facing display;
identifying one or more eyes of a user of the device based on the first image;
determining a gaze location of the one or more eyes on the front facing display;
determining a change in the gaze location;
receiving a second image from a second front facing camera of the device;
identifying a point of interest on the front facing display based on gaze information, wherein the gaze information includes the gaze location and the change in the gaze location;
identifying a hand of the user based on the second image;
identifying one or more fingertips associated with the hand of the user based on the second image;
determining a movement of one or more of the hand and the one or more fingertips;
identifying a hand action based on gesture information, wherein the gesture information is to include the movement and the movement is one or more of a one-dimensional (1D) movement, a two-dimensional (2D) movement and a three-dimensional (3D) movement; and
initiating a device action with respect to the front facing display based on the point of interest and the hand action, wherein the device action is to emulate a multi-modal touchscreen function with respect to the front facing display that concurrently tracks and synchronizes the gaze information and the gesture information.
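Read as a pipeline, the claim above derives a point of interest from where the user is looking, derives a hand action from how the hand and fingertips move, and combines the two into an emulated touch. The Python sketch below walks one frame through that flow; every name in it (detect_eyes, estimate_gaze_location, detect_hand, detect_fingertips, classify_hand_action, GazeInfo, GestureInfo) is a hypothetical stand-in introduced for illustration, not anything specified by the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]  # (x, y) position on the front facing display

@dataclass
class GazeInfo:
    location: Point   # current gaze location on the display
    change: Point     # change in gaze location since the previous frame

@dataclass
class GestureInfo:
    movement: Tuple[float, float, float]  # 1D/2D/3D movement of hand/fingertips
    action: str                           # recognized hand action

# Hypothetical detector stubs so the control flow below actually runs;
# a real implementation would back these with computer-vision models.
def detect_eyes(first_image) -> bool:
    return True

def estimate_gaze_location(first_image) -> Point:
    return (512.0, 384.0)

def detect_hand(second_image) -> bool:
    return True

def detect_fingertips(second_image):
    return [(100.0, 200.0)]

def classify_hand_action(movement) -> str:
    # Trivial rule: movement toward the display (negative z) counts as a tap.
    return "tap" if movement[2] < 0 else "hover"

def process_frame(first_image, second_image, previous_gaze: Optional[Point]):
    """One pass of the claimed method, given one image from each camera."""
    if not detect_eyes(first_image):
        return None
    location = estimate_gaze_location(first_image)
    change = (0.0, 0.0) if previous_gaze is None else (
        location[0] - previous_gaze[0], location[1] - previous_gaze[1])
    gaze = GazeInfo(location=location, change=change)
    point_of_interest = gaze.location  # POI derived from the gaze information

    if not detect_hand(second_image):
        return None
    fingertips = detect_fingertips(second_image)  # tracked across frames in practice
    movement = (5.0, 0.0, -2.0)                   # placeholder 3D movement
    gesture = GestureInfo(movement, classify_hand_action(movement))

    # Device action: emulate a touchscreen event at the gaze point.
    return {"touch_at": point_of_interest, "action": gesture.action}

print(process_frame("frame-from-camera-1", "frame-from-camera-2", None))
```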
Abstract
Systems and methods may provide for capturing a user input by emulating a touch screen mechanism. In one example, the method may include identifying a point of interest on a front facing display of the device based on gaze information associated with a user of the device, identifying a hand action based on gesture information associated with the user of the device, and initiating a device action with respect to the front facing display based on the point of interest and the hand action.
Claims (23)
1. A method comprising:
(Claim 1 is reproduced in full under "First Claim" above.)

Dependent claims: 2, 3, 4

5. A computer readable storage medium comprising a set of instructions which, if executed by a processor, cause a device to:
identify one or more eyes of a user of a device having a front facing display;
determine a gaze location of the one or more eyes on the front facing display;
identify a point of interest on the front facing display based on gaze information associated with the user of the device;
determine a change in the gaze location, wherein the gaze information is to include the gaze location and a change in the gaze location;
identify a hand action based on gesture information associated with the user of the device;
initiate a device action with respect to the front facing display based on the point of interest and the hand action, wherein the device action is to emulate a multi-modal touchscreen function with respect to the front facing display based on the first image and the second image, and wherein the multi-modal touchscreen function is to concurrently track and synchronize the gaze information and the gesture information;
receive image data from a front facing camera configuration of the device;
identify the gaze information and the gesture information based on the image data;
receive a first image from a first camera of the camera configuration; and
receive a second image from a second camera of the camera configuration, wherein the image data is to include the first image and the second image, the gaze information is to be determined based on the first image, and the gesture information is to be determined based on the second image.

Dependent claims: 6, 7, 8, 9
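The distinctive elements of this claim are the camera configuration and synchronization: the first camera's image drives gaze estimation, the second camera's image drives gesture estimation, and the two streams are tracked concurrently and kept synchronized. A minimal Python sketch of that pairing follows; the capture, estimate_gaze and estimate_gesture functions are assumed stand-ins, and the timestamp-based pairing is one possible reading of "synchronize", not a mechanism the claim prescribes.

```python
import threading
import queue
import time

def capture(camera_id: str, out: "queue.Queue", frames: int) -> None:
    # Simulated front facing camera: emits (timestamp, image) pairs.
    for i in range(frames):
        out.put((time.monotonic(), f"{camera_id}-frame-{i}"))
        time.sleep(0.01)

def estimate_gaze(first_image):      # hypothetical gaze estimator
    return {"gaze_from": first_image}

def estimate_gesture(second_image):  # hypothetical gesture estimator
    return {"gesture_from": second_image}

def run(frames: int = 5, max_skew_s: float = 0.02) -> None:
    cam1, cam2 = queue.Queue(), queue.Queue()
    t1 = threading.Thread(target=capture, args=("cam1", cam1, frames))
    t2 = threading.Thread(target=capture, args=("cam2", cam2, frames))
    t1.start(); t2.start()
    for _ in range(frames):
        ts1, first_image = cam1.get()    # first image -> gaze information
        ts2, second_image = cam2.get()   # second image -> gesture information
        synchronized = abs(ts1 - ts2) <= max_skew_s
        print(estimate_gaze(first_image), estimate_gesture(second_image),
              "synchronized" if synchronized else "skewed")
    t1.join(); t2.join()

if __name__ == "__main__":
    run()
```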
10. An apparatus comprising:
a gaze module to identify a point of interest on a front facing display of a device based on gaze information associated with a user of the device, including: identifying one or more eyes of the user, determining a gaze location of the one or more eyes on the front facing display, and determining a change in the gaze location, wherein the gaze information is to include the gaze location and the change in the gaze location;
a gesture module to identify a hand action based on gesture information associated with the user of the device, including: identifying a hand of the user, identifying one or more fingertips associated with the hand of the user, and determining a movement of one or more of the hand and the one or more fingertips, wherein the gesture information is to include the movement and the movement is to be one or more of a one-dimensional (1D) movement, a two-dimensional (2D) movement and a three-dimensional (3D) movement;
an integration module to initiate a device action with respect to the front facing display based on the point of interest and the hand action, wherein the device action is to emulate a multi-modal touchscreen function with respect to the front facing display based on the first image and the second image, and wherein the multi-modal touchscreen function is to concurrently track and synchronize the gaze information and the gesture information;
a camera interface to receive image data from a front facing camera configuration of the device, wherein the gaze module is to identify the gaze information based on the image data and the gesture module is to identify the gesture information based on the image data, wherein the camera interface is to: receive a first image from a first camera of the camera configuration, and receive a second image from a second camera of the camera configuration, wherein the image data is to include the first image and the second image, the gaze module is to determine the gaze information based on the first image, and the gesture module is to determine the gesture information based on the second image.

Dependent claims: 11, 12, 13
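Structurally, the apparatus reads as four cooperating components: a camera interface that receives image data, a gaze module and a gesture module that consume the first and second images respectively, and an integration module that turns their outputs into a device action. The classes below mirror that wiring as a sketch; all class and method names, and the toy logic inside them, are assumptions for illustration rather than an API defined by the patent.

```python
from typing import Optional, Tuple

class CameraInterface:
    """Stand-in for the camera interface that receives image data."""
    def first_image(self):
        return "image-from-first-camera"
    def second_image(self):
        return "image-from-second-camera"

class GazeModule:
    """Derives a point of interest from gaze information (location plus change)."""
    def __init__(self):
        self._last: Optional[Tuple[float, float]] = None
        self.change: Tuple[float, float] = (0.0, 0.0)
    def point_of_interest(self, first_image) -> Tuple[float, float]:
        location = (512.0, 384.0)  # stand-in gaze location on the display
        if self._last is not None:
            self.change = (location[0] - self._last[0], location[1] - self._last[1])
        self._last = location
        # Gaze information = the location plus its change; the POI follows from it.
        return location

class GestureModule:
    """Derives a hand action from hand/fingertip movement (1D, 2D or 3D)."""
    def hand_action(self, second_image) -> str:
        movement = (4.0, 0.0, -1.0)  # stand-in 3D movement of hand/fingertips
        return "tap" if movement[2] < 0 else "hover"

class IntegrationModule:
    """Combines point of interest and hand action into an emulated touch."""
    def device_action(self, point_of_interest, hand_action) -> dict:
        return {"touch_at": point_of_interest, "action": hand_action}

camera = CameraInterface()
gaze, gesture, integrate = GazeModule(), GestureModule(), IntegrationModule()
poi = gaze.point_of_interest(camera.first_image())
action = gesture.hand_action(camera.second_image())
print(integrate.device_action(poi, action))
```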
14. A device comprising:
a front facing display; and
a multi-modal touch screen emulator including:
a gaze module to identify a point of interest on the front facing display based on gaze information associated with a user of the device, including: identifying one or more eyes of the user, determining a gaze location of the one or more eyes on the front facing display, and determining a change in the gaze location, wherein the gaze information is to include the gaze location and the change in the gaze location,
a gesture module to identify a hand action based on gesture information associated with a user of the device, and
an integration module to initiate a device action with respect to the front facing display based on the point of interest and the hand action, wherein the device action is to emulate a multi-modal touchscreen function with respect to the front facing display based on the first image and the second image, and wherein the multi-modal touchscreen function is to concurrently track and synchronize the gaze information and the gesture information; and
a front facing camera configuration, wherein the multi-modal touch screen emulator further includes a camera interface to receive image data from the front facing camera configuration, the gaze module is to identify the gaze information based on the image data, and the gesture module is to identify the gesture information based on the image data, and wherein the camera configuration includes a first camera and a second camera, and wherein the camera interface is to: receive a first image from the first camera of the camera configuration, and receive a second image from the second camera of the camera configuration, wherein the image data is to include the first image and the second image, the gaze module is to determine the gaze information based on the first image, and the gesture module is to determine the gesture information based on the second image.

Dependent claims: 15, 16, 17, 18
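The "multi-modal touchscreen function" in this claim amounts to making the device behave as though its display had been touched at the point the user is looking at, with the in-air hand action selecting which touch event is emulated. A short sketch of that mapping follows; the event names and the dispatch table are illustrative assumptions only.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

def emulate_touch(point_of_interest: Point, hand_action: str) -> Dict[str, object]:
    # Map a recognized hand action to an equivalent touchscreen event at the
    # location the user is looking at on the front facing display.
    dispatch = {
        "tap":        {"event": "touch_down_up"},
        "press_hold": {"event": "long_press"},
        "swipe":      {"event": "scroll"},
        "pinch":      {"event": "zoom"},
    }
    event = dispatch.get(hand_action, {"event": "none"})
    return {"x": point_of_interest[0], "y": point_of_interest[1], **event}

# Example: the user looks at (320, 240) on the display and taps in mid-air;
# the device reacts as if the display had been touched at that point.
print(emulate_touch((320.0, 240.0), "tap"))
```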
19. A method comprising:
identifying one or more eyes of a user of a device having a front facing display;
determining a gaze location of the one or more eyes on the front facing display;
identifying a point of interest on the front facing display based on gaze information associated with the user of the device;
determining a change in the gaze location, wherein the gaze information is to include the gaze location and the change in the gaze location;
identifying a hand action based on gesture information associated with the user of the device;
initiating a device action with respect to the front facing display based on the point of interest and the hand action, wherein the device action is to emulate a multi-modal touchscreen function with respect to the front facing display based on the first image and the second image, and wherein the multi-modal touchscreen function is to concurrently track and synchronize the gaze information and the gesture information;
receiving image data from a front facing camera configuration of the device;
identifying the gaze information and the gesture information based on the image data;
receiving a first image from a first camera of the camera configuration; and
receiving a second image from a second camera of the camera configuration, wherein the image data is to include the first image and the second image, the gaze information is to be determined based on the first image, and the gesture information is to be determined based on the second image.

Dependent claims: 20, 21, 22, 23
Specification