EYE TRACKER BASED CONTEXTUAL ACTION
Abstract
The present invention relates to systems and methods for assisting a user when interacting with a graphical user interface by combining eye-based input with other input for, e.g., selection and activation of objects and object parts and execution of contextual actions related to the objects and object parts. The present invention also relates to such systems and methods in which the user can configure and customize the specific combinations of eye data input and activation input that should result in a specific contextual action.
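The abstract's user-configurable mapping from input combinations to contextual actions can be pictured as a small lookup table. The sketch below is illustrative only: the class and the key/action identifiers (`ActionBindings`, `"key_E"`, `open_email`) are assumptions, not structures defined by the patent.

```python
# Illustrative sketch only; the patent does not specify data structures.
# All names here ("ActionBindings", "key_E", open_email) are hypothetical.

class ActionBindings:
    """User-configurable map from (activation input, object type) to a contextual action."""

    def __init__(self):
        self._bindings = {}

    def bind(self, activation_input, object_type, action):
        # e.g. the user binds pressing "E" while gazing at an email item
        # to an "open email" action.
        self._bindings[(activation_input, object_type)] = action

    def resolve(self, activation_input, object_type):
        # Return the configured action, or None if no binding exists.
        return self._bindings.get((activation_input, object_type))


def open_email(obj):
    return f"opening {obj}"


bindings = ActionBindings()
bindings.bind("key_E", "email_item", open_email)
action = bindings.resolve("key_E", "email_item")
print(action("message 42"))  # -> opening message 42
```

Because the table is keyed on both the activation input and the gazed-at object type, the same key press can trigger different actions depending on what the user is looking at.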
7 Claims
1. A method for manipulating objects or parts of objects and performing contextual actions related to the objects presented on a display of a computer device associated with an eye tracking system, said method comprising:

displaying objects on said display of said computer device;
providing an eye-tracking data signal describing a user's gaze point on and/or relative to said display;
receiving activation input;
identifying an object or a part of an object at which said user is gazing, using said gaze point and/or said activation input;
determining said object or object part to be an object or object part of interest if current gaze conditions fulfil predetermined gaze conditions and/or said activation input fulfils predetermined conditions;
determining a specific contextual action based on the received activation input and the object or object part of interest; and
executing said specific contextual action.

Dependent claims: 2, 3, 4, 5.
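The steps of claim 1 can be sketched as a single pipeline: hit-test the gaze point, gate on a gaze condition, then dispatch on (activation input, object). This is a minimal sketch under stated assumptions; the dwell-time gaze condition, the rectangle hit test, and the action table are illustrative choices, not the patent's implementation.

```python
# Hypothetical sketch of the claim 1 method flow. The dwell-time
# threshold, hit test, and action table are assumptions for illustration.

def find_object_at(gaze_point, objects):
    """Identify the object under the gaze point (simple rectangle hit test)."""
    x, y = gaze_point
    for obj in objects:
        ox, oy, w, h = obj["bounds"]
        if ox <= x < ox + w and oy <= y < oy + h:
            return obj
    return None

def contextual_action(gaze_point, activation_input, objects,
                      dwell_ms, dwell_threshold_ms=300):
    # The object becomes an "object of interest" only if the gaze
    # condition (here: dwell time) is fulfilled.
    obj = find_object_at(gaze_point, objects)
    if obj is None or dwell_ms < dwell_threshold_ms:
        return None
    # Determine the specific contextual action from the activation
    # input together with the object of interest.
    actions = {("double_tap", "icon"): "open",
               ("double_tap", "text"): "select_word"}
    return actions.get((activation_input, obj["type"]))

objects = [{"type": "icon", "bounds": (100, 100, 32, 32)}]
print(contextual_action((110, 110), "double_tap", objects, dwell_ms=400))
# -> open
```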
6. A system for assisting a user in manipulating objects or parts of objects and performing contextual actions related to the objects presented on a display of a computer device associated with an eye tracking system, said system comprising:
an input module adapted to receive activation input from an input device associated with said computer device;
an object identifier adapted to:
receive an eye-tracking data signal describing a user's gaze point on and/or relative to said display,
identify an object or a part of an object at which said user is gazing using said gaze point and/or said activation input, and
determine said object to be an object of interest if current gaze conditions fulfil predetermined gaze conditions and/or if said activation input fulfils predetermined conditions; and
an action determining module adapted to:
determine a specific contextual action based on the received activation input and the object or object part of interest, and
provide instructions for execution of said specific contextual action.
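Claim 6 recites the same flow as claim 1, but decomposed into three cooperating modules. The class and method names below are illustrative assumptions mirroring the claim's module names, not an implementation from the patent.

```python
# Module decomposition sketch corresponding to claim 6; all names and
# the dwell-time gaze condition are illustrative assumptions.

class InputModule:
    """Receives activation input from an input device."""
    def __init__(self):
        self.last_input = None
    def receive(self, activation_input):
        self.last_input = activation_input

class ObjectIdentifier:
    """Maps a gaze point to an object of interest, gated on gaze conditions."""
    def __init__(self, objects, dwell_threshold_ms=300):
        self.objects = objects
        self.dwell_threshold_ms = dwell_threshold_ms
    def object_of_interest(self, gaze_point, dwell_ms):
        if dwell_ms < self.dwell_threshold_ms:
            return None
        x, y = gaze_point
        for obj in self.objects:
            ox, oy, w, h = obj["bounds"]
            if ox <= x < ox + w and oy <= y < oy + h:
                return obj
        return None

class ActionDeterminingModule:
    """Determines the contextual action from input plus object of interest."""
    def __init__(self, table):
        self.table = table
    def determine(self, activation_input, obj):
        if obj is None:
            return None
        return self.table.get((activation_input, obj["type"]))

objects = [{"type": "text", "bounds": (0, 0, 200, 20)}]
system_table = {("long_press", "text"): "show_context_menu"}
inp = InputModule()
inp.receive("long_press")
obj = ObjectIdentifier(objects).object_of_interest((50, 10), dwell_ms=500)
print(ActionDeterminingModule(system_table).determine(inp.last_input, obj))
# -> show_context_menu
```

Splitting identification from action determination keeps the gaze logic independent of the user-configurable action table.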
7. A method for manipulating objects or parts of objects and performing contextual actions related to the objects presented on a display of a computer device associated with an eye tracking system, said method comprising:
providing an eye-tracking data signal describing a user's gaze point on the display;
receiving first activation input from an input device;
performing a zooming action, wherein an area around said gaze point or an object or object part of interest is gradually enlarged;
receiving second activation input from said input device;
determining a specific contextual action based on the received second activation input and said enlarged area or object or object part of interest; and
executing said specific contextual action.
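Claim 7's two-step interaction (press to zoom gradually, second input to act on the enlarged area) can be sketched as below. The zoom factors, input names, and the action chosen on the second input are illustrative assumptions.

```python
# Sketch of the claim 7 zoom-then-act interaction; factors and input
# names ("press", "release") are hypothetical.

def zoom_area(gaze_point, width, height, factor):
    """Return the source area around the gaze point for a given zoom factor."""
    x, y = gaze_point
    w, h = width / factor, height / factor
    # A smaller source region rendered at full display size = magnification.
    return (x - w / 2, y - h / 2, w, h)

def zoom_then_act(gaze_point, first_input, second_input):
    area = None
    if first_input == "press":
        # Gradual enlargement: each step narrows the source region further.
        for factor in (1.5, 2.0, 3.0):
            area = zoom_area(gaze_point, 800, 600, factor)
    if second_input == "release" and area is not None:
        # Contextual action determined from second input + enlarged area.
        return ("activate", area)
    return None

result = zoom_then_act((400, 300), "press", "release")
print(result[0])  # -> activate
```

The gradual zoom lets an imprecise gaze point be refined before the contextual action fires, which is useful when targets are smaller than typical eye-tracker accuracy.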
Specification