Attention-based rendering and fidelity
First Claim
1. A method for attention-based rendering, the method comprising:
detecting via one or more sensors when a user is exhibiting a reaction at a point in time during display of content on a display screen, wherein the sensors include at least an eye-tracking device that detects eye-tracking data corresponding to the point in time when the user reaction is detected, the detected eye-tracking data indicative of reflected light from one or more eyes of the user at the point in time during display of the content on the display screen;
analyzing the eye-tracking data corresponding to the point in time when the user reaction is detected;
identifying a gaze direction based on a vector of the reflected light and an eye rotation based on changes in the reflected light;
locating a focus area on the display screen corresponding to the time when the user reaction is detected, wherein identifying a location of the focus area is based on the identified gaze direction and eye rotation; and
increasing processing power used to render the located focus area relative to processing power used to render a remainder of the display screen based on the detected user reaction while the user is currently focusing on the located focus area.
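As a rough illustration of the locating step, the gaze direction identified from the reflected light can be treated as a ray and intersected with the screen plane to find the focus area. The geometry below is a minimal sketch under assumed screen-space coordinates; the claim does not prescribe any particular model, and all names and parameters are illustrative.

```python
def locate_focus_area(eye_pos, gaze_dir, screen_w=1920, screen_h=1080):
    """Intersect a gaze ray with the screen plane (z = 0) to find the
    focus point. eye_pos is (x, y, z) in pixels with the eye at
    positive z; gaze_dir is a direction vector toward the screen.
    Illustrative model only -- the claim does not specify geometry."""
    if gaze_dir[2] >= 0:
        return None  # ray points away from the screen plane
    t = -eye_pos[2] / gaze_dir[2]              # ray/plane intersection
    x = eye_pos[0] + t * gaze_dir[0]
    y = eye_pos[1] + t * gaze_dir[1]
    # Clamp the intersection to the display bounds.
    return (min(max(x, 0.0), screen_w), min(max(y, 0.0), screen_h))

# Eye centered 600 px in front of the screen, gazing slightly down-right.
focus = locate_focus_area((960.0, 540.0, 600.0), (0.1, 0.1, -0.99))
```

The renderer would then boost processing in a region around the returned point, per the final limitation of the claim.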
Abstract
Methods and systems for attention-based rendering on an entertainment system are provided. A tracking device captures data associated with a user, which is used to determine that the user has reacted (e.g., visually or emotionally) to a particular part of the screen. Processing power is increased in that part of the screen, which increases the detail and fidelity of the graphics and/or the updating speed. Processing power is decreased in, and diverted away from, the areas of the screen that the user is not paying attention to, resulting in decreased detail and fidelity of the graphics and/or decreased updating speed.
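The reallocation the abstract describes — a fixed processing budget shifted toward the focus area and away from everything else — can be sketched as a weighted split over screen regions. The tile scheme, `boost` parameter, and function name below are assumptions for illustration; the patent describes the effect, not a specific allocation algorithm.

```python
def allocate_render_budget(tiles, focus_tile, total_budget=100.0, boost=3.0):
    """Divide a fixed processing budget among screen tiles, giving the
    tile the user is watching `boost` shares and every other tile one
    share. Increasing the focus tile's share necessarily diverts budget
    from the remaining tiles, mirroring the abstract's description."""
    shares = {t: (boost if t == focus_tile else 1.0) for t in tiles}
    scale = total_budget / sum(shares.values())
    return {t: s * scale for t, s in shares.items()}

# Four screen quadrants; the user is looking at the upper-right one.
budget = allocate_render_budget(["NW", "NE", "SW", "SE"], focus_tile="NE")
```

With four quadrants and a boost of 3, the focus quadrant receives half the total budget while the other three split the remainder evenly.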
20 Claims
1. A method for attention-based rendering (independent claim; its full text appears above under First Claim). Dependent claims: 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20.
8. A system for attention-based rendering, the system comprising:
a display screen;
a graphics processing unit that generates a display of content at a point in time, wherein the generated display is displayed on the display screen;
one or more sensors that detect that a user is exhibiting a reaction at the point in time to the content displayed on the display screen, wherein the sensors include at least a tracking device that detects eye-tracking data corresponding to the point in time when the user reaction is detected, the detected eye-tracking data of the user indicative of reflected light from one or more eyes of the user at the point in time during display of the content on the display screen;
a memory; and
a processor that executes instructions stored in the memory, wherein execution of the instructions by the processor:
analyzes the eye-tracking data corresponding to the point in time when the user reaction is detected,
identifies a gaze direction based on a vector of the reflected light and an eye rotation based on changes in the reflected light,
locates a focus area on the display screen corresponding to the time when the user reaction is detected, wherein identifying a location of the focus area is based on the identified gaze direction and eye rotation, and
increases processing power used to render the located focus area relative to processing power used to render a remainder of the display screen based on the detected user reaction while the user is currently focusing on the located focus area.
Dependent claims: 9, 10, 11, 12, 13, 14.
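The system claim combines sensors, a GPU, memory, and a processor into a per-frame loop: poll the sensors, and when a reaction is detected, shift rendering power toward the analyzed focus area. The stub classes and method names below are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    """One sensor reading: whether a user reaction was detected, and the
    screen point derived from the analyzed eye-tracking data."""
    reaction_detected: bool
    gaze_point: tuple

@dataclass
class StubGPU:
    """Stand-in for the claimed graphics processing unit; records which
    regions were boosted so the behavior can be inspected."""
    boosted: list = field(default_factory=list)
    def boost_region(self, focus):
        self.boosted.append(focus)   # divert processing power here
    def render_frame(self):
        pass                         # draw the frame at current settings

class AttentionRenderer:
    """Event-loop sketch of the claimed system: on each tick the sensor
    sample is checked, and a detected reaction makes the gaze point the
    focus area for the GPU's next frame."""
    def __init__(self, gpu):
        self.gpu = gpu
    def tick(self, sample):
        if sample.reaction_detected:
            self.gpu.boost_region(sample.gaze_point)
        self.gpu.render_frame()

gpu = StubGPU()
renderer = AttentionRenderer(gpu)
renderer.tick(Sample(reaction_detected=False, gaze_point=(0, 0)))
renderer.tick(Sample(reaction_detected=True, gaze_point=(1020, 600)))
```

Only the frame with a detected reaction triggers a boost, matching the claim's condition that the increase occurs based on the detected user reaction while the user is focusing on the located area.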
15. A non-transitory computer-readable storage medium having embodied thereon a program, the program being executable by a processor to perform a method for attention-based rendering, the method comprising:
detecting via one or more sensors when a user is exhibiting a reaction at a point in time during display of content on a display screen, wherein the sensors include at least an eye-tracking device that detects eye-tracking data corresponding to the point in time when the user reaction is detected, the eye-tracking data indicative of reflected light from one or more eyes of the user at the point in time during display of the content on the display screen;
analyzing the eye-tracking data corresponding to the point in time when the user reaction is detected;
identifying a gaze direction based on a vector of the reflected light and an eye rotation based on changes in the reflected light;
locating a focus area on the display screen corresponding to the time when the user reaction is detected, wherein identifying a location of the focus area is based on the identified gaze direction and eye rotation; and
increasing processing power used to render the located focus area relative to processing power used to render a remainder of the display screen based on the detected user reaction while the user is currently focusing on the located focus area.
Specification