System and method of gaze predictive rendering of a focal area of an animation
Abstract
Individual images for individual frames of an animation may be rendered to include individual focal areas. A focal area may include one or more of a foveal region corresponding to a gaze direction of a user, an area surrounding the foveal region, and/or other components. The foveal region may comprise a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight. A focal area within an image may be rendered based on parameter values of rendering parameters that are different from parameter values for an area outside the focal area.
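The rendering strategy the abstract describes, full fidelity inside a focal area around the gaze point and cheaper parameter values outside it, can be illustrated with a minimal sketch. This is not the patented implementation; the `scene` callback, `foveal_radius`, and `peripheral_scale` names are hypothetical, and "cheaper parameters" is reduced here to simple resolution downscaling with nearest-neighbour upsampling.

```python
import numpy as np

def render_with_focal_area(scene, gaze_xy, frame_shape=(480, 640),
                           foveal_radius=60, peripheral_scale=4):
    """Render a frame where pixels inside the focal area around the gaze
    point come from a full-resolution pass, while the periphery comes from
    a low-resolution pass that is upsampled to the display size."""
    h, w = frame_shape
    # Low-resolution pass covering the whole field of view (the periphery).
    low = scene(h // peripheral_scale, w // peripheral_scale)
    frame = np.kron(low, np.ones((peripheral_scale, peripheral_scale)))[:h, :w]
    # Full-resolution pass, blended in only inside the circular foveal region.
    full = scene(h, w)
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= foveal_radius ** 2
    frame[mask] = full[mask]
    return frame
```

In a real renderer the two passes would share scene traversal, and the low-resolution pass is what saves the work; here `scene(h, w)` simply stands in for any function that produces an image of the virtual space at a requested resolution.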
18 Claims
1. A system configured for gaze-predictive rendering of a focal area of an animation presented on a display, wherein the animation includes a sequence of frames, the sequence of frames including a first frame and a plurality of subsequent frames, the system comprising:
one or more physical processors configured by machine-readable instructions to:
obtain state information describing a state of a virtual space, the state at an individual point in time defining one or more virtual objects within the virtual space and their positions;
determine a field of view of the virtual space, the frames of the animation being images of the virtual space within the field of view, such that the first frame is an image of the virtual space within the field of view at a point in time that corresponds to the first frame;
apply the state information for a subsequent frame in the plurality of subsequent frames for the animation to a machine learning model to generate a prediction, the prediction relating to one or more expected saccades and one or more gaze directions of a user currently viewing the presented animation, wherein the machine learning model is trained based on statistical targets of eye fixation corresponding to the user's foveal region, and wherein the statistical targets of eye fixation are precomputed on a database of previous eye-tracked viewing sessions of the animation;
determine, prior to rendering the subsequent frame, the focal area of the subsequent frame within the field of view based on the prediction, such that the focal area includes a foveal region corresponding to the one or more expected saccades and the one or more gaze directions and includes one or more regions outside of the foveal region, wherein the foveal region is a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight; and
render, from the state information, one or more images for the subsequent frame of the animation, the one or more images depicting the virtual space within the field of view determined at individual points in time, wherein an area outside of the focal area of the subsequent frame is rendered at a lower resolution than that of the focal area to reduce latency.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
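Claim 1 recites a machine learning model trained on statistical targets of eye fixation precomputed from previous eye-tracked viewing sessions. As a toy stand-in for such a model, the sketch below aggregates past gaze samples into a per-frame fixation histogram over a coarse screen grid and predicts the most frequently fixated cell, flagging an expected saccade when that target lies far from the current gaze. The grid resolution, the argmax policy, and the saccade threshold are illustrative assumptions, not the claimed model.

```python
import numpy as np

def precompute_fixation_targets(sessions, num_frames, grid=(12, 16)):
    """Aggregate gaze samples from previous eye-tracked viewing sessions
    into a per-frame fixation histogram over a coarse screen grid.
    Each session is a list of (frame_index, gaze_y, gaze_x) with gaze
    coordinates normalised to [0, 1)."""
    stats = np.zeros((num_frames,) + grid)
    for session in sessions:
        for frame, gy, gx in session:
            stats[frame, int(gy * grid[0]), int(gx * grid[1])] += 1
    return stats

def predict_gaze(stats, frame, current_gaze, saccade_threshold=0.2):
    """Predict the most likely gaze target for an upcoming frame, and flag
    an expected saccade when the target is far from the current gaze."""
    gy, gx = np.unravel_index(np.argmax(stats[frame]), stats[frame].shape)
    h, w = stats[frame].shape
    target = ((gy + 0.5) / h, (gx + 0.5) / w)  # centre of the grid cell, normalised
    dist = np.hypot(target[0] - current_gaze[0], target[1] - current_gaze[1])
    return target, bool(dist > saccade_threshold)
```

A trained model as recited in the claim would generalise from the state information of the upcoming frame rather than look up a fixed histogram; this lookup only shows how precomputed fixation statistics can drive a per-frame gaze prediction before rendering begins.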
10. A method of gaze-predictive rendering of a focal area of an animation presented on a display, the animation including a sequence of frames, the sequence of frames including a first frame and a plurality of subsequent frames, the method being implemented in a computer system comprising one or more physical processors and storage media storing machine-readable instructions, the method comprising:
obtaining state information describing a state of a virtual space, the state at an individual point in time defining one or more virtual objects within the virtual space and their positions;
determining a field of view of the virtual space, the frames of the animation being images of the virtual space within the field of view, such that the first frame is an image of the virtual space within the field of view at a point in time that corresponds to the first frame;
applying the state information for a subsequent frame in the plurality of subsequent frames for the animation to a machine learning model to generate a prediction, the prediction relating to one or more expected saccades and one or more gaze directions of a user currently viewing the presented animation, wherein the machine learning model is trained based on statistical targets of eye fixation corresponding to the user's foveal region, and wherein the statistical targets of eye fixation are precomputed on a database of previous eye-tracked viewing sessions of the animation;
determining, prior to rendering the subsequent frame, the focal area of the subsequent frame within the field of view based on the prediction, such that the focal area includes a foveal region corresponding to the one or more expected saccades and the one or more gaze directions and includes one or more regions outside of the foveal region, wherein the foveal region is a region along the user's line of sight that permits high visual acuity with respect to a periphery of the line of sight; and
rendering, from the state information, one or more images for the subsequent frame of the animation, the one or more images depicting the virtual space within the field of view determined at individual points in time, wherein an area outside of the focal area of the subsequent frame is rendered at a lower resolution than that of the focal area to reduce latency.
- View Dependent Claims (11, 12, 13, 14, 15, 16, 17, 18)
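The essential ordering in the claimed method is that the prediction and focal-area determination happen before each frame is rendered. A minimal orchestration sketch, with all step implementations passed in as callbacks (the function and parameter names here are hypothetical, not taken from the patent):

```python
def run_gaze_predictive_loop(states, predict, make_focal_area, render):
    """Per-frame loop mirroring the claimed method's order of operations:
    the gaze prediction runs before rendering, so the renderer already
    knows which area outside the focal area may use lower resolution."""
    frames = []
    gaze = (0.5, 0.5)                           # assumed initial gaze for the first frame
    for state in states:
        gaze, saccade = predict(state, gaze)    # expected gaze directions / saccades
        focal = make_focal_area(gaze, saccade)  # focal area fixed prior to rendering
        frames.append(render(state, focal))     # periphery rendered more cheaply
    return frames
```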
Specification