Systems and methods for providing audio to a user based on gaze input
Abstract
According to the invention, a method for providing audio to a user is disclosed. The method may include determining, with an eye tracking device, a gaze point of a user on a display. The method may also include causing, with a computer system, an audio device to produce audio to the user, where content of the audio may be based at least in part on the gaze point of the user on the display.
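The abstract leaves the mapping from gaze point to audio content unspecified. As a purely illustrative sketch, not the patented implementation, gaze-dependent audio selection can be reduced to a region lookup; the region names, screen bounds, channel numbers, and file names below are all invented for the example:

```python
# Hypothetical scene layout: each on-screen region has an associated
# sound channel and a virtual sound. All values are illustrative.
REGIONS = {
    "waterfall": {"bounds": (0, 0, 400, 300), "channel": 1, "sound": "water.ogg"},
    "bird":      {"bounds": (400, 0, 800, 300), "channel": 2, "sound": "bird.ogg"},
}

def region_at(gaze_x, gaze_y):
    """Return the name of the region whose bounds contain the gaze point."""
    for name, region in REGIONS.items():
        x0, y0, x1, y1 = region["bounds"]
        if x0 <= gaze_x < x1 and y0 <= gaze_y < y1:
            return name
    return None

def audio_for_gaze(gaze_x, gaze_y):
    """Pick the sound channel and content for the region the user gazes at."""
    name = region_at(gaze_x, gaze_y)
    if name is None:
        return None
    region = REGIONS[name]
    return {"channel": region["channel"], "content": region["sound"]}
```

In a real system the gaze point would come from the eye tracking device and the result would be routed to an audio device; here both ends are stubbed out.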
11 Claims
1. A method for providing visual depth of field adjustment to a scene on a display device, comprising:

displaying a scene on a display device, the scene comprising at least a first portion and a second portion;
determining a gaze direction of a user with an eye tracking device;
determining a gaze target where the gaze direction intersects the scene, wherein the gaze target comprises an object in the first portion of the scene;
determining a particular sound channel that is associated with the first portion of the scene;
altering a portion of the displayed scene instead of altering an entirety of the scene by modifying a depth of the second portion to be different than a depth of the first portion, wherein modifying the depth of the second portion comprises:
simulating a ray being projected from the object in the first portion of the scene to an object in the second portion of the scene,
determining a distance between the object in the first portion of the scene and the object in the second portion of the scene, and
altering a depth of the object in the second portion of the scene differently than the object in the first portion of the scene; and
causing audio to be produced to the user, wherein the audio is produced by the particular sound channel associated with the object in the first portion of the scene, and content of the audio is based at least in part on (i) the gaze target, (ii) head information of the user, and (iii) at least one or more virtual sounds produced by the object in the first portion of the scene.
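Claim 1 recites the ray-projection, distance, and selective depth-alteration steps functionally, without an implementation. A minimal sketch, in which the scene coordinates, the linear blur model, and the `max_blur` cap are all assumptions made for illustration, might be:

```python
import math

def distance(p, q):
    """Length of a ray simulated from point p to point q in scene space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def blur_radius(focused_obj, other_obj, strength=0.5, max_blur=8.0):
    """Blur an out-of-focus object in proportion to its distance from the
    gazed-at (focused) object, leaving the focused object itself sharp.
    The linear model and the cap are illustrative choices only."""
    d = distance(focused_obj, other_obj)
    return min(max_blur, strength * d)
```

Only the out-of-focus object's rendering is altered; the focused object keeps a blur radius of zero, which matches the claim's "altering a portion of the displayed scene instead of altering an entirety of the scene."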
2. The method of claim 1, wherein determining the distance and the direction between the gaze target and the first portion of the scene further comprises:

dividing the scene into horizontal sections; and
determining a distance between at least two of the horizontal sections.
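The horizontal-section scheme of claim 2 can be sketched as equal-height bands with distances taken between band centers; the equal-height split and the center-to-center metric are assumptions, since the claim does not fix either:

```python
def horizontal_sections(scene_height, n_sections):
    """Divide the scene into equal-height horizontal bands,
    returned as (top_row, bottom_row) pairs."""
    h = scene_height / n_sections
    return [(i * h, (i + 1) * h) for i in range(n_sections)]

def section_distance(sections, i, j):
    """Distance between the vertical centers of two sections."""
    center_i = sum(sections[i]) / 2
    center_j = sum(sections[j]) / 2
    return abs(center_i - center_j)
```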
3. The method of claim 2, wherein altering display of the first portion of the scene comprises:

altering display of each of at least one of the horizontal sections to simulate a depth of field.
4. The method of claim 1, wherein the altering of the display comprises altering display of the first portion of the scene based at least in part on the distance and the direction.
5. The method of claim 1, wherein the distance is measured from a sound producing object in the second portion of the display to the gaze target in the first portion of the display, and the direction is measured from the gaze target to the sound producing object.
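Claim 5's distance and direction between the sound producing object and the gaze target amount to straightforward vector geometry; a sketch in 2-D screen coordinates (the choice of 2-D and of a unit-vector representation for direction are assumptions) is:

```python
import math

def distance_and_direction(sound_obj, gaze_target):
    """Distance measured from the sound producing object to the gaze target,
    and direction as a unit vector pointing from the gaze target toward the
    sound producing object, in 2-D screen coordinates."""
    dx = sound_obj[0] - gaze_target[0]
    dy = sound_obj[1] - gaze_target[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0, (0.0, 0.0)
    return dist, (dx / dist, dy / dist)
```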
6. A system for providing visual depth of field adjustment to a scene, comprising:

a display device for displaying a scene comprising a first portion and a second portion;
an eye tracking device for determining a gaze direction of a user; and
at least one processor for:
determining a gaze target where the gaze direction intersects the first portion of the scene and determining a particular sound channel that is associated with the first portion of the scene;
altering display of the first portion of the scene instead of altering an entirety of the scene, wherein the altering comprises modifying a depth of the second portion to be different than a depth of the first portion, wherein modifying the depth of the second portion comprises:
simulating a ray being projected from the object in the first portion of the scene to an object in the second portion of the scene,
determining a distance between the object in the first portion of the scene and the object in the second portion of the scene, and
altering a depth of the object in the second portion of the scene differently than the object in the first portion of the scene; and
causing audio to be produced to the user, wherein the audio is produced by the particular sound channel associated with the object in the first portion of the scene, and content of the audio is based at least in part on (i) the gaze target, (ii) head information of the user, and (iii) at least one or more virtual sounds produced by the object in the first portion of the scene.
7. The system of claim 6, wherein altering display of the first portion of the scene comprises:

enlarging the gaze target relative to the first portion of the scene.
8. The system of claim 6, wherein determining the distance and the direction between the gaze target and the first portion of the scene comprises:

dividing the scene into a plurality of horizontal sections;
determining a distance between at least two of the horizontal sections;
determining which of the horizontal sections contains the gaze target;
determining which of the horizontal sections contains the first portion of the scene; and
determining the distance and the direction between the gaze target and the first portion of the scene based on at least the distance between the horizontal section that contains the gaze target and the horizontal section that contains the first portion of the scene.
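Claim 8 adds a containment test on top of the section scheme: find which band holds the gaze target, which holds the first portion, then take the offset between band centers. A sketch (the signed-offset encoding of "distance and direction" is an assumption; the claim does not specify a representation) is:

```python
def section_index(sections, y):
    """Index of the horizontal section whose row range contains y."""
    for i, (top, bottom) in enumerate(sections):
        if top <= y < bottom:
            return i
    return None

def gaze_to_portion(sections, gaze_y, portion_y):
    """Signed offset between section centers, from the section containing the
    gaze target to the section containing the first portion: the magnitude is
    the distance, the sign gives the direction (positive = further down)."""
    gi = section_index(sections, gaze_y)
    pi = section_index(sections, portion_y)
    if gi is None or pi is None:
        return None
    center = lambda i: sum(sections[i]) / 2
    return center(pi) - center(gi)
```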
9. A non-transitory computer-readable medium having instructions stored thereon executable by a computing device to cause the computing device to perform operations comprising:

displaying a scene on a display device, the scene comprising a first portion and a second portion;
determining a gaze direction of a user with an eye tracking device;
determining a gaze target where the gaze direction intersects the first portion of the scene and determining a particular sound channel that is associated with the first portion of the scene;
altering display of the first portion of the scene instead of altering an entirety of the scene, wherein the altering comprises modifying a depth of the second portion to be different than a depth of the first portion, wherein modifying the depth of the second portion comprises:
simulating a ray being projected from the object in the first portion of the scene to an object in the second portion of the scene,
determining a distance between the object in the first portion of the scene and the object in the second portion of the scene, and
altering a depth of the object in the second portion of the scene differently than the object in the first portion of the scene; and
causing audio to be produced to the user, wherein the audio is produced by the particular sound channel associated with the object in the first portion of the scene, and content of the audio is based at least in part on (i) the gaze target, (ii) head information of the user, and (iii) at least one or more virtual sounds produced by the object in the first portion of the scene.
10. The non-transitory computer-readable medium of claim 9, wherein determining the distance and the direction between the gaze target and the portion of the scene comprises:

simulating a first ray extending from the gaze target to the first portion of the scene; and
determining the distance and the direction based on a distance and a direction traversed by the first ray.
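Claim 10's ray-based variant generalizes the same geometry to arbitrary scene coordinates: the distance is the length traversed by the ray, and the direction its per-axis direction cosines. The 3-D coordinates and the direction-cosine representation are illustrative assumptions:

```python
import math

def ray_traversal(gaze_target, portion_point):
    """Simulate a first ray extending from the gaze target to a point in the
    first portion of the scene; report the distance traversed by the ray and
    its direction as per-axis direction cosines."""
    delta = [p - g for g, p in zip(gaze_target, portion_point)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist == 0:
        return 0.0, tuple(0.0 for _ in delta)
    return dist, tuple(d / dist for d in delta)
```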
11. The non-transitory computer-readable medium of claim 9, wherein altering display of the portion of the scene comprises:
altering a rendering characteristic of the portion of the scene.
Specification