Systems and methods for providing audio to a user based on gaze input
Abstract
According to the invention, a method for providing audio to a user is disclosed. The method may include determining, with an eye tracking device, a gaze point of a user on a display. The method may also include causing, with a computer system, an audio device to produce audio to the user, where content of the audio may be based at least in part on the gaze point of the user on the display.
Claims (20)
1. A non-transitory machine readable medium with instructions stored thereon, the instructions executable by at least one processor for at least:
- determining a gaze point of a user on a display;
- determining a virtual distance and a virtual direction from each of a plurality of virtual sound sources to the gaze point of the user on the display;
- determining a number of sound channels produced by an audio device;
- determining an area of the display associated with each sound channel; and
- causing the audio device to produce audio, wherein the audio is based on at least one sound channel associated with an area of the display in which the gaze point of the user is located, and each virtual sound source is produced in a manner based at least in part on the virtual distance and the virtual direction from each virtual sound source to the gaze point of the user.
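The steps recited in claim 1 can be sketched in code. Everything below is an illustrative assumption, not taken from the patent: the two-channel layout, the normalized display coordinates, and all function names are hypothetical.

```python
import math

# Hypothetical display split into left/right halves, one area per stereo
# channel, expressed as (x0, y0, x1, y1) in normalized display coordinates.
CHANNEL_AREAS = {
    "left":  (0.0, 0.0, 0.5, 1.0),
    "right": (0.5, 0.0, 1.0, 1.0),
}

def channel_for_gaze(gaze):
    """Return the sound channel whose display area contains the gaze point."""
    gx, gy = gaze
    for channel, (x0, y0, x1, y1) in CHANNEL_AREAS.items():
        if x0 <= gx < x1 and y0 <= gy < y1:
            return channel
    return None

def distance_and_direction(source, gaze):
    """Virtual distance and unit direction from a virtual sound source
    to the gaze point, both in display coordinates."""
    dx, dy = gaze[0] - source[0], gaze[1] - source[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0, (0.0, 0.0)
    return dist, (dx / dist, dy / dist)
```

A renderer could then weight each virtual source's contribution to the selected channel by the returned distance and direction; the patent leaves the exact mixing function open.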
2. The non-transitory machine readable medium of claim 1, wherein each virtual sound source being produced in a manner based at least in part on the virtual distance and the virtual direction from each virtual sound source to the gaze point of the user comprises:
- varying a volume level of each virtual sound source based on the virtual distance of each virtual sound source to the gaze point of the user.
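The volume variation of claim 2 can be sketched with a simple attenuation law. The inverse-distance form and the `rolloff` constant are assumptions for illustration; the claims do not specify a formula.

```python
def volume_for_distance(distance, max_volume=1.0, rolloff=1.0):
    """Attenuate a virtual source's volume as its virtual distance from the
    gaze point grows: full volume at the gaze point, fading with distance.

    Inverse-distance law chosen for illustration only."""
    return max_volume / (1.0 + rolloff * distance)
```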
3. The non-transitory machine readable medium of claim 1, wherein the virtual direction comprises at least one selection from a group consisting of:
- an X-direction;
- a Y-direction; and
- a Z-direction.
4. The non-transitory machine readable medium of claim 1, wherein each virtual sound source being produced in a manner based at least in part on the virtual distance and the virtual direction from each virtual sound source to the gaze point of the user comprises:
- producing multiple virtual audio sources at multiple dynamic volumes.
5. The non-transitory machine readable medium of claim 1, wherein the instructions are further executable by at least one processor for at least:
- determining head information associated with the user; and
wherein content of the audio is further based at least in part on the head information associated with the user.
6. The non-transitory machine readable medium of claim 5, wherein head information comprises at least one selection from a group consisting of:
- position of the user's head;
- angle of the user's head;
- orientation of the user's head; and
- size of the user's head.
7. The non-transitory machine readable medium of claim 5, wherein determining head information associated with the user comprises:
- determining a position and/or orientation of the user's eyes.
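Claims 5 through 7 base the audio on head information. As an illustrative sketch only (the field names, units, and the yaw-to-pan mapping are assumptions, not claim language), head orientation could drive a simple stereo balance:

```python
import math
from dataclasses import dataclass

@dataclass
class HeadInfo:
    # Fields mirror the selections recited in claim 6; units are assumed.
    position: tuple   # (x, y, z) head position
    yaw: float        # head angle about the vertical axis, in radians
    size: float       # head size, arbitrary units

def stereo_gains(head: HeadInfo):
    """Illustrative left/right channel gains derived from head yaw: in this
    sketch, positive yaw shifts level toward the left channel."""
    pan = max(-1.0, min(1.0, math.sin(head.yaw)))  # -1 full right, +1 full left
    left = (1.0 + pan) / 2.0
    right = (1.0 - pan) / 2.0
    return left, right
```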
8. A system, wherein the system comprises:
- an eye tracking device for at least determining a gaze point of a user on a display; and
- a processor for at least:
  - determining a virtual distance and direction from each of a plurality of virtual sound sources to the gaze point of the user on the display;
  - determining a number of sound channels produced by an audio device;
  - determining an area of the display associated with each sound channel; and
  - causing the audio device to produce audio, wherein the audio is based on at least one sound channel associated with an area of the display in which the gaze point of the user is located, and each virtual sound source is produced in a manner based at least in part on the virtual distance and direction from each virtual sound source to the gaze point of the user.
9. The system of claim 8, wherein each virtual sound source being produced in a manner based at least in part on the virtual distance and the virtual direction from each virtual sound source to the gaze point of the user comprises:
- varying a volume level of each virtual sound source based on the virtual distance of each virtual sound source to the gaze point of the user.
10. The system of claim 8, wherein the virtual direction comprises at least one selection from a group consisting of:
- an X-direction;
- a Y-direction; and
- a Z-direction.
11. The system of claim 8, wherein each virtual sound source being produced in a manner based at least in part on the virtual distance and the virtual direction from each virtual sound source to the gaze point of the user comprises:
- producing multiple virtual audio sources at multiple dynamic volumes.
12. The system of claim 8, wherein the processor is further for at least:
- determining head information associated with the user; and
wherein content of the audio is further based at least in part on the head information associated with the user.
13. The system of claim 12, wherein head information comprises at least one selection from a group consisting of:
- position of the user's head;
- angle of the user's head;
- orientation of the user's head; and
- size of the user's head.
14. The system of claim 12, wherein determining head information associated with the user comprises:
- determining a position and/or orientation of the user's eyes.
15. A method, comprising:
- determining a gaze point of a user on a display;
- determining a virtual distance and direction from each of a plurality of virtual sound sources to the gaze point of the user on the display;
- determining a number of sound channels produced by an audio device;
- determining an area of the display associated with each sound channel; and
- causing the audio device to produce audio, wherein the audio is based on at least one sound channel associated with an area of the display in which the gaze point of the user is located, and each virtual sound source is produced in a manner based at least in part on the virtual distance and direction from each virtual sound source to the gaze point of the user.
16. The method of claim 15, wherein each virtual sound source being produced in a manner based at least in part on the virtual distance and the virtual direction from each virtual sound source to the gaze point of the user comprises:
- varying a volume level of each virtual sound source based on the virtual distance of each virtual sound source to the gaze point of the user.
17. The method of claim 15, wherein the virtual direction comprises at least one selection from a group consisting of:
- an X-direction;
- a Y-direction; and
- a Z-direction.
18. The method of claim 15, wherein each virtual sound source being produced in a manner based at least in part on the virtual distance and the virtual direction from each virtual sound source to the gaze point of the user comprises:
- producing multiple virtual audio sources at multiple dynamic volumes.
19. The method of claim 15, wherein the method further comprises:
- determining head information associated with the user; and
wherein content of the audio is further based at least in part on the head information associated with the user.
20. The method of claim 19, wherein determining head information associated with the user comprises:
- determining a position and/or orientation of the user's eyes.
Specification