Systems and methods for providing audio to a user based on gaze input
Abstract
According to the invention, a method for providing audio to a user is disclosed. The method may include determining, with an eye tracking device, a gaze point of a user on a display. The method may also include causing, with a computer system, an audio device to produce audio to the user, where content of the audio may be based at least in part on the gaze point of the user on the display.
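The abstract's core idea, selecting audio content from the user's on-screen gaze point, can be sketched as follows. This is a minimal illustration, not the patented implementation; the region boundaries and clip names are hypothetical:

```python
# Sketch: choose audio content based on where the user is looking.
# The two-region split and the clip names are illustrative assumptions.

def region_for_gaze(gaze_x, gaze_y, display_w, display_h):
    """Map a gaze point to a named display region (left/right half)."""
    return "left" if gaze_x < display_w / 2 else "right"

# Hypothetical mapping from display region to an audio clip.
REGION_AUDIO = {"left": "waterfall.ogg", "right": "birdsong.ogg"}

def audio_for_gaze(gaze_x, gaze_y, display_w=1920, display_h=1080):
    """Return the audio content associated with the gazed-at region."""
    return REGION_AUDIO[region_for_gaze(gaze_x, gaze_y, display_w, display_h)]
```

In practice the region map would come from the virtual environment's layout rather than a fixed dictionary.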
71 Citations
20 Claims
1. A system for providing audio to a user gazing at a display, the system comprising:
an eye tracking device for at least determining a gaze point of a user on a display;
a head tracking device for at least determining head information of the user;
a processor configured for at least:
determining a plurality of virtual sound sources in a virtual environment, each of the virtual sound sources producing one or more virtual sounds in the virtual environment;
determining how many sound channels are produced by an audio device;
determining a particular area of the display which is associated with a virtual location of a particular one of the virtual sound sources;
determining a particular one of the sound channels that is associated with the particular area of the display;
causing audio to be produced to the user, wherein the audio is produced by the particular sound channel associated with the particular area of the display, and content of the audio is based at least in part on (i) the gaze point of the user on the display, (ii) the head information of the user, and (iii) at least one of the one or more virtual sounds produced by the particular virtual sound source at the virtual location;
determining a change in head position of the user; and
causing a change in the content of the audio produced to the user, wherein the change is based at least in part on the change in head position of the user and the changed content is produced at least by the particular sound channel associated with the particular area of the display.
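The channel-selection steps of claim 1 can be sketched roughly as follows. This is an illustrative reading under assumed conventions (vertical display strips, one strip per channel, a simple cosine attenuation model for head yaw); the claim itself does not specify any of these:

```python
import math

# Sketch of claim 1's channel selection: a virtual sound source's location
# projects to an area of the display, and each display area is associated
# with one of the audio device's sound channels. The strip layout and the
# attenuation model below are illustrative assumptions, not claim language.

def display_area(source_x, display_w, n_areas):
    """Divide the display into n_areas vertical strips; return the strip index."""
    strip = int(source_x / display_w * n_areas)
    return min(strip, n_areas - 1)

def channel_for_source(source_x, display_w, n_channels):
    """Associate a display area with a sound channel (one strip per channel)."""
    return display_area(source_x, display_w, n_channels)

def gain_for_head_yaw(source_x, display_w, head_yaw_deg):
    """Attenuate a source as the head turns away from it (cosine model)."""
    # Map the source's horizontal position to an angle in [-45, +45] degrees.
    source_angle = (source_x / display_w - 0.5) * 90.0
    return max(0.0, math.cos(math.radians(source_angle - head_yaw_deg)))
```

Turning the head toward a source raises its gain; turning away lowers it, which is one way the "change in head position" could change the audio content.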
2. The system for providing audio to the user gazing at the display of claim 1, further comprising:
a wearable device, wherein the wearable device is configured to be worn by the user and comprises: the eye tracking device; and the display.
3. The system for providing audio to the user gazing at the display of claim 2, further comprising:
the audio device for producing audio to the user.
4. The system for providing audio to the user gazing at the display of claim 3, wherein the wearable device further comprises:
the audio device.
5. The system for providing audio to the user gazing at the display of claim 3, wherein:
causing the change in the content of the audio produced to the user is further based at least in part on a number of sound channels produced by the audio device.
6. The system for providing audio to the user gazing at the display of claim 5, wherein the sound channels comprise:
real sound channels or simulated sound channels.
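Claims 5 and 6 allow the sound channels to be real or simulated. One conventional way to simulate a multi-channel layout on a two-channel device is constant-power panning; the sketch below uses that standard technique, which the claims do not mandate:

```python
import math

# Sketch: simulate N virtual channels on a 2-channel (stereo) device using
# constant-power panning. The even left-to-right spread is an assumption.

def pan_gains(channel_index, n_channels):
    """Left/right gains for virtual channel i of n, spread evenly from hard
    left (i = 0) to hard right (i = n - 1), with constant total power."""
    pos = channel_index / (n_channels - 1) if n_channels > 1 else 0.5
    angle = pos * math.pi / 2          # 0 -> hard left, pi/2 -> hard right
    return math.cos(angle), math.sin(angle)
```

Because the gains lie on the unit circle, perceived loudness stays roughly constant as a source pans across the simulated channels.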
7. The system for providing audio to the user gazing at the display of claim 1, wherein the head position of the user comprises at least one selection from a group consisting of:
position of the user's head; tilt of the user's head; and orientation of the user's head.
8. The system for providing audio to the user gazing at the display of claim 1, wherein determining the head position of the user or the change in head position of the user comprises:
determining a position and/or orientation of the user's eyes, or a change in the position and/or orientation of the user's eyes.
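Claim 8 permits head position and orientation to be derived from the position and/or orientation of the user's eyes. A minimal sketch under an assumed tracker coordinate frame (x toward the user's right, z toward the display); the geometry and wrapping convention are illustrative, not from the patent:

```python
import math

# Sketch: infer head yaw from the 3D positions of the two eyes, as claim 8
# permits. Coordinates are assumed to be (x, y, z) in the tracker's frame.

def head_yaw_from_eyes(left_eye, right_eye):
    """Yaw angle (degrees) of the interocular axis about the vertical axis.
    0 degrees means the head faces the display squarely."""
    dx = right_eye[0] - left_eye[0]
    dz = right_eye[2] - left_eye[2]
    return math.degrees(math.atan2(dz, dx))

def head_yaw_change(prev_yaw, new_yaw):
    """Signed change in yaw, wrapped to (-180, 180] degrees."""
    return (new_yaw - prev_yaw + 180.0) % 360.0 - 180.0
```

The wrapped delta from `head_yaw_change` is the kind of "change in head position" that claim 1 feeds back into the audio content.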
9. A method for providing audio to a user gazing at a display, the method comprising:
determining a gaze point of a user on a display;
determining head information of the user;
determining a plurality of virtual sound sources in a virtual environment, each of the virtual sound sources producing one or more virtual sounds in the virtual environment;
determining how many sound channels are produced by an audio device;
determining a particular area of the display which is associated with a virtual location of a particular one of the virtual sound sources;
determining a particular one of the sound channels that is associated with the particular area of the display;
causing audio to be produced to the user, wherein the audio is produced by the particular sound channel associated with the particular area of the display, and content of the audio is based at least in part on (i) the gaze point of the user on the display, (ii) the head information of the user, and (iii) at least one of the one or more virtual sounds produced by the particular virtual sound source at the virtual location;
determining a change in head position of the user; and
causing a change in the content of the audio produced to the user, wherein the change is based at least in part on the change in head position of the user and the changed content is produced at least by the particular sound channel associated with the particular area of the display.
10. The method for providing audio to the user gazing at the display of claim 9, further comprising:
producing audio to the user.
11. The method for providing audio to the user gazing at the display of claim 9, wherein:
causing the change in the content of the audio produced to the user is further based at least in part on a number of sound channels produced by the audio device.
12. The method for providing audio to the user gazing at the display of claim 11, wherein the sound channels comprise:
real sound channels or simulated sound channels.
13. The method for providing audio to the user gazing at the display of claim 9, wherein the head position of the user comprises at least one selection from a group consisting of:
position of the user's head; tilt of the user's head; and orientation of the user's head.
14. The method for providing audio to the user gazing at the display of claim 9, wherein determining the head position of the user or the change in head position of the user comprises:
determining a position and/or orientation of the user's eyes, or a change in the position and/or orientation of the user's eyes.
15. A non-transitory machine readable medium having instructions stored thereon for providing audio to a user gazing at a display, the instructions executable by one or more processors for at least:
determining a gaze point of a user on a display;
determining head information of the user;
determining a plurality of virtual sound sources in a virtual environment, each of the virtual sound sources producing one or more virtual sounds in the virtual environment;
determining how many sound channels are produced by an audio device;
determining a particular area of the display which is associated with a virtual location of a particular one of the virtual sound sources;
determining a particular one of the sound channels that is associated with the particular area of the display;
causing audio to be produced to the user, wherein the audio is produced by the particular sound channel associated with the particular area of the display, and content of the audio is based at least in part on (i) the gaze point of the user on the display, (ii) the head information of the user, and (iii) at least one of the one or more virtual sounds produced by the particular virtual sound source at the virtual location;
determining a change in head position of the user; and
causing a change in the content of the audio produced to the user, wherein the change is based at least in part on the change in head position of the user and the changed content is produced at least by the particular sound channel associated with the particular area of the display.
16. The non-transitory machine readable medium of claim 15, wherein the instructions are further executable for at least:
producing audio to the user.
17. The non-transitory machine readable medium of claim 15, wherein:
causing the change in the content of the audio produced to the user is further based at least in part on a number of sound channels produced by the audio device.
18. The non-transitory machine readable medium of claim 17, wherein the sound channels comprise:
real sound channels or simulated sound channels.
19. The non-transitory machine readable medium of claim 15, wherein the head position of the user comprises at least one selection from a group consisting of:
position of the user's head; tilt of the user's head; and orientation of the user's head.
20. The non-transitory machine readable medium of claim 15, wherein determining the head position of the user or the change in head position of the user comprises:
determining a position and/or orientation of the user's eyes, or a change in the position and/or orientation of the user's eyes.
Specification