Systems and methods for providing audio to a user based on gaze input
Abstract
According to the invention, a method for providing audio to a user is disclosed. The method may include determining, with an eye tracking device, a gaze point of a user on a display. The method may also include causing, with a computer system, an audio device to produce audio to the user, where content of the audio may be based at least in part on the gaze point of the user on the display.
Claims
1. A system for modifying a volume of audio provided to a user gazing at a display, the system comprising:
an eye tracking device for at least determining a gaze point of a user on a display; and
a processor configured for at least:
determining audio content associated with a virtual sound source in a virtual environment, wherein the audio content has a first volume and includes one or more virtual sounds in the virtual environment;
determining a particular area of the display which is associated with the audio content;
determining a virtual distance and a virtual direction from the virtual sound source to the gaze point of the user on the display;
determining a sound channel produced by an audio device, wherein the sound channel is associated with the particular area of the display;
causing the audio content to be produced to the user, wherein the audio content is produced at the first volume via the sound channel, and the first volume is based at least in part on (i) the gaze point of the user on the display, and (ii) at least one of the virtual distance or the virtual direction;
determining a change in the gaze point of the user;
determining, based on the changed gaze point, a modified virtual distance and a modified virtual direction from the virtual sound source to the changed gaze point; and
causing a change in the first volume of the audio content, wherein the change is based at least in part on the modified virtual distance and the modified virtual direction.
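Outside the claim language, the volume computation recited in claim 1 can be sketched in Python. This is a minimal illustrative model, not the specification's implementation: the inverse-distance attenuation law, the `falloff` constant, and all function names are assumptions.

```python
import math

def gaze_volume(source, gaze, base_volume=1.0, falloff=0.5):
    """Attenuate base_volume by the virtual distance from the virtual
    sound source to the gaze point; also return the virtual direction.
    The 1/(1 + falloff * d) falloff is an illustrative assumption."""
    dx, dy = gaze[0] - source[0], gaze[1] - source[1]
    distance = math.hypot(dx, dy)        # virtual distance
    direction = math.atan2(dy, dx)       # virtual direction, radians
    return base_volume / (1.0 + falloff * distance), direction

# Gaze lands on the sound source: full base volume.
v_on, _ = gaze_volume((0.0, 0.0), (0.0, 0.0))
# Changed gaze point farther from the source: volume decreases.
v_off, _ = gaze_volume((0.0, 0.0), (4.0, 3.0))   # virtual distance 5
```

Recomputing the distance and direction for each changed gaze point, then re-deriving the volume, mirrors the final two limitations of the claim.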
2. The system for modifying the volume of audio provided to the user gazing at the display of claim 1, the system further comprising:
a wearable device, wherein the wearable device is configured to be worn by the user and comprises: the eye tracking device; and the display.
3. The system for modifying the volume of audio provided to the user gazing at the display of claim 2, the system further comprising:
the audio device for producing audio to the user.
4. The system for modifying the volume of audio provided to the user gazing at the display of claim 3, wherein the wearable device further comprises:
the audio device.
5. The system for modifying the volume of audio provided to the user gazing at the display of claim 3, wherein:
causing the change in the first volume of the audio content is further based at least in part on a number of sound channels produced by the audio device.
6. The system for modifying the volume of audio provided to the user gazing at the display of claim 5, wherein the sound channels comprise:
real sound channels or simulated sound channels.
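Claims 5 and 6 recite that the volume change is further based on the number of real or simulated sound channels. One possible reading, sketched below under stated assumptions, distributes a sound's gain across channels associated with left-to-right areas of the display using a constant-power pan law; the channel layout and the pan law are illustrative, not taken from the specification.

```python
import math

def channel_gains(gaze_x, display_width, n_channels):
    """Constant-power distribution of a sound's gain across n_channels
    (real or simulated) laid out left to right across the display.
    The layout and cosine/sine pan law are illustrative assumptions."""
    pos = (gaze_x / display_width) * (n_channels - 1)
    left = min(int(pos), n_channels - 1)
    frac = pos - left
    gains = [0.0] * n_channels
    gains[left] = math.cos(frac * math.pi / 2)
    if left + 1 < n_channels:
        gains[left + 1] = math.sin(frac * math.pi / 2)
    return gains
```

Under this pan law the squared gains always sum to one, so total acoustic power stays constant however many channels the audio device produces.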
7. The system for modifying the volume of audio provided to the user gazing at the display of claim 1, the system further comprising a head-tracking device for at least determining head information of the user, and
wherein the processor is further configured for determining a head position of the user, based on the determined head information, wherein the head position of the user comprises at least one selection from a group consisting of: position of the user's head, tilt of the user's head, and orientation of the user's head.
8. The system for modifying the volume of audio provided to the user gazing at the display of claim 7, wherein determining the head position of the user comprises determining a position and/or orientation of the user's eyes, or a change in the position and/or orientation of the user's eyes.
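Claims 7 and 8 allow the head position (position, tilt, and orientation of the head) to be derived from the tracked position and/or orientation of the user's eyes. A minimal sketch, assuming 3-D eye positions from the eye tracking device and illustrative axis conventions of my own choosing:

```python
import math

def head_pose_from_eyes(left_eye, right_eye):
    """Estimate head position, tilt (roll), and orientation (yaw) from
    tracked 3-D eye positions. Axes and angle conventions are
    illustrative assumptions, not the specification's method."""
    # Head position: midpoint between the eyes.
    position = tuple((l + r) / 2 for l, r in zip(left_eye, right_eye))
    dx = right_eye[0] - left_eye[0]   # interocular vector components
    dy = right_eye[1] - left_eye[1]
    dz = right_eye[2] - left_eye[2]
    tilt = math.atan2(dy, dx)         # head tilted sideways (roll)
    yaw = math.atan2(dz, dx)          # head turned left/right (yaw)
    return position, tilt, yaw
```

Comparing successive poses yields the change in head position that claim 8's dependent language contemplates.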
9. A method for providing audio to a user gazing at a display, the method comprising:
determining a gaze point of a user on a display;
determining audio content associated with a virtual sound source in a virtual environment, wherein the audio content has a first volume and includes one or more virtual sounds in the virtual environment;
determining a particular area of the display which is associated with the audio content;
determining a virtual distance and a virtual direction from the virtual sound source to the gaze point of the user on the display;
determining a sound channel produced by an audio device, wherein the sound channel is associated with the particular area of the display;
causing the audio content to be produced to the user, wherein the audio content is produced at the first volume via the sound channel, and the first volume is based at least in part on (i) the gaze point of the user on the display, and (ii) at least one of the virtual distance or the virtual direction;
determining a change in the gaze point of the user;
determining, based on the changed gaze point, a modified virtual distance and a modified virtual direction from the virtual sound source to the changed gaze point; and
causing a change in the first volume of the audio content, wherein the change is based on at least one of the modified virtual distance or the modified virtual direction.
10. The method for providing audio to the user gazing at the display of claim 9, further comprising:
producing the audio content to the user.
11. The method for providing audio to the user gazing at the display of claim 9, wherein:
causing the change in the first volume of the audio content is further based at least in part on a number of sound channels produced by the audio device.
12. The method for providing audio to the user gazing at the display of claim 11, wherein the sound channels comprise:
real sound channels or simulated sound channels.
13. The method for providing audio to the user gazing at the display of claim 9, the method further comprising determining head information of the user,
wherein the method further comprises determining a head position of the user, based on the determined head information, and wherein the head position of the user comprises at least one selection from a group consisting of: position of the user's head, tilt of the user's head, and orientation of the user's head.
14. The method for providing audio to the user gazing at the display of claim 13, wherein determining the head position of the user or the change in head position of the user comprises determining a position and/or orientation of the user's eyes, or a change in the position and/or orientation of the user's eyes.
15. A non-transitory machine readable medium having instructions stored thereon for providing audio to a user gazing at a display, the instructions executable by one or more processors for at least:
determining a gaze point of a user on a display;
determining audio content associated with a virtual sound source in a virtual environment, wherein the audio content has a first volume and includes one or more virtual sounds in the virtual environment;
determining a particular area of the display which is associated with the audio content;
determining a virtual distance and a virtual direction from the virtual sound source to the gaze point of the user on the display;
determining a sound channel produced by an audio device, wherein the sound channel is associated with the particular area of the display;
causing the audio content to be produced to the user, wherein the audio content is produced at the first volume via the sound channel, and the first volume is based at least in part on (i) the gaze point of the user on the display, and (ii) at least one of the virtual distance or the virtual direction;
determining a change in the gaze point of the user;
determining, based on the changed gaze point, a modified virtual distance and a modified virtual direction from the virtual sound source to the changed gaze point; and
causing a change in the first volume of the audio content, wherein the change is based on at least one of the modified virtual distance or the modified virtual direction.
16. The non-transitory machine readable medium of claim 15, wherein the instructions are further executable for at least:
producing the audio content to the user.
17. The non-transitory machine readable medium of claim 15, wherein:
causing the change in the first volume of the audio content is further based at least in part on a number of sound channels produced by the audio device.
18. The non-transitory machine readable medium of claim 17, wherein the sound channels comprise:
real sound channels or simulated sound channels.
19. The non-transitory machine readable medium of claim 15, the instructions further comprising determining head information of the user,
wherein the instructions are further executable for determining a head position of the user, based on the determined head information, and wherein the head position of the user comprises at least one selection from a group consisting of: position of the user's head, tilt of the user's head, and orientation of the user's head.
20. The non-transitory machine readable medium of claim 19, wherein determining the head position of the user or the change in head position of the user comprises determining a position and/or orientation of the user's eyes, or a change in the position and/or orientation of the user's eyes.