Methods and systems for generating spatialized audio during a virtual experience
Abstract
A spatialized audio presentation system identifies an orientation of a virtual avatar, associated with a user engaged in a virtual experience, with respect to a virtual sound source within a virtual space. Within the virtual space, the virtual sound source generates a sound to be presented to the user while the user is engaged in the virtual experience. Based on the identified orientation of the virtual avatar, the system selects a head-related impulse response from a library of head-related impulse responses corresponding to different potential orientations of the virtual avatar with respect to the virtual sound source. The system then generates respective versions of the sound for presentation to the user at the left and right ears of the user by applying, respectively, a left-side component and a right-side component of the selected head-related impulse response to the sound. Corresponding methods are also disclosed.
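The first step described above, identifying the orientation of the virtual avatar with respect to the virtual sound source, can be sketched as computing a relative azimuth from the avatar's position and facing direction. This is an illustrative sketch only, not code from the patent; the function name, the 2-D coordinate layout, and the yaw-in-degrees convention are all assumptions made for the example.

```python
import math

def azimuth_to_source(avatar_pos, avatar_yaw_deg, source_pos):
    """Horizontal angle, in degrees, from the avatar's facing direction to the
    virtual sound source: 0 = straight ahead, positive angles to one side.
    Positions are (x, z) pairs in the virtual space; yaw is degrees from +z."""
    dx = source_pos[0] - avatar_pos[0]
    dz = source_pos[1] - avatar_pos[1]
    # World-frame bearing to the source, then remove the avatar's own yaw.
    bearing = math.degrees(math.atan2(dx, dz))
    # Wrap the result into (-180, 180] so left/right are symmetric.
    return (bearing - avatar_yaw_deg + 180.0) % 360.0 - 180.0
```

A full implementation would also account for elevation and, where the claims refer to "orientation" generally, for head tilt; this sketch covers only the horizontal plane.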
Claims (20)
1. A method comprising:
identifying, by a spatialized audio presentation system for a virtual avatar of a user engaged in a virtual experience within a virtual space, an orientation of the virtual avatar with respect to a virtual sound source that is located within the virtual space and that generates a sound to be presented to the user while the user is engaged in the virtual experience;
selecting, by the spatialized audio presentation system based on the identified orientation of the virtual avatar with respect to the virtual sound source, a head-related impulse response from a library of head-related impulse responses corresponding to different potential orientations of the virtual avatar with respect to the virtual sound source, the selected head-related impulse response including a left-side component and a right-side component;
generating, by the spatialized audio presentation system for presentation to the user at a left ear of the user while the user is engaged in the virtual experience, a left-side version of the sound by applying the left-side component of the selected head-related impulse response to the sound; and
generating, by the spatialized audio presentation system for presentation to the user at a right ear of the user while the user is engaged in the virtual experience, a right-side version of the sound by applying the right-side component of the selected head-related impulse response to the sound.
Dependent claims: 2-12.
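The selecting and generating steps of claim 1 can be illustrated as a lookup into an orientation-keyed library followed by convolving the sound with the left-side and right-side components. This is a minimal sketch under stated assumptions, not the patent's implementation: the library keying (azimuth quantized to a fixed step), the function names, and the direct-form convolution are all choices made for the example.

```python
def convolve(signal, ir):
    """Direct-form FIR convolution; a stand-in for the fast, frame-based
    convolution a real-time renderer would use."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_binaural(sound, azimuth_deg, hrir_library, step_deg=15):
    """Select the stored head-related impulse response nearest the identified
    orientation, then apply its left-side and right-side components to the
    sound to produce the two ear signals."""
    key = int(round(azimuth_deg / step_deg)) * step_deg % 360
    left_ir, right_ir = hrir_library[key]
    return convolve(sound, left_ir), convolve(sound, right_ir)
```

For example, a library with entries every 15 degrees resolves an identified azimuth of 4 degrees to the 0-degree entry, so small orientation changes reuse the same stored response until the nearest entry changes.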
13. A method comprising:
identifying, by a spatialized audio presentation system for a virtual avatar of a user engaged in a virtual experience within a virtual space, a first orientation, at a first point in time, of the virtual avatar with respect to a virtual sound source that is located within the virtual space and that generates a sound to be presented to the user while the user is engaged in the virtual experience;
selecting, by the spatialized audio presentation system based on the first orientation of the virtual avatar with respect to the virtual sound source, a first head-related impulse response from a library of head-related impulse responses corresponding to different potential orientations of the virtual avatar with respect to the virtual sound source, the first head-related impulse response including a left-side component and a right-side component;
generating, by the spatialized audio presentation system for presentation to the user at a left ear of the user while the user is engaged in the virtual experience, a left-side version of the sound by applying the left-side component of the first head-related impulse response to the sound;
generating, by the spatialized audio presentation system for presentation to the user at a right ear of the user while the user is engaged in the virtual experience, a right-side version of the sound by applying the right-side component of the first head-related impulse response to the sound;
identifying, by the spatialized audio presentation system for the virtual avatar, a second orientation, at a second point in time subsequent to the first point in time, of the virtual avatar with respect to the virtual sound source;
selecting, by the spatialized audio presentation system based on the second orientation of the virtual avatar with respect to the virtual sound source, a second head-related impulse response from the library of head-related impulse responses, the second head-related impulse response including a left-side component and a right-side component;
updating, by the spatialized audio presentation system, the left-side version of the sound by cross-fading the application of the left-side component of the first head-related impulse response to an application of the left-side component of the second head-related impulse response to the sound; and
updating, by the spatialized audio presentation system, the right-side version of the sound by cross-fading the application of the right-side component of the first head-related impulse response to an application of the right-side component of the second head-related impulse response to the sound.
Dependent claim: 14.
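The updating steps of claim 13 cross-fade from the sound as filtered with the first head-related impulse response to the sound as filtered with the second, so that the switch between stored responses does not produce an audible click. The linear ramp below is one common way to realize such a cross-fade; the function name and the per-sample linear weighting are assumptions for the example, applied independently to the left-side and right-side versions.

```python
def crossfade_update(old_version, new_version):
    """Linearly ramp from the signal filtered with the first HRIR to the
    signal filtered with the second over one block of equal-length samples."""
    n = len(old_version)
    return [
        old * (1.0 - i / (n - 1)) + new * (i / (n - 1))
        for i, (old, new) in enumerate(zip(old_version, new_version))
    ]
```

The output begins exactly at the first-response version and ends exactly at the second-response version, so successive blocks join without discontinuity.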
15. A system comprising:
at least one physical computing device that identifies, for a virtual avatar of a user engaged in a virtual experience within a virtual space, an orientation of the virtual avatar with respect to a virtual sound source that is located within the virtual space and that generates a sound to be presented to the user while the user is engaged in the virtual experience;
selects, based on the identified orientation of the virtual avatar with respect to the virtual sound source, a head-related impulse response from a library of head-related impulse responses corresponding to different potential orientations of the virtual avatar with respect to the virtual sound source, the selected head-related impulse response including a left-side component and a right-side component;
generates, for presentation to the user at a left ear of the user while the user is engaged in the virtual experience, a left-side version of the sound by applying the left-side component of the selected head-related impulse response to the sound; and
generates, for presentation to the user at a right ear of the user while the user is engaged in the virtual experience, a right-side version of the sound by applying the right-side component of the selected head-related impulse response to the sound.
Dependent claims: 16-20.
Specification