Microphone array for generating virtual sound field
First Claim
1. A method comprising:
receiving independent recordings from a plurality of microphones disposed in a tetrahedral arrangement around a recording device;
generating a virtual sound field by mapping velocity vectors to a determined spatial orientation of the recording device, wherein the velocity vectors are generated by employing a transfer function accounting for an angular difference between each direction and the plurality of microphones disposed around the recording device;
merging the virtual sound field with an integrated image of a surrounding environment by mapping the virtual sound field to the integrated image; and
isolating a portion of the virtual sound field and a portion of the integrated image corresponding to a predicted spatial orientation of a user.
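The first two claimed steps can be sketched in code. The following is a minimal illustration, not the patented implementation: four capsule recordings from a tetrahedral ("A-format") array are combined into a first-order ambisonic ("B-format") sound field consisting of one pressure signal W and three velocity components X, Y, Z, and the velocity vectors are then rotated to the recording device's determined orientation (yaw only, for brevity). The capsule naming and sum/difference matrix follow the conventional tetrahedral layout and are assumptions, not details from the patent.

```python
import numpy as np

def a_to_b_format(flu, frd, bld, bru):
    """Convert four tetrahedral capsule signals to B-format (W, X, Y, Z).

    flu: front-left-up, frd: front-right-down,
    bld: back-left-down, bru: back-right-up.
    """
    w = flu + frd + bld + bru   # omnidirectional pressure signal
    x = flu + frd - bld - bru   # front-back velocity component
    y = flu - frd + bld - bru   # left-right velocity component
    z = flu - frd - bld + bru   # up-down velocity component
    return w, x, y, z

def rotate_yaw(x, y, yaw_rad):
    """Rotate the horizontal velocity vectors to the device's spatial
    orientation (a full implementation would also apply pitch and roll)."""
    xr = np.cos(yaw_rad) * x - np.sin(yaw_rad) * y
    yr = np.sin(yaw_rad) * x + np.cos(yaw_rad) * y
    return xr, yr
```

With identical signals at all four capsules, the velocity components cancel and only the pressure signal W remains, which matches the intuition that a perfectly diffuse-equal input carries no net directional information.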
Abstract
Certain aspects of the technology disclosed herein include generating a virtual sound field based on data from an ambisonic recording device. The ambisonic device records sound of a surrounding environment using at least four microphones having a tetrahedral orientation. An omnidirectional microphone having an audio-isolated portion can be used to isolate sound from a particular direction. Sound received from the plurality of microphones can be used to generate a virtual sound field. The virtual sound field includes a dataset indicating a pressure signal and a plurality of velocity vectors. The ambisonic recording device can include a wide angle camera and generate wide angle video corresponding to the virtual sound field.
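The abstract's notion of isolating sound from a particular direction can be illustrated by steering a virtual first-order microphone within the B-format field (pressure signal W plus velocity components X, Y, Z) toward a chosen azimuth and elevation, such as a user's predicted orientation. The beamforming formula below is a common first-order virtual-microphone formulation assumed for illustration; the patent does not specify this exact construction, and ambisonic channel normalization conventions vary.

```python
import numpy as np

def steer_virtual_mic(w, x, y, z, azimuth, elevation, p=0.5):
    """Extract sound arriving from (azimuth, elevation) out of a B-format
    sound field. p sets the polar pattern: 1.0 = omnidirectional,
    0.5 = cardioid, 0.0 = figure-of-eight."""
    return (p * w
            + (1.0 - p) * (np.cos(azimuth) * np.cos(elevation) * x
                           + np.sin(azimuth) * np.cos(elevation) * y
                           + np.sin(elevation) * z))
```

For a source directly in front of the array (so that, under this convention, W and X carry the same signal while Y and Z are zero), a cardioid steered forward passes the signal at full level, while the same cardioid steered to the rear rejects it entirely.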
17 Claims
Claim 1 is reproduced above; claims 2 through 17 depend on claim 1.
Specification