Surround sound in a sensory immersive motion capture simulation environment
First Claim
1. A computer program product tangibly embodied in a non-transitory storage medium and comprising instructions that when executed by a processor perform a method, the method comprising:
receiving, by a wearable computing device of a first entity, audio data that is generated responsive to a second entity triggering an audio event in a capture volume;
receiving, by the wearable computing device of the first entity, three-dimensional (3D) motion data of a virtual representation of the first entity in a simulated virtual environment, wherein the 3D motion data of the virtual representation of the first entity is calculated based on 3D motion data of the first entity in the capture volume;
receiving, by the wearable computing device of the first entity, 3D motion data of a virtual representation of the second entity in the simulated virtual environment; and
processing the audio data, the 3D motion data of the virtual representation of the first entity and the 3D motion data of the virtual representation of the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity in the simulated virtual environment, wherein the multi-channel audio output data is associated with the audio event, and wherein generating the multi-channel audio output data comprises:
updating, at a sound library module of the wearable computing device, the 3D motion data of the virtual representation of the first entity in the simulated virtual environment,
updating, at the sound library module, the 3D motion data of the virtual representation of the second entity in the simulated virtual environment,
calculating, by the sound library module, a distance between the virtual representation of the first entity and the virtual representation of the second entity in the simulated virtual environment, and
calculating, by the sound library module, a direction of the virtual representation of the second entity in reference to the virtual representation of the first entity in the simulated virtual environment,
wherein the direction and the distance are calculated based on at least one of the updated 3D motion data of the virtual representation of the first entity and the updated 3D motion data of the virtual representation of the second entity.
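The distance and direction calculation recited in the claim can be illustrated with a short sketch. The following is a minimal, hypothetical example (the function name and coordinate conventions are assumptions, not terms from the patent) that reduces each virtual representation to a 3D position and computes the emitter's distance, azimuth, and elevation relative to the listener in the simulated virtual environment.

```python
import math

def distance_and_direction(listener_pos, emitter_pos):
    """Return (distance, azimuth, elevation) of the emitter relative to the listener.

    listener_pos, emitter_pos: (x, y, z) positions of the virtual representations
    in the simulated virtual environment. Hypothetical helper, not the patented code.
    """
    dx = emitter_pos[0] - listener_pos[0]
    dy = emitter_pos[1] - listener_pos[1]
    dz = emitter_pos[2] - listener_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.atan2(dy, dx)            # horizontal angle toward the emitter
    elevation = math.atan2(dz, math.hypot(dx, dy))  # vertical angle toward the emitter
    return distance, azimuth, elevation

# Example: emitter offset 3 m and 4 m from the listener in the horizontal plane
print(distance_and_direction((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # (5.0, ~0.927 rad, 0.0)
```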
1 Assignment
0 Petitions
Accused Products
Abstract
A wearable computing device of a listener entity can receive 3D motion data of a virtual representation of the listener entity, 3D motion data of a virtual representation of a sound emitter entity, and audio data. The audio data may be associated with an audio event triggered by the sound emitter entity in a capture volume. The wearable computing device of the listener entity can process the 3D motion data of the virtual representation of the listener entity, the 3D motion data of the virtual representation of the sound emitter entity, and the audio data to generate multi-channel audio output data customized to the perspective of the virtual representation of the listener entity. The multi-channel audio output data may be associated with the audio event. The multi-channel audio output data can be communicated to the listener entity through a surround sound audio output device.
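One common way to turn a computed distance and direction into multi-channel output of the kind the abstract describes is distance attenuation combined with per-channel panning. The sketch below is illustrative only; the four-speaker layout, the 1/d falloff, and the cosine weighting are assumptions and are not taken from the patent.

```python
import math

# Hypothetical 4-channel layout: azimuth of each speaker, in radians, measured
# from the listener's forward axis (front-left, front-right, rear-left, rear-right).
SPEAKER_AZIMUTHS = [math.radians(a) for a in (45, -45, 135, -135)]

def channel_gains(distance, azimuth, ref_distance=1.0):
    """Map an emitter's distance and azimuth to per-channel gains (illustrative only)."""
    attenuation = ref_distance / max(distance, ref_distance)  # simple 1/d falloff
    gains = []
    for speaker_azimuth in SPEAKER_AZIMUTHS:
        # Weight each speaker by how closely it points toward the emitter.
        alignment = max(0.0, math.cos(azimuth - speaker_azimuth))
        gains.append(attenuation * alignment)
    return gains
```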
Citations
26 Claims
1. A computer program product tangibly embodied in a non-transitory storage medium and comprising instructions that when executed by a processor perform a method, the method comprising:
receiving, by a wearable computing device of a first entity, audio data that is generated responsive to a second entity triggering an audio event in a capture volume;
receiving, by the wearable computing device of the first entity, three-dimensional (3D) motion data of a virtual representation of the first entity in a simulated virtual environment, wherein the 3D motion data of the virtual representation of the first entity is calculated based on 3D motion data of the first entity in the capture volume;
receiving, by the wearable computing device of the first entity, 3D motion data of a virtual representation of the second entity in the simulated virtual environment; and
processing the audio data, the 3D motion data of the virtual representation of the first entity and the 3D motion data of the virtual representation of the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity in the simulated virtual environment, wherein the multi-channel audio output data is associated with the audio event, and wherein generating the multi-channel audio output data comprises:
updating, at a sound library module of the wearable computing device, the 3D motion data of the virtual representation of the first entity in the simulated virtual environment,
updating, at the sound library module, the 3D motion data of the virtual representation of the second entity in the simulated virtual environment,
calculating, by the sound library module, a distance between the virtual representation of the first entity and the virtual representation of the second entity in the simulated virtual environment, and
calculating, by the sound library module, a direction of the virtual representation of the second entity in reference to the virtual representation of the first entity in the simulated virtual environment,
wherein the direction and the distance are calculated based on at least one of the updated 3D motion data of the virtual representation of the first entity and the updated 3D motion data of the virtual representation of the second entity. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A computer program product tangibly embodied in a non-transitory storage medium and comprising instructions that when executed by a processor perform a method, the method comprising:
receiving, by a wearable computing device of a first entity, audio data that is generated responsive to a second entity triggering an audio event in a simulated virtual environment;
receiving, by the wearable computing device of the first entity, three-dimensional (3D) motion data of a virtual representation of the first entity in the simulated virtual environment, wherein the 3D motion data of the virtual representation of the first entity is calculated based on 3D motion data of the first entity in a capture volume;
receiving, by the wearable computing device of the first entity, 3D motion data of the second entity in the simulated virtual environment, wherein the second entity is at least one of a virtual object and a virtual character in the simulated virtual environment, and wherein the virtual character is generated based on an artificial intelligence algorithm of a simulator engine; and
processing the audio data, the 3D motion data of the virtual representation of the first entity and the 3D motion data of the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity, wherein the multi-channel audio output data is associated with the audio event, and wherein generating the multi-channel audio output data comprises:
updating, at a sound library module of the wearable computing device, the 3D motion data of the virtual representation of the first entity in the simulated virtual environment,
updating, at the sound library module, the 3D motion data of the second entity in the simulated virtual environment,
calculating, by the sound library module, a distance between the virtual representation of the first entity and the second entity in the simulated virtual environment, and
calculating, by the sound library module, a direction of the second entity in reference to the virtual representation of the first entity in the simulated virtual environment,
wherein the direction and the distance are calculated based on at least one of the updated 3D motion data of the virtual representation of the first entity and the updated 3D motion data of the second entity. - View Dependent Claims (10, 11, 12, 13, 14)
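Because the second entity in this claim can be a purely virtual object or a virtual character driven by an artificial intelligence algorithm of the simulator engine, its 3D motion data comes from the simulation rather than from motion capture. The sketch below is a hypothetical illustration of how such a simulator-driven emitter might update its motion and trigger an audio event; none of the class or field names are from the patent.

```python
from dataclasses import dataclass

@dataclass
class AudioEvent:
    """Hypothetical audio event emitted by a virtual character or virtual object."""
    sound_id: str
    emitter_position: tuple  # (x, y, z) in the simulated virtual environment

class VirtualCharacter:
    """Simulator-driven emitter; its motion comes from an AI routine, not motion capture."""

    def __init__(self, position):
        self.position = position

    def step(self, dt):
        # Placeholder AI motion update (e.g. patrol along the x axis).
        x, y, z = self.position
        self.position = (x + 0.5 * dt, y, z)

    def trigger(self, sound_id):
        # The wearable device would receive this event together with the character's
        # current 3D motion data and spatialize the associated audio data.
        return AudioEvent(sound_id, self.position)
```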
15. A method comprising:
receiving, by a wearable computing device of a first entity, audio data that is generated responsive to a second entity triggering an audio event in a simulated virtual environment, wherein the first entity is located in a physical capture volume and the second entity is located in the simulated virtual environment, and wherein a virtual representation of the first entity and the second entity are co-located in the simulated virtual environment;
receiving, by the wearable computing device of the first entity, three-dimensional (3D) motion data of the virtual representation of the first entity in the simulated virtual environment, wherein the 3D motion data of the virtual representation of the first entity is calculated based on 3D motion data of the first entity in the capture volume;
receiving, by the wearable computing device of the first entity, 3D motion data of the second entity in the simulated virtual environment, wherein the second entity is at least one of a virtual object and a virtual character in the simulated virtual environment, wherein the virtual character is generated based on an artificial intelligence algorithm of a simulator engine, and wherein the 3D motion data of the second entity comprises a position of the second entity in the simulated virtual environment, an orientation of the second entity in the simulated virtual environment and a velocity of motion of the second entity in the simulated virtual environment; and
processing the audio data, the 3D motion data of the virtual representation of the first entity and the 3D motion data of the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity, wherein the multi-channel audio output data is associated with the audio event, and wherein triggering the audio event by the second entity represents the virtual character triggering the audio event in the simulated virtual environment.
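The 3D motion data recited in this claim is a combination of position, orientation, and velocity. A hypothetical container for that data might look like the following; the type and field names are assumptions for illustration, not terms from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MotionData3D:
    """Position, orientation, and velocity of an entity in the simulated virtual environment."""
    position: Vec3      # metres in the environment's coordinate frame (assumed units)
    orientation: Vec3   # e.g. yaw, pitch, roll in radians (a quaternion is equally common)
    velocity: Vec3      # metres per second
```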
16. A wearable computing device, comprising:
an audio reception module configured to receive audio data that is generated responsive to a second entity triggering an audio event;
a listener position module configured to receive three-dimensional (3D) motion data of a virtual representation of a first entity in a simulated virtual environment, wherein the 3D motion data of the virtual representation of the first entity is calculated based on 3D motion data of the first entity in a capture volume;
a relative position module configured to receive 3D motion data of a virtual representation of the second entity in the simulated virtual environment;
an audio mixing module configured to process the audio data, the 3D motion data of the virtual representation of the first entity and the 3D motion data of the virtual representation of the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity, wherein the multi-channel audio output data is associated with the audio event, and wherein the audio mixing module comprises:
a motion update module configured to update, at a sound library, the 3D motion data of the virtual representation of the first entity in the simulated virtual environment and the 3D motion data of the virtual representation of the second entity in the simulated virtual environment; and
a position update module configured to calculate, at the sound library, a distance between the virtual representation of the first entity and the virtual representation of the second entity in the simulated virtual environment and a direction of the virtual representation of the second entity in reference to the virtual representation of the first entity in the simulated virtual environment, wherein the direction and the distance are calculated based on at least one of the updated 3D motion data of the virtual representation of the first entity and the updated 3D motion data of the virtual representation of the second entity; and
a sound card module configured to communicate the multi-channel audio output data to the first entity through a surround sound audio output device of the wearable computing device of the first entity. - View Dependent Claims (17, 18, 19, 20)
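Claim 16 decomposes the wearable computing device into audio reception, listener position, relative position, audio mixing (with motion update and position update sub-modules), and sound card modules. The sketch below is only a structural illustration of that decomposition; the class name, method names, and the interfaces assumed for the injected sound library and output device are hypothetical, not the patented implementation.

```python
class WearableAudioPipeline:
    """Illustrative wiring of the modules recited in claim 16 (names and interfaces assumed)."""

    def __init__(self, sound_library, output_device):
        self.sound_library = sound_library  # assumed to hold the latest motion data
        self.output_device = output_device  # assumed surround sound output
        self.pending_audio = None

    def on_audio_data(self, audio_data):            # audio reception module
        self.pending_audio = audio_data

    def on_listener_motion(self, motion):           # listener position module
        self.sound_library.update_listener(motion)  # motion update module (assumed method)

    def on_emitter_motion(self, motion):            # relative position module
        self.sound_library.update_emitter(motion)   # motion update module (assumed method)

    def mix_and_play(self):                         # audio mixing + sound card modules
        # Position update module: distance and direction from the updated motion data.
        distance, direction = self.sound_library.relative_geometry()  # assumed method
        self.output_device.play(self.pending_audio, distance, direction)
```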
21. A computer program product tangibly embodied in a non-transitory storage medium and comprising instructions that when executed by a processor perform a method, the method comprising:
updating, at a sound library module, 3D motion data of a virtual representation of a first entity in a simulated virtual environment, wherein the first entity is a human being, and wherein the 3D motion data of the first entity comprises at least one of a position of the first entity's head in a capture volume, an orientation of the first entity's head in the capture volume and a velocity of motion of the first entity's head in the capture volume;
updating, at the sound library module, 3D motion data of a virtual representation of a second entity in the simulated virtual environment, wherein the second entity is at least one of another human being, an animate object, and an inanimate object comprising at least one of a weapon and a model of a weapon, wherein the 3D motion data of the virtual representation of the second entity comprises at least one of a position of the virtual representation of the second entity in the simulated virtual environment, an orientation of the virtual representation of the second entity in the simulated virtual environment, a velocity of motion of the virtual representation of the second entity in the simulated virtual environment, a position of the inanimate object associated with the virtual representation of the second entity in the simulated virtual environment, an orientation of the inanimate object associated with the virtual representation of the second entity in the simulated virtual environment and a velocity of motion of the inanimate object associated with the virtual representation of the second entity in the simulated virtual environment, wherein the 3D motion data of the virtual representation of the second entity is determined based on 3D motion data of the second entity in the capture volume, and wherein the 3D motion data of the second entity comprises at least one of a position of the second entity in the capture volume, an orientation of the second entity in the capture volume and a velocity of motion of the second entity in the capture volume;
calculating, by the sound library module, a distance between the virtual representation of the first entity and the virtual representation of the second entity in the simulated virtual environment;
calculating, by the sound library module, a direction of the virtual representation of the second entity in reference to the virtual representation of the first entity in the simulated virtual environment, wherein the direction and the distance are calculated based on at least one of the updated 3D motion data of the virtual representation of the first entity and the updated 3D motion data of the virtual representation of the second entity; and
processing, based on the distance and the direction, audio data associated with an audio event triggered by the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity. - View Dependent Claims (22, 23)
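Because the listener's 3D motion data in this claim includes the position and orientation of the first entity's head, the emitter's direction is most useful when expressed in the listener's head frame. The following sketch assumes, purely for illustration, that head orientation is reduced to a yaw angle about the vertical axis; it rotates the world-frame offset into that head frame and is not taken from the patent.

```python
import math

def direction_in_head_frame(head_pos, head_yaw, emitter_pos):
    """Distance and azimuth of the emitter relative to the listener's head.

    head_yaw is the head's heading in radians about the vertical (z) axis.
    Illustrative only; the patent does not specify this representation.
    """
    dx = emitter_pos[0] - head_pos[0]
    dy = emitter_pos[1] - head_pos[1]
    # Rotate the world-frame offset by -head_yaw to express it in the head frame.
    local_x = math.cos(-head_yaw) * dx - math.sin(-head_yaw) * dy
    local_y = math.sin(-head_yaw) * dx + math.cos(-head_yaw) * dy
    distance = math.hypot(local_x, local_y)
    azimuth = math.atan2(local_y, local_x)  # 0 = straight ahead of the listener
    return distance, azimuth
```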
24. A system, comprising:
a motion capture device configured to motion capture at least one of a first entity and a second entity in a capture volume based on trackable objects coupled to the first entity and the second entity, the first entity and the second entity being co-located in the capture volume, wherein the first entity is a human being and the second entity is at least one of another human being, an animate object and an inanimate object comprising at least one of a weapon and a model of a weapon;
a tracking device coupled to the motion capture device, configured to determine 3D motion data of at least one of the first entity and the second entity;
a simulation engine coupled to the motion capture device, configured to transmit 3D motion data of at least one of a virtual representation of the first entity in a simulated virtual environment, the 3D motion data of the first entity, a virtual representation of the second entity in the simulated virtual environment and audio data; and
a wearable computing device of the first entity communicatively coupled to the simulation engine, wherein the wearable computing device of the first entity is configured to:
receive, from the simulation engine, the audio data generated by the second entity responsive to the second entity triggering an audio event that is reflected in the simulated virtual environment as the virtual representation of the second entity triggering the audio event;
receive, from the simulation engine, the 3D motion data of the virtual representation of the first entity;
receive, from the simulation engine, the 3D motion data of the virtual representation of the second entity;
process the audio data, the 3D motion data of the virtual representation of the first entity and the 3D motion data of the virtual representation of the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity, wherein the multi-channel audio output data is associated with the audio event; and
communicate the multi-channel audio output data to the first entity through a surround sound audio output device of the wearable computing device of the first entity,
wherein the motion capture device in the capture volume captures an image of the capture volume that comprises the first entity and the second entity, the image being usable to determine the 3D motion data of at least one of the first entity and the second entity,
wherein the virtual representation of the first entity and the virtual representation of the second entity are co-located in the simulated virtual environment,
wherein the 3D motion data of the first entity comprises at least one of a position of the first entity's head in the capture volume, an orientation of the first entity's head in the capture volume and a velocity of motion of the first entity's head in the capture volume,
wherein the 3D motion data of the virtual representation of the second entity comprises at least one of a position of the virtual representation of the second entity in the simulated virtual environment, an orientation of the virtual representation of the second entity in the simulated virtual environment, a velocity of motion of the virtual representation of the second entity in the simulated virtual environment, a position of the inanimate object associated with the virtual representation of the second entity in the simulated virtual environment, an orientation of the inanimate object associated with the virtual representation of the second entity in the simulated virtual environment and a velocity of motion of the inanimate object associated with the virtual representation of the second entity in the simulated virtual environment,
wherein the 3D motion data of the virtual representation of the second entity is determined based on 3D motion data of the second entity in the capture volume, and
wherein the 3D motion data of the second entity comprises at least one of a position of the second entity in the capture volume, an orientation of the second entity in the capture volume and a velocity of motion of the second entity in the capture volume. - View Dependent Claims (25, 26)
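Claim 24 ties the pieces together: a motion capture device and tracking device produce 3D motion data, a simulation engine forwards that data and the audio data, and the wearable computing device generates and plays the spatialized output. A highly simplified, hypothetical per-frame loop under those assumptions is sketched below; every interface and attribute name here is assumed for illustration and mirrors the wearable-device sketch given with claim 16 above.

```python
def simulation_frame(motion_capture, tracker, simulation_engine, wearable_device):
    """One illustrative frame of the end-to-end flow (all interfaces assumed)."""
    images = motion_capture.capture()          # images of the capture volume
    motion = tracker.track(images)             # 3D motion data of the captured entities
    update = simulation_engine.update(motion)  # virtual representations + audio data

    wearable_device.on_audio_data(update.audio_data)
    wearable_device.on_listener_motion(update.first_entity_motion)
    wearable_device.on_emitter_motion(update.second_entity_motion)
    wearable_device.mix_and_play()             # surround sound output to the first entity
```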
Specification