TAGGING A SOUND IN A VIRTUAL ENVIRONMENT
First Claim
1. An apparatus comprising:
   a display device;
   a processor coupled to the display device, the processor configured to:
      generate a first virtual scene comprising a virtual object;
      generate a user option to insert a virtual microphone into the first virtual scene, the user option enabling user selection of a location of the virtual microphone; and
      generate a second virtual scene; and
   a speaker coupled to the processor, the speaker configured to:
      output a tagged sound associated with the virtual object while the display device displays the first virtual scene; and
      output the tagged sound while the display device displays the second virtual scene in response to a determination that the virtual microphone is inserted into the first virtual scene, wherein properties of the tagged sound, while the display device displays the second virtual scene, are based on the location of the virtual microphone.
Abstract
A method includes generating, at a processor, a first virtual scene that includes a virtual object. The method also includes generating a user option to insert a virtual microphone into the first virtual scene. The user option enables user selection of a location of the virtual microphone. The method further includes generating a second virtual scene. The method also includes outputting a tagged sound associated with the virtual object while a display device displays the first virtual scene. The method further includes outputting the tagged sound while the display device displays the second virtual scene in response to a determination that the virtual microphone is inserted into the first virtual scene. Properties of the tagged sound are based on the location of the virtual microphone while the display device displays the second virtual scene.
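The abstract describes deriving playback properties of a tagged sound from the user-selected location of a virtual microphone. The patent text does not specify a particular audio model; the sketch below is purely illustrative, using a hypothetical inverse-distance gain and a crude left/right pan (all class and function names are assumptions, not taken from the patent).

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualMicrophone:
    # User-selected location of the virtual microphone in the first scene.
    x: float
    y: float
    z: float

@dataclass
class TaggedSound:
    name: str
    source: tuple       # (x, y, z) of the virtual object emitting the sound
    base_gain: float = 1.0

def sound_properties(sound: TaggedSound, mic: VirtualMicrophone) -> dict:
    """Derive playback properties of a tagged sound from the virtual
    microphone's location (assumed inverse-distance attenuation model)."""
    dx = sound.source[0] - mic.x
    dy = sound.source[1] - mic.y
    dz = sound.source[2] - mic.z
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    gain = sound.base_gain / max(distance, 1.0)          # attenuate with distance
    pan = max(-1.0, min(1.0, dx / max(distance, 1e-9)))  # -1 = left, +1 = right
    return {"gain": gain, "pan": pan}

# A microphone inserted into the first scene continues to shape the tagged
# sound when the display switches to the second scene.
mic = VirtualMicrophone(0.0, 0.0, 0.0)
bird = TaggedSound("bird_chirp", source=(3.0, 4.0, 0.0))
props = sound_properties(bird, mic)   # distance 5.0 -> gain 0.2, pan 0.6
```

Under this model, moving the virtual microphone closer to the virtual object raises the gain, which matches the claimed behavior that the sound's properties in the second scene depend on where the microphone was placed in the first.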
21 Claims
1. An apparatus comprising:
   a display device;
   a processor coupled to the display device, the processor configured to:
      generate a first virtual scene comprising a virtual object;
      generate a user option to insert a virtual microphone into the first virtual scene, the user option enabling user selection of a location of the virtual microphone; and
      generate a second virtual scene; and
   a speaker coupled to the processor, the speaker configured to:
      output a tagged sound associated with the virtual object while the display device displays the first virtual scene; and
      output the tagged sound while the display device displays the second virtual scene in response to a determination that the virtual microphone is inserted into the first virtual scene, wherein properties of the tagged sound, while the display device displays the second virtual scene, are based on the location of the virtual microphone.
Dependent claims: 2-8.
9. A method comprising:
   generating, at a processor, a first virtual scene comprising a virtual object;
   generating a user option to insert a virtual microphone into the first virtual scene, the user option enabling user selection of a location of the virtual microphone;
   generating a second virtual scene;
   outputting a tagged sound associated with the virtual object while a display device displays the first virtual scene; and
   outputting the tagged sound while the display device displays the second virtual scene in response to a determination that the virtual microphone is inserted into the first virtual scene, wherein properties of the tagged sound, while the display device displays the second virtual scene, are based on the location of the virtual microphone.
Dependent claims: 10-14.
15. A non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to perform operations comprising:
   generating a first virtual scene comprising a virtual object;
   generating a user option to insert a virtual microphone into the first virtual scene, the user option enabling user selection of a location of the virtual microphone;
   generating a second virtual scene;
   outputting a tagged sound associated with the virtual object while a display device displays the first virtual scene; and
   outputting the tagged sound while the display device displays the second virtual scene in response to a determination that the virtual microphone is inserted into the first virtual scene, wherein properties of the tagged sound, while the display device displays the second virtual scene, are based on the location of the virtual microphone.
Dependent claims: 16-19.
20. An apparatus comprising:
   means for generating a first virtual scene and a second virtual scene, the first virtual scene comprising a virtual object;
   means for generating a user option to insert a virtual microphone into the first virtual scene, the user option enabling user selection of a location of the virtual microphone; and
   means for outputting a tagged sound associated with the virtual object, the tagged sound outputted while means for displaying a virtual scene displays the first virtual scene, and the tagged sound outputted while the means for displaying displays the second virtual scene in response to a determination that the virtual microphone is inserted into the first virtual scene, wherein properties of the tagged sound, while the means for displaying displays the second virtual scene, are based on the location of the virtual microphone.
Dependent claim: 21.
Specification