Multiple audio/video data stream simulation
First Claim
1. A method, comprising:
receiving, by a computing system, a first audio data stream, wherein said first audio data stream comprises first speech data associated with a first person;
receiving, by said computing system, a second audio data stream, wherein said second audio data stream comprises second speech data associated with a second person;
monitoring, by said computing system, said first audio data stream and said second audio data stream;
identifying, by said computing system in response to said monitoring said first audio data stream, first emotional attributes comprised by said first audio data stream;
generating, by said computing system in response to said identifying said first emotional attributes, a third audio data stream associated with said first audio data stream, wherein said third audio data stream comprises said first speech data, and wherein said third audio data stream does not comprise said first emotional attributes;
identifying, by said computing system in response to said monitoring said second audio data stream, second emotional attributes comprised by said second audio data stream;
identifying, by said computing system, a first emotional attribute of said second emotional attributes;
associating, by said computing system, a first audible portion of said second audio data stream with said first emotional attribute;
generating, by said computing system, a first audible label for said first audible portion of said second audio data stream, wherein said first audible label indicates said first emotional attribute;
applying, by said computing system, said first audible label to said first audible portion of said second audio data stream;
generating, by said computing system in response to said applying said first audible portion, a fourth audio data stream associated with said second audio data stream, wherein said fourth audio data stream comprises said second emotional attributes, said second audio data stream, and said first audible portion of said second audio data stream comprising said first audible label;
combining, by said computing system, said fourth audio data stream with said third audio data stream;
generating, by said computing system in response to said combining, a fifth audio data stream, wherein said fifth audio data stream comprises said fourth audio data stream and said third audio data stream; and
storing, by said computing system, said fifth audio data stream.
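The claimed steps amount to a pipeline: strip emotional attributes from the first person's stream, detect and audibly label an emotional attribute in the second person's stream, then combine and store the results. The following is an illustrative sketch only, not the patent's implementation; `AudioStream`, `identify_emotions`, `strip_emotions`, and `apply_audible_label` are hypothetical names, and real emotion detection (from pitch, energy, etc.) is replaced by pre-tagged attributes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AudioStream:
    speech: str                                        # speech data carried by the stream
    emotions: List[str] = field(default_factory=list)  # emotional attributes
    labels: List[str] = field(default_factory=list)    # audible labels applied

def identify_emotions(stream: AudioStream) -> List[str]:
    # Stand-in for the "monitoring"/"identifying" steps: a real system would
    # analyze the audio; here the attributes are already tagged on the stream.
    return list(stream.emotions)

def strip_emotions(stream: AudioStream) -> AudioStream:
    # Third stream: comprises the speech data, without the emotional attributes.
    return AudioStream(speech=stream.speech)

def apply_audible_label(stream: AudioStream, attribute: str) -> AudioStream:
    # Fourth stream: keeps the second stream's data and emotional attributes,
    # plus an audible label indicating the identified attribute.
    labeled = AudioStream(speech=stream.speech, emotions=list(stream.emotions))
    labeled.labels.append(f"audible-label:{attribute}")
    return labeled

def combine(a: AudioStream, b: AudioStream) -> List[AudioStream]:
    # Fifth stream: comprises the fourth and the third stream.
    return [a, b]

first = AudioStream("speech of the first person", emotions=["anger"])
second = AudioStream("speech of the second person", emotions=["joy"])

third = strip_emotions(first)                  # emotional attributes removed
attrs = identify_emotions(second)              # -> ["joy"]
fourth = apply_audible_label(second, attrs[0])
fifth = combine(fourth, third)                 # stored by the computing system
```

The sketch keeps each claim step as a separate function so the correspondence to the claim language stays visible; how attributes are detected or how a label is made "audible" is left entirely abstract.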
Abstract
A multiple audio/video data stream simulation method and system. A computing system receives first audio and/or video data streams. The first audio and/or video data streams include data associated with a first person and a second person. The computing system monitors the first audio and/or video data streams. The computing system identifies emotional attributes comprised by the first audio and/or video data streams. The computing system generates second audio and/or video data streams associated with the first audio and/or video data streams. The second audio and/or video data streams include the data of the first audio and/or video data streams without the emotional attributes. The computing system stores the second audio and/or video data streams.
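The abstract's core loop (receive, identify emotional attributes, generate an attribute-free second stream, store) can be sketched minimally. This is an assumption-laden illustration, not the patent's method: `identify_emotional_attributes` and `without_emotional_attributes` are invented names, and streams are modeled as plain dictionaries with pre-tagged attributes.

```python
def identify_emotional_attributes(stream: dict) -> list:
    # Stand-in for monitoring the first stream; attributes are pre-tagged here.
    return stream["emotions"]

def without_emotional_attributes(stream: dict) -> dict:
    # Second stream: same data as the first, emotional attributes removed.
    return {"data": stream["data"], "emotions": []}

first = {"data": "speech of persons one and two", "emotions": ["fear", "joy"]}
attributes = identify_emotional_attributes(first)  # -> ["fear", "joy"]
second = without_emotional_attributes(first)
storage = [second]                                 # the system stores the result
```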
13 Claims
1. A method, comprising:
(Claim 1 is reproduced in full under "First Claim" above. Dependent claims: 2, 3, 4, 5.)
6. A tangible computer program product, comprising a computer storage device storing a computer readable program code, said computer readable program code configured to perform a method upon being executed by a processor of a computing system, said method comprising:
receiving, by said computing system, a first audio data stream, wherein said first audio data stream comprises first speech data associated with a first person;
receiving, by said computing system, a second audio data stream, wherein said second audio data stream comprises second speech data associated with a second person;
monitoring, by said computing system, said first audio data stream and said second audio data stream;
identifying, by said computing system in response to said monitoring said first audio data stream, first emotional attributes comprised by said first audio data stream;
generating, by said computing system in response to said identifying said first emotional attributes, a third audio data stream associated with said first audio data stream, wherein said third audio data stream comprises said first speech data, and wherein said third audio data stream does not comprise said first emotional attributes;
identifying, by said computing system in response to said monitoring said second audio data stream, second emotional attributes comprised by said second audio data stream;
identifying, by said computing system, a first emotional attribute of said second emotional attributes;
associating, by said computing system, a first audible portion of said second audio data stream with said first emotional attribute;
generating, by said computing system, a first audible label for said first audible portion of said second audio data stream, wherein said first audible label indicates said first emotional attribute;
applying, by said computing system, said first audible label to said first audible portion of said second audio data stream;
generating, by said computing system in response to said applying said first audible portion, a fourth audio data stream associated with said second audio data stream, wherein said fourth audio data stream comprises said second emotional attributes, said second audio data stream, and said first audible portion of said second audio data stream comprising said first audible label;
combining, by said computing system, said fourth audio data stream with said third audio data stream;
generating, by said computing system in response to said combining, a fifth audio data stream, wherein said fifth audio data stream comprises said fourth audio data stream and said third audio data stream; and
storing, by said computing system, said fifth audio data stream.
(Dependent claims: 7, 8, 9.)
10. A computing system comprising a processor coupled to a computer-readable memory unit, said memory unit comprising a computer readable code configured to be executed by the processor to perform a method comprising:
receiving, by said computing system, a first audio data stream, wherein said first audio data stream comprises first speech data associated with a first person;
receiving, by said computing system, a second audio data stream, wherein said second audio data stream comprises second speech data associated with a second person;
monitoring, by said computing system, said first audio data stream and said second audio data stream;
identifying, by said computing system in response to said monitoring said first audio data stream, first emotional attributes comprised by said first audio data stream;
generating, by said computing system in response to said identifying said first emotional attributes, a third audio data stream associated with said first audio data stream, wherein said third audio data stream comprises said first speech data, and wherein said third audio data stream does not comprise said first emotional attributes;
identifying, by said computing system in response to said monitoring said second audio data stream, second emotional attributes comprised by said second audio data stream;
identifying, by said computing system, a first emotional attribute of said second emotional attributes;
associating, by said computing system, a first audible portion of said second audio data stream with said first emotional attribute;
generating, by said computing system, a first audible label for said first audible portion of said second audio data stream, wherein said first audible label indicates said first emotional attribute;
applying, by said computing system, said first audible label to said first audible portion of said second audio data stream;
generating, by said computing system in response to said applying said first audible portion, a fourth audio data stream associated with said second audio data stream, wherein said fourth audio data stream comprises said second emotional attributes, said second audio data stream, and said first audible portion of said second audio data stream comprising said first audible label;
combining, by said computing system, said fourth audio data stream with said third audio data stream;
generating, by said computing system in response to said combining, a fifth audio data stream, wherein said fifth audio data stream comprises said fourth audio data stream and said third audio data stream; and
storing, by said computing system, said fifth audio data stream.
(Dependent claims: 11, 12, 13.)
Specification