MULTIPLE AUDIO/VIDEO DATA STREAM SIMULATION METHOD AND SYSTEM
First Claim
1. A method, comprising:
receiving, by a computing system, a first audio data stream, wherein said first audio data stream comprises first speech data associated with a first person;
receiving, by said computing system, a second audio data stream, wherein said second audio data stream comprises second speech data associated with a second person;
monitoring, by said computing system, said first audio data stream and said second audio data stream;
identifying, by said computing system in response to said monitoring said first audio data stream, first emotional attributes comprised by said first audio data stream;
generating, by said computing system in response to said identifying said first emotional attributes, a third audio data stream associated with said first audio data stream, wherein said third audio data stream comprises said first speech data, and wherein said third audio data stream does not comprise said first emotional attributes;
identifying, by said computing system in response to said monitoring said second audio data stream, second emotional attributes comprised by said second audio data stream;
identifying, by said computing system, a first emotional attribute of said second emotional attributes;
associating, by said computing system, a first audible portion of said second audio data stream with said first emotional attribute;
generating, by said computing system, a first audible label for said first audible portion of said second audio data stream, wherein said first audible label indicates said first emotional attribute;
applying, by said computing system, said first audible label to said first audible portion of said second audio data stream;
generating, by said computing system in response to said applying said first audible label, a fourth audio data stream associated with said second audio data stream, wherein said fourth audio data stream comprises said second emotional attributes, said second audio data stream, and said first audible portion of said second audio data stream comprising said first audible label;
combining, by said computing system, said fourth audio data stream with said third audio data stream;
generating, by said computing system in response to said combining, a fifth audio data stream, wherein said fifth audio data stream comprises said fourth audio data stream and said third audio data stream; and
storing, by said computing system, said fifth audio data stream.
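The steps of claim 1 can be sketched as a small pipeline. This is an illustrative sketch only, not the patented implementation: the `AudioStream` type and the `identify_emotions`, `strip_emotions`, and `label_emotions` helpers are assumed names, and real emotion detection would analyze the audio samples rather than read pre-tagged attributes.

```python
from dataclasses import dataclass, field

@dataclass
class AudioStream:
    speaker: str
    samples: list                                   # placeholder for PCM data
    emotions: list = field(default_factory=list)    # e.g. [("anger", start, end)]
    labels: list = field(default_factory=list)      # audible labels applied to portions

def identify_emotions(stream):
    # Stand-in for the "identifying ... emotional attributes" steps; a real
    # system would classify prosody/pitch over the samples.
    return list(stream.emotions)

def strip_emotions(stream):
    # "Third audio data stream": same speech data, no emotional attributes.
    return AudioStream(speaker=stream.speaker, samples=list(stream.samples))

def label_emotions(stream, emotions):
    # "Fourth audio data stream": keeps the emotional attributes and attaches
    # an audible label to each affected portion of the stream.
    labeled = AudioStream(speaker=stream.speaker,
                          samples=list(stream.samples),
                          emotions=list(emotions))
    for attr, start, end in emotions:
        labeled.labels.append((f"label:{attr}", start, end))
    return labeled

def simulate(first, second):
    third = strip_emotions(first)                            # neutralized stream 1
    fourth = label_emotions(second, identify_emotions(second))  # labeled stream 2
    # "Fifth audio data stream" comprising the fourth and third streams.
    return (fourth, third)
```

The combined pair stands in for the stored fifth stream; an actual system would mix or multiplex the audio rather than return a tuple.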
Abstract
A multiple audio/video data stream simulation method and system. A computing system receives first audio and/or video data streams. The first audio and/or video data streams include data associated with a first person and a second person. The computing system monitors the first audio and/or video data streams. The computing system identifies emotional attributes comprised by the first audio and/or video data streams. The computing system generates second audio and/or video data streams associated with the first audio and/or video data streams. The second audio and/or video data streams include the first audio and/or video data streams data without the emotional attributes. The computing system stores the second audio and/or video data streams.
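The abstract's "identifies emotional attributes" step could be approximated by any audio-analysis heuristic. The toy function below, which flags unusually loud windows of a sample list, is purely an assumption for illustration and not the method the patent describes:

```python
import statistics

def identify_emotional_attributes(samples, window=4, threshold=1.5):
    # Toy heuristic: flag windows whose mean absolute amplitude exceeds
    # `threshold` times the stream-wide average, treating them as portions
    # carrying an "elevated" emotional attribute.
    avg = statistics.mean(abs(s) for s in samples) or 1e-9
    flagged = []
    for i in range(0, len(samples), window):
        w = samples[i:i + window]
        if statistics.mean(abs(s) for s in w) > threshold * avg:
            flagged.append((i, min(i + window, len(samples))))
    return flagged
```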
22 Claims
1. A method, comprising:
receiving, by a computing system, a first audio data stream, wherein said first audio data stream comprises first speech data associated with a first person;
receiving, by said computing system, a second audio data stream, wherein said second audio data stream comprises second speech data associated with a second person;
monitoring, by said computing system, said first audio data stream and said second audio data stream;
identifying, by said computing system in response to said monitoring said first audio data stream, first emotional attributes comprised by said first audio data stream;
generating, by said computing system in response to said identifying said first emotional attributes, a third audio data stream associated with said first audio data stream, wherein said third audio data stream comprises said first speech data, and wherein said third audio data stream does not comprise said first emotional attributes;
identifying, by said computing system in response to said monitoring said second audio data stream, second emotional attributes comprised by said second audio data stream;
identifying, by said computing system, a first emotional attribute of said second emotional attributes;
associating, by said computing system, a first audible portion of said second audio data stream with said first emotional attribute;
generating, by said computing system, a first audible label for said first audible portion of said second audio data stream, wherein said first audible label indicates said first emotional attribute;
applying, by said computing system, said first audible label to said first audible portion of said second audio data stream;
generating, by said computing system in response to said applying said first audible label, a fourth audio data stream associated with said second audio data stream, wherein said fourth audio data stream comprises said second emotional attributes, said second audio data stream, and said first audible portion of said second audio data stream comprising said first audible label;
combining, by said computing system, said fourth audio data stream with said third audio data stream;
generating, by said computing system in response to said combining, a fifth audio data stream, wherein said fifth audio data stream comprises said fourth audio data stream and said third audio data stream; and
storing, by said computing system, said fifth audio data stream.
Dependent claims: 2, 3, 4, 5, 6, 7
8. A method, comprising:
receiving, by a computing system, a first video data stream, wherein said first video data stream comprises first video data associated with a first person;
receiving, by said computing system, a second video data stream, wherein said second video data stream comprises second video data associated with a second person;
monitoring, by said computing system, said first video data stream and said second video data stream;
identifying, by said computing system in response to said monitoring said first video data stream, first emotional attributes comprised by said first video data;
generating, by said computing system in response to said identifying said first emotional attributes, a third video data stream associated with said first video data stream, wherein said third video data stream comprises third video data associated with said first person, and wherein said third video data does not comprise said first emotional attributes;
identifying, by said computing system in response to said monitoring said second video data stream, second emotional attributes comprised by said second video data;
identifying, by said computing system, a first emotional attribute of said second emotional attributes;
associating, by said computing system, a first visual object of said second video data stream with said first emotional attribute;
generating, by said computing system, a first viewable label for said first visual object, wherein said first viewable label indicates said first emotional attribute;
applying, by said computing system, said first viewable label to said first visual object;
generating, by said computing system in response to said applying said first viewable label, a fourth video data stream associated with said second video data stream, wherein said fourth video data stream comprises second emotional attributes, said second video data, and said first visual object comprising said first viewable label;
first combining, by said computing system, said fourth video data stream with said third video data stream;
generating, by said computing system in response to said first combining, a fifth video data stream, wherein said fifth video data stream comprises said fourth video data stream and said third video data stream; and
storing, by said computing system, said fifth video data stream.
Dependent claims: 9, 10, 11, 12, 13, 14, 15, 16
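Claim 8's distinguishing step is applying a viewable label to the visual object associated with an emotional attribute. A minimal sketch, assuming frames carry per-object metadata dictionaries (the `id` and `label` keys are illustrative, not from the patent):

```python
def apply_viewable_label(frame_objects, target_id, emotion):
    # Attach a viewable label indicating the emotional attribute to the one
    # visual object it was associated with; other objects pass through
    # unchanged. A real renderer would draw the label text over the object.
    out = []
    for obj in frame_objects:
        obj = dict(obj)                  # copy so the input frame is untouched
        if obj["id"] == target_id:
            obj["label"] = emotion
        out.append(obj)
    return out
```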
17. A method, comprising:
receiving, by a computing system, a first audio/video data stream;
extracting, by said computing system from said first audio/video data stream, a first audio/video data sub-stream and a second audio/video data sub-stream;
extracting, by said computing system from said first audio/video data sub-stream, a first video data stream and a first audio data stream, wherein said first video data stream comprises first video data associated with a first person, and wherein said first audio data stream comprises first speech data associated with said first person;
extracting, by said computing system from said second audio/video data sub-stream, a second video data stream and a second audio data stream, wherein said second video data stream comprises second video data associated with a second person, and wherein said second audio data stream comprises second speech data associated with said second person;
monitoring, by said computing system, said first video data stream and said second video data stream;
identifying, by said computing system in response to said monitoring said first video data stream, first emotional attributes comprised by said first video data;
generating, by said computing system in response to said identifying said first emotional attributes, a third video data stream associated with said first video data stream, wherein said third video data stream comprises third video data associated with said first person, and wherein said third video data does not comprise said first emotional attributes;
identifying, by said computing system in response to said monitoring said second video data stream, second emotional attributes comprised by said second video data;
identifying, by said computing system, a first emotional attribute of said second emotional attributes;
associating, by said computing system, a first visual object of said second video data stream with said first emotional attribute;
generating, by said computing system, a first viewable label for said first visual object, wherein said first viewable label indicates said first emotional attribute;
applying, by said computing system, said first viewable label to said first visual object;
generating, by said computing system in response to said applying said first viewable label, a fourth video data stream associated with said second video data stream, wherein said fourth video data stream comprises second emotional attributes, said second video data, and said first visual object comprising said first viewable label;
first combining, by said computing system, said fourth video data stream with said third video data stream;
generating, by said computing system in response to said first combining, a fifth video data stream, wherein said fifth video data stream comprises said fourth video data stream and said third video data stream; and
storing, by said computing system, said fifth video data stream.
Dependent claims: 18, 19, 20, 21, 22
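Claim 17 adds an extraction stage: the combined audio/video stream is split into per-person sub-streams, and each sub-stream into its video and audio tracks. A sketch under the assumption that the stream is represented as nested dictionaries (the `substreams`, `person`, `video`, and `audio` keys are illustrative, not from the patent):

```python
def extract_substreams(av_stream):
    # Split a combined audio/video stream into per-person sub-streams, then
    # separate each sub-stream into its video and audio tracks. A real
    # system would demultiplex container tracks rather than read dict keys.
    for sub in av_stream["substreams"]:
        yield sub["person"], sub["video"], sub["audio"]
```

Each yielded triple can then be fed into the monitoring and labeling steps of the earlier claims.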
Specification