
Multiple audio/video data stream simulation method and system

  • US 8,259,992 B2
  • Filed: 06/13/2008
  • Issued: 09/04/2012
  • Est. Priority Date: 06/13/2008
  • Status: Expired (nonpayment of maintenance fees)
First Claim

1. A method, comprising:

    receiving, by a computing system, a first video data stream, wherein said first video data stream comprises first video data associated with a first person;

    receiving, by said computing system, a second video data stream, wherein said second video data stream comprises second video data associated with a second person;

    monitoring, by said computing system, said first video data stream and said second video data stream;

    identifying, by said computing system in response to said monitoring said first video data stream, first emotional attributes comprised by said first video data;

    generating, by said computing system in response to said identifying said first emotional attributes, a third video data stream associated with said first video data stream, wherein said third video data stream comprises third video data associated with said first person, and wherein said third video data does not comprise said first emotional attributes;

    identifying, by said computing system in response to said monitoring said second video data stream, second emotional attributes comprised by said second video data;

    identifying, by said computing system, a first emotional attribute of said second emotional attributes;

    associating, by said computing system, a first visual object of said second video data stream with said first emotional attribute of said second emotional attributes; and

    generating, by said computing system, a first viewable label for said first visual object, wherein said first viewable label indicates said first emotional attribute of said second emotional attributes;

    applying, by said computing system, said first viewable label to said first visual object;

    generating, by said computing system in response to said applying said first viewable label, a fourth video data stream associated with said second video data stream, wherein said fourth video data stream comprises said second emotional attributes, said second video data, and said first visual object comprising said first viewable label;

    first combining, by said computing system, said fourth video data stream with said third video data stream;

    generating, by said computing system in response to said first combining, a fifth video data stream, wherein said fifth video data stream comprises said fourth video data stream and said third video data stream;

    storing, by said computing system, said fifth video data stream;

    receiving, by said computing system, a first audio data stream, wherein said first audio data stream comprises first speech data associated with said first person;

    receiving, by said computing system, a second audio data stream, wherein said second audio data stream comprises second speech data associated with said second person;

    monitoring, by said computing system, said first audio data stream and said second audio data stream;

    identifying, by said computing system in response to said monitoring said first audio data stream, third emotional attributes comprised by said first audio data stream;

    generating, by said computing system in response to said identifying said third emotional attributes, a third audio data stream associated with said first audio data stream, wherein said third audio data stream comprises said first speech data, and wherein said third audio data stream does not comprise said third emotional attributes;

    identifying, by said computing system in response to said monitoring said second audio data stream, fourth emotional attributes comprised by said second audio data stream;

    identifying, by said computing system, a second emotional attribute of said third emotional attributes;

    associating, by said computing system, a first audible portion of said second audio data stream with said second emotional attribute of said third emotional attributes;

    generating, by said computing system, a first audible label for said first audible portion of said second audio data stream, wherein said first audible label indicates said second emotional attribute of said third emotional attributes;

    applying, by said computing system, said first audible label to said first audible portion of said second audio data stream;

    generating, by said computing system in response to said applying said first audible label, a fourth audio data stream associated with said second audio data stream, wherein said fourth audio data stream comprises said fourth emotional attributes, said second audio data stream, and said first audible portion of said second audio data stream comprising said first audible label;

    second combining, by said computing system, said fourth audio data stream with said third audio data stream;

    generating, by said computing system in response to said second combining, a fifth audio data stream, wherein said fifth audio data stream comprises said fourth audio data stream and said third audio data stream; and

    storing, by said computing system, said fifth audio data stream.
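
Read as a pipeline, claim 1 strips the first participant's emotional attributes from their video and audio streams, labels the second participant's emotional attributes in theirs, then combines and stores the resulting streams for each modality. The Python sketch below is a minimal, hypothetical illustration of that flow under those assumptions; the Stream type, the string-based labels, and the identify_emotions/strip_emotions/label_emotions/combine/simulate helpers are names introduced for illustration and do not come from the patent's specification.

```python
# Hypothetical sketch of the claimed pipeline; types and names are illustrative only.
from dataclasses import dataclass, field, replace
from typing import List


@dataclass
class Stream:
    """Toy stand-in for a video or audio data stream."""
    person: str                                          # whose data the stream carries
    data: List[str]                                      # frames or speech segments
    emotions: List[str] = field(default_factory=list)    # detected emotional attributes
    labels: List[str] = field(default_factory=list)      # viewable/audible labels applied


def identify_emotions(stream: Stream) -> List[str]:
    # Placeholder for the "monitoring"/"identifying" steps; a real system would
    # run facial-expression or prosody analysis here.
    return list(stream.emotions)


def strip_emotions(stream: Stream) -> Stream:
    # Third video/audio stream: same underlying data, emotional attributes removed.
    return replace(stream, emotions=[], labels=[])


def label_emotions(stream: Stream, emotions: List[str]) -> Stream:
    # Fourth video/audio stream: original data plus labels indicating the
    # second person's emotional attributes.
    new_labels = [f"label:{e}" for e in emotions]        # viewable or audible label
    return replace(stream, labels=stream.labels + new_labels)


def combine(a: Stream, b: Stream) -> List[Stream]:
    # Fifth video/audio stream: the combination of the two processed streams.
    return [a, b]


def simulate(first: Stream, second: Stream) -> List[Stream]:
    """Run the claimed steps for one modality (video or audio)."""
    neutral_first = strip_emotions(first)                               # third stream
    labeled_second = label_emotions(second, identify_emotions(second))  # fourth stream
    return combine(labeled_second, neutral_first)                       # fifth stream


if __name__ == "__main__":
    video_1 = Stream("person A", ["frame1", "frame2"], emotions=["angry"])
    video_2 = Stream("person B", ["frame1", "frame2"], emotions=["happy"])
    audio_1 = Stream("person A", ["utterance1"], emotions=["tense"])
    audio_2 = Stream("person B", ["utterance1"], emotions=["calm"])

    stored_video = simulate(video_1, video_2)   # fifth video data stream
    stored_audio = simulate(audio_1, audio_2)   # fifth audio data stream
    print(stored_video)
    print(stored_audio)
```

The same simulate() routine is run once per modality, mirroring how the claim repeats the strip/label/combine/store steps for the video and audio streams in parallel.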
