Method and apparatus for predicting events in video conferencing and other applications
Abstract
Methods and apparatus are disclosed for predicting events using acoustic and visual cues. The present invention processes audio and video information to identify one or more (i) acoustic cues, such as intonation patterns, pitch and loudness, (ii) visual cues, such as gaze, facial pose, body postures, hand gestures and facial expressions, or (iii) a combination of the foregoing, that are typically associated with an event, such as behavior exhibited by a video conference participant before he or she speaks. In this manner, the present invention allows the video processing system to predict events, such as the identity of the next speaker. The predictive speaker identifier operates in a learning mode to learn the characteristic profile of each participant in terms of the concept that the participant “will speak” or “will not speak” under the presence or absence of one or more predefined visual or acoustic cues. The predictive speaker identifier operates in a predictive mode to compare the learned characteristics embodied in the characteristic profile to the audio and video information and thereby predict the next speaker.
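The abstract's learning and predictive modes can be illustrated with a minimal sketch. The patent does not disclose an implementation, so the class, cue names, and scoring rule below are assumptions: each participant's profile tracks how often a cue preceded speech, and the predictive mode scores participants by those learned rates.

```python
# Hypothetical sketch of the learning/predictive modes described in the
# abstract. The class name, cue names, and frequency-based scoring are
# illustrative assumptions, not taken from the patent.
from collections import defaultdict

class PredictiveSpeakerIdentifier:
    def __init__(self):
        # profile[participant][cue] -> [times cue preceded speech,
        #                               times cue was observed at all]
        self.profile = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def learn(self, participant, observed_cues, spoke_next):
        """Learning mode: update the participant's characteristic profile."""
        for cue in observed_cues:
            stats = self.profile[participant][cue]
            stats[1] += 1
            if spoke_next:
                stats[0] += 1

    def predict(self, observations):
        """Predictive mode: score each participant by how strongly their
        currently observed cues have historically preceded speech, and
        return the highest-scoring participant (or None)."""
        best, best_score = None, 0.0
        for participant, observed_cues in observations.items():
            score = 0.0
            for cue in observed_cues:
                hits, total = self.profile[participant][cue]
                if total:
                    score += hits / total
            if score > best_score:
                best, best_score = participant, score
        return best
```

In this sketch a cue that has never preceded speech contributes nothing, so a participant with no predictive history is never selected over one whose cues have fired before.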
19 Claims
1. A method for tracking a speaker in a video processing system, said video processing system processing audio and video information, the method comprising the steps of:
estimating an emotional state of a first speaker currently speaking from acoustic and prosodic features to predict when the first speaker is about to end speaking;
processing both said audio and video information to identify one of a plurality of cues defining behavior characteristics that suggest that a second person is about to speak;
maintaining a profile for at least one person that establishes a threshold for at least one of said plurality of cues; and
obtaining an image of said second person associated with said identified cue.
Dependent claims: 2–13.
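Claim 1's first step predicts when the current speaker is about to stop from acoustic and prosodic features. As a hedged illustration only, one simple heuristic for that step is to test for falling pitch and falling energy over the last few audio frames; the window size and thresholds below are invented for the example, and the claim's emotional-state estimation would in practice involve a richer model.

```python
# Illustrative heuristic for the "about to end speaking" step of claim 1.
# Falling pitch and energy near the end of an utterance are common prosodic
# turn-ending signals; the specific thresholds here are assumptions.
def about_to_end_speaking(pitch_hz, energy_db, window=5,
                          pitch_drop=20.0, energy_drop=6.0):
    """Return True if both pitch and energy fall markedly across the
    final `window` frames of the current speaker's audio features."""
    if len(pitch_hz) < window or len(energy_db) < window:
        return False
    recent_pitch = pitch_hz[-window:]
    recent_energy = energy_db[-window:]
    return (recent_pitch[0] - recent_pitch[-1] >= pitch_drop and
            recent_energy[0] - recent_energy[-1] >= energy_drop)
```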
14. A system for tracking a speaker in a video processing system, said video processing system processing audio and video information, comprising:
a memory for storing computer readable code; and
a processor operatively coupled to said memory, said processor configured to:
process acoustic and prosodic features of said audio information to predict when a first person is about to end speaking;
process both said audio and video information to identify one of a plurality of cues defining behavior characteristics that suggest that a second person is about to speak;
maintain a profile for at least one person that establishes a threshold for at least one of said plurality of cues; and
obtain an image of said second person associated with said identified cue.
Dependent claims: 15, 16.
17. An article of manufacture for tracking a speaker in a video processing system, said video processing system processing audio and video information, comprising:
a computer readable medium having a computer readable code means embodied thereon capable of execution by a processor, said computer readable code means comprising:
a step to process both of said audio and video information to identify one of a plurality of cues defining behavior characteristics that suggest that a first person is about to speak;
a step to process both of said audio and video information to identify another of the plurality of cues defining behavior characteristics that suggest that a second person is about to end speaking;
a step to maintain a profile for at least one person that establishes a threshold for at least one of said plurality of cues; and
a step to obtain an image of said first person based on said identified cues.
Dependent claims: 18, 19.
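The independent claims share a common pipeline: per-person profiles hold cue thresholds, observed cue strengths are compared against them, and the camera obtains an image of whoever fires a cue. A minimal sketch of that thresholding step follows; `next_camera_target` and `capture_image` are hypothetical names, and the patent does not specify the camera control.

```python
# Hedged sketch of the claimed threshold-comparison pipeline. Function
# names, data layout, and the first-match selection rule are assumptions.
def next_camera_target(profiles, cue_strengths):
    """profiles: {person: {cue: threshold}};
    cue_strengths: {person: {cue: observed strength}}.
    Return the first (person, cue) pair whose observed strength meets
    that person's learned threshold, or None if no cue fires."""
    for person, thresholds in profiles.items():
        observed = cue_strengths.get(person, {})
        for cue, threshold in thresholds.items():
            if observed.get(cue, 0.0) >= threshold:
                return person, cue
    return None

def capture_image(camera, person):
    # Placeholder: a real system would pan/zoom `camera` to frame `person`.
    return f"image-of-{person}"
```

Usage: the per-person thresholds let the system demand stronger evidence from participants whose cues fire often without speech following, which is the role the claims assign to the maintained profile.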
Specification