System and method of providing conversational visual prosody for talking heads
Abstract
A system and method of controlling the movement of a virtual agent while the agent is speaking to a human user during a conversation are disclosed. The method comprises receiving speech data to be spoken by the virtual agent, performing a prosodic analysis of the speech data, selecting matching prosody patterns from a speaking database, and controlling the virtual agent movement according to the selected prosody patterns.
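The overall pipeline described in the abstract (analyze prosody, look up matching patterns, drive the agent's movement) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the prosodic features, the "speaking database" contents, and all function names are hypothetical.

```python
import re

def prosodic_analysis(text):
    """Toy prosodic analysis: split text into phrases and tag simple events."""
    phrases = [p.strip() for p in re.split(r"[,.;?!]+", text) if p.strip()]
    # Each phrase contributes a crude pitch-accent proxy and a boundary event.
    return [{"phrase": p, "accent": p.split()[0], "boundary": True} for p in phrases]

# Hypothetical "speaking database" mapping prosodic events to head movements.
SPEAKING_DB = {"accent": "eyebrow_raise", "boundary": "head_nod"}

def control_movement(features):
    """Select a matching movement from the speaking database for each event."""
    schedule = []
    for f in features:
        schedule.append(SPEAKING_DB["accent"])
        if f["boundary"]:
            schedule.append(SPEAKING_DB["boundary"])
    return schedule

schedule = control_movement(prosodic_analysis("Hello there, how can I help you?"))
print(schedule)  # ['eyebrow_raise', 'head_nod', 'eyebrow_raise', 'head_nod']
```

A real system would use acoustic features (pitch contours, stress, phrase boundaries) rather than punctuation, but the control flow is the same: prosodic events in, movement schedule out.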
45 Claims
1. A method of controlling a virtual agent movement when speaking to a user, the method comprising:
receiving speech data to be spoken by the virtual agent to the user;
performing a prosodic analysis of the speech data; and
controlling the virtual agent movement according to the prosodic analysis.
(Dependent claims 2-12.)
13. A method of controlling a virtual agent movement when speaking to a user, the method comprising:
receiving speech data to be spoken by the virtual agent to the user;
selecting segments of matching visual prosody patterns from an audio-visual database of recorded speech, where both audio and video are recorded of a person speaking text; and
controlling the virtual agent movements according to the movements of the person in the selected audio-visual recorded speech segments.
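Claim 13 replaces synthetic movement generation with retrieval: pick the recorded segment whose prosody best matches the target utterance and replay that person's motion. A minimal nearest-neighbour sketch, with invented feature names (`pitch`, `duration`, `motion`) standing in for whatever the audio-visual database actually stores:

```python
def prosody_vector(segment):
    """Toy prosody features for a recorded segment: (mean pitch Hz, duration s)."""
    return (segment["pitch"], segment["duration"])

def select_segment(target, database):
    """Nearest-neighbour match of the target prosody against recorded segments."""
    def dist(seg):
        p, d = prosody_vector(seg)
        tp, td = target
        return (p - tp) ** 2 + (d - td) ** 2
    return min(database, key=dist)

# Hypothetical audio-visual database: each entry pairs prosody features
# with the head motion recorded while the person spoke that segment.
database = [
    {"pitch": 120.0, "duration": 0.8, "motion": "slow_nod"},
    {"pitch": 180.0, "duration": 0.4, "motion": "quick_tilt"},
]

best = select_segment((175.0, 0.5), database)
print(best["motion"])  # quick_tilt -- the agent replays this recorded motion
```

The appeal of this retrieval approach is that the replayed movements are real human movements, so their timing relative to the prosody is natural by construction.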
14. A method of controlling movement of an animated entity while the animated entity is speaking to a user, the method comprising:
performing a prosodic analysis of speech data to be spoken by the animated entity; and
controlling movement of the animated entity according to the prosodic analysis.
(Dependent claims 15-21.)
22. An apparatus for controlling movement of a virtual agent while speaking to a user, the apparatus comprising:
a speaking module that receives speech to be spoken by the virtual agent and performs a prosodic analysis on the speech input;
a selection module that selects virtual agent control data from a speaking database; and
a rendering module for controlling the movement of the virtual agent while it is speaking to the user according to the prosodic analysis.
(Dependent claim 23.)
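The apparatus of claim 22 decomposes into three cooperating modules: speaking (analysis), selection (database lookup), and rendering (movement control). A skeletal sketch of that decomposition; the class and method names are invented for illustration and the "prosodic analysis" is reduced to counting phrase boundaries:

```python
class SpeakingModule:
    """Receives the speech and performs a (toy) prosodic analysis."""
    def analyze(self, speech):
        # One prosodic event per phrase boundary, proxied by punctuation.
        return speech.count(",") + speech.count(".")

class SelectionModule:
    """Selects control data from a speaking database, one item per event."""
    def __init__(self, database):
        self.database = database
    def select(self, n_events):
        return [self.database[i % len(self.database)] for i in range(n_events)]

class RenderingModule:
    """Applies the selected control data to the virtual agent."""
    def render(self, controls):
        return [f"apply:{c}" for c in controls]

db = ["nod", "tilt"]
events = SpeakingModule().analyze("Hello there, nice to meet you.")
controls = SelectionModule(db).select(events)
print(RenderingModule().render(controls))  # ['apply:nod', 'apply:tilt']
```

Keeping the three concerns in separate modules mirrors the claim structure and lets the database or renderer be swapped without touching the analysis.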
24. A method of controlling a virtual agent movement while speaking to a user, the method comprising:
at a client device, performing a prosodic analysis of speech data to be spoken by the virtual agent; and
controlling the virtual agent movement according to the prosodic analysis while the virtual agent speaks to the user.
(Dependent claims 25-33.)
34. A method of controlling a virtual agent movement when speaking to a user, the method comprising, at a client device:
receiving speech data to be spoken by the virtual agent;
receiving the speech data over a network from a server for generating virtual agent movement data based on a prosodic analysis of the speech data; and
receiving the virtual agent movement data from the server for controlling the virtual agent movement while the virtual agent speaks to the user.
(Dependent claims 35-38.)
39. A method of controlling virtual agent movement on a client device while speaking to a user, the method comprising, at a server:
transmitting speech data to be spoken by the virtual agent to the client device over a network;
generating virtual agent movement data based on a prosodic analysis of the speech data; and
transmitting the virtual agent movement data to the client device over the network for controlling the virtual agent movement while speaking to the user.
(Dependent claims 40-43.)
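Claims 34 and 39 split the work across a network: the server performs the prosodic analysis and ships timed movement data, while the client only decodes and replays it. A minimal sketch of that split; the JSON message shape and the 0.3 s-per-word timing are assumptions for illustration, not taken from the patent:

```python
import json

def server_generate(speech_text):
    """Server side: derive timed movement data from a toy prosodic analysis."""
    movements = []
    t = 0.0
    for word in speech_text.split():
        if word.endswith((",", ".", "?", "!")):  # phrase boundary event
            movements.append({"t": round(t, 2), "action": "head_nod"})
        t += 0.3  # assume roughly 0.3 s per word
    return json.dumps({"speech": speech_text, "movements": movements})

def client_render(message):
    """Client side: decode the server message and schedule the movements."""
    data = json.loads(message)
    for m in data["movements"]:
        print(f"{m['t']:.2f}s: {m['action']}")
    return data["movements"]

moves = client_render(server_generate("Hello there, how can I help you?"))
```

Doing the analysis server-side keeps the client thin: it needs no prosody model, only the ability to play movement data in sync with the speech.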
44. A method of controlling movements of a virtual animated entity during a transition from speaking to listening, the method comprising:
as the virtual animated entity is concluding a speaking segment, selecting transition movement data from a transition movement database; and
controlling the movement of the virtual animated entity, according to the transition movement data, from the time the animated entity has approximately finished speaking until the animated entity stops speaking.
45. A method of controlling movements of a virtual animated entity during a transition from talking to listening, the method comprising:
approximately at the end of the virtual animated entity's talking, selecting transition movement data from a transition database; and
controlling the movement of the virtual animated entity to indicate that the virtual animated entity is approximately finished talking and will soon listen for speech data from a user.
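Claims 44 and 45 cover the hand-over from talking to listening: near the end of the utterance, a cue from a transition movement database signals that the agent is about to yield the turn. A minimal sketch under assumed names; the cue list, the time threshold, and the function signature are all hypothetical:

```python
import random

# Hypothetical "transition movement database" of turn-yielding cues.
TRANSITION_DB = ["lean_back", "raise_eyebrows", "tilt_head_toward_user"]

def transition_movement(time_left_s, threshold_s=0.5, rng=random):
    """Near the end of the utterance, select a cue that yields the turn."""
    if time_left_s <= threshold_s:
        return rng.choice(TRANSITION_DB)
    return None  # still mid-utterance: no transition cue yet

print(transition_movement(0.3))  # one of the cues in TRANSITION_DB
print(transition_movement(2.0))  # None: too early to signal a hand-over
```

Such a cue matters in conversation: without a visible transition the user cannot tell whether the agent has paused or finished, so turn-taking stalls.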
Specification