Script control for lip animation in a scene generated by a computer rendering engine
Abstract
A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions, and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text that is used to synthesize the speech.
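The default-substitution behavior described in the abstract can be sketched as follows. The command names, their fields, and the `current_action_area` default value are illustrative assumptions, not the patent's actual command language:

```python
# Minimal sketch of filling in defaults when a command omits a
# specification (hypothetical command names, not the patented syntax).

DEFAULTS = {
    "camera": {"target": "current_action_area", "lens": "normal"},
    "move":   {"speed": "walk", "path": "direct"},
}

def fill_defaults(command: dict) -> dict:
    """Return the command with any omitted fields filled from defaults."""
    filled = dict(DEFAULTS.get(command["kind"], {}))  # start from defaults
    filled.update(command)                            # explicit fields win
    return filled

# A camera command that omits its target defaults to the current action area.
cmd = fill_defaults({"kind": "camera", "lens": "wide"})
```

The same lookup-then-override pattern extends naturally to the hierarchy of detail levels the abstract mentions: coarser commands simply leave more fields to be defaulted.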
20 Claims
1. A method for controlling a rendering engine to produce audio, the method comprising:
receiving commands at a user computing device specifying one or more character actions for a computer-modeled character displayed on a display device of the user computing device,
wherein the commands are from a content provider at a first location and the user computing device is at a second location remote from the first location,
wherein the commands comprise predetermined instructions in a control script for the one or more character actions and an electronic representation of human speech, and
wherein one or more of the predetermined instructions include a specified duration; and

animating lip movement of the computer-modeled character displayed on the display device of the user computing device at the second location,
wherein the animating is performed according to the predetermined instructions in the control script received from the content provider and uses one or more pre-computed lip movements associated with selected phonetic sounds,
wherein the animating comprises synchronizing an audio representation of the human speech with the animated lip movement of the computer-modeled character, and
wherein the synchronizing is performed by maintaining a constant playback rate by indicating a start time and a duration corresponding to a set of one or more of the predetermined instructions in the control script such that each resulting animation that comprises a pre-computed lip movement fits into the corresponding specified duration.

- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
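The duration-fitting step of claim 1, in which a pre-computed lip movement is retimed to occupy exactly the script-specified span while the audio plays at a constant rate, can be sketched as below. The key-pose representation and the uniform retiming are assumptions for illustration, not the patent's disclosed implementation:

```python
def fit_to_duration(frames, start_time, duration):
    """Retime a pre-computed lip movement (a list of key poses) so it
    spans exactly [start_time, start_time + duration].

    Only the animation key times are rescaled; the audio itself is
    untouched, so speech plays back at a constant rate while each
    animation fits its specified duration.
    """
    n = len(frames)
    if n == 1:
        return [(start_time, frames[0])]
    step = duration / (n - 1)  # uniform spacing across the span
    return [(start_time + i * step, pose) for i, pose in enumerate(frames)]

# A 3-pose pre-computed viseme fit into a 0.30 s slot starting at t = 1.2 s.
timed = fit_to_duration(["closed", "round_open", "closed"], 1.2, 0.30)
```

Because each pre-computed movement is stretched or compressed independently to its own `(start_time, duration)` pair, the overall playback rate never changes, which is the synchronization mechanism the claim recites.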
15. A computer-readable storage medium having stored thereon computer-executable instructions that, if executed by a computing device, cause the computing device to perform operations for controlling a rendering engine, the operations comprising:
receiving commands at a computer specifying one or more character actions for a computer-modeled character displayed at the computer,
wherein the commands are from a content provider at a first location and the computer is at a second location remote from the first location,
wherein the commands comprise predetermined instructions in a control script for the one or more character actions and an electronic representation of human speech, and
wherein one or more of the predetermined instructions include a specified duration; and

animating lip movement of the computer-modeled character displayed at the computer at the second location,
wherein the animating is performed according to the predetermined instructions in the control script,
wherein the animating comprises synchronizing an audio representation of the human speech with the animated lip movement of the computer-modeled character, and
wherein the synchronizing is performed by maintaining a constant playback rate by indicating a start time and a duration corresponding to a set of one or more of the predetermined instructions in the control script such that each resulting animation that comprises a pre-computed lip movement fits into the corresponding specified duration.

- View Dependent Claims (16, 17, 18, 19)
20. A method, comprising:
transmitting commands from a content provider computer at a first location to a digital computing device at a second location remote from the first location,
wherein the commands are predetermined instructions in a control script specifying one or more character actions and speech specified by text for a computer-modeled character displayed on the digital computing device,
wherein one or more of the predetermined instructions include a specified duration,
wherein the digital computing device is configured to perform speech synthesis at the second location in response to the speech specified by text in the control script,
wherein the digital computing device is further configured to animate lip movement of the computer-modeled character based, at least in part, on the predetermined instructions in the control script and the speech synthesis, the animating lip movement of the computer-modeled character comprising:
for the predetermined instructions including specified durations, synchronizing an audio representation of human speech with the animated lip movement of the computer-modeled character such that a constant playback rate is maintained by indicating a start time and a duration such that each resulting animation fits into the corresponding specified duration; and
wherein the speech synthesis comprises specifying tone, pitch, accent, and/or emotional intensity for animation of the lip movement of the computer-modeled character to achieve the text-specified speech.
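Claim 20's text-specified speech, where a parameter such as emotional intensity shapes the lip animation, can be sketched as follows. The tiny phoneme table, the openness values, and the intensity scaling are illustrative assumptions; in practice the speech synthesizer would supply the phoneme sequence and timing:

```python
# Hypothetical word-to-phoneme and phoneme-to-mouth-openness tables.
PHONEMES = {"hi": ["HH", "AY"], "there": ["DH", "EH", "R"]}
VISEME = {"HH": 0.2, "AY": 0.9, "DH": 0.3, "EH": 0.6, "R": 0.4}

def lip_keys(text, intensity=1.0):
    """Return (phoneme, mouth_openness) pairs for the given text,
    scaling openness by the script-specified emotional intensity
    (clamped so the mouth never opens past its maximum)."""
    keys = []
    for word in text.lower().split():
        for ph in PHONEMES.get(word, []):
            keys.append((ph, min(1.0, VISEME[ph] * intensity)))
    return keys

# Lower intensity yields a more subdued mouth shape for the same text.
calm = lip_keys("hi there", intensity=0.8)
```

Other prosody parameters named in the claim (tone, pitch, accent) could be threaded through the same lookup in a fuller implementation, modulating timing as well as shape.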
Specification