Authoring and use systems for sound synchronized animation
First Claim
1. Apparatus for generating and displaying user created animated objects having synchronized visual and audio characteristics, said apparatus comprising:
- a program-controlled microprocessor;
- first means coupled to said microprocessor and responsive to user input signals for generating a first set of signals defining visual characteristics of a desired animated object;
- second means coupled to said microprocessor and to said first means and responsive to user input signals for generating a second set of signals defining audio characteristics of said desired animated object; and
- controller means coupled to said first and second means and to said microprocessor for generating a set of instructions collating and synchronizing said visual characteristics with said audio characteristics thereby defining said animated object having synchronized visual and audio characteristics.
Abstract
A general purpose computer, such as a personal computer, is programmed for sound-synchronized random access and display of synthesized actors ("synactors") on a frame-by-frame basis. The interface between a user and the animation system is defined as a stage or acting metaphor. The user interface provides the capability to create files defining individually accessible synactors representing real or imaginary persons, animated characters and objects or scenes which can be programmed to perform speech synchronized action. Synactor speech is provided by well-known speech synthesis techniques or, alternatively, by inputting speech samples and communication characteristics to define a digital model of the speech and related animation for a particular synactor. A synactor is defined as a combination of sixteen predefined images: eight images to be synchronized with speech and eight images to provide additional animated expression. Once created, a synactor may be manipulated similarly to a file or document in any application and is controlled with scripts defined and edited by the user via the user interface.
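The abstract's synactor image model can be sketched as a small data structure. This is a hypothetical illustration, not code from the patent: the class and field names are invented, and only the stated constraint (sixteen predefined images, eight synchronized with speech and eight for additional expression) is taken from the source.

```python
# Illustrative sketch of the synactor image model: a synactor is a
# combination of sixteen predefined images, eight keyed to speech
# sounds and eight providing additional animated expression.
# All names here are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Synactor:
    name: str
    speech_images: list = field(default_factory=list)      # 8 images synced to sound elements
    expression_images: list = field(default_factory=list)  # 8 images for expressions

    def is_complete(self):
        # Enforce the abstract's definition: sixteen images total,
        # split evenly between speech and expression.
        return len(self.speech_images) == 8 and len(self.expression_images) == 8
```

A synactor built with eight images of each kind satisfies the definition; anything else is incomplete.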
13 Claims
1. Apparatus for generating and displaying user created animated objects having synchronized visual and audio characteristics, said apparatus comprising:
- a program-controlled microprocessor;
- first means coupled to said microprocessor and responsive to user input signals for generating a first set of signals defining visual characteristics of a desired animated object;
- second means coupled to said microprocessor and to said first means and responsive to user input signals for generating a second set of signals defining audio characteristics of said desired animated object; and
- controller means coupled to said first and second means and to said microprocessor for generating a set of instructions collating and synchronizing said visual characteristics with said audio characteristics thereby defining said animated object having synchronized visual and audio characteristics.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A method for generating user created animated objects having synchronized visual and audio characteristics, said method comprising the steps of:
- generating a first set of signals defining visual characteristics of a desired animated object in response to user input signals;
- generating a second set of signals defining audio characteristics of said desired animated object in response to user input signals; and
- generating a set of instructions collating and synchronizing said visual characteristics with said audio characteristics thereby defining said desired animated object having synchronized visual and audio characteristics.

View Dependent Claims (10)
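The three method steps of claim 9 can be sketched as a toy function. This is an assumption-laden illustration, not the patented implementation: the function name, the signal encodings, and the pairwise collation strategy are all invented for clarity.

```python
# Illustrative sketch of claim 9's three steps (all names invented):
# generate visual signals, generate audio signals, then emit
# instructions collating and synchronizing the two sets.
def build_animated_object(visual_input, audio_input):
    # Step 1: first set of signals defining visual characteristics.
    visual_signals = [("frame", v) for v in visual_input]
    # Step 2: second set of signals defining audio characteristics.
    audio_signals = [("sound", a) for a in audio_input]
    # Step 3: instructions collating and synchronizing the two sets;
    # here each visual frame is simply paired with its sound element.
    instructions = list(zip(visual_signals, audio_signals))
    return instructions
```

Pairing by position is the simplest possible collation; the patent's controller means would apply real timing, but the three-step shape is the same.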
11. A method of synchronizing sound with visual images of animated objects pronouncing the sound, said method comprising the steps of:
- defining a text string representing a desired sound to be synchronized with visual images of a speaking animated object;
- translating said text string into a phonetic text string representative of said text string; and
- translating said phonetic text string into a recite command, said recite command including phonetic/timing pairs, each of said phonetic/timing pairs comprising a phonetic code corresponding to an associated phonetic code of said phonetic text string and a number defining a predetermined time value, said phonetic code representative of a sound element to be pronounced and an associated image to be displayed while said sound element is being pronounced and said predetermined time value defining the amount of time said associated image is to be displayed.

View Dependent Claims (12, 13)
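The text-to-recite-command pipeline of claim 11 can be sketched end to end. The phoneme table and tick count below are invented placeholders (a real system would use full text-to-phoneme rules and per-phoneme timing); only the pipeline shape — text string, then phonetic text string, then phonetic/timing pairs — comes from the claim.

```python
# Toy phonetic table and timing, invented for illustration only.
PHONEMES = {"hi": ["HH", "AY"], "go": ["G", "OW"]}
DEFAULT_TICKS = 4  # assumed display duration per image, arbitrary units

def to_phonetic(text):
    # Translate the text string into a phonetic text string: a sequence
    # of phonetic codes, one per sound element to be pronounced.
    return [p for word in text.lower().split() for p in PHONEMES.get(word, [])]

def to_recite_command(text):
    # Translate the phonetic text string into a recite command: a list
    # of phonetic/timing pairs, each naming the sound element (and its
    # associated image) plus how long that image is displayed.
    return [(code, DEFAULT_TICKS) for code in to_phonetic(text)]
```

Driving playback then reduces to walking the pair list: show the image for each phonetic code, hold it for the paired time value, and emit the sound element in step.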
Specification