Automatic generation of animation of synthetic characters
Abstract
The present invention provides a technique for acquiring motion samples, labeling motion samples with labels based on a plurality of parameters, using the motion samples and the labels to learn a function that maps labels to motions generally, and using the function to synthesize arbitrary motions. The synthesized motions may be portrayed through computer graphic images to provide realistic animation. The present invention allows the modeling of labeled motion samples in a manner that can accommodate the synthesis of motion of arbitrary location, speed, and style. The modeling can provide subtle details of the motion through the use of probabilistic sub-modeling incorporated into the modeling process. Motion samples may be labeled according to any relevant parameters. Labels may be used to differentiate between different styles to yield different models, or different styles of a motion may be consolidated into a single baseline model with the labels used to embellish the baseline model. The invention allows automation of labeling to increase the efficiency of processing a large variety of motion samples. The invention also allows automation of the animation of synthetic characters by generating the animation based on a general description of the motion desired along with a specification of any embellishments desired.
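As a purely illustrative reading of the abstract's pipeline — acquire motion samples, label them, learn a function mapping labels to motion, then synthesize new motion from a label — the following Python sketch fits a per-time-step linear map from a one-dimensional "speed" label to a toy trajectory. The learner, the data, and all function names are hypothetical; the patent does not prescribe a particular model.

```python
# Illustrative sketch only: the patent does not specify a learning algorithm.
# This toy example learns a linear map from a 1-D style/speed label to a short
# motion trajectory (least squares on labeled samples), then synthesizes a
# motion for an unseen label. All names and data are hypothetical.

def fit_label_to_motion(labels, motions):
    """Fit, per time step t, a line motion[t] = a[t] * label + b[t]."""
    n = len(labels)
    num_steps = len(motions[0])
    mean_l = sum(labels) / n
    params = []
    for t in range(num_steps):
        ys = [m[t] for m in motions]
        mean_y = sum(ys) / n
        cov = sum((l - mean_l) * (y - mean_y) for l, y in zip(labels, ys))
        var = sum((l - mean_l) ** 2 for l in labels)
        a = cov / var
        b = mean_y - a * mean_l
        params.append((a, b))
    return params

def synthesize(params, label):
    """Apply the learned per-time-step map to a new label value."""
    return [a * label + b for a, b in params]

# Motion samples: one joint position over 4 time steps, labeled by "speed".
labels = [1.0, 2.0, 3.0]
motions = [
    [0.0, 1.0, 2.0, 3.0],   # speed 1
    [0.0, 2.0, 4.0, 6.0],   # speed 2
    [0.0, 3.0, 6.0, 9.0],   # speed 3
]
params = fit_label_to_motion(labels, motions)
print(synthesize(params, 2.5))  # [0.0, 2.5, 5.0, 7.5]
```

Because the learned function interpolates between labeled samples, a label value never seen in the data (2.5 above) still yields a plausible motion — the property the abstract calls synthesis of "arbitrary" motions.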
162 Citations

Claims (83)
1. A method for generating animation comprising the steps of:

acquiring a data set of data samples representative of dynamic positional information regarding a plurality of points;
labeling the data set based on a plurality of parameters;
creating a mathematical model of a relation between the labels and the data set of data samples; and
generating animation by applying the mathematical model to a motion plan. (Dependent claims: 2-41.)

2. The method of claim 1 wherein the step of labeling the data set based on a plurality of parameters further comprises the steps of:

applying a goal label to the data set based on a goal parameter descriptive of a goal to which the data set is directed;
applying a style label to the data set based on a style parameter descriptive of a style expressed in the data set; and
applying a state label to a data sample to indicate a state of the dynamic positional information represented by the data sample.
3. The method of claim 1 wherein the step of creating a model based on the data set of data samples further comprises the steps of:

generating local sub-models by mathematically analyzing the labeled data set; and
combining the local sub-models to yield the model.

4. The method of claim 3 wherein the step of creating a model based on the data set of data samples further comprises the steps of:

characterizing variations in dynamic position information according to a probabilistic sub-model; and
incorporating the probabilistic sub-model into the model.
5. The method of claim 1 wherein the step of generating animation by applying the model to a motion plan further comprises the steps of:

designating joint coordinates using learned functions; and
using the joint coordinates to render a synthetic character.

6. The method of claim 5 further comprising the step of:
combining joint coordinates of different desired motions.

7. The method of claim 1 wherein the step of generating animation by applying the model to a motion plan further comprises the steps of:

defining virtual markers correlated to the plurality of points;
designating a desired motion of the virtual markers;
calculating joint coordinates based on the desired motion of the virtual markers; and
using the joint coordinates to render a synthetic character.

8. The method of claim 7 further comprising the step of:
combining joint coordinates of different desired motions.
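Claims 7 and 49 describe driving a character (or apparatus) by specifying desired motion for virtual markers and deriving joint coordinates from that motion. Below is a minimal sketch of the marker-to-joint step, assuming a single unit-length limb in 2-D; the geometry and function names are hypothetical illustrations, not the claimed method.

```python
import math

# Hypothetical sketch: a virtual marker tracks a point (say, a hand), a desired
# marker path is specified, and a joint coordinate (here one shoulder angle for
# a unit-length arm anchored at the origin) is computed per marker position.

def joint_angle_for_marker(x, y):
    """Joint coordinate (radians) that points a unit arm at marker (x, y)."""
    return math.atan2(y, x)

def joints_for_marker_path(path):
    """Convert a desired virtual-marker path into a joint-coordinate sequence."""
    return [joint_angle_for_marker(x, y) for x, y in path]

desired_path = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]  # marker sweeps overhead
angles = joints_for_marker_path(desired_path)
print([round(a, 4) for a in angles])  # [0.0, 1.5708, 3.1416]
```

A renderer would then pose the character from these joint coordinates each frame; claims 6 and 8 additionally allow blending joint coordinates from different desired motions.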
9. The method of claim 1 further comprising the steps of:

acquiring a second data set of second data samples representative of second dynamic positional information regarding the plurality of points;
labeling the second data set based on the plurality of parameters; and
creating a second model based on the second data set of second data samples;
wherein the step of generating animation by applying the model to the motion plan also applies the second model to the motion plan.
10. The method of claim 1 wherein the step of generating animation by applying the model to the motion plan is performed according to a script.

11. The method of claim 1 wherein the step of generating animation by applying the model to the motion plan is performed according to a random process.

12. The method of claim 1 wherein the step of generating animation by applying the model to the motion plan is performed according to a behavioral model.

13. The method of claim 12 wherein the step of generating animation by applying the model to the motion plan further comprises the step of:
animating agents for interacting with users over a network.

14. The method of claim 12 wherein the step of generating animation by applying the model to the motion plan further comprises the step of:
animating agents in computer games for interacting with users.

15. The method of claim 1 wherein the step of generating animation by applying the model to the motion plan is performed in response to an input received from a user input device.

16. The method of claim 15 wherein the step of generating animation by applying the model to the motion plan further comprises the step of:
animating a user's character in an interactive computer game in response to an input received from a user input device.

17. The method of claim 15 wherein the step of generating animation by applying the model to the motion plan further comprises the step of:
generating animation of a synthetic character awakened in response to the input received from the user input device.

18. The method of claim 15 wherein the step of generating animation by applying the model to the motion plan further comprises the step of:
adapting the motion plan based on historical user interaction.

19. The method of claim 18 wherein the step of adapting the motion plan based on historical user interaction further comprises the step of:
incorporating a random variation component into the motion plan.

20. The method of claim 1 wherein the step of creating a model based on the data set of data samples further comprises the step of:
creating the model based on the plurality of parameters.
21. The method of claim 1 wherein the step of generating animation by applying the model to the motion plan further comprises the steps of:

dividing a complex motion plan into a plurality of basic motion plans;
applying a corresponding model to each of the basic motion plans to generate a plurality of animation components; and
combining the animation components to yield animation according to the complex motion plan.
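Claim 21's divide-and-combine strategy can be sketched as follows. The "+"-delimited plan string, the per-plan model functions, and concatenation as the combining rule are all hypothetical simplifications of the claimed steps.

```python
# Hypothetical sketch of claim 21: divide a complex motion plan into basic
# plans, generate an animation component per plan with its own model, then
# combine the components. Here a "model" is just a function from a basic plan
# to a list of frames, and combining is concatenation in plan order.

def animate_complex_plan(complex_plan, models):
    basic_plans = complex_plan.split("+")  # e.g. "walk+wave" -> ["walk", "wave"]
    components = [models[plan](plan) for plan in basic_plans]
    return [frame for component in components for frame in component]

models = {
    "walk": lambda p: [f"{p}-frame{i}" for i in range(2)],
    "wave": lambda p: [f"{p}-frame{i}" for i in range(2)],
}
print(animate_complex_plan("walk+wave", models))
# ['walk-frame0', 'walk-frame1', 'wave-frame0', 'wave-frame1']
```

A real system would blend or cross-fade adjacent components rather than concatenate them, but the claim's structure (split, apply per-plan models, combine) is the same.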
22. The method of claim 1 wherein the step of generating animation by applying the model to the motion plan further comprises the step of:
generating animation based on an input received from a sensor.

23. The method of claim 22 wherein the step of generating animation based on the input received from the sensor further comprises the step of:
animating a character to imitate the motion of a user observed using the sensor.

24. The method of claim 22 wherein the step of generating animation based on an input received from a sensor comprises the steps of:

receiving the input from the sensor;
determining an activity of a subject based on the input from the sensor; and
generating animation based on the activity.

25. The method of claim 24 wherein the step of generating animation based on the activity occurs when a computer coupled to the sensor is in a quiescent mode.

26. The method of claim 24 wherein the step of generating animation based on the activity occurs when a computer coupled to the sensor is in an active mode.
27. The method of claim 1 wherein the step of generating animation by applying the model to a motion plan comprises the step of:
extracting animation information from audio information.

28. The method of claim 27 wherein the step of extracting animation information from audio information comprises the step of:
obtaining audio information from a microphone.

29. The method of claim 27 wherein the step of extracting animation information from audio information comprises the step of:
obtaining audio information from a communication circuit.

30. The method of claim 27 wherein the step of extracting animation information from audio information comprises the step of:
obtaining audio information from an audio recording.

31. The method of claim 27 wherein the step of extracting animation information from audio information comprises the step of:
analyzing music to obtain animation information.

32. The method of claim 31 wherein the step of analyzing music to obtain animation information comprises the step of:
extracting rhythm information from the music.

33. The method of claim 27 wherein the step of generating animation by applying the model to a motion plan further comprises the step of:
applying a specified command according to the animation information.

34. The method of claim 27 wherein the step of extracting animation information from audio information comprises the step of:
analyzing speech to obtain animation information.

35. The method of claim 1 wherein the step of generating animation by applying the model to a motion plan comprises the step of:
extracting animation information from motion information.

36. The method of claim 35 wherein the step of extracting animation information from motion information comprises the step of:
extracting animation information from an example of human motion.

37. The method of claim 36 wherein the step of extracting animation information from the example of human motion is dependent on a velocity of the example of human motion.

38. The method of claim 35 wherein the step of extracting animation information from motion information comprises the step of:
extracting animation information from an example of non-human motion.

39. The method of claim 38 wherein the step of extracting animation information from the example of non-human motion is dependent on a velocity of the example of non-human motion.

40. The method of claim 1 wherein the step of creating a model based on the data set of data samples further comprises the step of:
learning a style based on a residual motion component with respect to a baseline motion component.

41. The method of claim 1 wherein the step of generating animation by applying the model to a motion plan further comprises the step of:
combining a residual motion animation component with a baseline motion animation component.
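Claims 40 and 41 treat a style as a residual motion component relative to a baseline. A toy numeric sketch follows, with element-wise subtraction and addition standing in for the patent's modeling; the data and function names are hypothetical.

```python
# Hypothetical illustration of claims 40-41 (and 74-75): a style is learned as
# the residual of a stylized sample against a baseline sample, then reapplied
# by combining the residual with a different baseline motion.

def learn_style_residual(stylized, baseline):
    """Claim 40: the style is what remains after removing the baseline."""
    return [s - b for s, b in zip(stylized, baseline)]

def apply_style(baseline, residual):
    """Claim 41: combine the residual component with a baseline component."""
    return [b + r for b, r in zip(baseline, residual)]

neutral_walk = [0.0, 1.0, 2.0, 3.0]     # baseline joint trajectory
happy_walk = [0.0, 1.5, 2.5, 3.25]      # same walk with a "happy" style
residual = learn_style_residual(happy_walk, neutral_walk)

new_baseline = [1.0, 2.0, 3.0, 4.0]     # a different baseline motion
print(apply_style(new_baseline, residual))  # [1.0, 2.5, 3.5, 4.25]
```

This is the consolidation the abstract describes: one baseline model per motion, with labels selecting style residuals that embellish it.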
42. A method for generating animation of a synthetic character comprising the steps of:

acquiring a data set of data samples representative of dynamic positional information regarding a plurality of points, the plurality of points denoting a structural relationship of a subject;
labeling the data set based on a plurality of parameters;
creating a model based on the data set of data samples; and
applying the model to generate animation of a synthetic character, the synthetic character having reference points corresponding to a second structural relationship of the synthetic character, the reference points differing in configuration from the plurality of points. (Dependent claims: 43-44.)

43. The method of claim 42 wherein the step of applying the model to generate animation of a synthetic character further comprises the step of:
determining the reference points by adjusting a scale of the plurality of points.

44. The method of claim 42 wherein the step of applying the model to generate animation of a synthetic character further comprises the step of:
determining the reference points by adjusting an angular relationship of the plurality of points.
45. A method for controlling motion of apparatus comprising the steps of:

acquiring a data set of data samples representative of dynamic positional information regarding a plurality of points;
labeling the data set based on a plurality of parameters;
creating a mathematical model of a relation between the labels and the data set of data samples; and
controlling the motion of the apparatus by applying the mathematical model to a motion plan. (Dependent claims: 46-75.)

46. The method of claim 45 wherein the step of labeling the data set based on a plurality of parameters further comprises the steps of:

applying a goal label to the data set based on a goal parameter descriptive of a goal to which the data set is directed;
applying a style label to the data set based on a style parameter descriptive of a style expressed in the data set; and
applying a state label to a data sample to indicate a state of the dynamic positional information represented by the data sample.

47. The method of claim 45 wherein the step of creating a model based on the data set of data samples further comprises the steps of:

generating local sub-models by mathematically analyzing the labeled data set; and
combining the local sub-models to yield the model.

48. The method of claim 47 wherein the step of creating a model based on the data set of data samples further comprises the steps of:

characterizing variations in dynamic position information according to a probabilistic sub-model; and
incorporating the probabilistic sub-model into the model.
49. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan further comprises the steps of:

defining virtual markers correlated to the plurality of points;
designating a desired motion of the virtual markers;
calculating joint coordinates based on the desired motion of the virtual markers; and
using the joint coordinates to control the motion of the apparatus.

50. The method of claim 45 further comprising the steps of:

acquiring a second data set of second data samples representative of second dynamic positional information regarding the plurality of points;
labeling the second data set based on the plurality of parameters; and
creating a second model based on the second data set of second data samples;
wherein the step of controlling the motion of the apparatus by applying the model to a motion plan also applies the second model to the motion plan.

51. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan is performed according to a script.

52. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan is performed according to a random process.

53. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan is performed according to a behavioral model.

54. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan is performed in response to an input received from a user input device.

55. The method of claim 54 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan further comprises the step of:
adapting the motion plan based on historical user interaction.

56. The method of claim 55 wherein the step of adapting the motion plan based on historical user interaction further comprises the step of:
incorporating a random variation component into the motion plan.

57. The method of claim 45 wherein the step of creating a model based on the data set of data samples further comprises the step of:
creating the model based on the plurality of parameters.
58. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan further comprises the steps of:

dividing a complex motion plan into a plurality of basic motion plans;
applying a corresponding model to each of the basic motion plans to generate a plurality of animation components; and
combining the animation components to yield control of the motion according to the complex motion plan.

59. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan further comprises the step of:
controlling the motion of the apparatus based on an input received from a sensor.

60. The method of claim 59 wherein the step of controlling the motion of the apparatus based on an input received from a sensor comprises the steps of:

receiving the input from the sensor;
determining an activity of a subject based on the input from the sensor; and
controlling the motion of the apparatus based on the activity.
61. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan comprises the step of:
extracting motion control information from audio information.

62. The method of claim 61 wherein the step of extracting motion control information from audio information comprises the step of:
obtaining audio information from a microphone.

63. The method of claim 61 wherein the step of extracting motion control information from audio information comprises the step of:
obtaining audio information from a communication circuit.

64. The method of claim 61 wherein the step of extracting motion control information from audio information comprises the step of:
obtaining audio information from an audio recording.

65. The method of claim 61 wherein the step of extracting motion control information from audio information comprises the step of:
analyzing music to obtain motion control information.

66. The method of claim 65 wherein the step of analyzing music to obtain motion control information comprises the step of:
extracting rhythm information from the music.

67. The method of claim 61 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan further comprises the step of:
applying a specified command according to the motion control information.

68. The method of claim 61 wherein the step of extracting motion control information from audio information comprises the step of:
analyzing speech to obtain motion control information.

69. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan comprises the step of:
extracting motion control information from motion information.

70. The method of claim 69 wherein the step of extracting motion control information from motion information comprises the step of:
extracting motion control information from an example of human motion.

71. The method of claim 70 wherein the step of extracting motion control information from the example of human motion is dependent on a velocity of the example of human motion.

72. The method of claim 69 wherein the step of extracting motion control information from motion information comprises the step of:
extracting motion control information from an example of non-human motion.

73. The method of claim 72 wherein the step of extracting motion control information from the example of non-human motion is dependent on a velocity of the example of non-human motion.

74. The method of claim 45 wherein the step of creating a model based on the data set of data samples further comprises the step of:
learning a style based on a residual motion component with respect to a baseline motion component.

75. The method of claim 45 wherein the step of controlling the motion of the apparatus by applying the model to a motion plan further comprises the step of:
combining a residual motion control component with a baseline motion control component.
76. A method for providing computer vision comprising the steps of:

acquiring a data set of data samples representative of dynamic positional information regarding a plurality of points of a subject;
labeling the data set based on a plurality of parameters; and
creating a mathematical model of a relation between the labels and the data set of data samples. (Dependent claims: 77-81.)

77. The method of claim 76 further comprising the step of:
comparing the model to a plurality of previously obtained models for recognizing a particular action.

78. The method of claim 76 further comprising the step of:
identifying a previously obtained model similar to the model for identifying the subject.

79. The method of claim 76 further comprising the step of:
estimating a goal of the subject based on the data set and the model.

80. The method of claim 76 further comprising the step of:
assessing a kinesiological condition based on the model.

81. The method of claim 76 further comprising the step of:
assessing a quality of movement of the subject based on the model.
82. A method for generating animation comprising:

acquiring a plurality of motion samples;
assigning a label to each acquired motion sample;
generating a mathematical function mapping the labels to the acquired motion samples; and
generating a desired motion based on the learned function. (Dependent claim: 83.)

83. The method of claim 82 further comprising the steps of:
assigning to the label a value representing the desired motion; and
computing the desired motion using the label as input to the learned function.
Specification