Parameterized animation compression methods and arrangements
Abstract
Methods and arrangements are provided for real-time rendering of scenes having various light sources and objects having differing specular surfaces. An offline encoder is employed to parameterize images by two or more arbitrary variables allowing view, lighting, and object changes. The parameterized images are encoded as a set of per-object parameterized textures based on shading models, camera parameters, and the scene's geometry. Texture maps are inferred from a ray-tracer's segmented imagery to provide the best match when applied to specific graphics hardware. The parameterized textures are encoded as a multidimensional Laplacian pyramid on fixed size blocks of parameter space. This technique captures the coherence in parameterized animations and decodes directly into texture maps that are easy to load into conventional graphics hardware.
85 Claims
1. A method comprising:
generating image data associated with each point in a modeled parameter space of a computer-generated animation; and
selectively inferring texture data for each defined object within the parameter space.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23)
generating scene geometry data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using at least the scene geometry data and the image data.
8. The method as recited in claim 1, further comprising:
generating scene lighting data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using at least the scene lighting data and the image data.
9. The method as recited in claim 1, further comprising:
generating scene viewing data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using at least the scene viewing data and the image data.
10. The method as recited in claim 1, further comprising:
generating scene geometry data, scene lighting data and scene viewing data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using the scene geometry data, the scene lighting data, the scene viewing data, and the image data.
11. The method as recited in claim 1, wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of parameter-dependent texture maps for each of the defined objects.
12. The method as recited in claim 11, wherein the parameter-dependent texture maps include texture information based on at least two parameters.
13. The method as recited in claim 12, wherein the texture information based on the at least two parameters includes radiance information.
14. The method as recited in claim 12, wherein at least one of the two parameters is associated with a parameter selected from a group comprising a time parameter, a light source position parameter, a viewpoint parameter, a surface reflectance parameter, and an object position parameter.
15. The method as recited in claim 12, wherein at least one of the two parameters is associated with a modeled parameter that is configured to provide an arbitrary-dimensional parameterized animation over a sequence of generated images.
16. The method as recited in claim 11, further comprising:
compressing at least a portion of the plurality of parameter-dependent texture maps.
17. The method as recited in claim 16, wherein compressing at least a portion of the plurality of parameter-dependent texture maps further includes:
selectively encoding the portion of the plurality of parameter-dependent texture maps as a multidimensional Laplacian pyramid based on blocks of the parameter space.
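The block-based Laplacian pyramid encoding of claim 17 can be sketched as follows. This is a minimal illustration using simple two-tap average/repeat filters over the parameter axis of one block, not the patent's actual codec; `laplacian_encode` and `laplacian_decode` are hypothetical names.

```python
import numpy as np

def downsample(x):
    # Average adjacent pairs along the first (parameter) axis.
    return 0.5 * (x[0::2] + x[1::2])

def upsample(x, n):
    # Nearest-neighbour upsampling back to length n along the first axis.
    return np.repeat(x, 2, axis=0)[:n]

def laplacian_encode(block):
    """Encode a block (parameter axis first, e.g. (P, H, W) texture stack)
    as a Laplacian pyramid: per-level detail images plus a coarsest level."""
    levels = []
    cur = block
    while cur.shape[0] > 1:
        coarse = downsample(cur)
        # Detail = what the coarser level cannot represent.
        levels.append(cur - upsample(coarse, cur.shape[0]))
        cur = coarse
    levels.append(cur)  # coarsest level
    return levels

def laplacian_decode(levels):
    """Reconstruct the block as a sum of upsampled levels (exact here)."""
    cur = levels[-1]
    for detail in reversed(levels[:-1]):
        cur = upsample(cur, detail.shape[0]) + detail
    return cur
```

A texture map at any parameter point is then a sum of images from the pyramid levels, which is what makes per-level storage allocation (claim 74 below) meaningful.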
18. The method as recited in claim 17, wherein selectively encoding the portion of the plurality of parameter-dependent texture maps as a multidimensional Laplacian pyramid further includes adaptively splitting the parameter space.
19. The method as recited in claim 18, wherein the parameter space is adaptively split based on differences in coherence across different parameter dimensions.
20. The method as recited in claim 19, wherein the parameter space is adaptively split based on separate diffuse and specular lighting layers.
21. The method as recited in claim 11, further comprising:
transporting at least a portion of the plurality of parameter-dependent texture maps.
22. The method as recited in claim 11, further comprising:
selectively rendering a two-dimensional image of at least a portion of the parameter space using the plurality of parameter-dependent texture maps.
23. The method as recited in claim 22, wherein selectively rendering a two-dimensional image of at least a portion of the parameter space using the plurality of parameter-dependent texture maps further includes rendering one frame at a time at one point of the parameter space, such that a sequence of images can be generated for user navigation through the parameter space.
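The frame-at-a-time navigation of claim 23 amounts to a runtime loop over parameter points. The sketch below is illustrative only; `decode_block` and `draw` are hypothetical stand-ins for the texture decoder and the graphics-hardware rendering path, and blocks are assumed to be unit cells of parameter space.

```python
def render_navigation(frames, decode_block, draw):
    """Render one frame per visited parameter point, decoding each
    parameter-space block at most once (a simple cache sketch)."""
    cache = {}
    out = []
    for p in frames:
        block = tuple(int(c) for c in p)   # block containing point p (assumed unit cells)
        if block not in cache:             # decode each block only once
            cache[block] = decode_block(block)
        out.append(draw(cache[block], p))  # one frame at one parameter point
    return out
```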
24. A computer-readable medium having computer-executable instructions for causing at least one processing unit to perform steps comprising:
generating image data associated with each point in a modeled parameter space of a computer-generated animation; and
selectively inferring texture data for each defined object within the parameter space.
View Dependent Claims (25, 26, 27, 28, 29, 30, 31, 32, 33, 34)
generating scene geometry data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using at least the scene geometry data and the image data.
27. The computer-readable medium as recited in claim 24, further comprising computer-executable instructions for causing the at least one processing unit to perform steps comprising:
generating scene lighting data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using at least the scene lighting data and the image data.
28. The computer-readable medium as recited in claim 24, further comprising computer-executable instructions for causing the at least one processing unit to perform steps comprising:
generating scene viewing data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using at least the scene viewing data and the image data.
29. The computer-readable medium as recited in claim 24, further comprising computer-executable instructions for causing the at least one processing unit to perform steps comprising:
generating scene geometry data, scene lighting data and scene viewing data associated with the parameter space; and
wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of per-object texture maps using the scene geometry data, the scene lighting data, the scene viewing data, and the image data.
30. The computer-readable medium as recited in claim 24, wherein selectively inferring texture data for each defined object within the parameter space further includes producing a plurality of parameter-dependent texture maps for each of the defined objects.
31. The computer-readable medium as recited in claim 30, further comprising computer-executable instructions for causing the at least one processing unit to perform steps comprising:
compressing at least a portion of the plurality of parameter-dependent texture maps.
32. The computer-readable medium as recited in claim 31, wherein compressing at least a portion of the plurality of parameter-dependent texture maps further includes:
selectively encoding the portion of the plurality of parameter-dependent texture maps as a multidimensional Laplacian pyramid based on blocks of the parameter space.
33. The computer-readable medium as recited in claim 30, further comprising computer-executable instructions for causing the at least one processing unit to perform steps comprising:
transporting at least a portion of the plurality of parameter-dependent texture maps.
34. The computer-readable medium as recited in claim 30, further comprising computer-executable instructions for causing the at least one processing unit to perform steps comprising:
selectively rendering a two-dimensional image of at least a portion of the parameter space using the plurality of parameter-dependent texture maps.
35. An apparatus comprising:
a first renderer configured to generate image data associated with each point in a modeled parameter space of a computer-generated animation; and
a compiler operatively coupled to the first renderer and configured to selectively infer texture data for each defined object within the parameter space.
View Dependent Claims (36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51)
the first renderer is further configured to generate scene geometry data associated with the parameter space; and
the compiler is further configured to produce a plurality of per-object texture maps using at least the scene geometry data and the image data.
42. The apparatus as recited in claim 35, wherein:
the first renderer is further configured to generate scene lighting data associated with the parameter space; and
the compiler is further configured to produce a plurality of per-object texture maps using at least the scene lighting data and the image data.
43. The apparatus as recited in claim 35, wherein:
the first renderer is further configured to generate scene viewing data associated with the parameter space; and
the compiler is further configured to produce a plurality of per-object texture maps using at least the scene viewing data and the image data.
44. The apparatus as recited in claim 35, wherein:
the first renderer is further configured to generate scene geometry data, scene lighting data and scene viewing data associated with the parameter space; and
the compiler is further configured to produce a plurality of per-object texture maps using the scene geometry data, the scene lighting data, the scene viewing data, and the image data.
45. The apparatus as recited in claim 35, wherein the compiler is further configured to produce a plurality of parameter-dependent texture maps for each of the defined objects.
46. The apparatus as recited in claim 45, further comprising:
an encoder operatively coupled to the compiler and configured to compress at least a portion of the plurality of parameter-dependent texture maps.
47. The apparatus as recited in claim 46, wherein the encoder is further configured to selectively encode the portion of the plurality of parameter-dependent texture maps as a multidimensional Laplacian pyramid based on blocks of the parameter space.
48. The apparatus as recited in claim 45, further comprising:
a communication media operatively coupled to the encoder and configured to transport at least a portion of the plurality of parameter-dependent texture maps.
49. The apparatus as recited in claim 48, further comprising:
a second renderer operatively coupled to the communication media and configured to selectively render a two-dimensional image of at least a portion of the parameter space using the plurality of transported parameter-dependent texture maps.
50. The apparatus as recited in claim 49, wherein the second renderer selectively renders one frame at a time at one point of the parameter space, such that a sequence of images can be generated for user navigation through the parameter space.
51. The apparatus as recited in claim 49, wherein the second renderer is further configured to decode and decompress the plurality of transported parameter-dependent texture maps, when applicable.
52. A method for rendering an arbitrary-dimensional parameterized animation, the method comprising:
for at least one object within a scene, parameterizing a radiance field based on at least one parameter selected from a group comprising time, lighting, viewpoint, reflectance, object positions, and degrees of freedom in a scene, resulting in an arbitrary-dimensional parameterized animation; and
encoding image data associated with the parameterized animation.
View Dependent Claims (53, 54, 55)
selectively decoding at least a portion of the transported encoded image data; and
rendering a visually explorable image based on the decoded image data.
55. The method as recited in claim 54, wherein parameterizing the radiance field further includes selectively inferring parameter-dependent texture maps for individual objects.
56. A method for encoding ray-traced images for each point in a parameter space associated with an n-dimensional frame sequence as generated by a high quality renderer, the method comprising:
providing image data to a compiler along with related scene geometry information, lighting model information, and viewing parameter information; and
using a compression engine that is configured to implement a multi-dimensional compression scheme to encode the compiled image data.
View Dependent Claims (57, 58, 59)
providing at least a portion of the encoded image data to a decoder;
with the decoder, decoding the portion of the encoded image data using a texture decompression engine, and rendering decoded image data using a rendering engine.
60. A method for inferring, for each geometric object, a parameterized texture based on ray-traced images, the method comprising:
segmenting ray-traced images into per-object portions by generating a per-object mask image as well as a combined image, each at supersampled resolutions;
for each object, filtering a relevant portion of the combined image as indicated by the object's respective mask and dividing by a fractional coverage computed by applying a filter to the object's mask.
View Dependent Claims (61)
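The mask-then-divide step of claim 60 can be sketched with a box filter standing in for the (unspecified) filter. This is a minimal illustration, not the patent's implementation; `infer_object_texture` and its parameters are hypothetical names.

```python
import numpy as np

def infer_object_texture(combined, mask, factor=4):
    """Box-filter a supersampled image restricted to one object's mask,
    then divide by fractional coverage (the filtered mask) so partially
    covered pixels are not darkened by the background.
    `combined` is (H, W) or (H, W, 3) supersampled radiance, `mask` a
    boolean per-object mask at the same resolution, `factor` the
    supersampling rate."""
    m = mask.astype(np.float64)
    if combined.ndim == 3:
        m = m[..., None]
    masked = combined * m
    h, w = mask.shape
    H, W = h // factor, w // factor

    def box(x):
        # Box filter = average over factor x factor sample blocks.
        return x.reshape(H, factor, W, factor, *x.shape[2:]).mean(axis=(1, 3))

    coverage = box(m)
    filtered = box(masked)
    # Avoid division by zero where the object covers no samples.
    return np.divide(filtered, coverage, out=np.zeros_like(filtered),
                     where=coverage > 0)
```

Dividing by coverage is what keeps a half-covered pixel at the object's true color rather than a blend toward black.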
62. A method for use with a computer, the method comprising the steps of:
developing a matrix A that is an ns×nt matrix, where ns is an integer equal to a number of screen pixels in which an object is visible within a graphically depicted scene, and nt is an integer that is equal to a number of texels in a corresponding texture MIPMAP pyramid for the object; and
solving for a texture represented by a vector x by minimizing a function f(x) defined as
View Dependent Claims (63, 64, 65, 66, 67, 68, 69, 70)
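The minimization in claim 62 can be illustrated with a toy least-squares solve. The claim's definition of f(x) is not reproduced in this listing, so a standard objective f(x) = ‖Ax − b‖² is assumed here, with a made-up 6×4 matrix A mapping texels to screen pixels.

```python
import numpy as np

# Hypothetical tiny instance: ns = 6 screen pixels, nt = 4 texels.
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((6, 4)))   # ns x nt texel-to-pixel weights
A /= A.sum(axis=1, keepdims=True)         # rows sum to 1, like a filter kernel
x_true = rng.uniform(size=4)              # "true" texel values
b = A @ x_true                            # observed ray-traced pixel colors

# Least-squares texture inference: minimize f(x) = ||A x - b||^2 (assumed form).
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# The residual vanishes here because b lies in the column space of A.
```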
71. A parameterized texture compression method comprising:
generating at least one block of parameterized texture for at least one object within a multidimensional parameter space, wherein the texture is parameterized by a plurality of spatial parameters; and
encoding the parameter space using a block-based compression scheme configured to exploit spatial coherence within each of the parameterized textures.
View Dependent Claims (72, 73, 74, 75)
providing automatic storage allocation, whereby, during the encoding of the Laplacian pyramid, storage of information is assigned to the various levels of the pyramid so as to minimize the sum of mean squared errors (MSEs), since a texture image at a given point in parameter space can be reconstructed as a sum of images from each level.
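The automatic storage allocation described above can be approximated by a standard greedy rate-distortion allocator (a sketch under that assumption, not the patent's exact scheme): repeatedly give the next unit of storage to whichever pyramid level buys the largest drop in MSE.

```python
import heapq

def allocate_storage(level_mse_curves, budget):
    """Greedy storage allocation across pyramid levels.
    `level_mse_curves[i][s]` is level i's MSE when given s storage units;
    curves are assumed decreasing and convex, so the greedy choice is
    optimal for the summed MSE. Returns units assigned per level."""
    alloc = [0] * len(level_mse_curves)
    heap = []  # max-heap on MSE reduction from the next unit, per level
    for i, curve in enumerate(level_mse_curves):
        if len(curve) > 1:
            heapq.heappush(heap, (-(curve[0] - curve[1]), i))
    for _ in range(budget):
        if not heap:
            break
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        s = alloc[i]
        curve = level_mse_curves[i]
        if s + 1 < len(curve):
            heapq.heappush(heap, (-(curve[s] - curve[s + 1]), i))
    return alloc
```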
75. The method as recited in claim 74, further comprising:
for objects with both specular and diffuse reflectance information, encoding separate lighting layers for which storage is allocated.
76. A method of compensation for gamma correction in image-based rendering, the method comprising:
selectively splitting an object's lighting layers into a sum of two terms L1 and L2, such that the two terms conflict with a gamma correction, since γ(L1+L2) ≠ γ(L1) + γ(L2), wherein γ(x) = x^(1/g) is a nonlinear gamma correction function; and
selectively encoding corresponding object image data based on gamma corrected signals, γ(Li), by controlling compression errors associated with dark regions in the object's image.
View Dependent Claims (77)
selectively decoding the encoded corresponding image data using an inverse gamma correction function γ^(−1)(x) = x^g.
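Numerically, the conflict between gamma correction and layer summation, and the inverse correction used at decode time, can be checked with a short sketch (g = 2.2 is an illustrative gamma value; the layer values are made up):

```python
import numpy as np

g = 2.2  # illustrative display gamma

def gamma(x):      # the claims' nonlinear correction, gamma(x) = x**(1/g)
    return np.power(x, 1.0 / g)

def gamma_inv(x):  # inverse correction, gamma_inv(x) = x**g
    return np.power(x, g)

# Two lighting layers (e.g. diffuse + specular). Because gamma is nonlinear,
# gamma(L1 + L2) != gamma(L1) + gamma(L2), so each layer is gamma-corrected
# and encoded separately, then inverse-corrected and summed after decoding.
L1 = np.array([0.04, 0.25, 0.5])
L2 = np.array([0.01, 0.10, 0.3])

encoded = [gamma(L1), gamma(L2)]              # what the encoder would store
decoded = sum(gamma_inv(e) for e in encoded)  # inverse-correct, then sum
assert np.allclose(decoded, L1 + L2)
# Summing the corrected layers directly gives the wrong answer:
assert not np.allclose(gamma(L1) + gamma(L2), gamma(L1 + L2))
```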
78. An apparatus configured to cache encoded texture images, selectively decode the cached texture images, and apply encoded affine transformations to vertex texture coordinates.
Specification