Lighting and shadowing methods and arrangements for use in computer graphic simulations
Abstract
The effects of lighting and resulting shadows within a computer simulated three-dimensional scene are modeled by rendering a light depth image and a light color image for each of the light sources. The light depth images are compared to a camera depth image to determine if a point within the scene is lighted by the various light sources. An accumulated light image is produced by combining those portions of the light color images determined to be lighting the scene. The resulting accumulated light image is then combined with a camera color image to produce a lighted camera image that can be further processed and eventually displayed on a computer display screen. The light color image can be static or dynamic. Transformations between different perspective and/or coordinate systems can be precalculated for fixed cameras or light sources. The various images and manipulations can include individual pixel data values, multiple-pixel values, polygon values, texture maps, and the like.
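The pipeline the abstract describes can be illustrated with a small toy sketch. This is not the patented implementation; it performs no actual rendering, and every name and data layout below is an assumption for illustration. It shows only the core idea: a per-light depth comparison feeding a light accumulation buffer, followed by modulation of the camera color image.

```python
# Toy sketch of the abstract's pipeline (illustrative only, not the patent's
# implementation). Pixels are indexed 0..n-1; colors are RGB tuples.
EPS = 1e-6  # depth-comparison tolerance

def render_lit_image(camera_color, camera_depth_in_light,
                     light_depth_maps, light_colors):
    """camera_color[p]: camera image color at pixel p.
    camera_depth_in_light[i][p]: depth of camera pixel p seen from light i.
    light_depth_maps[i][p]: nearest depth light i records for that pixel.
    light_colors[i][p]: light image (color) light i casts on pixel p."""
    n = len(camera_color)
    accum = [(0.0, 0.0, 0.0)] * n  # light accumulation buffer
    for i, depth_map in enumerate(light_depth_maps):
        for p in range(n):
            # Lit if nothing sits between the point and this light.
            if camera_depth_in_light[i][p] <= depth_map[p] + EPS:
                accum[p] = tuple(a + l for a, l
                                 in zip(accum[p], light_colors[i][p]))
    # Combine: modulate the camera color image by the accumulated light.
    return [tuple(c * a for c, a in zip(camera_color[p], accum[p]))
            for p in range(n)]
```

Because per-light contributions are summed before the multiply, a point shadowed from one light still receives the other lights' colors rather than going fully dark.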
1. A shadow rendering method for use in a computer system, the method comprising the steps of:
providing observer data of a simulated multi-dimensional scene;
providing lighting data associated with a plurality of simulated light sources arranged to illuminate said scene, said lighting data including light image data;
for each of said plurality of light sources, comparing at least a portion of said observer data with at least a portion of said lighting data to determine if a modeled point within said scene is illuminated by said light source and storing at least a portion of said light image data associated with said point and said light source in a light accumulation buffer; and
then combining at least a portion of said light accumulation buffer with said observer data; and
displaying resulting image data to a computer screen. (Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10.)
11. An arrangement configured to render shadows in a simulated multi-dimensional scene, the arrangement comprising:
a display screen configured to display image data;
memory for storing data including observer data associated with a simulated multi-dimensional scene, and lighting data associated with a plurality of simulated light sources arranged to illuminate said scene, said lighting data including light image data, said memory further including a light accumulation buffer portion and a frame buffer portion;
at least one processor coupled to said memory and said display screen and operatively configured to, for each of said plurality of light sources, compare at least a portion of said observer data with at least a portion of said lighting data to determine if a modeled point within said scene is illuminated by said light source and store at least a portion of said light image data associated with said point and said light source in said light accumulation buffer, then combine at least a portion of said light accumulation buffer with said observer data, store resulting image data in said frame buffer, and output at least a portion of said image data in said frame buffer to said display screen. (Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20.)
21. A method for simulating light falling on a modeled object in a computer generated multi-dimensional graphics simulation, the method comprising the steps of:
for a simulated camera, rendering a camera view of at least one modeled object that is at least partially optically opaque, to produce a camera depth array comprising camera depth data values and a corresponding camera image array comprising camera image data values;
for a first simulated light, rendering a first light view of said modeled object to produce a first light depth array comprising first light depth data values and a corresponding first light image array comprising first light image data values;
transforming at least a portion of said camera depth data values to said first light view, thereby generating a first transformed camera array comprising first transformed camera depth data values;
for each data value therein, comparing said first light depth array to said first transformed camera array to determine if said data value in said first light depth array is closer to said first simulated light, and if so, adding a corresponding data value from said first light image array to a light accumulation array comprising light accumulation data values; and
for each data value therein, multiplying said camera image array by a corresponding data value from said light accumulation array to produce a lighted camera image array comprising lighted camera image values. (Dependent claims: 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32.)
22. The method as recited in claim 21, further comprising the steps of:
for a second simulated light, rendering a second light view of said modeled object to produce a second light depth array comprising second light depth data values and a corresponding second light image array comprising second light image data values;
transforming at least a portion of said camera depth data values to said second light view, thereby generating a second transformed camera array comprising second transformed camera depth data values; and
for each data value therein, comparing said second light depth array to said second transformed camera array to determine if said data value in said second light depth array is closer to said second simulated light, and if so, adding a corresponding data value from said second light image array to said light accumulation array.
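The array operations of claims 21 and 22 can be sketched as below. This is a hedged illustration under simplifying assumptions: the view transform is a stand-in (a plain depth offset rather than a true perspective re-projection into the light's coordinate system), depths and colors are scalar arrays, and all function and variable names are invented for the example.

```python
# Illustrative sketch of claim 21's array steps with claim 22's second light.

def transform_to_light_view(camera_depth, offset):
    # Stand-in for re-projecting camera depths into the light's view;
    # a real implementation would apply a perspective transform.
    return [d + offset for d in camera_depth]

def accumulate_light(light_depth, transformed, light_image, accum, eps=1e-6):
    # Where the transformed camera depth is the closest thing the light
    # sees, add that light's image value to the accumulation array.
    for p, (ld, cd) in enumerate(zip(light_depth, transformed)):
        if cd <= ld + eps:
            accum[p] += light_image[p]
    return accum

def lighted_camera_image(camera_image, accum):
    # Final step: per data value, multiply camera image by accumulated light.
    return [c * a for c, a in zip(camera_image, accum)]
```

Repeating the transform/compare/accumulate steps once per light (as claim 22 does for a second light) is what lets the single multiply at the end account for all sources at once.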
23. The method as recited in claim 21, wherein each of said camera depth values includes z-buffer data associated with a different pixel selected from a plurality of pixels on a computer display screen.
24. The method as recited in claim 23, wherein each of said first light depth values includes z-buffer data associated with a different pixel selected from a plurality of pixels on a computer display screen.
25. The method as recited in claim 21, wherein each of said camera depth values includes z-buffer data associated with a different set of pixels selected from a plurality of pixels on a computer display screen.
26. The method as recited in claim 25, wherein each of said first light depth values includes z-buffer data associated with a different set of pixels selected from a plurality of pixels on a computer display screen.
27. The method as recited in claim 21, wherein said camera image data and said first light image data each include color data associated with at least one pixel on a computer screen.
28. The method as recited in claim 21, further comprising the steps of:
repeating the steps recited in claim 21 at a frame rate; and
sequentially displaying a plurality of frames of data on a computer screen at said frame rate, wherein subsequent frames of data include subsequently processed lighted camera image data, and wherein said step of rendering said first light view further comprises dynamically changing at least one of said first light image data values between said subsequent frames of data.
29. The method as recited in claim 28 wherein at least a portion of said first light image data values represent dynamically changing color data selected from a set comprising motion picture data, video data, animation data, and computer graphics data.
30. The method as recited in claim 28, wherein said frame rate is at least about 25 frames per second.
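Claims 28 through 30 add a frame loop in which the light's image data changes between frames, as when a light source projects motion-picture, video, or animation data onto the scene. A minimal sketch of that loop follows; the one-light `modulate` helper and the generator structure are assumptions for illustration, not the patent's method.

```python
# Hedged sketch of re-running the lighting pass at a frame rate while the
# light image changes between frames (claims 28-30). Scalar intensities only.

def modulate(camera_image, light_image):
    # Single-light stand-in for the accumulate-and-combine steps.
    return [c * l for c, l in zip(camera_image, light_image)]

def play(camera_image, light_frames, frame_rate=30):
    """Yield (time, lighted image) once per frame of dynamic light data."""
    dt = 1.0 / frame_rate  # claim 30: at least about 25 frames per second
    t = 0.0
    for light_image in light_frames:  # dynamically changing light image data
        yield t, modulate(camera_image, light_image)
        t += dt
```

Only the light image values vary between frames here; the geometry and camera image are reused, which is what makes projecting video as a light source practical.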
31. The method as recited in claim 21, wherein the step of transforming at least a portion of said camera depth data values to said first light view further includes the step of transforming said camera depth array from a camera coordinate system to a corresponding first light coordinate system.
32. The method as recited in claim 31, wherein the step of transforming said camera depth array from a camera coordinate system to a corresponding first light coordinate system further includes the step of using a precalculated transformation table to transform directly from said camera coordinate system to said corresponding first light coordinate system.
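Claim 32's precalculated transformation table can be pictured as a per-pixel mapping built once for a fixed camera/light pair and reused every frame, replacing repeated per-pixel transform math. The dictionary representation and all names below are assumptions made for this sketch.

```python
# Sketch of a precalculated camera-to-light transformation table (claim 32).

def build_transform_table(width, height, camera_to_light):
    # camera_to_light maps a camera pixel (x, y) to its light-view pixel.
    # Done once while camera and light are fixed.
    return {(x, y): camera_to_light(x, y)
            for y in range(height) for x in range(width)}

def to_light_view(camera_depth, table):
    # Per frame: apply the cached mapping directly, no matrix math per pixel.
    return {table[p]: d for p, d in camera_depth.items()}
```

The table trades memory for per-frame work, which pays off exactly in the fixed-camera or fixed-light case the claim describes.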
33. A computer-readable medium carrying at least one set of computer instructions configured to cause a computer to operatively simulate light falling on a modeled object in a computer generated multi-dimensional graphics simulation by performing operations comprising:
a) rendering an observer view of at least a portion of a spatially modeled object as a plurality of observed depth values and observed image values;
b) rendering a source view of at least a portion of said modeled object as a plurality of source depth values and a plurality of source image values;
c) transforming at least a portion of said observed depth values to said source view;
d) modifying at least one image accumulation value with one of said observed image values if said corresponding transformed observer value is equal to a comparable one of said source depth values;
e) multiplying said one of said observed image values by said at least one image accumulation value to produce at least one pixel value; and
f) displaying said pixel value on a computer screen. (Dependent claims: 34, 35, 36, 37, 38.)
34. The computer-readable medium as recited in claim 33, further configured to cause the computer to perform the further step of:
g) following step d), repeating steps b) through d) for at least one additional source view.
35. The computer-readable medium as recited in claim 34, further configured to cause the computer to perform the further steps of:
h) repeating steps a) through g) at a frame rate; and
wherein step f) further includes sequentially displaying a plurality of pixels as frames of data on said computer screen at said frame rate, and said step of rendering said source view further includes changing at least one of said source image values between said subsequent frames of data.
36. The computer-readable medium as recited in claim 35 wherein at least a portion of said source image values represent color data selected from a set comprising motion picture data, video data, animation data, and computer graphics data.
37. The computer-readable medium as recited in claim 35, wherein step c) further includes transforming at least a portion of said observed depth values from an observer coordinate system to a corresponding source coordinate system.
38. The computer-readable medium as recited in claim 37, wherein the step of transforming at least a portion of said observed depth values from an observer coordinate system to a corresponding source coordinate system further includes using a precalculated transformation table to transform directly from said observer coordinate system to said corresponding source coordinate system.
39. A computer-readable medium carrying at least one set of computer instructions configured to cause at least one processor within a computer system to operatively render simulated shadows in a multi-dimensional simulated scene by performing the steps of:
providing observer data of a simulated multi-dimensional scene;
providing lighting data associated with a plurality of simulated light sources arranged to illuminate said scene, said lighting data including light image data;
for each of said plurality of light sources, comparing at least a portion of said observer data with at least a portion of said lighting data to determine if a modeled point within said scene is illuminated by said light source and storing at least a portion of said light image data associated with said point and said light source in a light accumulation buffer; and
then combining at least a portion of said light accumulation buffer with said observer data; and
displaying resulting image data to a computer screen. (Dependent claims: 40, 41, 42, 43, 44, 45, 46, 47, 48.)
Specification