Anti-aliased, textured, geocentric and layered fog graphics display method and apparatus
Abstract
A method and apparatus, in a preferred embodiment, for generating anti-aliased layered fog that is textured and manipulated as if in a geocentric virtual environment, thereby showing horizon depression at high altitudes. Hardware is provided such that layer model data and texture model data are combined to generate fogged pixel color.
49 Claims
1. A method for simulating the effects of layered fog having at least one texture applied thereto, to thereby accurately render position and horizontal movement information in a computer-generated synthetic environment that is rendered on a display screen, thus providing at least visual velocity cues to an observer thereof, said method comprising the steps of:
(1) applying the at least one texture to the layered fog, the layered fog having top and bottom horizontal planes;
(2) applying global texture effects for the layered fog that are displayed when an eye-point is substantially distant from the layered fog;
(3) applying local texture effects that are displayed when the eye-point is between the top and bottom horizontal planes; and
(4) blending color modulation, density modulation, local texture effects and global texture effects such that the global texture effects are blended with the local texture effects when the eye-point is within a pre-defined distance of the top and bottom horizontal planes. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)
(1) generating a plurality of higher density fog layers; and
(2) generating a plurality of lower density general visibility layers that are separated from each other by the plurality of higher density fog layers.
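As a concrete sketch of the alternating structure recited in the steps above, and counted out in claim 4, consider the following minimal Python model; the altitudes and densities are invented for illustration, since the claims specify none:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    bottom: float   # bottom altitude of the layer, meters
    top: float      # top altitude of the layer, meters
    density: float  # extinction per meter of path length
    is_fog: bool    # True for a higher density textured fog layer

def build_profile() -> list[Layer]:
    """Four lower density general visibility layers separated by three
    higher density fog layers, per the three/four split of claim 4."""
    vis = [(0.0, 500.0), (700.0, 2000.0), (2200.0, 4000.0), (4200.0, 12000.0)]
    fog = [(500.0, 700.0), (2000.0, 2200.0), (4000.0, 4200.0)]
    layers = [Layer(b, t, 1e-5, False) for b, t in vis]
    layers += [Layer(b, t, 5e-3, True) for b, t in fog]
    return sorted(layers, key=lambda lay: lay.bottom)
```

Each fog layer is a thin, dense band between two much deeper, thinner visibility bands, which is what produces the visual impression of distinct cloud decks.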
4. The method as defined in claim 3 wherein the method further comprises the steps of:
(1) selecting three layers as the plurality of higher density fog layers; and
(2) selecting four layers as the plurality of lower density general visibility layers.
5. The method as defined in claim 3 wherein the method further comprises the steps of:
(1) determining an eye vector through the plurality of higher density textured fog layers and the plurality of lower density general visibility layers; and
(2) blending an accumulative effect of each of the plurality of fog and general visibility layers to thereby determine a final density and color of each pixel on the display screen.
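A compact sketch of the two steps of claim 5, assuming straight-line view rays and a per-layer color (both assumptions, not claim language):

```python
def accumulate_along_ray(layers, eye_alt, pixel_alt, eye_to_pixel_range):
    """Blend the cumulative effect of every layer crossed by the eye
    vector. `layers` holds (bottom, top, density, (r, g, b)) tuples;
    returns the summed density and a density-weighted fog color."""
    lo, hi = sorted((eye_alt, pixel_alt))
    delta_height = max(hi - lo, 1e-9)        # vertical extent of the ray
    slant = eye_to_pixel_range / delta_height
    total_density, color = 0.0, [0.0, 0.0, 0.0]
    for bottom, top, density, rgb in layers:
        overlap = max(0.0, min(hi, top) - max(lo, bottom))
        d = density * overlap * slant        # in-layer segment density
        total_density += d
        for i in range(3):
            color[i] += d * rgb[i]
    if total_density > 0.0:
        color = [c / total_density for c in color]
    return total_density, color
```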
6. The method as defined in claim 5 wherein the method further comprises the step of utilizing an X component and a Y component of the eye vector as indices to the texture map for each of the plurality of fog and general visibility layers.
7. The method as defined in claim 6 wherein the method further comprises the step of processing the texture map so that the texture map has a large repeat interval, thereby enabling a relatively small texture map that is stored in memory to be used to entirely cover each of the plurality of higher density textured fog layers throughout the computer-generated synthetic environment.
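A sketch of the indexing in claims 6 and 7; the scale factor and map size are placeholders, chosen only to show how a small wrapped map can tile an unbounded layer:

```python
def layer_texel(texture, eye_vec_x, eye_vec_y, scale=1024.0):
    """Index a small square texture map with the eye vector's X and Y
    components; the modulo wrap gives the map a large repeat interval,
    so a small stored map covers the whole layer."""
    size = len(texture)
    u = int(eye_vec_x * scale) % size
    v = int(eye_vec_y * scale) % size
    return texture[v][u]
```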
8. The method as defined in claim 5 wherein the method further comprises the step of defining at least one region of layered fog having a top altitude and a bottom altitude, wherein there is calculated for each pixel a modified fog color, a density delta, and a color and opacity modulation.
9. The method as defined in claim 8 wherein the method further comprises the steps of:
(1) determining an antialiased density for each pixel; and
(2) adding the density delta to the antialiased density of each pixel.
10. The method as defined in claim 9 wherein the method further comprises the step of generating positive and negative density delta values, thereby enabling the opacity modulation within a layer of the fog to create a visual effect of a thin or intermittent cloud cover.
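Claims 8 through 10 reduce to one line of arithmetic; the clamp at zero is an assumption, since a negative net density has no physical meaning:

```python
def modulated_density(antialiased_density, density_delta):
    """Add the signed per-pixel density delta to the antialiased density.
    Negative deltas thin the layer (intermittent cloud cover); positive
    deltas thicken it. Clamped so density never goes negative."""
    return max(0.0, antialiased_density + density_delta)
```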
11. The method as defined in claim 10 wherein the method further comprises the step of making a behavior of each layer of the plurality of fog and general visibility layers independent of a behavior of other layers.
12. The method as defined in claim 11 wherein the method further comprises the step of processing the plurality of higher density textured fog layers and the plurality of lower density general visibility layers independent of each other, such that an effect of each of the fog and the general visibility layers is attenuated by any intervening layers.
13. The method as defined in claim 12 wherein the method further comprises the step of utilizing a top color and a bottom color of adjacent textured regions of the plurality of higher density textured fog layers as respective bottom and top colors of the plurality of lower density general visibility layers.
14. The method as defined in claim 13 wherein the method further comprises the step of calculating the density from the eye point to each pixel, which calculation requires combining density effects and color from each of the fog and the general visibility layers.
15. The method as defined in claim 14 wherein the method further comprises the steps of:
(1) calculating for each layer of the plurality of higher density textured fog layers and the plurality of lower density general visibility layers, that portion of a pixel to eye vector that lies within that layer;
(2) calculating the density utilizing a range of the pixel to eye vector; and
(3) multiplying the density by a ratio of the range of the eye to pixel vector divided by an eye to pixel delta height, wherein the ratio is the same for all layers.
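A worked instance of the claim 15 ratio, with invented values for concreteness:

```python
eye_alt, pixel_alt = 3000.0, 1000.0     # meters (illustrative values)
eye_to_pixel_range = 5000.0             # slant range, meters
slant_ratio = eye_to_pixel_range / (eye_alt - pixel_alt)  # 2.5, same for all layers
layer_overlap = 200.0                   # vertical extent of the ray inside one layer
layer_density = 5e-3                    # extinction per meter
segment_density = layer_density * layer_overlap * slant_ratio  # 2.5
```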
16. The method as defined in claim 15 wherein the method further comprises the steps of:
(1) modulating the density for a texture layer of the plurality of higher density textured fog layers utilizing attributes of an applied texture;
(2) scaling the applied texture by an opacity gain;
(3) summing the opacity gain to the density, wherein this signed summation can both increase and decrease density of the plurality of higher density textured fog layers;
(4) summing density results for each layer of the plurality of higher density textured fog layers into a net density delta; and
(5) summing the net density delta and an average density determined for an antialiased density.
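A sketch of the claim 16 modulation; re-centering texels from [0, 1] to a signed range is an assumption, made so the summation can both increase and decrease density:

```python
def net_textured_density(average_density, fog_layer_texels, opacity_gain):
    """Scale each fog layer's texture sample by the opacity gain, sum the
    signed per-layer results into a net density delta, then add that
    delta to the antialiased average density."""
    net_delta = sum((texel - 0.5) * 2.0 * opacity_gain
                    for texel in fog_layer_texels)
    return max(0.0, average_density + net_delta)
```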
17. The method as defined in claim 16 wherein the method further comprises the steps of:
(1) summing the densities that are determined during processing of a view ray;
(2) determining a blending factor by exponentiation; and
(3) determining a net fog color from all of the plurality of higher density textured fog layers and the plurality of lower density general visibility layers using a formula;
18. The method as defined in claim 17 wherein the method further comprises the step of generating a texture model transmittance value from a density sum of all the layers, said transmittance value having the formula:
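The transmittance formula itself is elided in this extract; assuming the standard exponential extinction model, claims 17 and 18 reduce to:

```python
import math

def texture_model_transmittance(density_sum):
    """Blending factor by exponentiation of the summed view-ray density.
    T = exp(-density_sum) is the standard extinction model and stands in
    here for the patent's elided formula."""
    return math.exp(-density_sum)   # transmittance in (0, 1]
```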
19. The method as defined in claim 18 wherein the method further comprises the step of performing a texture look-up operation, said operation involving the step of determining where the pixel to eye vector intercepts a texture layer, wherein texture axes U and V are aligned with database X and Y axes.
20. The method as defined in claim 19 wherein the method further comprises the step of determining coordinates of the U texture axis utilizing an eye position, an eye position to texture layer range, and a normalized eye vector's X and Y components using the formulas:
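The U and V formulas are likewise elided here; a standard ray-plane intercept, kept aligned with the database X and Y axes as claim 19 requires, would look like this (the repeat interval is a placeholder):

```python
def layer_intercept_uv(eye_pos, eye_vec, layer_alt, repeat=4096.0):
    """Intercept of the normalized eye vector with a horizontal texture
    layer, wrapped into U, V coordinates aligned with database X and Y."""
    ex, ey, ez = eye_pos
    dx, dy, dz = eye_vec                 # normalized view-ray direction
    if abs(dz) < 1e-9:
        return None                      # ray parallel to the layer
    rng = (layer_alt - ez) / dz          # eye position to texture layer range
    if rng < 0.0:
        return None                      # layer is behind the eye
    u = ((ex + rng * dx) % repeat) / repeat
    v = ((ey + rng * dy) % repeat) / repeat
    return u, v
```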
21. A method for providing a multilayer visual cloud effect in a computer-generated synthetic environment, said method comprising the steps of:
(1) applying a texture motif to a cloud layer, wherein the texture motif is associated with one of two horizontal planes disposed inside the cloud layer;
(2) utilizing the texture motif to modify a brightness and a visual density of the cloud layer; and
(3) utilizing an upper texture plane when an eye position is above the upper texture plane. - View Dependent Claims (22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45)
utilizing a non-linear transformation in a texture scanning process to thereby account for curvature of a horizon.
37. The method as defined in claim 36 wherein the method further comprises the step of applying the non-linear transform to a length of a texture scanning vector that causes a horizontal plane to appear to follow a curvature of the horizon.
38. The method as defined in claim 37 wherein the method further comprises the step of providing separate non-linear transforms for horizontal planes that are above and below the eye position, to thereby simulate an appearance of cloud decks and cloud ceilings.
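Claims 36 through 38 do not disclose the transform itself; the usual small-angle approximation for earth curvature gives a plausible stand-in:

```python
def curved_layer_altitude(flat_altitude, ground_range,
                          earth_radius=6.371e6):
    """Non-linear droop along the texture scanning vector: a point on a
    horizontal layer at the given ground range appears lower by roughly
    range**2 / (2 * earth_radius), so the layer follows the horizon."""
    return flat_altitude - ground_range ** 2 / (2.0 * earth_radius)
```

Separate transforms for planes above and below the eye, as claim 38 recites for cloud decks versus ceilings, would follow by choosing the sign and curvature constant per plane.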
39. The method as defined in claim 38 wherein the method further comprises the step of applying non-linear transforms that modify a computation of texture level of detail so that it correlates with a curved visual result.
40. The method as defined in claim 39 wherein the method further comprises the step of modifying the non-linear transform to thereby account for a relative proximity, in altitude, of the eye position to the cloud decks and the cloud ceilings.
41. The method as defined in claim 40 wherein the method further comprises the steps of:
(1) determining the texture level of detail by separating a range determination process and a slant factor determination process; and
(2) applying separate limits to the range determination process and to the slant factor determination process.
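A sketch of the claim 41 separation; the limit values are placeholders:

```python
import math

def texture_lod(view_range, slant_factor,
                range_limits=(1.0, 1.0e5), slant_limits=(1.0, 8.0)):
    """Determine texture level of detail from separately limited range
    and slant terms, so an oblique view cannot push the LOD past either
    bound on its own (claim 41's separate limits)."""
    r = min(max(view_range, range_limits[0]), range_limits[1])
    s = min(max(slant_factor, slant_limits[0]), slant_limits[1])
    return math.log2(r * s)
```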
42. The method as defined in claim 41 wherein the method further comprises the step of determining the texture level of detail by a combination of a perspective size of a texel in either a visually compressed or an uncompressed direction, to thereby provide better homogeneity of textural detail for oblique surfaces.
43. The method as defined in claim 42 wherein the method further comprises the step of generating a visual effect of the cloud deck on a curved horizon that correctly occults scene detail that lies above the altitude of the cloud layer, but which lies beyond a cloud horizon range and below an eye to horizon slope.
44. The method as defined in claim 43 wherein the method further comprises the step of utilizing a pixel level blend process to antialias a nominal horizon, and which depresses an altitude of scene detail that should be occulted, on a pixel by pixel basis, to thereby hide the scene details beneath the cloud deck.
45. The method as defined in claim 44 wherein the method further comprises the step of applying a blending effect based on a relative range of scene details and the cloud horizon, to thereby prevent visual anomalies for the scene details at the cloud horizon range.
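Claims 43 through 45 describe an occulting blend near the cloud horizon; a minimal per-pixel ramp, with an invented blend width, might read:

```python
def occult_blend(detail_range, cloud_horizon_range, blend_width=0.05):
    """Blend factor for scene detail near the cloud horizon: 0.0 keeps
    the detail, 1.0 hides it beneath the cloud deck, with a ramp around
    the horizon range to antialias and avoid popping (claim 45)."""
    edge = max(blend_width * cloud_horizon_range, 1e-9)
    x = (detail_range - cloud_horizon_range) / edge
    return min(1.0, max(0.0, 0.5 + 0.5 * x))
```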
46. A system for simulating the effects of layered fog having at least one texture applied thereto, to thereby accurately render position and horizontal movement information in a computer-generated synthetic environment that is rendered on a display screen, said system comprising:
a layer modeling device for generating at least one fog layer and at least one general visibility layer, wherein the layer modeling device receives eye to pixel range data, eye to pixel viewray data, an eye position altitude, and a pixel altitude; and
a texture modeling device for generating at least one texture to be applied to the at least one fog layer, wherein the texture modeling device receives input from the layer modeling device, an unfogged pixel color data, the eye to pixel range data, the eye to pixel viewray data, the eye position altitude, and the pixel altitude, and wherein the texture modeling device generates fogged pixel color data as output. - View Dependent Claims (47, 48, 49)
an altitude depression calculator for determining a depressed altitude and blend factor;
a three sample altitude generator for generating three antialiasing sample altitudes near a pixel altitude from viewray and polygon orientation;
a density profile change-point selector for comparing the three antialiasing sample altitudes against a density profile;
a plurality of density generators for determining a vertical density between the eye position altitude and the pixel altitude for each of the three antialiasing sample altitudes;
an eye to horizon density calculator for determining a density offset from the eye to pixel viewray data and horizon data; and
a density averager for blending the three antialiasing sample altitudes to thereby generate a final pixel density value as output.
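The claim 47 elements chain together roughly as follows; the three-sample spread is a stand-in, since in the claim it derives from viewray and polygon orientation:

```python
def layer_model_density(eye_alt, pixel_alt, profile, sample_spread=10.0):
    """Generate three antialiasing sample altitudes near the pixel
    altitude, evaluate the vertical density profile from the eye to each
    sample, and average the three into the final pixel density value.
    `profile` holds (bottom_alt, top_alt, density_per_meter) tuples."""
    samples = (pixel_alt - sample_spread, pixel_alt,
               pixel_alt + sample_spread)
    def vertical_density(alt_a, alt_b):
        lo, hi = sorted((alt_a, alt_b))
        return sum(d * max(0.0, min(hi, top) - max(lo, bot))
                   for bot, top, d in profile)
    return sum(vertical_density(eye_alt, s) for s in samples) / 3.0
```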
48. The system as defined in claim 47 wherein the texture modeling device further comprises:
a texture setup device for determining common data required by the at least one texture layer and the at least one general visibility layer, and wherein the texture setup device receives as inputs the eye to pixel range data, eye to pixel viewray data, an eye position altitude, and the final pixel density value;
at least one general visibility layer evaluation module for determining an amount of fog density to be applied to a portion of the eye to pixel viewray that intersects the at least one general visibility layer;
at least one texture layer evaluation module for applying texture data to a color and a density of the at least one fog layer to a portion of the eye to pixel viewray that intersects the at least one fog layer; and
a concatenation module for concatenating data from the at least one general visibility layer and the at least one fog layer to thereby generate a final fog transmittance value and a final fog color.
49. The system as defined in claim 48 wherein the concatenation module utilizes the final fog transmittance value to blend between the final fog color and the unfogged pixel color data.
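Claim 49's final blend is the standard fog composite; a one-function sketch:

```python
def concatenate_output(transmittance, final_fog_color, unfogged_pixel):
    """Blend between the final fog color and the unfogged pixel color
    using the final fog transmittance value (claim 49)."""
    t = min(1.0, max(0.0, transmittance))
    return tuple(t * pix + (1.0 - t) * fog
                 for pix, fog in zip(unfogged_pixel, final_fog_color))
```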