Image generator for generating perspective views from data defining a model having opaque and translucent features
Abstract
An apparatus for generating an image from data defining a model including a plurality of opaque and translucent features. The image is intended to represent a view of the model from a predetermined eyepoint and is made up from an array of screen space pixels. The image area is divided into an array of sub-areas each of which covers at least one pixel. For each feature in the model that is potentially visible from the eyepoint, a test is conducted to determine which of the sub-areas is at least partially covered by that feature. For each feature which covers a sampling point, a function of the distance from the eyepoint to that feature at the sampling point is determined. An output for each pixel within a sub-area is produced, the pixel output corresponding to the combined effects of the sampling point outputs for all sampling points which contribute to that pixel, and the pixel outputs are displayed.
67 Claims
1. An apparatus for generating an image to be displayed on a display screen from data defining a model including a plurality of opaque and translucent features, the image being intended to represent a view of the model from a predetermined eyepoint and being made up from an array of screen space pixels to be displayed by a raster scanning process, each pixel being of uniform color and intensity, and the pixels together defining an image area, comprising:

a. dividing means for dividing the image area into an array of sub-areas each of which covers at least one pixel,
b. sub-area coverage determining means for determining for each feature in the model which of the sub-areas is at least partially covered by that feature,
c. list means for producing a list of feature identifiers in respect of each sub-area, the list for any one sub-area identifying features which at least partially cover that sub-area,
d. position means for determining a position in screen space for at least one sampling point within each sub-area,
e. sampling point cover determining means for determining, for each sub-area in turn, and for each said sampling point, which of the features in that sub-area's list cover that sampling point,
f. distance determining means for determining, for each feature which covers a sampling point, a distance from the eyepoint to that feature at the sampling point,
g. feature storing means for storing feature describing data for each sampling point within a sub-area, the stored data being indicative of at least the distance of the opaque feature which covers the sampling point and is nearest to the eyepoint and the distance and translucency of at least one nearer translucent feature which covers the sampling point,
h. sampling point output means for producing an output for each sampling point within a sub-area, the sampling point output corresponding to the combined effects of the features identified by the data stored in the data storing means,
i. pixel producing output means for producing an output for each pixel within a sub-area, the pixel output corresponding to the combined effects of the sampling point outputs for all sampling points which contribute to that pixel, and
j. display means for displaying the pixel outputs.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39)
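Elements f through h of claim 1 describe, for each sampling point, retaining the nearest opaque feature together with any nearer translucent features and combining their effects. The following Python sketch illustrates one plausible way that combination could work; the feature dictionary layout, the black background fallback, and the alpha-style blend are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch (not the patented implementation): resolving one
# sampling point's colour from the features listed for its sub-area.
# The dict fields and the blending rule are assumptions for clarity.

def resolve_sampling_point(features):
    """features: dicts with 'distance', 'opaque' (bool), 'color' (r, g, b)
    and, for translucent features, 'translucency' in [0, 1]
    (1 = fully transparent)."""
    opaque = [f for f in features if f["opaque"]]
    if not opaque:
        return (0.0, 0.0, 0.0)  # assumed background colour
    nearest_opaque = min(opaque, key=lambda f: f["distance"])
    # Keep only translucent features nearer to the eyepoint than the
    # nearest opaque feature, and blend them from back to front.
    nearer = sorted(
        (f for f in features
         if not f["opaque"] and f["distance"] < nearest_opaque["distance"]),
        key=lambda f: f["distance"], reverse=True)
    color = nearest_opaque["color"]
    for f in nearer:
        t = f["translucency"]
        color = tuple(t * c + (1.0 - t) * fc
                      for c, fc in zip(color, f["color"]))
    return color
```

The per-pixel output of element i would then average such sampling point results over all sampling points contributing to the pixel.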
40. An image generator for use with an image projection system in which an image is projected onto a display surface and the display surface is viewed from a predetermined eyepoint through an imaginary viewing plane of predetermined area, the image being projected as a series of raster scan lines each made up from a respective row of display pixels, wherein the image generator comprises a model database in which features of the model are described by geometrical attribute data defining the feature with reference to world space and non-geometrical attribute data defining characteristics of the feature, means for determining an eyepoint position in world space from which the model is to be viewed, means for transforming geometrical attribute data from world space to eyepoint space, and means for calculating image data to be displayed on the display surface from the transformed geometrical attribute data and the non-geometrical attribute data, the image data being consistent with an appearance of the model from the eyepoint, wherein the image data calculating means comprises:
a. means for dividing a viewing plane area into an array of sub-areas, each sub-area being defined by four corner coordinates arranged such that a projection of the sub-area onto the display surface from the eyepoint corresponds in shape and area to a portion of the display surface upon which a predetermined respective group of pixels is projected,
b. means for defining a position of at least one sampling point within each sub-area, each sampling point position being defined by reference to corners of the respective sub-area,
c. means for determining from the transformed geometrical feature attributes and the position of each sampling point which of the sampling points is covered by each of the features,
d. attribute storing means for storing for each sampling point non-geometrical attribute data for at least one feature covering that sampling point, and
e. pixel output generating means for generating from the stored attribute data an output to the image projection system in respect of each pixel.

View Dependent Claims (41, 42, 43, 44, 45)
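Element b of claim 40 defines each sampling point "by reference to corners of the respective sub-area". One plausible reading is bilinear interpolation of the four corner coordinates, sketched below; the function name, parameter layout, and the interpolation itself are assumptions, not the claimed construction.

```python
# Hypothetical sketch: locating a sampling point inside a sub-area by
# bilinear interpolation of its four corner coordinates.

def sampling_point(corners, u, v):
    """corners: ((x, y) for top-left, top-right, bottom-left,
    bottom-right); u, v in [0, 1] are fractional offsets across the
    sub-area in the horizontal and vertical directions."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    top = ((1 - u) * x0 + u * x1, (1 - u) * y0 + u * y1)
    bottom = ((1 - u) * x2 + u * x3, (1 - u) * y2 + u * y3)
    return ((1 - v) * top[0] + v * bottom[0],
            (1 - v) * top[1] + v * bottom[1])
```

Defining sampling points this way keeps them consistent with the corner-based sub-area geometry even when the projected sub-area is not a perfect rectangle.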
46. An apparatus for producing an image on a display screen of a world space model made up from a plurality of features defined in world space, the model including features defined as light point and non-light point features and being viewed from an eyepoint defined in world space, wherein the apparatus comprises:
a. means for calculating a finite area in screen space to be occupied by each light point feature,
b. intensity calculating means for calculating the intensity of light point features,
c. means for calculating a translucency for each light point feature visible from the eyepoint, the calculated translucency being a function of the calculated intensity, and
d. output producing means for producing outputs to a display device, said outputs corresponding to said calculated finite area, light intensity and feature translucency.

View Dependent Claims (47, 48, 49, 50, 51)
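Element c of claim 46 ties a light point's translucency to its calculated intensity, so that dim light points let the background show through. The claim does not give the function; the linear mapping below is only a guess at the kind of monotonic relationship it could denote.

```python
# Illustrative assumption: translucency decreases linearly as intensity
# rises, clamped to [0, 1]. Not the patented formula.

def light_point_translucency(intensity, max_intensity=1.0):
    """Return translucency in [0, 1]; 1 = fully transparent."""
    intensity = max(0.0, min(intensity, max_intensity))
    return 1.0 - intensity / max_intensity
```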
52. An apparatus for producing an image on a display screen of a world space model made up from a plurality of features defined by world space coordinates and viewed from an eyepoint defined in world space, the model including light point features each defined by world space coordinates determining a position of the light point in world space, wherein the apparatus comprises:
a. means for calculating screen space coordinates of each light point,
b. means for calculating a screen space area for each light point as a function of at least distance in world space between the light point and eyepoint,
c. intensity calculating means for calculating an intensity for each light point,
d. means for defining the screen space positions of a plurality of sampling points distributed across the display screen using the calculated light point screen space coordinates,
e. means for determining for each light point which of the sampling points lie within the calculated area of the light point, and
f. means for producing an output to a display device for each light point, said output corresponding to the calculated light point intensity and the particular sampling points which lie within the calculated area of the light point.

View Dependent Claims (53, 54, 55, 56, 57, 58, 59)
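Elements b and e of claim 52 can be read as: compute a screen space extent for the light point that shrinks with eyepoint distance, then test which sampling points fall inside it. The sketch below assumes a circular area and a simple 1/distance radius model with a minimum size; both are illustrative choices, not the claimed function.

```python
import math

# Sketch under assumptions: a circular light point footprint whose
# screen-space radius falls off as base_radius / distance, clamped to a
# minimum so distant light points remain visible.

def light_point_radius(base_radius, distance, min_radius=0.5):
    """Screen-space radius as a function of world-space distance."""
    return max(min_radius, base_radius / distance)

def covered_sampling_points(center, radius, sampling_points):
    """Return the sampling points lying within the light point's area."""
    cx, cy = center
    return [p for p in sampling_points
            if math.hypot(p[0] - cx, p[1] - cy) <= radius]
```

Element f would then weight the light point's calculated intensity by how many of a pixel's sampling points the footprint covers.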
60. An apparatus for scan converting data describing a plurality of features to enable the display of an image of a world space model defined by those features, each feature having a boundary defined by a plurality of straight edges, and each edge being defined by a line equation in screen space coordinates, the apparatus comprising means for dividing the screen into a plurality of sub-areas, and means for analyzing the coverage of any one sub-area by any one feature, wherein the coverage analyzing means comprises:
a. means for calculating a perpendicular distance from a reference point within the one sub-area to each edge of the one feature,
b. means for calculating a limiting distance from the reference point, the limiting distance being such that if a feature edge is at a perpendicular distance from the reference point which is greater than the limiting distance that edge cannot cross the sub-area,
c. means for comparing the calculated perpendicular distances with the limiting distance, and
d. means for assessing coverage of the sub-area by a feature on the basis of a logical combination of results of the comparisons between the calculated distances and the limiting distance.

View Dependent Claims (61)
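The comparison logic of claim 60 can be sketched as follows, under stated assumptions: the sub-area is a square of half-width h centred on the reference point, each edge is a normalised line equation a*x + b*y + c = 0 with the feature's interior on the positive side, and the limiting distance is taken as the half-diagonal h*sqrt(2) (sufficient to guarantee the edge cannot cross the sub-area). This is one plausible reading for convex features, not the patented logic.

```python
import math

# Sketch: classify a square sub-area's coverage by a convex feature
# using signed perpendicular edge distances against a limiting distance.

def classify_coverage(ref, half_width, edges):
    """ref: (x, y) reference point at the sub-area centre.
    edges: list of (a, b, c) with a*a + b*b == 1 (normalised), interior
    of the feature on the positive side.
    Returns 'none', 'full', or 'partial'."""
    limit = half_width * math.sqrt(2.0)  # half-diagonal of the sub-area
    x, y = ref
    dists = [a * x + b * y + c for a, b, c in edges]
    if any(d < -limit for d in dists):
        return "none"     # sub-area wholly outside some edge
    if all(d > limit for d in dists):
        return "full"     # sub-area wholly inside every edge
    return "partial"      # at least one edge may cross the sub-area
```

A feature need only be added to a sub-area's list (claim 1, element c) when the result is "full" or "partial".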
62. An apparatus for processing data describing a model an image of which is to be displayed on a screen, the model comprising a plurality of features each described in terms of geometrical attributes defining position and orientation of the feature and non-geometrical attributes defining characteristics of the feature, and the image being intended to represent a view of the model from an eyepoint in world space, comprising:

a. a database in which model data is stored in a hierarchical tree structure having a root node corresponding to a base of the tree, branch nodes corresponding to branching points of the tree, and leaf nodes corresponding to ends of individual branches of the tree, each node of the tree storing data describing a respective object, leaf nodes storing at least one feature of the respective object, and root and branch nodes storing at least one pointer to another object and transformation data defining a relative position and orientation of the pointed to object relative to a pointed from object, whereby successive locations in the tree structure store data related to successively more detailed portions of the model,
b. a transformation processor having a parallel array of object processors, and
c. a controller for reading out data from the database to the object processors such that data read out from one node is read out to a corresponding one of a plurality of data processors, wherein each data processor is adapted to receive data read out from any node of the tree, one of said object processors responding to read out of pointer data by transforming the respective transformation data into a common coordinate space and returning the pointer and the transformed data to the controller, and the one object processor responding to read out of a feature by transforming the geometrical attributes of the feature, the controller reading out data stored at the node corresponding to the base of the tree and subsequently reading out data stored at nodes of the tree identified by the pointers returned to it from the object processors.
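The traversal described in claim 62 starts at the root, follows pointers to child objects, and accumulates each pointer's relative transformation so that features end up in a common coordinate space. The sketch below shows that idea sequentially; the class names and the use of a 2-D translation in place of a full position-and-orientation transform are simplifying assumptions, and the claimed parallel object processors are not modelled.

```python
# Illustrative sketch of the hierarchical tree traversal, with the
# accumulated "offset" standing in for the concatenated transformation.

class Node:
    def __init__(self, features=(), children=()):
        self.features = list(features)    # (x, y) points at this node
        self.children = list(children)    # (relative offset, Node) pairs

def collect_features(node, offset=(0.0, 0.0)):
    """Depth-first traversal; returns all features transformed into the
    common (root) coordinate space."""
    ox, oy = offset
    out = [(x + ox, y + oy) for x, y in node.features]
    for (dx, dy), child in node.children:
        out.extend(collect_features(child, (ox + dx, oy + dy)))
    return out
```

In the claimed apparatus this recursion is instead driven by the controller, which re-issues pointers returned by the object processors together with their already-concatenated transforms.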
Specification