Method of generating digital images of objects in 3D scenes while eliminating object overdrawing within the multiple graphics processing pipeline (GPPLS) of a parallel graphics processing system generating partial color-based complementary-type images along the viewing direction using black pixel rendering and subsequent recompositing operations

  • US 8,284,207 B2
  • Filed: 08/29/2008
  • Issued: 10/09/2012
  • Est. Priority Date: 11/19/2003
  • Status: Active Grant
First Claim

1. A method of generating image frames of a 3D scene containing objects along a viewing direction, using an object-division based parallel rendering process carried out on a parallel graphics processing system employing a plurality of graphics processing pipelines (GPPLs) configured to operate according to object-division mode of parallel rendering, while eliminating the overdrawing of objects in said 3D scene that are occluded by other objects along said viewing direction, wherein said GPPLs include a primary GPPL, wherein each said GPPL has a Z depth buffer in which each depth value has an x, y position, and a color image buffer in which each pixel value has an x, y position, wherein said 3D scene to be rendered along said viewing direction is decomposed into objects, and said objects are assigned to particular GPPLs for graphics processing, wherein during the generation of each image frame, said method comprising the steps of:

  • (a) transmitting graphics commands and data for all objects in the image frame, to all said GPPLs to be rendered;

  • (b) within each said GPPL, using said graphics commands and data transmitted in step (a) to locally generate a Global Depth Map (GDM), and then storing said GDM within the Z depth buffers of all of said GPPLs;

  • (c) transmitting graphics commands and data of objects in the image frame, to only assigned GPPLs;

  • (d) within the color buffer of each GPPL, (i) generating a partial color-based complementary-type image, using said GDM, a Z test filter operating on said Z depth buffer, and said graphics commands and data transmitted in step (c), and (ii) then storing said partial color-based complementary-type image in the color buffer of said GPPL;

    wherein the pixels of objects sent to assigned GPPLs are rendered as color pixel values within the color image buffers of the assigned GPPLs, while pixels of objects sent to non-assigned GPPLs are rendered as black pixel values within the color image buffers of the non-assigned GPPLs;

    wherein said partial color-based complementary-type image within the color frame buffer of each said GPPL comprises (i) black pixel values corresponding to objects in the image frame sent to a non-assigned GPPL, and (ii) color pixel values corresponding to objects in the image frame sent to an assigned GPPL and located closest to the viewer along said viewing direction;

    wherein, at a given x, y position in the color image buffers of said GPPLs, at most one said GPPL holds a color pixel value which has survived the Z test filter operating on said Z buffers using said GDM stored in said Z buffers of all said GPPLs, while all other said GPPLs hold a black pixel value; and

  • (e) after a final pass, recompositing a complete color image frame of said 3D scene within said primary GPPL, using said partial color-based complementary-type images stored in said color image buffers of said GPPLs, by simply combining, at each x, y position, the color and black pixel values of said partial color-based complementary-type images stored in the color image buffers within said GPPLs, so as to form said complete color image frame of the 3D scene, without using said GDM or any depth value information stored in the Z buffers of said GPPLs.
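The claimed steps can be sketched as a toy simulation. This is not the patent's implementation: names such as `GPPL`, `build_gdm`, and `render_assigned` are illustrative assumptions, a 1-D list of pixels stands in for the framebuffer, and color strings stand in for pixel values. It shows why step (e) needs no depth data: after the Z-test pass, at most one pipeline holds a non-black value at each position.

```python
# Scene: each object covers some pixel positions at a fixed depth and color.
# Object = (positions, depth, color); smaller depth = closer to the viewer.
objects = [
    ({0, 1, 2}, 5.0, "red"),    # farthest
    ({1, 2, 3}, 2.0, "green"),  # nearest where it covers
    ({3, 4},    4.0, "blue"),
]

WIDTH = 6
FAR = float("inf")
BLACK = "black"

class GPPL:
    """One graphics processing pipeline with its own Z buffer and color buffer."""
    def __init__(self, assigned):
        self.assigned = assigned          # indices of objects assigned to this GPPL
        self.z = [FAR] * WIDTH            # Z depth buffer (holds the GDM)
        self.color = [BLACK] * WIDTH      # color image buffer

    def build_gdm(self, objs):
        # Steps (a)-(b): render ALL objects depth-only, producing the
        # Global Depth Map locally in this GPPL's Z buffer.
        for positions, depth, _ in objs:
            for x in positions:
                if depth < self.z[x]:
                    self.z[x] = depth

    def render_assigned(self, objs):
        # Steps (c)-(d): render only assigned objects' colors; a pixel
        # survives the Z test only if its depth equals the GDM value,
        # i.e. the object is the closest surface at that position.
        for i in self.assigned:
            positions, depth, color = objs[i]
            for x in positions:
                if depth == self.z[x]:
                    self.color[x] = color
        # Pixels of non-assigned objects are never colored, so they stay
        # black -> a partial color-based complementary-type image.

# Object-division: split the three objects across two GPPLs.
gppls = [GPPL({0, 2}), GPPL({1})]
for g in gppls:
    g.build_gdm(objects)
    g.render_assigned(objects)

# Step (e): recomposite WITHOUT any depth information. At each position at
# most one GPPL holds a color pixel, so "first non-black wins" suffices.
frame = [BLACK] * WIDTH
for g in gppls:
    for x in range(WIDTH):
        if g.color[x] != BLACK:
            frame[x] = g.color[x]

print(frame)  # ['red', 'green', 'green', 'green', 'blue', 'black']
```

Because every GPPL holds the same GDM, no depth values ever cross between pipelines at composition time; the merge in step (e) is a pure color-buffer combine, which is the overdraw-elimination benefit the claim describes.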

  • 4 Assignments