Apparatus and method for texture level of detail computation

Abstract
A graphic processing system to compute a texture level of detail. An embodiment of the graphic processing system includes a memory device, a driver, and level of detail computation logic. The memory device is configured to implement a first lookup table. The first lookup table is configured to provide a first level of detail component. The driver is configured to calculate a log value of a second level of detail component. The level of detail computation logic is coupled to the memory device and the driver. The level of detail computation logic is configured to compute a level of detail for a texture mapping operation based on the first level of detail component from the lookup table and the second level of detail component from the driver. Embodiments of the graphic processing system facilitate a simple hardware implementation using operations other than multiplication, square, and square root operations.
5 Citations

Mipmap generation method and apparatus
Patent #: US 9,881,392 B2, filed 01/16/2015
Current Assignee and Sponsoring Entity: Samsung Electronics Co. Ltd.

Graphics processing unit for adjusting level-of-detail, method of operating the same, and devices including the same
Patent #: US 9,905,036 B2, filed 07/29/2015
Current Assignee and Sponsoring Entity: Samsung Electronics Co. Ltd.

Processor for realizing at least two categories of functions
Patent #: US 10,372,359 B2, filed 05/10/2017
Current Assignee and Sponsoring Entity: ChengDu HaiCun IP Technology LLC

Configurable processor with in-package lookup table
Patent #: US 10,445,067 B2, filed 11/28/2018
Current Assignee and Sponsoring Entity: Hangzhou Haicun Information Technology Co. Ltd.

Apparatus and method for texture level of detail computation
Patent #: US 8,106,918 B2, filed 05/01/2007
Current Assignee and Sponsoring Entity: Giquila Corp.

18 Claims

1. A graphic processing system to compute a texture level of detail, the graphic processing system comprising:
a memory device to implement a first lookup table, the first lookup table to provide a first level of detail component;
a driver to calculate a log value of a second level of detail component;
level of detail computation logic coupled to the memory device and the driver, the level of detail computation logic operable to receive a second level of detail component from the driver and to compute a level of detail for a texture mapping operation based on the first level of detail component from the lookup table and the received second level of detail component from the driver.
(Dependent claims 2-9.)

10. A method for computing a texture level of detail, the method comprising:
computing, by a processor, a first log term based on a first plurality of level of detail components, the first log term corresponding to a first dimension of a texture map, the first plurality of level of detail components generated with lookup and arithmetic operations other than multiplication;
computing, by a processor, a second log term based on a second plurality of level of detail components, the second log term corresponding to a second dimension of a texture map, the second plurality of level of detail components generated with lookup and arithmetic operations other than multiplication; and
computing, by a processor, the level of detail according to a maximum of the first and second log terms.
(Dependent claims 11-15.)

16. An apparatus for computing a texture level of detail, the apparatus comprising:
means for computing first and second log terms using operations other than multiplication, square, and square root operations; and
means for computing the texture level of detail based on the first and second log terms.
(Dependent claims 17-18.)
Specification
This application is a continuation of U.S. patent application Ser. No. 11/799,710, filed May 1, 2007 and entitled "Apparatus and method for texture level of detail computation," which is hereby incorporated herein by reference in its entirety.
In video graphics applications, many techniques are used to render graphical images of different shapes and sizes. Typically, graphical images are made up of thousands, or even millions, of primitive shapes such as triangles. Each triangle is defined by the coordinates of its vertices. In order to enhance the three-dimensional aspects of a graphical rendering, texture may be added to each of the triangles or other drawing units. Texture coordinates are used to assign texture maps to each object as it is rendered on a display device. A texture map is an array of texture elements (texels) combined to form a standard block of texture.
Mapping textures to rendered objects can be complicated by the depths (i.e., distances relative to the viewer) of the various objects in a rendered scene. The orientation of the rendered objects can also affect the complexity of mapping the textures to the rendered objects. Furthermore, applying texture to a single object can be complicated if the object varies in depth and orientation on the display device.
Mipmapping is one conventional technique used to apply different texture maps, each having a corresponding level of detail (LOD), to such objects. Mipmapping prefilters a texture map and stores the different prefiltered versions, or levels, to decrease the complexity of texture minification calculations. Texture minification refers to correlating multiple texels to a single pixel when the rendered object is miniaturized on the display device to appear far away. In contrast, texture magnification refers to correlating the texture level of detail to an object that is magnified on the display device to appear relatively close to the viewer. However, it should be noted that texture filtering also may be used with texture magnification because the pixels may be misaligned relative to the texels of a particular texture map.
A texture level of detail computation is implemented to determine which texture map to use to apply texture to a particular rendered object. In general, texture maps with greater detail (sometimes designated as "level 0") are used for close objects, and texture maps with less detail (sometimes designated as "level 1," "level 2," "level 3," and so on) are used for farther objects.
A conventional two-dimensional (2D) texture level of detail computation implements two multiply operations, two square operations, one add operation, and one square root operation for each of two log terms in order to calculate the expected value. Similarly, a conventional three-dimensional (3D) texture level of detail computation implements three multiply operations, three square operations, two add operations, and one square root operation for each of three log terms. Since these computations are often performed in 32-bit floating point space, these conventional texture level of detail computations are expensive to implement in hardware.
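As a concrete reference point, the conventional 2D computation can be sketched as follows. This is an illustrative sketch, not the patent's hardware: the derivative names (dsdx, dtdx, and so on) and the texture dimensions follow the equations given later in the specification, and a base-2 logarithm is assumed.

```python
import math

def lod_2d_conventional(dsdx, dtdx, dsdy, dtdy, width, height):
    """Conventional 2D level of detail: each log term costs two multiplies,
    two squares, one add, and one square root before the final log."""
    term_x = math.sqrt((dsdx * width) ** 2 + (dtdx * height) ** 2)
    term_y = math.sqrt((dsdy * width) ** 2 + (dtdy * height) ** 2)
    return math.log2(max(term_x, term_y))
```

For example, sampling a 256x256 texture at one texel per pixel yields an LOD of 0, and at two texels per pixel an LOD of 1.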
Embodiments of a system are described. In one embodiment, the system is a graphic processing system to compute a texture level of detail. An embodiment of the graphic processing system includes a memory device, a driver, and level of detail computation logic. The memory device is configured to implement a first lookup table. The first lookup table is configured to provide a first level of detail component. The driver is configured to calculate a log value of a second level of detail component. The level of detail computation logic is coupled to the memory device and the driver. The level of detail computation logic is configured to compute a level of detail for a texture mapping operation based on the first level of detail component from the lookup table and the second level of detail component from the driver. Embodiments of the graphic processing system facilitate a simple hardware implementation using operations other than multiplication, square, and square root operations. Other embodiments of the system are also described.
Embodiments of a method are also described. In one embodiment, the method is a method for computing a texture level of detail. An embodiment of the method includes computing a first log term based on a first plurality of level of detail components. The first log term corresponds to a first dimension of a texture map. The first plurality of level of detail components are generated with lookup and arithmetic operations other than multiplication. The method also includes computing a second log term based on a second plurality of level of detail components. The second log term corresponds to a second dimension of a texture map. The second plurality of level of detail components are generated with lookup and arithmetic operations other than multiplication. The method also includes computing the level of detail according to a maximum of the first and second log terms. Other embodiments of the method are also described.
Embodiments of an apparatus are also described. In one embodiment, the apparatus is an apparatus for computing a texture level of detail. An embodiment of the apparatus includes means for computing first and second log terms using operations other than multiplication, square, and square root operations. The apparatus also includes means for computing the texture level of detail based on the first and second log terms. Other embodiments of the apparatus are also described.
Other aspects and advantages of embodiments of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrated by way of example of the principles of the invention.
Throughout the description, similar reference numbers may be used to identify similar elements.
The depicted integrated processor circuit 102 includes a central processing unit (CPU) 108 which includes a memory management unit (MMU) 110 and one or more instruction and/or data caches 112. The integrated processor circuit 102 also includes a memory interface 114 to interface with the memory 104. The integrated processor circuit 102 also includes a graphics processor 116 which includes a texture engine 118. In one embodiment, the texture engine 118 implements one or more texture operations related to the texture pipeline 130 shown in
A direct memory access (DMA) controller 122 is also coupled to the bus 120. The DMA controller 122 couples the bus 120 to an interface (I/F) bus 124 to which other core logic functional components (not shown) such as an encoder/decoder (CODEC) interface, a parallel interface, a serial interface, and an input device interface may be coupled. In one embodiment, the DMA controller 122 accesses data stored in the memory device 104 via the memory interface 114 and provides the data to peripheral devices connected to the I/F bus 124. The DMA controller 122 also sends data from the peripheral devices to the memory device 104 via the memory interface 114.
In one embodiment, the graphics processor 116 requests and accesses graphical data from the memory device 104 via the memory interface 114. The graphics processor 116 then processes the data, formats the processed data, and sends the formatted data to the display device 106. In some embodiments, the display device 106 may be a liquid crystal display (LCD), a cathode ray tube (CRT), or a television (TV) monitor. Other embodiments of the computer graphics system 100 may include more or fewer components and may be capable of implementing fewer or more operations.
Each of the plurality of texture maps 130 corresponds to a level of detail (LOD) because the detail of the texture representations varies among the plurality of texture maps 130. For example, the high resolution texture map 132 has more texture detail than the first intermediate texture map 134. Hence, the high resolution texture map 132 may be used to map very detailed textures to an object, or a part of an object, that is represented on the display device 106 as being close to the viewer. Similarly, the first intermediate texture map 134 has more texture detail than the second intermediate texture map 136, which has more texture detail than the third intermediate texture map 138, which has more texture detail than the low resolution texture map 139. Hence, the low resolution texture map 139 may be used to map low-detail textures to an object, or a part of an object, that is represented on the display device 106 as being far away from the viewer. Generating the plurality of texture maps 130 may be performed using various known texture map generation techniques, including compression, interpolation, filtering, and so forth.
Since the high resolution texture map 132 represents the most detailed texture of the plurality of texture maps 130, the high resolution texture map 132 may be designated as "level 0." Similarly, the first intermediate texture map 134 may be designated as "level 1," the second intermediate texture map 136 may be designated as "level 2," the third intermediate texture map 138 may be designated as "level 3," and the low resolution texture map 139 may be designated as "level 4." Additionally, various values from the level of detail computation may correspond to each of the levels, or texture maps. In this way, the level of detail computation can be used to select one of the plurality of texture maps 130 to be used to texture an object, or a part of an object, represented on the display device 106.
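The mapping from a computed LOD value to one of the five texture maps above can be sketched as follows. The clamping and nearest-level rounding policy here is an assumption for illustration; real hardware may instead blend the two nearest levels (trilinear filtering).

```python
def select_mip_level(lod, num_levels=5):
    """Clamp a computed LOD to the available mipmap chain: level 0 is the
    high resolution map, level num_levels - 1 the low resolution map."""
    level = int(round(lod))  # nearest level (an assumed policy)
    return max(0, min(num_levels - 1, level))
```

A negative LOD (magnification) still selects level 0, and an LOD beyond the chain selects the lowest-resolution map.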
In one embodiment, the shader 140 receives pixel coordinates corresponding to a triangle (or other drawing object) to be rendered on the display device 106. The texture coordinate generator 142 then generates texture coordinates corresponding to the pixel coordinates. In one embodiment, the texture coordinate generator 142 implements the level of detail computation, as described below. The texture address generator 144 then determines the memory address of the texture map corresponding to the level of detail computation. The texture address generator 144 sends the texture map address to the texture cache 146, which determines if the requested texture map is stored in the cache or in another memory device. If the texture map is stored in another memory device, then the texture cache 146 retrieves the requested data from the memory device and stores it in the texture cache 146. The texture cache 146 then provides a copy of the requested texture map to the texture filter 148.
The texture filter 148 correlates the texels of the texture map to each of the corresponding pixels of the display device 106. In some embodiments, there is a one-to-one correlation between the texels and the pixels with respect to size and location. Alternatively, there may be a one-to-one correlation between the texels and the pixels with respect to size, but the texture filter 148 nevertheless performs texture filtering because the locations of the texels do not align with the locations of the pixels. In other embodiments, the texel sizes are different from the pixel sizes, so the texture filter 148 implements magnification or minification to correlate the texels of the requested texture map to the pixels of the display device 106. Other embodiments of the texture filter 148 may implement other types of texture filtering.
In one embodiment, the level of detail computation performed in conjunction with the level of detail computation architecture 150 is based on logarithmic mathematical equations derived from conventional level of detail computations. The use of these equations facilitates a less complicated hardware (or software) implementation of the conventional level of detail computation. In one embodiment, this improved level of detail computation is implemented using integer addition and lookup operations. Additionally, embodiments of this level of detail computation are implemented without using multiplication, square, or square root operations.
The following two-dimensional (2D) level of detail equation is a starting point to derive the level of detail computation implemented by the level of detail computation architecture 150 of
LOD_{2D} = \log \max\left( \sqrt{(dsdx \cdot width)^2 + (dtdx \cdot height)^2},\ \sqrt{(dsdy \cdot width)^2 + (dtdy \cdot height)^2} \right)
This equation can be written as follows:
LOD_{2D} = \max\left( \log\sqrt{(dsdx \cdot width)^2 + (dtdx \cdot height)^2},\ \log\sqrt{(dsdy \cdot width)^2 + (dtdy \cdot height)^2} \right) = \max(L_1, L_2),

where

L_1 = \log\sqrt{(dsdx \cdot width)^2 + (dtdx \cdot height)^2}, and

L_2 = \log\sqrt{(dsdy \cdot width)^2 + (dtdy \cdot height)^2}
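Pushing the single outer logarithm inside the max is valid because log is monotonically increasing on positive arguments. A quick numerical check of the identity, with a base-2 log assumed and hypothetical derivative values:

```python
import math

def term(ds, dt, width, height):
    """One radical from the LOD equation: sqrt((ds*width)^2 + (dt*height)^2)."""
    return math.sqrt((ds * width) ** 2 + (dt * height) ** 2)

a = term(0.010, 0.002, 512, 512)  # x-direction term
b = term(0.001, 0.007, 512, 512)  # y-direction term
lod_log_outside = math.log2(max(a, b))            # log Max(...)
lod_log_inside = max(math.log2(a), math.log2(b))  # Max(log ..., log ...)
```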
For each of the two log terms, L_1 and L_2, there are two multiplication operations, two square operations, one add operation, and one square root operation. These operations may be implemented in 32-bit floating point space. In order to remove the multiplication, square, and square root operations, the following notations are used:

A = dsdx \cdot width, and
B = dtdx \cdot height

Also, it is assumed that:

A \geq B, A \geq 0, and B \geq 0

Therefore, it follows that:

(dsdx \cdot width)^2 \geq (dtdx \cdot height)^2
Using these notations and assumptions, the equation for the first log term, L_1, can be rewritten as follows:

L_1 = \log\sqrt{A^2 + B^2} = \log\sqrt{A^2 \cdot (1 + B^2/A^2)} = \log\left(A \sqrt{1 + B^2/A^2}\right) = \log A + \log\sqrt{1 + B^2/A^2} = \log A + 0.5 \cdot \log(1 + B^2/A^2)
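The chain of equalities uses only log(uv) = log u + log v and log sqrt(u) = 0.5 log u, so it can be verified numerically for any A > 0, B >= 0 (base-2 log assumed):

```python
import math

def l1_direct(a, b):
    """L1 computed the conventional way: log sqrt(A^2 + B^2)."""
    return math.log2(math.sqrt(a * a + b * b))

def l1_factored(a, b):
    """L1 after factoring A^2 out of the radical:
    log A + 0.5 * log(1 + B^2/A^2)."""
    return math.log2(a) + 0.5 * math.log2(1.0 + (b * b) / (a * a))
```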
Substituting back again for A and B results in:

L_1 = \log(dsdx \cdot width) + 0.5 \cdot \log\left(1 + \frac{(dtdx \cdot height)^2}{(dsdx \cdot width)^2}\right) = \log(dsdx) + \log(width) + 0.5 \cdot \log\left(1 + \frac{(dtdx \cdot height)^2}{(dsdx \cdot width)^2}\right)
A similar result can be obtained for the other log term, L_2, of the original equation:

L_2 = \log\sqrt{(dsdy \cdot width)^2 + (dtdy \cdot height)^2} = \log(dsdy) + \log(width) + 0.5 \cdot \log\left(1 + \frac{(dtdy \cdot height)^2}{(dsdy \cdot width)^2}\right)
Thus, both of the log terms, L_1 and L_2, of the original equation can be rewritten into logarithmic equations with three components. The first component is the log(dsdx) or the log(dsdy) component. The second component is the log(width) component. And the third component is referred to as a function component. In one embodiment, the expected LOD is expressed in fixed point 7.5 format so that the integer portion is expressed using seven bits and the fractional portion is expressed using five bits. Other embodiments may use other numerical formats.
In one embodiment, the first component of each of the L_1 and L_2 equations may be derived from a lookup table (or separate lookup tables) such as the log lookup table 156. In one embodiment, the log lookup table 156 is a six-bit table, although other embodiments may use other types of tables.

The second component of each of the L_1 and L_2 equations may be provided, in part, by the software 162. In one embodiment, the second component is a per-texture attribute, so the log(width) component may be provided in fixed point 7.5 format by the driver 160. Other embodiments may generate the second component in another manner.

The third component of each of the L_1 and L_2 equations is the function component. In order to obtain the function component for the L_1 equation, it may be written as follows:
f(x) = 0.5 \cdot \log\left(1 + \frac{(dtdx \cdot height)^2}{(dsdx \cdot width)^2}\right) = 0.5 \cdot \log\left(1 + \frac{B^2}{A^2}\right) = 0.5 \cdot \log\left(1 + \left(\frac{B}{A}\right)^2\right) = 0.5 \cdot \log\left(1 + \left(2^{\log(B) - \log(A)}\right)^2\right) = 0.5 \cdot \log\left(1 + 2^{2 \cdot (\log(B) - \log(A))}\right) = 0.5 \cdot \log\left(1 + 2^{2 \cdot (\log(dtdx \cdot height) - \log(dsdx \cdot width))}\right) = 0.5 \cdot \log\left(1 + 2^{2 \cdot (\log(dtdx) - \log(dsdx) + \log(height) - \log(width))}\right)
Therefore, the equation for the function component of the L_1 equation may be rewritten as follows:

f(x) = 0.5 \cdot \log(1 + 2^{2x}), where

x = \log(dtdx) - \log(dsdx) + \log(height) - \log(width)

Using this notation, the L_1 equation can be rewritten as follows:

L_1 = \log(dsdx) + \log(width) + f(x)

Using similar mathematics and notations, the L_2 equation can be rewritten as follows:

L_2 = \log(dsdy) + \log(width) + f(y), where

f(y) = 0.5 \cdot \log(1 + 2^{2y}) and

y = \log(dtdy) - \log(dsdy) + \log(height) - \log(width)
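In floating point, the three-component form can be checked directly against the original radical form. The sketch below assumes all derivatives are positive and uses base-2 logs; the fixed-point and lookup-table details come later:

```python
import math

def f(u):
    """Function component: f(u) = 0.5 * log2(1 + 2**(2u))."""
    return 0.5 * math.log2(1.0 + 2.0 ** (2.0 * u))

def l1_decomposed(dsdx, dtdx, width, height):
    """L1 = log(dsdx) + log(width) + f(x), with
    x = log(dtdx) - log(dsdx) + log(height) - log(width)."""
    x = math.log2(dtdx) - math.log2(dsdx) + math.log2(height) - math.log2(width)
    return math.log2(dsdx) + math.log2(width) + f(x)
```

The same decomposition applies to L_2 with the y-direction derivatives.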
In one embodiment, the third component, the function component, of the L_1 and L_2 equations may be derived from a lookup table (or separate tables) such as the function lookup table 158. One example of the function table 158 is provided below, although other embodiments may use other lookup tables.
S12 log_adj_table[64] = {
    0x10, 0x0f, 0x0f, 0x0e, 0x0e, 0x0d, 0x0d, 0x0c,
    0x0c, 0x0b, 0x0b, 0x0b, 0x0a, 0x0a, 0x0a, 0x09,
    0x09, 0x09, 0x08, 0x08, 0x08, 0x07, 0x07, 0x07,
    0x06, 0x06, 0x06, 0x06, 0x06, 0x05, 0x05, 0x05,
    0x05, 0x04, 0x04, 0x04, 0x04, 0x04, 0x04, 0x03,
    0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x02, 0x02,
    0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02,
    0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01
};
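The values above appear to be f(x) sampled on the 7.5 fixed-point grid: reading entry i as f(-i/32) stored with five fractional bits (truncated) reproduces the whole table. This generator is an inference from the numbers, not something the specification states:

```python
import math

def log_adj_entry(i):
    """Hypothesized generator for log_adj_table: f(x) = 0.5*log2(1 + 2**(2x))
    sampled at x = -i/32 and truncated to 5 fractional bits (fixed point 7.5)."""
    x = -i / 32.0
    return math.floor(32 * 0.5 * math.log2(1.0 + 2.0 ** (2.0 * x)))

log_adj_table = [log_adj_entry(i) for i in range(64)]
```

Entry 0 is f(0) = 0.5, stored as 0x10 = 16/32, and the entries decay toward 0x01 as B/A shrinks.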
Since A \geq B, A \geq 0, and B \geq 0, it follows that:

1 \geq B/A \geq 0
1 \geq (B/A)^2 \geq 0
2 \geq 1 + (B/A)^2 \geq 1
\log 2 \geq \log(1 + (B/A)^2) \geq \log 1
1 \geq \log(1 + (B/A)^2) \geq 0
0.5 \geq 0.5 \cdot \log(1 + (B/A)^2) \geq 0

Therefore, the function component of each of the L_1 and L_2 equations is within the range of [0.0, 0.5]. In one embodiment, the input is in 7.5 fixed point format, and the function lookup table 158 is a six-bit table. However, other embodiments may use other formats.
In one embodiment, using the above components for each of the L_1 and L_2 equations facilitates performing the level of detail computation with only lookup tables and integer math. Therefore, the hardware does not necessarily need to be implemented to perform multiplication, square, and square root operations.
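Putting the pieces together, a software model of the lookup-and-add pipeline might look like the following. The rounding policy, the stand-ins for the two lookup tables, and the per-term operand swap (to keep the function-table argument nonpositive, matching the A >= B assumption) are all illustrative assumptions rather than the patented hardware:

```python
import math

FRAC = 5  # fixed point 7.5: five fractional bits

def to_fixed(v):
    """Quantize to 7.5 fixed point (round-to-nearest is an assumed policy)."""
    return int(round(v * (1 << FRAC)))

def log_lut(v):
    """Stand-in for the log lookup table 156: log2(v) in 7.5 fixed point."""
    return to_fixed(math.log2(v))

def func_lut(x_fixed):
    """Stand-in for the function lookup table 158: f(x) for x <= 0."""
    x = x_fixed / float(1 << FRAC)
    return to_fixed(0.5 * math.log2(1.0 + 2.0 ** (2.0 * x)))

def lod_2d_fixed(dsdx, dtdx, dsdy, dtdy, width, height):
    """2D LOD using only lookups, integer adds/subtracts, and max."""
    log_w, log_h = log_lut(width), log_lut(height)
    a1, b1 = log_lut(dsdx) + log_w, log_lut(dtdx) + log_h
    a2, b2 = log_lut(dsdy) + log_w, log_lut(dtdy) + log_h
    l1 = max(a1, b1) + func_lut(min(a1, b1) - max(a1, b1))
    l2 = max(a2, b2) + func_lut(min(a2, b2) - max(a2, b2))
    return max(l1, l2)  # result in 7.5 fixed point
```

Dividing the result by 32 recovers the LOD in texture levels; against a floating-point reference the error stays within the 5-bit quantization.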
In another embodiment, logarithmic math also may be used to derive similar equations for a three-dimensional (3D) level of detail computation. A 3D level of detail computation also may be referred to as a volume texture level of detail computation. The following 3D level of detail equation is a starting point to derive the level of detail computation implemented by the level of detail computation architecture 150 of
LOD_{3D} = \log \max\left( \sqrt{(dsdx \cdot width)^2 + (dtdx \cdot height)^2 + (drdx \cdot depth)^2},\ \sqrt{(dsdy \cdot width)^2 + (dtdy \cdot height)^2 + (drdy \cdot depth)^2} \right) = \max\left( \log\sqrt{(dsdx \cdot width)^2 + (dtdx \cdot height)^2 + (drdx \cdot depth)^2},\ \log\sqrt{(dsdy \cdot width)^2 + (dtdy \cdot height)^2 + (drdy \cdot depth)^2} \right) = \max(V_1, V_2),

where

V_1 = \log\sqrt{(dsdx \cdot width)^2 + (dtdx \cdot height)^2 + (drdx \cdot depth)^2}, and

V_2 = \log\sqrt{(dsdy \cdot width)^2 + (dtdy \cdot height)^2 + (drdy \cdot depth)^2}
Each component of the 3D level of detail computation may be simplified using mathematical operations similar to those described above for the 2D level of detail computation. For example, the V_1 equation can be rewritten as follows:
V_1 = \log\sqrt{A^2 + B^2 + C^2}, where

A = dsdx \cdot width,
B = dtdx \cdot height, and
C = drdx \cdot depth

Using an additional substitution, the V_1 equation can be rewritten as follows:

V_1 = \log\sqrt{D^2 + C^2}, where

D = \sqrt{A^2 + B^2}
Therefore, it follows that:

V_1 = \log D + 0.5 \cdot \log(1 + C^2/D^2)

Substituting back for D in the first term provides:

V_1 = \log\sqrt{A^2 + B^2} + f(x'), where

f(x') = 0.5 \cdot \log(1 + 2^{2x'}) and

x' = \log(drdx) + \log(depth) - L_1

Since the terms \log\sqrt{A^2 + B^2} and L_1 are defined above for the 2D level of detail computation, the V_1 equation can be rewritten as follows:

V_1 = L_1 + f(x')
Therefore, the 3D level of detail computation may be performed using similar operations as the 2D level of detail computation. Furthermore, some embodiments compute L_1 using the 2D process described above, then compute f(x') using the same lookup logic and computation method as for f(x). In one embodiment, the 3D level of detail computation is performed using only lookup tables and integer math.
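In floating point, reusing the 2D term for the 3D case can be checked directly. Base-2 logs and positive inputs are assumed, with x' taken as log(C) - log(D) so that 2^{2x'} = C^2/D^2:

```python
import math

def f(u):
    """Function component: f(u) = 0.5 * log2(1 + 2**(2u))."""
    return 0.5 * math.log2(1.0 + 2.0 ** (2.0 * u))

def v1_from_l1(dsdx, dtdx, drdx, width, height, depth):
    """V1 = L1 + f(x'): extend the 2D log term by one more function lookup."""
    a, b, c = dsdx * width, dtdx * height, drdx * depth
    l1 = math.log2(math.sqrt(a * a + b * b))  # the 2D term; note log(D) = L1
    x_prime = math.log2(c) - l1               # log(C) - log(D)
    return l1 + f(x_prime)
```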
In the depicted level of detail computation method 170, the lookup logic 154 references the log lookup table 156 to look up 172 the first component, log(dsdx), of the first log term, L_1. Then, the driver 160 provides 174 the second component, log(width), of the first log term, L_1. Then, the lookup logic 154 references the function lookup table 158 to look up 176 the function component, f(x), of the first log term, L_1. Using these three components, the level of detail computation logic 152 computes 178 the first log term, L_1.

The level of detail computation logic 152 uses similar operations for the second log term, L_2. The lookup logic 154 references the log lookup table 156 to look up 182 the first component, log(dsdy), of the second log term, L_2. Then, the driver 160 provides 184 the second component, log(width), of the second log term, L_2. Then, the lookup logic 154 references the function lookup table 158 to look up 186 the function component, f(y), of the second log term, L_2. Using these three components, the level of detail computation logic 152 computes 188 the second log term, L_2.

The level of detail computation logic 152 then computes 190 the level of detail based on the maximum of the first and second log terms, L_1 and L_2. The depicted level of detail computation method 170 then ends.
It should be noted that embodiments of the level of detail computation method 170 may be implemented in software, firmware, hardware, or some combination thereof. Additionally, some embodiments of the level of detail computation method 170 may be implemented using a hardware or software representation of one or more algorithms related to the operations described above. For example, software, hardware, or a combination of software and hardware may be implemented to compute one or more of the various terms or components described above.
Embodiments of the invention also may involve a number of functions to be performed by a computer processor such as a central processing unit (CPU), a graphics processing unit (GPU), or a microprocessor. The microprocessor may be a specialized or dedicated microprocessor that is configured to perform particular tasks by executing machine-readable software code that defines the particular tasks. The microprocessor also may be configured to operate and communicate with other devices such as direct memory access modules, memory storage devices, Internet related hardware, and other devices that relate to the transmission of data. The software code may be configured using software formats such as Java, C++, XML (Extensible Markup Language) and other languages that may be used to define functions that relate to operations of devices required to carry out the functional operations described herein. The code may be written in different forms and styles, many of which are known to those skilled in the art. Different code formats, code configurations, styles and forms of software programs and other means of configuring code to define the operations of a microprocessor may be implemented.
Within the different types of computers, such as computer servers, that utilize the invention, there exist different types of memory devices for storing and retrieving information while performing some or all of the functions described herein. In some embodiments, the memory/storage device where data is stored may be a separate device that is external to the processor, or may be configured in a monolithic device, where the memory or storage device is located on the same integrated circuit, such as components connected on a single substrate. Cache memory devices are often included in computers for use by the CPU or GPU as a convenient storage location for information that is frequently stored and retrieved. Similarly, a persistent memory is also frequently used with such computers for maintaining information that is frequently retrieved by a central processing unit, but that is not often altered within the persistent memory, unlike the cache memory. Main memory is also usually included for storing and retrieving larger amounts of information such as data and software applications configured to perform certain functions when executed by the central processing unit. These memory devices may be configured as random access memory (RAM), static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, and other memory storage devices that may be accessed by a central processing unit to store and retrieve information. Embodiments may be implemented with various memory and storage devices, as well as any commonly used protocol for storing and retrieving information to and from these memory devices.
Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or suboperations of distinct operations may be implemented in an intermittent and/or alternating manner.
Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.