System and method for deadlock-free pipelining
Abstract
A system and method for facilitating increased graphics processing without deadlock. Embodiments of the present invention provide storage for execution unit pipeline results (e.g., texture pipeline results). This storage allows increased processing of multiple threads, as the texture unit may be used to hold results while the corresponding locations of the register file are reallocated to other threads. Embodiments further prevent deadlock by limiting the number of outstanding requests and by ensuring that a set of requests is not issued unless resources are available to complete every request of the set. Embodiments of the present invention thus provide increased performance free of deadlock.
Claims
1. A method of processing data in a graphics processing unit, said method comprising:

maintaining a count of vacant memory resources of a pipeline buffer, said pipeline buffer coupled to receive results from an execution unit of said graphics processing unit;

determining a number of requests for said execution unit of a thread, wherein said number of requests comprises a number of multiple request operations that are present within said thread before a read-back operation is present within said thread;

issuing said number of requests to said execution unit provided there are sufficient vacant memory resources of said pipeline buffer to accommodate all of said number of requests, otherwise not issuing any of said number of requests to said execution unit; and

said execution unit writing results of issued requests to said pipeline buffer after execution thereof.

(Dependent claims: 2, 3, 4, 5, 6, 7)
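The mechanism recited in claim 1 can be sketched in software. The following Python model is hypothetical (the `PipelineBuffer` class, its method names, and the string-valued "results" are illustrative inventions, not the patent's implementation, which would be hardware): it maintains a count of vacant buffer slots and issues a thread's batch of requests all-or-nothing, so a partially issued thread can never hold the execution unit while waiting for buffer space.

```python
from collections import deque

class PipelineBuffer:
    """Hypothetical model of the claimed scheme: a pipeline buffer that
    receives execution-unit (e.g., texture) results, plus a running count
    of its vacant memory resources."""

    def __init__(self, size):
        self.vacant = size       # count of vacant memory resources
        self.results = deque()   # results written by the execution unit

    def try_issue(self, requests):
        """Issue the whole batch only if every request can be accommodated;
        otherwise issue none of them (the claim's deadlock-avoidance rule)."""
        if len(requests) > self.vacant:
            return False                          # insufficient vacant resources
        self.vacant -= len(requests)              # reserve one slot per request
        for req in requests:                      # execution unit runs each request
            self.results.append(f"result({req})")  # ...and writes its result
        return True

    def read_back(self):
        """A read-back operation consumes one result, vacating its slot."""
        result = self.results.popleft()
        self.vacant += 1
        return result

def requests_before_readback(ops):
    """Count the request operations appearing in a thread's instruction
    stream before its first read-back operation (claim 1's batch size)."""
    count = 0
    for op in ops:
        if op == "read_back":
            break
        count += 1
    return count
```

A batch of `requests_before_readback(ops)` requests would then be issued atomically via `try_issue`; a `False` return means the thread waits until earlier results are read back and their slots vacated.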
8. A method of processing data in a graphics processing unit, said method comprising:

determining a size of a pipeline buffer, said pipeline buffer coupled to receive results from an execution unit of said graphics processing unit;

determining a number of threads allowable to concurrently operate with said execution unit based on a number of request operations within each thread and based further on said size of said pipeline buffer;

allowing only said number of threads, or fewer, to concurrently operate with said execution unit to prevent deadlock thereof;

said number of threads issuing request operations to said execution unit; and

said execution unit writing results of said request operations to said pipeline buffer after execution thereof.

(Dependent claims: 9, 10, 11, 12, 13, 14)
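Claim 8 describes a static variant of the same idea: rather than counting vacancies at issue time, cap the number of concurrently operating threads so that the buffer can absorb every request any admitted thread could have in flight. A minimal sketch, assuming the worst case of each thread's request count bounds its in-flight results (the function name and this assumption are mine, not the patent's):

```python
def max_concurrent_threads(buffer_size, requests_per_thread):
    """Number of threads allowed to operate concurrently with the execution
    unit such that, even with every thread's requests in flight, the pipeline
    buffer can hold all of their results, so no thread ever stalls the
    pipeline waiting for buffer space (deadlock avoided by construction)."""
    if requests_per_thread <= 0:
        raise ValueError("each thread must make at least one request")
    return buffer_size // requests_per_thread
```

For example, a 32-entry buffer with at most 4 request operations per thread admits 8 concurrent threads; a 30-entry buffer admits only 7, since an eighth thread's requests could not all be accommodated.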
15. A computer readable storage medium having stored thereon computer executable instructions that, if executed by a computer system, cause the computer system to perform a method for processing data in a graphics processing unit comprising:

maintaining a count of vacant memory resources of a pipeline buffer, said pipeline buffer coupled to receive results from an execution unit of said graphics processing unit;

determining a number of requests for said execution unit of a thread, wherein said number of requests comprises a number of multiple request operations that are present within said thread before a read-back operation is present within said thread;

issuing said number of requests to said execution unit provided there are sufficient vacant memory resources of said pipeline buffer to accommodate all of said number of requests, otherwise not issuing any of said number of requests to said execution unit; and

said execution unit writing results of issued requests to said pipeline buffer after execution thereof.

(Dependent claims: 16, 17, 18, 19, 20)
Specification