FIFO queue, memory resource, and task management for graphics processing
First Claim
1. A method for managing first-in first-out (FIFO) queues in graphics processing, comprising:
- executing, via parallel execution of multiple write threads of a graphics processing unit (GPU), a write operation to write data to one or more write memory locations in multiple pages of memory from a memory pool allocated to a FIFO queue of multiple FIFO queues, wherein, for a given write thread of the multiple write threads, the write operation comprises:
based on writing the data to the one or more write memory locations by using a write allocation pointer that is common to the FIFO queue, advancing a write done pointer to a next write memory location following the one or more write memory locations to which the data is written;
monitoring write done pointers associated with each of the multiple FIFO queues, including the write done pointer associated with the FIFO queue; and
in response to determining, based on monitoring the write done pointers and based on read allocation pointers, that written data is present in the FIFO queue but has not been consumed by, or scheduled for consumption by, one or more read threads:
determining an amount of written data available for consumption from each of the multiple FIFO queues, based at least on comparing the write done pointers and the read allocation pointers;
selecting, based on one or more criteria related to the multiple FIFO queues, the FIFO queue or a set of FIFO queues including the FIFO queue from which to consume the written data;
determining one or more base addresses of the written data to be processed based at least on one or more of the read allocation pointers associated with the FIFO queue or the set of FIFO queues;
executing, via parallel execution of multiple read threads of the GPU, a read operation to read the written data from one or more read memory locations in the multiple pages of memory associated with the FIFO queue or the set of FIFO queues, wherein the one or more read memory locations are determined based at least in part on the one or more base addresses; and
updating the one or more of the read allocation pointers, based on a range specified to one or more of the multiple read threads.
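The two-pointer write protocol recited above (a shared write allocation pointer that reserves space, and a write done pointer advanced only after data is actually written) can be sketched as a minimal single-process model. All names here (`FifoQueue`, `write_alloc`, etc.) are illustrative, not from the patent, and a lock stands in for the atomic operations a real GPU kernel would use; the model also assumes writers publish in order.

```python
import threading

class FifoQueue:
    """Toy model of the claimed two-pointer write protocol.

    write_alloc reserves space before the write; write_done is advanced
    only after the data has landed, so a scheduler comparing write_done
    against read_alloc never sees half-written data.
    """
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.write_alloc = 0   # next free slot (reservation pointer)
        self.write_done = 0    # everything below this is fully written
        self.read_alloc = 0    # everything below this is claimed by readers
        self.lock = threading.Lock()

    def write(self, items):
        # Reserve a contiguous range by advancing the shared write
        # allocation pointer (an atomic fetch-add on real hardware).
        with self.lock:
            base = self.write_alloc
            self.write_alloc += len(items)
        for i, item in enumerate(items):
            self.buf[base + i] = item
        # Publish: advance write_done past the range just written.
        # (Assumes in-order completion; real kernels need extra ordering.)
        with self.lock:
            self.write_done = max(self.write_done, base + len(items))
        return base

    def available(self):
        # Written but not yet consumed or scheduled for consumption.
        return self.write_done - self.read_alloc
```

A scheduler would poll `available()` across all queues to decide where readers should be dispatched, mirroring the monitoring step in the claim.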
Abstract
Methods and devices for managing first-in first-out (FIFO) queues in graphics processing are described. A write operation can be executed by multiple write threads on a graphics processing unit (GPU) to write data to memory locations in multiple pages of memory. The write operation can also include allocating additional pages of memory for the FIFO queue when a write allocation pointer is determined to achieve a threshold, so as to grow the FIFO queue before the memory is actually needed for writing. Similarly, a read operation can be executed by multiple read threads to read data from the memory locations. The read operation can also include deallocating pages of memory back to a memory pool when a read done pointer is determined to achieve a threshold, such as an end of a page.
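The abstract's threshold-triggered growth and page recycling can be sketched as follows. The page size, the grow-ahead condition, and all names (`PagedFifo`, `advance_write`, etc.) are assumptions for illustration; the patent only requires that a page is allocated when the write allocation pointer achieves a threshold and deallocated when the read done pointer does.

```python
PAGE_SIZE = 4  # illustrative page size, not from the patent

class PagedFifo:
    """Toy model of threshold-triggered page growth and page recycling."""
    def __init__(self, pool):
        self.pool = pool                  # free page ids shared by all queues
        self.pages = [self.pool.pop()]    # pages currently owned by this queue
        self.write_alloc = 0              # offset from the start of pages[0]
        self.read_done = 0                # fully-consumed offset, same origin

    def advance_write(self, n):
        self.write_alloc += n
        # Grow *before* the memory is needed: once the write allocation
        # pointer crosses into the last owned page, pre-allocate another.
        if self.write_alloc >= (len(self.pages) - 1) * PAGE_SIZE and self.pool:
            self.pages.append(self.pool.pop())

    def advance_read_done(self, n):
        self.read_done += n
        # Threshold = end of a page: recycle fully-consumed pages back to
        # the shared pool and rebase both offsets onto the new first page.
        while self.read_done >= PAGE_SIZE:
            self.pool.append(self.pages.pop(0))
            self.read_done -= PAGE_SIZE
            self.write_alloc -= PAGE_SIZE
```

Growing one page ahead of the write pointer means a write thread never stalls waiting for an allocation, which is the stated point of achieving the threshold early.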
26 Claims
1. A method for managing first-in first-out (FIFO) queues in graphics processing, comprising:
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
20. A device for managing first-in first-out (FIFO) queues in graphics processing, comprising:
a memory storing one or more parameters or instructions for managing FIFO queues in graphics processing; and
at least one processor coupled to the memory, wherein the at least one processor is configured to: initialize a memory pool of memory resources in the memory for multiple FIFO queues; allocate multiple pages of the memory pool to a FIFO queue of the multiple FIFO queues; and execute, via parallel execution of multiple write threads of a graphics processing unit (GPU), a write operation to write data to memory locations in the multiple pages of memory, wherein, for a given write thread of the multiple write threads, the write operation comprises: advancing a write allocation pointer to a next available memory location following one or more memory locations to which the data is to be written, wherein the write allocation pointer is common to the FIFO queue; detecting whether the write allocation pointer achieves a threshold write memory location; and where the write allocation pointer achieves the threshold write memory location, allocating at least one additional page of memory from the memory pool to the FIFO queue.
- View Dependent Claims (21)
22. A method for managing first-in first-out (FIFO) queues in graphics processing, comprising:
monitoring, via one or more processors of a graphics processing unit (GPU) or a central processing unit (CPU), write done pointers associated with each of one or more FIFO queues, and in response to determining, based on the write done pointers and read allocation pointers associated with the one or more FIFO queues, that data is present in a FIFO queue but has not been consumed by, or scheduled for consumption by, one or more read threads: determining an amount of the data that is available for consumption from each of the one or more FIFO queues, based at least on comparing the write done pointers and read allocation pointers, and/or on specified batch sizes; selecting, based on one or more of:
specified priorities of each of the one or more FIFO queues, counts of data available for consumption in each of the one or more FIFO queues, specified thresholds associated with each of the one or more FIFO queues, FIFO queue identifiers, and/or a balanced selection method, the FIFO queue or a set of FIFO queues from which to consume data; determining one or more base addresses of data to be processed based at least on a read allocation pointer associated with the FIFO queue or the read allocation pointers associated with the set of FIFO queues; invoking execution of one or more threads of a data consumption shader program to be executed in parallel on the GPU, wherein the data consumption shader program is configured to retrieve the data from the FIFO queue or the set of FIFO queues; and updating the read allocation pointer or the read allocation pointers, based on a range specified to the one or more shader threads. - View Dependent Claims (23, 24, 25, 26)
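Claim 22's selection step can be sketched as one possible policy combining three of the recited criteria: a per-queue threshold, specified priorities, and a balanced (round-robin) tie-break. The dict fields and the function name are hypothetical; the claim leaves the actual combination of criteria open.

```python
def select_queue(queues, last_index=-1):
    """Pick the next FIFO queue to drain (illustrative policy only).

    `queues` is a list of dicts with 'priority', 'write_done',
    'read_alloc', and 'threshold' fields; none of these names come from
    the patent. Among queues whose available data meets their threshold,
    the highest priority wins; ties fall to a balanced round-robin scan
    starting after `last_index`. Returns an index, or None if no queue
    has enough data ready.
    """
    n = len(queues)
    best = None
    for offset in range(1, n + 1):                    # round-robin scan order
        i = (last_index + offset) % n
        q = queues[i]
        avail = q["write_done"] - q["read_alloc"]     # data ready to consume
        if avail < q["threshold"]:
            continue
        if best is None or q["priority"] > queues[best]["priority"]:
            best = i
    return best
```

The selected index would then supply the read allocation pointer from which base addresses are computed before the data consumption shader threads are launched.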
Specification