DYNAMIC SHARED READ BUFFER MANAGEMENT
Abstract
A structure and method of allocating read buffers among multiple bus agents requesting read access in a multi-processor computer system. The number of outstanding reads a requestor may have based on the current function it is executing is dynamically limited, instead of based on local buffer space available or a fixed allocation, which improves the overall bandwidth of the requestors sharing the buffers. A requesting bus agent may control when read data may be returned from shared buffers to minimize the amount of local buffer space allocated for each requesting agent, while maintaining high bandwidth output for local buffers. Requests can be made for virtual buffers by oversubscribing the physical buffers and controlling the return of read data to the buffers.
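The oversubscription idea in the abstract can be illustrated with a minimal sketch. All names here (`SharedBufferPool`, `Requestor`, `virtual_limit`) are hypothetical, chosen for illustration; the patent does not prescribe this implementation. The point shown: an agent may hold more outstanding "virtual" read requests than there are physical shared buffers, because read data is only returned into a physical buffer when the agent is ready to drain it.

```python
# Hypothetical sketch of oversubscribing physical read buffers with
# virtual requests, per the abstract. Not the patented implementation.

class SharedBufferPool:
    """Physical shared read buffers in the bridge controller."""

    def __init__(self, physical_buffers):
        self.physical_buffers = physical_buffers
        self.in_use = 0

    def can_return_data(self):
        # Read data may be returned only while a physical buffer is free.
        return self.in_use < self.physical_buffers

    def begin_return(self):
        assert self.can_return_data()
        self.in_use += 1

    def drain(self):
        # Requestor has copied the data into its local buffer.
        self.in_use -= 1


class Requestor:
    """A bus agent allowed to oversubscribe the pool with virtual requests."""

    def __init__(self, virtual_limit):
        self.virtual_limit = virtual_limit
        self.outstanding = 0

    def try_issue(self):
        if self.outstanding < self.virtual_limit:
            self.outstanding += 1
            return True
        return False  # must wait for a completion first


pool = SharedBufferPool(physical_buffers=2)
agent = Requestor(virtual_limit=4)  # 4 virtual requests vs 2 physical buffers

issued = sum(agent.try_issue() for _ in range(6))
print(issued)  # → 4: virtual requests accepted beyond the physical count
```

Because data return is gated by `can_return_data()`, only two transfers are ever in flight even though four requests are outstanding, which is what lets the physical buffers be oversubscribed safely.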
19 Claims
1. In a multi-processor computer system having a hierarchical bus architecture facilitating transfer of data between a plurality of agents coupled to the bus, a method of managing access to shared buffer resources in a bridge controller, comprising:
- defining a limit of pending read data requests for a bus agent requesting read data, 1 to m, where m equals the number of buffers in the requesting bus agent;
waiting until a read operation completes once the number of pending read data requests reaches the limit prior to fetching additional read data for the requesting bus agent; and
employing a round-robin arbitration scheme to ensure the shared memory resources are not dominated by a first requesting bus agent, such that no executing process of a second bus agent stalls for lack of read data.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 15, 16, 18, 19)
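Claim 1 combines two mechanisms: a per-agent cap of m pending reads (m = that agent's local buffer count) and round-robin arbitration among agents still under their caps. The sketch below uses hypothetical names (`Agent`, `round_robin_grant`) to show how the two mechanisms interact; it is an illustration, not the claimed circuit.

```python
# Hypothetical sketch of claim 1: per-agent pending-read limits combined
# with round-robin arbitration over the shared buffers.

class Agent:
    def __init__(self, name, local_buffers):
        self.name = name
        self.limit = local_buffers   # 1..m pending reads allowed
        self.pending = 0

    def under_limit(self):
        return self.pending < self.limit


def round_robin_grant(agents, last_granted):
    """Grant the next agent after last_granted that is under its limit."""
    n = len(agents)
    for step in range(1, n + 1):
        idx = (last_granted + step) % n
        if agents[idx].under_limit():
            agents[idx].pending += 1
            return idx
    return None  # all agents at their limit: wait for a read to complete


a = Agent("A", local_buffers=1)
b = Agent("B", local_buffers=2)
grants, last = [], -1
for _ in range(4):
    idx = round_robin_grant([a, b], last)
    if idx is None:
        grants.append("wait")
    else:
        last = idx
        grants.append([a, b][idx].name)
print(grants)  # → ['A', 'B', 'B', 'wait']
```

Agent A is capped after one grant, B after two, and the fourth cycle stalls until a completion frees a slot; neither agent can dominate the shared buffers.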
9. In a multi-processor computer system, a method of managing read data requests from a plurality of bus agents, comprising:
- polling whether a first bus agent needs data to execute a function;
checking whether an idle local buffer is available if additional read data is required and terminating processing if no additional data is required;
determining whether there currently is a local buffer with one read data request pending if no idle local buffer is available;
waiting until a local buffer is idle if no local buffer with one read data request pending is available;
allowing a second pending read request to proceed if a local buffer has one read data request pending;
monitoring whether the number of pending read data requests for the first bus agent is less than a defined limit;
processing the read data request of the first bus agent when the number of pending read data requests for the first bus agent is less than the defined limit; and
determining whether there are additional read data requests after the request has been acknowledged.
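The decision sequence of claim 9 can be condensed into a single function. The names and return labels below (`service_request`, `"wait_for_idle_buffer"`, etc.) are hypothetical stand-ins for the claimed steps, in the order the claim recites them: check the data need, prefer an idle local buffer, fall back to a buffer with one pending request, then gate on the defined limit.

```python
# Hypothetical sketch of the claim 9 polling flow for one requestor.

def service_request(needs_data, idle_buffers, one_pending_buffers,
                    pending_count, limit):
    """Return the action taken for one polled read data request."""
    if not needs_data:
        return "terminate"                 # no additional data required
    if idle_buffers == 0:
        if one_pending_buffers == 0:
            return "wait_for_idle_buffer"  # wait until a buffer goes idle
        # else: a buffer with one pending read may take a second request
    if pending_count >= limit:
        return "wait_under_limit"          # defined limit reached
    return "process_request"


print(service_request(True, idle_buffers=1, one_pending_buffers=0,
                      pending_count=0, limit=2))  # → process_request
```

Note the limit check applies on both paths: even a piggybacked second request on a one-pending buffer must keep the agent under its defined limit before it is processed.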
10. A requesting bus agent, comprising:
- a plurality of local buffers to store read data used by one of a plurality of hardware accelerator engines coupled to the requesting bus agent; and
read request selection logic, comprising:
a plurality of registers to store an allocated read request limit for a plurality of executable functions serviced by the requesting bus agent, wherein the allocated read request limit is determined by a hardware accelerator function serviced by the requesting bus agent;
a first multiplexer to select one of the plurality of registers;
a pending request count register; and
a comparator having inputs from the first multiplexer and the pending request count register to select a next allowed read data request.
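The selection logic of claim 10 reduces to a mux-and-compare datapath: per-function limit registers, a multiplexer that picks the limit for the function currently being serviced, and a comparator against the pending-request count. The sketch below models that datapath in software; the function names and limit values are hypothetical.

```python
# Hypothetical software model of the claim 10 datapath:
# limit registers -> multiplexer -> comparator -> allow/deny next request.

def next_request_allowed(limit_registers, function_select, pending_count):
    selected_limit = limit_registers[function_select]  # first multiplexer
    return pending_count < selected_limit              # comparator output


# Example per-function allocated read request limits (illustrative values).
limits = {"compress": 4, "crypto": 2, "copy": 8}

print(next_request_allowed(limits, "crypto", 1))  # → True: 1 < 2
print(next_request_allowed(limits, "crypto", 2))  # → False: limit reached
```

Keying the limit to the executing function, rather than to raw buffer space, is what lets the same agent run bandwidth-hungry functions with a higher cap and light functions with a lower one.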
11. A method of managing a read data request issued from a requesting bus agent, comprising:
determining whether a first requesting bus agent has a pending read data request;
if so, monitoring whether an idle buffer is available when additional read data is required, and terminating processing if no additional data is required;
monitoring whether a request count for the first bus agent is less than a defined limit;
processing the read request for the first bus agent if the request count for the first bus agent is less than the defined limit; and
determining whether there are additional read data requests after the read data request has been acknowledged.
12. A read data controller to manage the flow rate of read data from a bridge controller to a requesting bus agent, comprising:
- a plurality of registers to monitor and communicate busy and idle status of a plurality of shared buffers; and
a plurality of multiplexors each operatively coupled to a corresponding one of the plurality of registers to select one read data request and pass the request to the bridge controller.
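Claim 12's controller pairs each shared buffer's status register with a multiplexor that forwards one read request to the bridge only while that buffer is idle. A minimal sketch, with hypothetical names and request labels:

```python
# Hypothetical sketch of claim 12: per-buffer status registers gate
# per-buffer multiplexors that pass one request each to the bridge.

def select_requests(buffer_status, queued_requests):
    """For each shared buffer, forward one queued request if it is idle."""
    forwarded = []
    for buf_id, status in buffer_status.items():
        if status == "idle" and queued_requests.get(buf_id):
            forwarded.append(queued_requests[buf_id][0])  # multiplexor pick
    return forwarded


status = {0: "idle", 1: "busy", 2: "idle"}
queues = {0: ["req_a"], 1: ["req_b"], 2: []}
print(select_requests(status, queues))  # → ['req_a']
```

Buffer 1 is busy and buffer 2 has nothing queued, so only buffer 0's request reaches the bridge; this is how the controller throttles the read data flow rate to match buffer availability.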
13. A read request arbiter to manage arbitration between a plurality of read requestors requiring use of a plurality of shared read buffers in a bridge, comprising:
a plurality of registers for controlling utilization of the plurality of shared buffers;
an adder operatively coupled to the plurality of registers to receive signals from the plurality of registers indicating whether each one of the plurality of shared buffers is idle or enabled to return read data;
a comparator coupled to the adder to monitor whether the number of shared buffers idle or enabled to return read data is less than a defined threshold, the comparator outputting a signal to a plurality of bus agent requestors to prevent requests by one of the plurality of read requestors not ready to receive data; and
an arbiter to receive and manage requests for read data from the plurality of bus agent requestors and forward to a bridge controller based on idle and busy states of the shared buffers.
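The arbiter of claim 13 can be modeled as an adder feeding a comparator: count the shared buffers that are idle or return-enabled, and when that count drops below a threshold, block requestors that are not ready to receive data. The names, states, and threshold below are hypothetical illustrations.

```python
# Hypothetical model of the claim 13 arbiter: adder counts available
# buffers, comparator gates not-ready requestors below a threshold.

def arbitrate(buffer_states, threshold, requests):
    available = sum(1 for s in buffer_states
                    if s in ("idle", "return_enabled"))  # adder
    if available < threshold:                            # comparator
        # Block requestors not ready to receive; ready ones may proceed.
        requests = [r for r in requests if r["ready"]]
    return [r["name"] for r in requests]


states = ["busy", "idle", "busy", "return_enabled"]
reqs = [{"name": "dma0", "ready": True}, {"name": "dma1", "ready": False}]
print(arbitrate(states, threshold=3, requests=reqs))  # → ['dma0']
```

With only two buffers available against a threshold of three, the not-ready requestor is held off so a buffer is never tied up by an agent that cannot drain it.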
14. A multi-processor computer system with shared memory resources, comprising:
a bus to facilitate transfer of address and data between multiple agents coupled to the bus;
a plurality of multi-processor nodes, each node having one or more processor cores connected thereto;
a memory subsystem associated with each one of the plurality of multi-processor nodes;
a local cache associated with each one of the one or more processor cores;
a bridge controller facilitating transfer of data between shared memory resources, wherein the bridge controller includes a set of shared read data buffers used for read requests to memory;
a plurality of coprocessor hardware accelerators, each coprocessor hardware accelerator having one or more dedicated processing functions and a configuration register to record settings for read request limits;
a direct memory access (DMA) controller to manage data flow to and from the plurality of coprocessor hardware accelerators; and
a plurality of local read buffers associated with each one of the plurality of coprocessor hardware accelerators.
(Dependent claim: 17)
Specification