SHARED STORAGE FOR MULTI-THREADED ORDERED QUEUES IN AN INTERCONNECT
Abstract
In one embodiment, payload of multiple threads is transferred between intellectual property (IP) cores of an integrated circuit by buffering the payload using a number of order queues. Each of the queues is guaranteed access to a minimum number of buffer entries that make up the queue. Each queue is assigned to a respective thread. The number of buffer entries that make up any queue is increased, above the minimum, by borrowing from a shared pool of unused buffer entries on a first-come, first-served basis. In another embodiment, an interconnect implements a content addressable memory (CAM) structure that is shared storage for a number of logical, multi-thread ordered queues that buffer requests and/or responses that are being routed between data processing elements coupled to the interconnect. Other embodiments are also described and claimed.
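The allocation policy in the abstract (a guaranteed per-queue minimum plus first-come, first-served borrowing from a shared pool) can be sketched in C. This is an illustrative model only, not the patent's implementation; all names (`shared_storage_t`, `alloc_entry`, `MIN_ENTRIES`, `SHARED_POOL`) and sizes are assumptions chosen for the example.

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_QUEUES   4
#define MIN_ENTRIES  2   /* entries guaranteed to each order queue */
#define SHARED_POOL  8   /* unused entries that any queue may borrow */

typedef struct {
    int used;      /* entries currently held by this queue */
    int borrowed;  /* of those, how many came from the shared pool */
} order_queue_t;

typedef struct {
    order_queue_t q[NUM_QUEUES];
    int pool_free; /* remaining borrowable entries */
} shared_storage_t;

void storage_init(shared_storage_t *s) {
    for (int i = 0; i < NUM_QUEUES; i++) {
        s->q[i].used = 0;
        s->q[i].borrowed = 0;
    }
    s->pool_free = SHARED_POOL;
}

/* Claim one buffer entry for queue i: the guaranteed minimum is
   consumed first; beyond that, entries are borrowed from the shared
   pool while any remain (first come, first served). */
bool alloc_entry(shared_storage_t *s, int i) {
    order_queue_t *q = &s->q[i];
    if (q->used < MIN_ENTRIES) {  /* guaranteed slot */
        q->used++;
        return true;
    }
    if (s->pool_free > 0) {       /* borrow from the shared pool */
        s->pool_free--;
        q->borrowed++;
        q->used++;
        return true;
    }
    return false;                 /* no entry available; queue stalls */
}

/* Release one entry; borrowed entries return to the shared pool
   (simplification: borrowed entries are released first). */
void free_entry(shared_storage_t *s, int i) {
    order_queue_t *q = &s->q[i];
    assert(q->used > 0);
    q->used--;
    if (q->borrowed > 0) {
        q->borrowed--;
        s->pool_free++;
    }
}
```

Note that because the minimum is reserved per queue, one thread exhausting the shared pool can never starve another queue of its guaranteed entries.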
19 Claims
1. A method for operating an interconnect comprising:
transferring payload of multiple threads between a plurality of cores of an integrated circuit by buffering the payload using a shared storage structure that includes a plurality of order queues and an index to track which thread is assigned to each queue;
guaranteeing each of the order queues access to a minimum number of buffer entries that make up the queue, for use by the thread that is assigned to the queue; and
increasing, above said minimum, the number of buffer entries that make up any one of the queues for any of the multiple threads being received by the shared storage structure, by borrowing from a shared pool of unused buffer entries on a first-come, first-served basis.
(Dependent claims: 2, 3, 4, 5, 6)
7. A system comprising:
an integrated circuit having one or more data processing elements and one or more memory storage elements; and
an interconnect to which the data processing elements are coupled, the interconnect to route a plurality of requests and a plurality of responses between the elements, wherein the interconnect implements a content addressable memory (CAM) structure that is shared storage for a plurality of logical, multi-thread ordered queues that make up entries in the CAM structure and buffer the requests from two or more threads, the responses from two or more threads, or both, wherein each thread has its own unique identifier.
(Dependent claims: 8, 9, 10, 11, 12, 13, 14, 15, 16)
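The CAM structure recited in claim 7 (shared storage whose entries are tagged with a per-thread identifier, preserving order within each logical queue) can be modeled in C. This is a behavioral sketch under assumptions, not the claimed hardware: the parallel tag match of a real CAM is modeled as a loop, and all names (`cam_t`, `cam_insert`, `cam_pop_oldest`) are illustrative.

```c
#include <assert.h>
#include <stdbool.h>

#define CAM_ENTRIES 8   /* total shared entries across all threads */

typedef struct {
    bool valid;
    int  thread_id;  /* tag compared on lookup (the thread's unique ID) */
    int  age;        /* lower = older; preserves per-thread order */
    int  payload;    /* the buffered request or response */
} cam_entry_t;

typedef struct {
    cam_entry_t e[CAM_ENTRIES];
    int next_age;
} cam_t;

void cam_init(cam_t *c) {
    for (int i = 0; i < CAM_ENTRIES; i++)
        c->e[i].valid = false;
    c->next_age = 0;
}

/* Any free entry may hold any thread's payload: the storage is
   shared, and a thread's logical queue is just the set of entries
   carrying its tag. */
bool cam_insert(cam_t *c, int thread_id, int payload) {
    for (int i = 0; i < CAM_ENTRIES; i++) {
        if (!c->e[i].valid) {
            c->e[i].valid = true;
            c->e[i].thread_id = thread_id;
            c->e[i].age = c->next_age++;
            c->e[i].payload = payload;
            return true;
        }
    }
    return false;  /* shared storage full */
}

/* Match thread_id against every valid entry (a CAM does this in
   parallel) and pop the oldest match, so each logical per-thread
   queue drains in order. */
bool cam_pop_oldest(cam_t *c, int thread_id, int *payload_out) {
    int best = -1;
    for (int i = 0; i < CAM_ENTRIES; i++) {
        if (c->e[i].valid && c->e[i].thread_id == thread_id &&
            (best < 0 || c->e[i].age < c->e[best].age))
            best = i;
    }
    if (best < 0)
        return false;  /* no buffered payload for this thread */
    *payload_out = c->e[best].payload;
    c->e[best].valid = false;
    return true;
}
```

The design point this illustrates is that interleaved arrivals from different threads share one physical structure, yet ordering is maintained independently within each thread's logical queue.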
17. An apparatus comprising:
an interconnect for an integrated circuit (IC), the interconnect to transfer payload of multiple threads between a plurality of Intellectual Property (IP) cores of the integrated circuit that are coupled to the interconnect, wherein the interconnect implements a content addressable memory (CAM) structure that is shared storage for a plurality of multiple-thread buffers, and the CAM structure stores requests that are from two or more threads, that come from an initiator IP core, and that are to be routed to a target IP core in the integrated circuit.
(Dependent claims: 18, 19)
Specification