Shared resource queue for simultaneous multithreading processing wherein entries allocated to different threads are capable of being interspersed among each other and a head pointer for one thread is capable of wrapping around its own tail in order to access a free entry
First Claim
1. A resource queue, comprising:
(a) a plurality of entries, each entry having unique resources required for information processing;
(b) the plurality of entries allocated amongst a plurality of independent simultaneously executing hardware threads such that resources of more than one thread may be within the queue;
(c) a portion of the plurality of entries being allocated to one thread and being capable of being interspersed among another portion of the plurality of entries allocated to another thread, wherein a first entry of one thread is capable of wrapping around a last entry of the same thread to access an available entry;
(d) a head pointer and a tail pointer for at least one thread, wherein the head pointer is the first entry of the at least one thread and the tail pointer is the last entry of the at least one thread; and
(e) one of the unique resources is a bank number to indicate how many times the head pointer has wrapped around the tail pointer in order to maintain an order of the resources for the at least one thread.
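The entry layout recited in claim 1 can be sketched as a plain record. This is a minimal illustrative model, not the patent's implementation; the names `QueueEntry`, `thread_id`, `bank`, and `resources` are assumptions chosen for readability.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class QueueEntry:
    """One slot in a shared resource queue (illustrative field names)."""
    valid: bool = False    # valid marker: does this slot hold live resources?
    thread_id: int = -1    # which hardware thread owns this entry
    bank: int = 0          # times the thread's head pointer has wrapped its tail
    resources: Any = None  # the per-entry resources required for processing

# Entries of different threads may sit interspersed in one physical array.
queue = [QueueEntry() for _ in range(8)]
queue[0] = QueueEntry(valid=True, thread_id=0, bank=0, resources="r0")
queue[1] = QueueEntry(valid=True, thread_id=1, bank=0, resources="r1")
queue[2] = QueueEntry(valid=True, thread_id=0, bank=0, resources="r2")
```

Within a thread, the pair `(bank, index)` orders entries oldest to newest even after the head pointer wraps around the end of the array.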
Abstract
A queue, such as a first-in first-out queue, is incorporated into a processing device, such as a multithreaded pipeline processor. The queue may store the resources of more than one thread in the processing device such that the entries of one thread may be interspersed among the entries of another thread. The entries of each thread may be identified by a thread identification, a valid marker to indicate if the resources within the entry are valid, and a bank number. For a particular thread, the bank number tracks the number of times a head pointer pertaining to the first entry has passed a tail pointer. In this fashion, empty entries may be used and the resources may be efficiently allocated. In a preferred embodiment, the shared resource queue may be implemented into an in-order multithreaded pipelined processor as a queue storing resources to be dispatched for execution of instructions. The shared resource queue may also be implemented into a branch information queue or into any queue where more than one thread may require dynamic registers.
78 Citations
12 Claims
1. A resource queue, comprising:
(a) a plurality of entries, each entry having unique resources required for information processing;
(b) the plurality of entries allocated amongst a plurality of independent simultaneously executing hardware threads such that resources of more than one thread may be within the queue;
(c) a portion of the plurality of entries being allocated to one thread and being capable of being interspersed among another portion of the plurality of entries allocated to another thread, wherein a first entry of one thread is capable of wrapping around a last entry of the same thread to access an available entry;
(d) a head pointer and a tail pointer for at least one thread, wherein the head pointer is the first entry of the at least one thread and the tail pointer is the last entry of the at least one thread; and
(e) one of the unique resources is a bank number to indicate how many times the head pointer has wrapped around the tail pointer in order to maintain an order of the resources for the at least one thread.
(Dependent claims: 2, 3)
4. An out-of-order multithreaded computer processor, comprising:
(a) a load reorder queue;
(b) a store reorder queue;
(c) a global completion table; and
(d) a branch information queue, at least one of the queues being a resource queue comprising:
(i) a plurality of entries, each entry having unique resources required for information processing;
(ii) the plurality of entries allocated amongst a plurality of independent simultaneously executing hardware threads such that resources of more than one thread may be within the queue;
(iii) a portion of the plurality of entries being allocated to one thread and being capable of being interspersed among another portion of the plurality of entries allocated to another thread;
(iv) a first entry of one thread being capable of wrapping around a last entry of the same thread;
(v) a head pointer and a tail pointer for at least one thread wherein the head pointer is the first entry of the at least one thread and the tail pointer is the last entry of the at least one thread;
(vi) a bank number to indicate how many times the head pointer has wrapped around the tail pointer in order to maintain an order of the resources for the at least one thread; and
(vii) at least one free pointer for the at least one thread indicating an entry in the queue available for resources of the at least one thread.
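The free pointer of element (vii) can be modeled as a circular scan for the next invalid slot. This is a hedged sketch of that behavior, not the patent's circuit; `find_free` and `valid_bits` are hypothetical names.

```python
def find_free(valid_bits, start):
    """Return the index of the first free (invalid) slot at or after `start`,
    scanning circularly through the queue; return None if the queue is full."""
    n = len(valid_bits)
    for step in range(n):
        i = (start + step) % n
        if not valid_bits[i]:
            return i  # this is where a free pointer would point
    return None

# Example: slots 0-2 and 4 are occupied; a scan starting at 1 finds slot 3.
valid = [True, True, True, False, True, False]
```

Because the scan is circular, the free pointer for one thread may land beyond that thread's own last entry, which is exactly the wrap case the bank number tracks.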
5. A method of allocating a shared resource queue for simultaneous multithreaded electronic data processing, comprising:
(a) determining if the shared resource queue is empty for a particular thread;
(b) finding a first entry of said particular thread;
(c) determining if the first entry and a free entry of the particular thread are the same;
(d) if not, advancing the first entry to the free entry;
(e) incrementing a bank number if the first entry passes a last entry of the particular thread before it finds the free entry; and
(f) allocating the next free entry by storing resources for the particular thread.
(Dependent claims: 6, 7)
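The allocation steps above can be sketched as a single routine: scan forward from the thread's newest entry for a free slot, and bump the bank number when the scan wraps past the end of the physical array, so that the key `(bank, index)` still sorts the thread's entries oldest to newest. A minimal sketch under those assumptions; `allocate` and the tuple layout are illustrative, not the patent's method.

```python
def allocate(entries, thread_id, resources):
    """entries[i] is None when free, else a (thread_id, bank, resources) tuple.
    Returns the slot index used, or None if the queue is completely full."""
    n = len(entries)
    # Newest (last) entry of this thread: the largest (bank, index) key.
    mine = [(e[1], i) for i, e in enumerate(entries) if e and e[0] == thread_id]
    last_bank, last_idx = max(mine) if mine else (0, -1)
    for step in range(1, n + 1):
        i = (last_idx + step) % n
        if entries[i] is None:
            # Landing at or before the previous index means the scan wrapped
            # around the array, so the new entry starts a higher bank.
            bank = last_bank + 1 if i <= last_idx else last_bank
            entries[i] = (thread_id, bank, resources)
            return i
    return None
```

Note how entries of two threads end up interspersed in one array while each thread's `(bank, index)` keys remain monotonically increasing.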
8. A shared resource mechanism in a hardware multithreaded pipeline processor, said pipeline processor simultaneously processing a plurality of threads, said shared resource mechanism comprising:
(a) a dispatch stage of said pipeline processor;
(b) at least one shared resource queue connected to the dispatch stage;
(c) dispatch control logic connected to the dispatch stage and to the at least one shared resource queue; and
(d) an issue queue of said pipeline processor connected to said dispatch stage and to the at least one shared resource queue;
wherein the at least one shared resource queue allocates and deallocates resources for at least two of said plurality of threads passing into said issue queue in response to the dispatch control logic, and the at least one shared resource queue further comprises a plurality of entries allocated to one thread and capable of being interspersed among another plurality of entries allocated to another of the plurality of threads, wherein a bank number records the number of times a first entry of one thread wraps around a last entry of the same thread to access an available entry for allocating resources of the one thread.
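Deallocation at dispatch can be sketched as the mirror of allocation: when a thread's resources issue, the queue frees that thread's oldest entry, the one with the smallest `(bank, index)` key, so resources leave in allocation order even when two threads' entries are interspersed. The helper `deallocate_oldest` is hypothetical, a sketch of the recited behavior rather than the patent's logic.

```python
def deallocate_oldest(entries, thread_id):
    """Free and return the resources of the oldest entry of `thread_id`,
    i.e. the entry at that thread's head pointer: the smallest (bank, index)
    key among its live entries. Returns None if the thread has no entries."""
    mine = [(e[1], i) for i, e in enumerate(entries) if e and e[0] == thread_id]
    if not mine:
        return None  # the queue is empty for this thread
    _, idx = min(mine)
    resources = entries[idx][2]
    entries[idx] = None  # the slot becomes a free entry for any thread
    return resources
```

A freed slot immediately becomes a candidate for any thread's free pointer, which is how entries of different threads come to be interspersed over time.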
9. An apparatus to enhance processor efficiency, comprising:
(a) means to fetch instructions from a plurality of threads into a hardware multithreaded pipeline processor;
(b) means to distinguish said instructions into one of a plurality of threads;
(c) means to decode said instructions;
(d) means to allocate a plurality of entries in at least one shared resource between at least two of the plurality of threads simultaneously executing;
(e) means to allocate and intersperse entries in the at least one shared resource to one thread among entries allocated to other threads;
(f) means for a first entry of one thread to wrap around a last entry of the same thread;
(g) means to indicate the number of times the first entry of the one thread wraps around the last entry of the same thread;
(h) means to determine if said instructions have sufficient private resources and at least one shared resource queue for dispatching said instructions;
(i) means to dispatch said instructions;
(j) means to deallocate said entries in said at least one shared resource when one of said at least two threads is dispatched; and
(k) means to execute said instructions and said resources for the one of said at least two threads.
(Dependent claim: 10)
11. A computer processing system, comprising:
(a) a central processing unit;
(b) a semiconductor memory unit attached to said central processing unit;
(c) at least one memory drive capable of having removable memory;
(d) a keyboard/pointing device controller attached to said central processing unit for attachment to a keyboard and/or a pointing device for a user to interact with said computer processing system;
(e) a plurality of adapters connected to said central processing unit to connect to at least one input/output device for purposes of communicating with other computers, networks, peripheral devices, and display devices;
(f) a hardware multithreading pipelined processor within said central processing unit to simultaneously process at least two independent threads of execution, said pipelined processor comprising a fetch stage, a decode stage, and a dispatch stage; and
(g) at least one shared resource queue within said central processing unit, said shared resource queue having a plurality of entries pertaining to more than one thread in which entries pertaining to different threads are interspersed among each other, and a head pointer pertaining to an entry of one thread is capable of wrapping around a tail pointer pertaining to another entry of the same one thread to access an available entry, and the number of times the head pointer wraps around the tail pointer is recorded.
(Dependent claim: 12)
Specification