Multi-threaded Processing Using Path Locks
Abstract
In one embodiment, a method includes receiving at a thread scheduler data that indicates a first thread is to execute next a particular instruction path in software to access a particular portion of a shared computational resource. The thread scheduler determines whether a different second thread is exclusively eligible to execute the particular instruction path on any processor of a set of one or more processors to access the particular portion of the shared computational resource. If so, then the thread scheduler prevents the first thread from executing any instruction from the particular instruction path on any processor of the set of one or more processors. This enables several threads of the same software to share a resource without obtaining locks on the resource or holding a lock on a resource while a thread is not running.
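The mechanism in the abstract can be modeled in a few lines of code. The sketch below is illustrative only (class and path names are not from the patent): a scheduler records which thread is exclusively eligible for each instruction path, and any other thread that wants to enter that path is held back until the eligible thread finishes, so the threads never take a lock on the shared resource itself.

```python
import threading

class PathLockScheduler:
    """Toy model of the described thread scheduler: at most one thread
    at a time is 'exclusively eligible' to run a given instruction path."""

    def __init__(self):
        self._cond = threading.Condition()
        self._owner = {}  # path_id -> thread currently eligible for that path

    def request_path(self, path_id):
        # A thread announces it will execute `path_id` next; the scheduler
        # blocks it while a different thread is exclusively eligible.
        me = threading.current_thread()
        with self._cond:
            while self._owner.get(path_id) not in (None, me):
                self._cond.wait()
            self._owner[path_id] = me

    def release_path(self, path_id):
        # On leaving the path, the thread gives up eligibility so a
        # waiting thread can be made eligible in turn.
        with self._cond:
            if self._owner.get(path_id) is threading.current_thread():
                del self._owner[path_id]
                self._cond.notify_all()

scheduler = PathLockScheduler()
shared = {"counter": 0}  # stands in for the shared computational resource

def worker():
    for _ in range(1000):
        scheduler.request_path("update-counter")  # hypothetical path ID
        shared["counter"] += 1                    # the protected resource access
        scheduler.release_path("update-counter")

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["counter"])  # 4000: no lost updates
```

Note the thread holds path eligibility only while it is actually running the path, which is the point of the abstract's last sentence: no lock is held on the resource while a thread is not running.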
22 Claims
1. A method comprising the steps of:
causing all threads of a plurality of threads that access a particular portion of a shared computational resource during a particular instruction path through one or more sequences of instructions to be scheduled at a thread scheduler for a set of one or more processors;
receiving, at the thread scheduler, data that indicates a first thread of the plurality of threads is to execute next the particular instruction path to access the particular portion of the shared computational resource;
determining whether a second thread of the plurality of threads is exclusively eligible to execute the particular instruction path on any processor of the set of one or more processors to access the particular portion of the shared computational resource; and
if it is determined that the second thread is exclusively eligible to execute the particular instruction path, then preventing the first thread from executing any instruction from the particular instruction path on any processor of the set of one or more processors. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
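The determining and preventing steps of claim 1 can be sketched as a simple eligibility table. This is a minimal illustration, not the patent's implementation; all identifiers are hypothetical. A thread may enter a path only if no other thread is recorded as exclusively eligible for it.

```python
class PathLockTable:
    """Toy eligibility table for the determining/preventing steps."""

    def __init__(self):
        self._eligible = {}  # path_id -> thread exclusively eligible for it

    def try_execute(self, thread_id, path_id):
        """Return True if thread_id may run path_id now; False means the
        thread is prevented because another thread is exclusively eligible."""
        owner = self._eligible.get(path_id)
        if owner is not None and owner != thread_id:
            return False  # the 'preventing' step
        self._eligible[path_id] = thread_id  # thread becomes exclusively eligible
        return True

    def finish(self, thread_id, path_id):
        # Eligibility is released when the thread leaves the path.
        if self._eligible.get(path_id) == thread_id:
            del self._eligible[path_id]

table = PathLockTable()
assert table.try_execute("T1", "path-42") is True   # T1 becomes exclusively eligible
assert table.try_execute("T2", "path-42") is False  # T2 is prevented
table.finish("T1", "path-42")
assert table.try_execute("T2", "path-42") is True   # now T2 may proceed
```

The check is per instruction path, not per resource: two threads running different paths are never blocked against each other, even on the same scheduler.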
9. A method comprising the steps of:
receiving data that indicates a unique path identifier (ID) for a particular instruction path through one or more sequences of instructions, wherein a particular portion of a shared computational resource is accessed within the particular instruction path;
receiving data that indicates a processor is to execute the particular instruction path;
in response to receiving data that indicates the processor is to execute the particular instruction path, sending, to a thread scheduler different from the processor, switch data that indicates the processor should switch to another thread and that indicates the path ID; and
executing the particular instruction path to access the particular portion of the shared computational resource when the thread scheduler determines, based on the path ID, that no other thread is eligible to execute the particular instruction path to access the particular portion of the shared computational resource. - View Dependent Claims (10)
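Claim 9 views the same mechanism from the processor's side: on reaching the path, the processor sends switch data carrying the path ID to a separate scheduler and switches away, resuming only when the scheduler reports no other thread is eligible. A hypothetical sketch of the scheduler side of that exchange (names are illustrative, not from the patent):

```python
class SwitchScheduler:
    """Toy scheduler for the claim-9 exchange: switch data carries a
    path ID, and the first requester holds the path until done."""

    def __init__(self):
        self._holder = {}  # path_id -> thread granted the path

    def notify_switch(self, thread_id, path_id):
        # Receive switch data: record the request; grant the path if free.
        self._holder.setdefault(path_id, thread_id)

    def no_other_eligible(self, thread_id, path_id):
        # The processor resumes the thread only when this returns True.
        return self._holder.get(path_id) == thread_id

    def done(self, thread_id, path_id):
        # The thread has left the path; free it for the next requester.
        if self._holder.get(path_id) == thread_id:
            del self._holder[path_id]

s = SwitchScheduler()
s.notify_switch("T1", "path-7")                 # T1 sends switch data with the path ID
s.notify_switch("T2", "path-7")                 # T2 does the same, later
assert s.no_other_eligible("T1", "path-7")      # T1 may execute the path
assert not s.no_other_eligible("T2", "path-7")  # T2 stays switched out
s.done("T1", "path-7")
s.notify_switch("T2", "path-7")                 # T2 retries and is granted the path
assert s.no_other_eligible("T2", "path-7")
```

Switching out immediately, rather than spinning, is what keeps the processor busy with other threads while the path is contended.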
11. An apparatus for processing a thread of a plurality of threads that share a processor, comprising:
means for causing all threads of a plurality of threads that access a particular portion of a shared computational resource during a particular instruction path through one or more sequences of instructions to be received at a thread scheduler for a set of one or more processors;
means for receiving, at the thread scheduler, data that indicates a first thread of the plurality of threads is to execute next the particular instruction path to access the particular portion of the shared computational resource;
means for determining whether a second thread of the plurality of threads is exclusively eligible to execute the particular instruction path on any processor of the set of one or more processors to access the particular portion of the shared computational resource; and
means for preventing the first thread from executing any instruction from the particular instruction path on any processor of the set of one or more processors, if it is determined that the second thread is exclusively eligible to execute the particular instruction path.
12. An apparatus comprising:
one or more processors;
software encoded as instructions in one or more computer-readable media for execution on the one or more processors;
a computational resource to be shared among a plurality of execution threads executing the software on the one or more processors;
a single thread scheduler for scheduling the plurality of execution threads, wherein the thread scheduler includes logic encoded in one or more tangible media for execution and, when executed, operable for:
receiving data that indicates a first thread of the plurality of execution threads is to execute next a particular instruction path in the software to access a particular portion of the computational resource;
determining whether a second thread of the plurality of threads is exclusively eligible to execute the particular instruction path on any processor of the one or more processors to access the particular portion of the computational resource; and
if it is determined that the second thread is exclusively eligible to execute the particular instruction path, then preventing the first thread from executing any instruction from the particular instruction path on any processor of the one or more processors. - View Dependent Claims (13, 14, 15, 16, 17, 18, 19, 20, 21)
22. A method for achieving near line rate processing of data packets at a router comprising the steps of:
receiving over a first network interface a first set of one or more data packets;
storing data based on the first set of one or more data packets in a particular queue in a shared memory; and
determining a second set of one or more data packets to send through a different second network interface based on data in the particular queue, including the steps of:
receiving, at a thread scheduler, data that indicates a first thread of a plurality of threads is to execute next a particular instruction path that includes one or more instructions to access the particular queue;
determining whether a second thread of the plurality of threads is exclusively eligible to execute the particular instruction path on any processor of a set of one or more processors to access the particular queue; and
if it is determined that the second thread is exclusively eligible to execute the particular instruction path to access the particular queue, then preventing the first thread from executing any instruction from the particular instruction path on any processor of the set of one or more processors to access the particular queue.
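Claim 22 applies the path lock to a router's forwarding loop: the shared resource is a packet queue, and only the thread exclusively eligible for the dequeue path may touch it. A toy single-queue model under those assumptions (class, method, and path names are illustrative, not from the patent):

```python
from collections import deque

class Router:
    """Toy model of the claim-22 steps: packets arriving on one interface
    are staged in a shared queue; the dequeue path toward the egress
    interface is guarded by a path lock instead of a queue lock."""

    def __init__(self):
        self.queue = deque()     # the particular queue in shared memory
        self.path_owner = None   # thread exclusively eligible for the dequeue path

    def receive(self, packets):
        # Ingress side: store data based on the received packets.
        self.queue.extend(packets)

    def enter_dequeue_path(self, thread_id):
        # Determining step: may this thread run the dequeue path now?
        if self.path_owner not in (None, thread_id):
            return False  # prevented: another thread is exclusively eligible
        self.path_owner = thread_id
        return True

    def dequeue(self, thread_id):
        # Only the eligible thread reaches here; pop for the second interface.
        assert self.path_owner == thread_id
        pkt = self.queue.popleft() if self.queue else None
        self.path_owner = None  # release the path on exit
        return pkt

r = Router()
r.receive(["p1", "p2"])
assert r.enter_dequeue_path("T1")
assert not r.enter_dequeue_path("T2")  # T2 prevented while T1 is eligible
assert r.dequeue("T1") == "p1"
assert r.enter_dequeue_path("T2")      # path freed, T2 becomes eligible
assert r.dequeue("T2") == "p2"
```

Because a prevented thread is simply not scheduled onto the path, the processors stay free for other packet work, which is how the claim ties the mechanism to near line rate forwarding.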
Specification