Deterministic and preemptive thread scheduling and its use in debugging multithreaded applications
First Claim
1. A method of scheduling threads in a multi-threaded data processing system, said method comprising:
obtaining a plurality of threads being scheduled to run on a processor of said data processing system, said plurality of threads including at least a first thread and a second thread;
allocating an instruction slice parameter, N, on said processor for each thread; and
scheduling threads in a deterministic and preemptive manner utilizing instruction slices of said threads, wherein each instruction slice includes a set number of instructions equal to N, and only a first instruction slice of the first thread is executed before passing execution to a second instruction slice of the second thread, and wherein further, N is constant and represents a subset of a total number of instructions in a thread such that a single thread may be executed as multiple instruction slices, one instruction slice at a time.
Abstract
A computer system which permits deterministic and preemptive scheduling of threads in a software application. In one embodiment, a scheduler is utilized to schedule the threads in a queue. Once the threads are scheduled, they are divided into instruction slices, each consisting of a predetermined number of instructions. The scheduler executes each instruction slice. An instruction counter is utilized to keep track of the number of instructions executed. The thread is permitted to run the instruction slice until the predetermined number of instructions has been executed. Alternatively, the thread stops if it is blocked while waiting for an input, for example. The next thread is then executed for the same number of instructions. This process permits efficient debugging of the software using traditional cyclic debugging.
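The round-robin slicing described in the abstract can be sketched as a small simulation (an illustrative Python model, not the patented implementation; threads are modeled as lists of instruction labels):

```python
from collections import deque

def run_deterministic(threads, n):
    """Round-robin slice scheduler: execute exactly n 'instructions' of
    each thread per slice, cycling through a FIFO queue until every thread
    has run to completion. Returns the global execution trace, which is
    identical on every run for the same inputs."""
    queue = deque(range(len(threads)))   # thread ids in FIFO order
    cursors = [0] * len(threads)         # next instruction index per thread
    trace = []
    while queue:
        tid = queue.popleft()
        prog = threads[tid]
        # Execute one instruction slice: at most n instructions.
        for _ in range(n):
            if cursors[tid] == len(prog):
                break
            trace.append((tid, prog[cursors[tid]]))
            cursors[tid] += 1
        if cursors[tid] < len(prog):     # thread not finished: requeue it
            queue.append(tid)
    return trace

# Two threads of three instructions each, slice size N = 2:
t0 = ["a0", "a1", "a2"]
t1 = ["b0", "b1", "b2"]
print(run_deterministic([t0, t1], 2))
# [(0, 'a0'), (0, 'a1'), (1, 'b0'), (1, 'b1'), (0, 'a2'), (1, 'b2')]
```

Because scheduling depends only on N and the queue order, never on wall-clock time, the interleaving is reproducible, which is the property the patent exploits for cyclic debugging.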
33 Claims
-
1. A method of scheduling threads in a multi-threaded data processing system, said method comprising:
-
obtaining a plurality of threads being scheduled to run on a processor of said data processing system, said plurality of threads including at least a first thread and a second thread;
allocating an instruction slice parameter, N, on said processor for each thread; and
scheduling threads in a deterministic and preemptive manner utilizing instruction slices of said threads, wherein each instruction slice includes a set number of instructions equal to N, and only a first instruction slice of the first thread is executed before passing execution to a second instruction slice of the second thread, and wherein further, N is constant and represents a subset of a total number of instructions in a thread such that a single thread may be executed as multiple instruction slices, one instruction slice at a time.
-
-
2. The method of scheduling threads of claim 1, further comprising:
-
setting an instruction counter to N at the beginning of processing of each instruction slice;
tracking the number of instructions of a thread that is executed utilizing said instruction counter; and
generating an interrupt to halt processing of a particular thread when N instructions have been executed to enable scheduling of an instruction slice of a next thread for processing.
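The counter-and-interrupt steps above can be modeled in a few lines (a hypothetical sketch; the names SliceInterrupt and InstructionCounter are illustrative, and a real implementation would use a hardware instruction counter whose overflow raises a processor interrupt, not a Python exception):

```python
class SliceInterrupt(Exception):
    """Simulated scheduler interrupt raised after N instructions."""

class InstructionCounter:
    """Counter set to N at the start of each instruction slice and
    decremented once per executed instruction; reaching zero models the
    interrupt that hands the processor to the next thread."""
    def __init__(self, n):
        self.n = n
        self.remaining = n

    def start_slice(self):
        self.remaining = self.n

    def tick(self):
        self.remaining -= 1
        if self.remaining == 0:
            raise SliceInterrupt

counter = InstructionCounter(3)
counter.start_slice()
executed = 0
try:
    while True:
        executed += 1        # stand-in for "execute one instruction"
        counter.tick()
except SliceInterrupt:
    pass                     # scheduler would now run the next thread's slice
print(executed)  # 3
```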
-
-
3. The method of scheduling threads of claim 2, further comprising the steps of:
-
placing threads in a queue to determine the order of execution, wherein one slice of each thread is executed in the order of the threads in the queue followed by a next slice of the threads in said order until all instructions of each thread in said queue have been executed; and
admitting new threads into said queue by utilizing an admission control window, wherein said new threads are made to wait until the processor completes processing of slices of a set number of the threads within the queue and admits at least one thread waiting to be admitted to the queue.
-
-
4. The method of scheduling threads of claim 3, wherein each thread is executed one slice at a time and when a thread has multiple slices of instructions, a first slice is executed up to N instructions or an occurrence of a terminating condition and, when a terminating condition occurs, a second slice beginning at the instruction following the instruction associated with the terminating condition is provided when said thread is later scheduled for execution up to N instructions or the occurrence of another terminating condition.
-
5. The method of scheduling threads of claim 4, wherein said terminating condition comprises a block on a synchronization variable, and a given thread executes N instructions per instruction slice, or fewer than N instructions when the thread blocks on a synchronization variable, before said thread is re-scheduled for later execution.
-
6. The method of scheduling threads of claim 5, further comprising placing the thread at the end of the queue when the given thread is no longer blocked.
-
7. The method of scheduling threads of claim 3, wherein said terminating condition comprises a wait for input, and a given thread executes N instructions per instruction slice, or fewer than N instructions up to a point at which the thread stops to wait for input, before said thread is re-scheduled for later execution.
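Claims 4 through 7 describe slices that end early on a terminating condition. A minimal sketch, assuming blocking points (a synchronization variable or an input wait) are modeled as a BLOCK marker in the instruction stream, and simplifying claim 6 so a blocked thread goes straight to the end of the queue:

```python
from collections import deque

BLOCK = "BLOCK"   # illustrative marker: thread blocks here (sync var / input wait)

def run_with_blocking(threads, n):
    """Slice scheduler where a slice ends early when the thread hits a
    BLOCK marker; the thread is requeued and later resumes at the
    instruction *after* the one associated with the terminating
    condition, as claim 4 requires."""
    queue = deque(range(len(threads)))
    cursors = [0] * len(threads)
    trace = []
    while queue:
        tid = queue.popleft()
        prog = threads[tid]
        for _ in range(n):
            if cursors[tid] == len(prog):
                break
            op = prog[cursors[tid]]
            cursors[tid] += 1
            if op == BLOCK:
                break        # terminating condition: fewer than N executed
            trace.append((tid, op))
        if cursors[tid] < len(prog):
            queue.append(tid)
    return trace

# Thread 0 blocks after one instruction; slice size N = 3:
print(run_with_blocking([["a0", BLOCK, "a1"], ["b0", "b1", "b2"]], 3))
# [(0, 'a0'), (1, 'b0'), (1, 'b1'), (1, 'b2'), (0, 'a1')]
```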
-
8. The method of scheduling threads of claim 3, wherein said admitting new threads by utilizing the admission control window, comprises:
-
defining a preset constant value corresponding to a number of instruction slices to be executed prior to admitting said new threads, said number of instruction slices constituting the time period between openings of consecutive admission control windows;
holding newly generated threads in memory during execution of current instruction slices;
tracking a number of instruction slices executed by said processor and comparing against said preset value;
admitting new threads to said queue after a number of instruction slices equal to said preset value has been executed; and
placing newly admitted threads at the end of the queue.
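The admission-control steps of claim 8 can be sketched as follows (an illustrative model: arrivals maps a slice count to the threads generated at that point, and window stands in for the preset constant value; both names are assumptions, not terms from the patent):

```python
from collections import deque

def schedule_with_admission(initial, arrivals, n, window):
    """Slice scheduler with an admission control window: newly generated
    threads wait in 'pending', and after every `window` instruction
    slices the window opens and waiting threads are appended to the tail
    of the run queue. Threads arriving after all work finishes are
    ignored in this sketch."""
    programs = dict(initial)
    queue = deque(tid for tid, _ in initial)
    cursors = {tid: 0 for tid, _ in initial}
    pending = deque()
    trace, slices = [], 0
    while queue or pending:
        for t in arrivals.pop(slices, []):   # threads generated now must wait
            pending.append(t)
        if queue:
            tid = queue.popleft()
            prog = programs[tid]
            for _ in range(n):               # one instruction slice
                if cursors[tid] == len(prog):
                    break
                trace.append((tid, prog[cursors[tid]]))
                cursors[tid] += 1
            if cursors[tid] < len(prog):
                queue.append(tid)
        slices += 1
        if slices % window == 0:             # admission window opens
            while pending:
                tid, prog = pending.popleft()
                programs[tid] = prog
                cursors[tid] = 0
                queue.append(tid)            # admitted at the end of the queue
    return trace

print(schedule_with_admission(
    [(0, ["a0", "a1", "a2", "a3"])],         # one initial thread
    {1: [(1, ["b0", "b1"])]},                # thread 1 created during slice 1
    n=2, window=2))
# [(0, 'a0'), (0, 'a1'), (0, 'a2'), (0, 'a3'), (1, 'b0'), (1, 'b1')]
```

Because admission happens only at slice-count boundaries rather than at arbitrary arrival times, thread creation cannot perturb the deterministic interleaving.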
-
-
9. The method of scheduling threads of claim 3, said method further comprising:
-
defining constant values of N and said preset value for each run of a program that is run at least twice;
admitting threads at the same point in each subsequent run as in an initial run of the program; and
providing the same input in each subsequent run as in the initial run of each thread of the program to enable cyclical debugging of the program from which said threads originate.
-
-
10. The method of scheduling threads of claim 9, further comprising:
tracking the points at which threads are admitted and inputs are received, and storing them in a log during the initial run of the program, wherein the stored thread admission points and inputs are utilized to determine when to admit threads and provide inputs in subsequent runs, and wherein said input includes both internal inputs and external inputs.
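The logging scheme of claims 9 and 10 amounts to a record/replay mechanism: the first run logs where threads were admitted and what inputs arrived, and subsequent runs consume the log so both recur at the same points. A minimal sketch, with ReplayLog as a hypothetical name:

```python
class ReplayLog:
    """Record/replay sketch: during the initial run, thread-admission
    points and inputs are appended to a log keyed by slice count;
    subsequent runs read the log back in order so that admissions and
    inputs happen at the same points, enabling cyclic debugging."""
    def __init__(self):
        self.events = []        # (slice_count, kind, payload) tuples
        self.replaying = False
        self._i = 0

    def record(self, slice_count, kind, payload):
        assert not self.replaying, "log is read-only during replay"
        self.events.append((slice_count, kind, payload))

    def start_replay(self):
        self.replaying = True
        self._i = 0

    def next_event(self):
        ev = self.events[self._i]
        self._i += 1
        return ev

log = ReplayLog()
log.record(2, "admit", "thread-1")     # thread admitted after slice 2
log.record(5, "input", "keypress:x")   # external input seen at slice 5
log.start_replay()
print(log.next_event())   # (2, 'admit', 'thread-1')
print(log.next_event())   # (5, 'input', 'keypress:x')
```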
-
11. A scheduler for scheduling threads within a processor of a data processing system that comprises a memory and supports multi-threaded operations, said scheduler comprising:
-
logic for allocating an instruction slice on a processor of the data processing system for each thread, wherein said instruction slice represents a set number of instructions of each currently scheduled thread that is executed by the processor before a next thread is scheduled for execution in place of the current thread; and
logic for scheduling threads in a deterministic and preemptive manner using said instruction slice.
-
-
12. The scheduler of claim 11, further comprising logic for specifying the number of instructions N included in an instruction slice, wherein a number of instructions in an instruction slice is constant and represents a subset of the total number of instructions in each thread such that a single thread may comprise multiple slices.
-
13. The scheduler of claim 12, further comprising logic for:
-
setting an instruction counter to N at the beginning of processing of each instruction slice;
tracking the number of instructions of a thread that is executed utilizing said instruction counter; and
generating an interrupt to halt processing of a particular thread when N instructions have been executed to enable scheduling of a next thread for processing.
-
-
14. The scheduler of claim 13, further comprising:
-
means for placing the threads in a queue for scheduling, wherein said queue operates as a FIFO queue for scheduling threads for execution of instruction slices;
means for executing each thread until N instructions have executed; and
means for beginning execution of a next thread once N instructions of a previous thread have executed.
-
-
15. The scheduler of claim 13, further comprising means for executing each thread by instruction slices until a particular thread blocks on a synchronization variable, wherein a next thread is executed when said particular thread blocks on said synchronization variable and said particular thread is re-scheduled for later execution.
-
16. The scheduler of claim 15, further comprising means for placing said particular thread after the other threads running on the processor, when said particular thread is no longer blocked.
-
17. The scheduler of claim 13, further comprising means for executing each thread by instruction slices until a current thread stops to wait for input, wherein a next thread is scheduled for execution in place of said current thread and said current thread is moved to the end of the queue, wherein further, said current thread moved to the end of the queue waits its turn in the queue until it is scheduled for execution and executes a new instruction slice if said input is received before it is scheduled.
-
18. The scheduler of claim 13, further comprising means for admitting new threads, said means including logic for:
-
placing threads in a queue; and
admitting new threads into said queue by utilizing an admission control window, wherein, said new threads are made to wait until the processor completes processing of slices of a set number of the threads within the queue and admits at least one thread waiting to be admitted to the queue.
-
-
19. The scheduler of claim 18, further comprising logic for:
-
defining a preset constant value corresponding to a number of instruction slices to be executed prior to admitting said new threads, said number of instruction slices constituting the time period between openings of consecutive admission control windows;
holding newly generated threads in memory during execution of instruction slices;
tracking a number of instruction slices executed by said processor and comparing against said preset value;
admitting new threads to said queue after a number of instruction slices equal to said preset value have been executed; and
placing newly admitted threads at the end of the queue.
-
-
20. The scheduler of claim 13 further comprising logic for:
-
defining constant values of N and said preset value for each run of a program that is being debugged;
admitting threads at the same point in each subsequent run as in an initial run of the program; and
providing the same input in each subsequent run as in the initial run of each thread of the program to facilitate cyclical debugging of said program.
-
-
21. The scheduler of claim 20, further comprising logic for:
-
tracking the point of admitting threads and receiving input; and
saving said points and inputs in a log during the initial run of the program, wherein said points and inputs are utilized to determine similar points and inputs in subsequent runs of the program.
-
-
22. The scheduler of claim 21, further comprising logic for saving external inputs within said log for utilization in subsequent runs.
-
23. A data processing system comprising:
-
a storage medium; and
a processor coupled to said storage medium and which processes multi-threaded programs with a thread scheduler, said processor comprising logic for:
allocating an instruction slice on said processor for each thread, said instruction slice being a set number of consecutive instructions of a thread that is executed at a time before a next thread is scheduled for execution by the processor; and
scheduling threads in a deterministic and preemptive manner using said instruction slices.
-
-
24. The data processing system of claim 23, said processor further comprising logic for:
-
specifying the number of instructions N included in an instruction slice, wherein a number of instructions in an instruction slice is constant and represents a subset of the total number of instructions in each thread such that a single thread may comprise multiple slices;
setting an instruction counter to N at the beginning of processing of each instruction slice;
tracking the number of instructions of a thread that is executed utilizing said instruction counter; and
generating an interrupt to halt processing of a particular thread when N instructions have been executed to enable scheduling of a next thread for processing.
-
-
25. The data processing system of claim 24, said processor further comprising an instruction counter and logic for executing a thread until said thread executes N instructions.
-
26. The data processing system of claim 25, said processor further comprising logic for executing said thread until said thread stops to wait for a synchronization variable, wherein another thread is scheduled for execution and said thread is re-scheduled for execution after execution of an instruction slice of each other thread running on the processor.
-
27. The data processing system of claim 26, further comprising:
logic for identifying when the thread is no longer blocked and scheduling the thread for execution after the other threads running on the processor.
-
28. The data processing system of claim 24, said processor further comprising logic for executing a given thread until it stops to wait for input.
-
29. The data processing system of claim 24, further comprising logic for:
-
placing threads in a queue, wherein one slice of each thread is executed in the order of the threads in the queue followed by a next slice of the threads in said order until all instructions of each thread in said queue have been executed; and
admitting new threads into said queue by utilizing an admission control window, wherein, said new threads are made to wait until the processor completes processing of slices of a set number of the threads within the queue and admits at least one thread waiting to be admitted to the queue.
-
-
30. The data processing system of claim 29, further comprising logic for:
-
defining a preset constant value corresponding to a number of instruction slices to be executed prior to admitting said new threads, said number of instruction slices constituting the time period between openings of consecutive admission control windows;
holding newly generated threads in memory during execution of current instruction slices;
tracking a number of instruction slices executed by said processor and comparing against said preset value;
admitting new threads to said queue after a number of instruction slices equal to said preset value have been executed; and
placing newly admitted threads at the end of the queue.
-
-
31. The data processing system of claim 24, wherein said processor provides cyclical debugging and includes logic for:
-
tracking points at which threads are admitted and inputs received during an initial run of the threads of a program, said inputs including both internal inputs and external inputs;
admitting threads at the same point in each subsequent run as in the initial run; and
providing the same input in each subsequent run as in the initial run to facilitate cyclical debugging of said program.
-
-
32. The data processing system of claim 24, wherein said logic components of said processor are software logic components.
-
33. The data processing system of claim 32, wherein said software logic components are program code stored on a computer readable medium accessible to said processor.
Specification