Cross address space thread control in a multithreaded environment
First Claim
1. In a computer system having an application comprising a plurality of threads executing concurrently in a common address space, a method of handling the occurrence of a new event on a first one of said threads while a previous event on a second one of said threads is currently being processed, comprising the steps of:
(a) defining a flag indicating whether a previous event on one of said threads is currently being processed;
(b) defining a deferred event queue containing an entry for each event that cannot be processed immediately, said entry identifying the thread on which a deferred event occurred; and
(c) having said first one of said threads, in response to the occurrence of a new event on said thread:
(1) test said flag to determine whether a previous event is currently being processed;
(2) if said flag indicates that a previous event is not being processed, set said flag and process the new event; and
(3) if said flag indicates that a previous event is being processed, add the new event to said queue and suspend its own execution without processing said new event.
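Steps (a) through (c) of the claim can be sketched in-process with Python's threading primitives. This is a minimal illustrative model, not the patented implementation; the class and method names are hypothetical, and the "suspend" of step (c)(3) is modeled as a condition-variable wait:

```python
import threading
from collections import deque

class EventSerializer:
    """Hypothetical sketch of the claimed method: a busy flag (step a)
    plus a deferred-event queue (step b) serialize event handling."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cond = threading.Condition(self._lock)
        self._busy = False       # step (a): previous event being processed?
        self.deferred = deque()  # step (b): entries identify deferring threads

    def on_event(self, thread_id, process):
        # Step (c): executed by the thread on which the new event occurred.
        with self._cond:
            if self._busy:                       # (c)(1) test the flag
                # (c)(3) flag set: queue the event, suspend without processing
                self.deferred.append(thread_id)
                while self._busy:
                    self._cond.wait()
                return
            self._busy = True                    # (c)(2) flag clear: set it
        process(thread_id)                       # ... and process the event
        with self._cond:
            self._busy = False
            self._cond.notify_all()              # release any deferred threads
```

In the uncontended case the thread sets the flag and processes its own event; under contention it leaves an entry on `deferred` for later handling and blocks, matching the claim's "suspend its own execution without processing said new event."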
Abstract
A method of controlling the execution of the threads of a first application such as a user application from a second application such as a debugger application running in a different address space. After initializing trace mode for the user application, the debugger waits for an event to occur on one of the threads of the user application. Upon the occurrence of an event on one of the user application threads, an event handler obtains control of the thread execution. The event handler suspends execution of the remaining threads in the application, posts the debugger and then suspends its own execution. When the debugger application has completed its debugging operations, it posts the event handler, which resumes execution of the suspended threads and returns control to the thread on which the event occurred. If a subsequent event occurs on one thread while a previous event on another thread is being processed, the event handler for the subsequent event places it in a deferred event queue for deferred processing. Events consisting of breakpoints are redriven rather than being placed on the deferred queue. The debugger application may hold selected threads in a suspended state following resumption of the remaining threads by setting hold flags associated with those threads.
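The post/wait handshake between the event handler and the debugger described in the abstract can be modeled, very roughly, with in-process event objects. In the patented system the two applications run in different address spaces, so real signaling would use operating-system IPC; the function names and the single-process structure here are assumptions for illustration only:

```python
import threading

# Hypothetical in-process model of the abstract's handshake.
event_posted = threading.Event()  # event handler -> debugger ("post")
debug_done = threading.Event()    # debugger -> event handler ("post" back)

def event_handler(suspend_others, resume_others):
    """Runs on the thread where the event occurred."""
    suspend_others()       # suspend the application's remaining threads
    event_posted.set()     # post the debugger application
    debug_done.wait()      # suspend own execution until debugging completes
    resume_others()        # resume suspended threads, return control

def debugger():
    """Runs in the (here simulated) debugger application."""
    event_posted.wait()    # wait for an event on one of the user threads
    # ... debugging operations on the stopped application ...
    debug_done.set()       # post the event handler to resume the target
```

Run on separate threads, the handler blocks after posting and resumes only once the debugger signals completion, mirroring the suspend/post/resume sequence in the abstract.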
Specification