NON-BLOCKING FLOW CONTROL IN MULTI-PROCESSING-ENTITY SYSTEMS
Abstract
The current document is directed to an efficient, non-blocking mechanism for flow control within a multi-processor system or a multi-core processor with hierarchical memory caches. Traditionally, a centralized shared-computational-resource access pool, accessed using a locking operation, controls access to a shared computational resource within a multi-processor system or multi-core processor. The mechanism to which the current document is directed instead distributes local shared-computational-resource access pools to each core of a multi-core processor and/or to each processor of a multi-processor system, avoiding the significant computational overheads associated with cache-controller contention control for a traditional, centralized access pool and with the locking operations used to access such a pool.
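The structure the abstract describes can be illustrated with a short sketch. All names here are hypothetical, chosen for illustration only: each processing entity owns a private local pool of access tokens, so removing a token requires no lock and causes no cache-line contention, while the total number of tokens across all local pools still bounds the aggregate access rate.

```python
# Hypothetical sketch of a distributed access pool: the total number of
# shared-computational-resource accesses is divided among per-entity local
# pools, each touched only by its owning entity.

class LocalAccessPool:
    """Per-entity pool; only its owning processing entity ever accesses it."""
    def __init__(self, tokens):
        self.tokens = tokens

    def try_remove_access(self):
        # Non-blocking: succeeds only if the local pool is not exhausted.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

class DistributedAccessPool:
    """Total accesses distributed among per-entity local pools."""
    def __init__(self, n_entities, total_accesses):
        share, extra = divmod(total_accesses, n_entities)
        self.local = [LocalAccessPool(share + (1 if i < extra else 0))
                      for i in range(n_entities)]

def access_shared_resource(pool, entity_id, resource):
    # An entity first removes an access from its own local pool, then
    # touches the shared resource; if the local pool is exhausted it
    # simply fails rather than blocking on a lock.
    if pool.local[entity_id].try_remove_access():
        resource.append(entity_id)   # stand-in for the real access
        return True
    return False
```

The sketch models only the bookkeeping; in a real system the local pools would live in per-core cache lines so that token removal never triggers cache-coherence traffic between cores.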
20 Claims
1. A flow-control component of a multi-processing-entity computer system, the flow-control component comprising:

a shared computational resource;

two or more local access pools, together comprising a distributed access pool, each local access pool uniquely associated with a processing entity; and

a process or thread that accesses the shared computational resource when a local access pool associated with the processing entity on which the process or thread executes contains at least one shared-computational-resource access, and is therefore not exhausted, and when the process or thread first removes a shared-computational-resource access from the local access pool before accessing the shared computational resource.

(Dependent claims: 2-9, 11-18)
10. A method that controls a rate of access to a shared computational resource in a multi-processing-entity computer system, the method comprising:

initializing a distributed access pool comprising two or more local access pools, each uniquely associated with a processing entity, to contain a number of shared-computational-resource accesses distributed among the local access pools; and

removing, by a process or thread, a shared-computational-resource access from a local access pool associated with a processing entity on which the process or thread executes prior to accessing the shared computational resource.
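The two steps of method claim 10 — initializing the distributed pool and removing an access before each use of the shared resource — can be sketched as a runnable example. The names and the thread-per-entity setup are illustrative assumptions, not the patent's implementation; the point is that each local pool is touched by exactly one thread, so the hot path needs no lock on the pool.

```python
# Hypothetical sketch of method claim 10: initialize per-entity local
# pools, then have each entity remove an access from its own pool before
# touching the shared resource.

import threading

N_ENTITIES = 4
TOTAL_ACCESSES = 8

# Step 1: initialize the distributed access pool, dividing the total
# accesses among the per-entity local pools (here, simple counters).
local_pools = [TOTAL_ACCESSES // N_ENTITIES] * N_ENTITIES

shared_resource_hits = []        # stand-in for the shared resource
hits_lock = threading.Lock()     # protects the stand-in list only, not the pools

def worker(entity_id):
    # Step 2: remove an access from this entity's local pool first,
    # then access the shared resource; stop when the pool is exhausted.
    while local_pools[entity_id] > 0:
        local_pools[entity_id] -= 1   # no lock: pool is private to this entity
        with hits_lock:
            shared_resource_hits.append(entity_id)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_ENTITIES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly TOTAL_ACCESSES accesses occur, rate-limited without any pool lock.
```

The sketch models the bookkeeping only; on real hardware the per-entity counters would be cache-line-aligned so that decrements never contend between cores.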
19. Computer instructions, stored within a data-storage component of a multi-processing-entity computer system, that, when executed by the processing entities, control the multi-processing-entity computer system to control a rate of access to a shared computational resource by:

initializing a distributed access pool comprising two or more local access pools, each uniquely associated with a processing entity, to contain a number of shared-computational-resource accesses distributed among the local access pools; and

removing, by a process or thread, a shared-computational-resource access from a local access pool associated with a processing entity on which the process or thread executes prior to accessing the shared computational resource.

(Dependent claim: 20)
Specification