Multi-lock caches
First Claim
1. A system for updating cached data, the system comprising:
computer memory for storing a caching structure, the caching structure including multiple locks and a plurality of entries, each entry stored in one or more storage blocks;
a first processor configured to:
obtain a first group lock and a first entry lock, the first group lock controlling access to at least a first entry and a second entry, the first entry lock controlling access to the first entry, wherein the first group lock and the first entry lock are obtained as shared locks;
identify first storage nodes associated with the first entry using a first associative array value that associates the first storage nodes with the first entry;
subsequent to identifying the first storage nodes, release the first group lock while retaining the first entry lock for the first entry;
convert the first entry lock into an exclusive lock;
disassociate the first storage nodes from the first entry by deleting the first associative array value;
update the first storage nodes; and
subsequent to updating the first storage nodes, release the first entry lock; and
a second processor configured to:
obtain the first group lock and a second entry lock, the first group lock controlling access to at least the first entry and the second entry after the first processor obtains the first group lock and prior to the first processor releasing the first group lock, the second entry lock controlling access to the second entry, wherein the first group lock and the second entry lock are obtained as shared locks;
identify second storage nodes associated with the second entry using a second associative array value that associates the second storage nodes with the second entry;
subsequent to identifying the second storage nodes, release the first group lock while retaining the second entry lock for the second entry;
convert the second entry lock into an exclusive lock;
disassociate the second storage nodes from the second entry by deleting the second associative array value;
update the second storage nodes; and
subsequent to updating the second storage nodes, release the second entry lock;
wherein the first processor updates the first storage nodes at least partially concurrently with the second processor updating the second storage nodes.
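As a rough illustration only (not the patent's implementation), the claimed lock sequence can be sketched in Python. The standard library has no shared/exclusive lock, so a minimal readers-writer lock is assumed here; `RWLock`, `storage_map`, `updated`, and the entry and node names are all hypothetical. Note that this sketch "converts" the entry lock by releasing the shared hold and re-acquiring exclusively, which is weaker than an atomic lock upgrade:

```python
import threading

class RWLock:
    """Minimal readers-writer lock: many shared holders or one exclusive holder."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

group_lock = RWLock()                               # guards the group holding both entries
entry_locks = {"e1": RWLock(), "e2": RWLock()}      # one lock per cache entry
storage_map = {"e1": ["n1", "n2"], "e2": ["n3"]}    # associative array: entry -> storage nodes
updated = {}                                        # stands in for the updated storage nodes

def update_entry(entry, new_data):
    # obtain the group lock and the entry lock as shared locks
    group_lock.acquire_shared()
    entry_locks[entry].acquire_shared()
    nodes = storage_map[entry]                      # identify the entry's storage nodes
    group_lock.release_shared()                     # release group lock, retain entry lock
    # "convert" the entry lock to exclusive (release-and-reacquire, not an atomic upgrade)
    entry_locks[entry].release_shared()
    entry_locks[entry].acquire_exclusive()
    del storage_map[entry]                          # disassociate nodes by deleting the value
    updated[entry] = (nodes, new_data)              # update the storage nodes
    entry_locks[entry].release_exclusive()          # release the entry lock

t1 = threading.Thread(target=update_entry, args=("e1", "A"))
t2 = threading.Thread(target=update_entry, args=("e2", "B"))
t1.start(); t2.start(); t1.join(); t2.join()
```

Because each thread holds the group lock only in shared mode and only briefly, the two entry updates can run at least partially concurrently, as the claim recites.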
Abstract
Processes are disclosed for decreasing contention in caches in order to increase the efficiency of multi-threaded or multi-processor systems. By using multiple locks in a cache, smaller portions of the cache can be locked during cache updates (e.g., during a data update or a storage block eviction). As only small portions of the cache are locked at any given time, contention between threads, particularly in multi-processor implementations, will likely be reduced. For example, if different threads are trying to update different entries in the cache, the threads can proceed with updating the cache concurrently.
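The contention point in the abstract can be demonstrated with a toy sketch (illustrative only; the `overlapped` helper and the `Barrier` timeout are assumptions, not the patent's method): with one cache-wide lock the two updates serialize, while per-entry locks let both threads occupy their critical sections at once:

```python
import threading

def overlapped(locks):
    """Update two cache entries on two threads; return True iff both threads
    were inside their critical sections at the same time."""
    cache = {"e1": 0, "e2": 0}
    barrier = threading.Barrier(2)       # releases only if both threads arrive
    both = []
    def update(entry):
        with locks[entry]:               # lock guarding this entry's update
            try:
                barrier.wait(timeout=1.0)
                both.append(entry)       # reached only if the other thread is also inside
            except threading.BrokenBarrierError:
                pass                     # the other thread was locked out
            cache[entry] += 1            # the update itself
    threads = [threading.Thread(target=update, args=(e,)) for e in ("e1", "e2")]
    for t in threads: t.start()
    for t in threads: t.join()
    return len(both) == 2

one_lock = threading.Lock()
coarse = overlapped({"e1": one_lock, "e2": one_lock})                # whole-cache lock
fine = overlapped({"e1": threading.Lock(), "e2": threading.Lock()})  # per-entry locks
```

With the single coarse lock, `coarse` is `False` (the second thread cannot enter until the first finishes); with per-entry locks, `fine` is `True` (both updates proceed concurrently), which is the efficiency gain the abstract describes.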
30 Citations
30 Claims
1. A system for updating cached data, the system comprising: (claim text reproduced in full under First Claim above)
Dependent claims: 2-6
7. A method for updating cached data, the method comprising:
receiving updated data for a first entry in a cache, the cache including multiple locks and a plurality of entries stored in the cache;
receiving updated data for a second entry in the cache;
obtaining, by a first thread, a first group lock and a first entry lock, the first entry lock controlling access to the first entry in the cache, the first group lock controlling access to at least the first entry and the second entry;
identifying first storage nodes associated with the first entry using a first associative array value that associates the first storage nodes with the first entry;
subsequent to identifying the first storage nodes, releasing, by the first thread, the first group lock while retaining the first entry lock;
disassociating the first storage nodes from the first entry by deleting the first associative array value;
obtaining, by a second thread, the first group lock and a second entry lock, the first group lock controlling access to at least the first entry and the second entry, the first group lock obtained by the second thread after the first thread obtains the first group lock and prior to the first thread releasing the first group lock, the second entry lock controlling access to the second entry in the cache; and
updating, by the first thread, the first entry and updating, by the second thread, the second entry, wherein the updating by the first thread and the second thread occur at least partially concurrently.
Dependent claims: 8-22
23. Non-transitory computer storage having stored thereon instructions that, when executed by a computer system, cause the computer system to perform operations comprising:
receiving updated data for a first entry and a second entry in a cache, the cache including multiple locks and a plurality of entries;
obtaining, by a first thread, a first group lock and a first entry lock, the first entry lock controlling access to the first entry in the cache, the first group lock controlling access to at least the first entry and the second entry;
identifying first storage nodes associated with the first entry using a first associative array value that associates the first storage nodes with the first entry;
subsequent to identifying the first storage nodes, releasing, by the first thread, the first group lock while retaining the first entry lock;
disassociating the first storage nodes from the first entry by deleting the first associative array value;
obtaining, by a second thread, the first group lock and a second entry lock, the first group lock controlling access to at least the first entry and the second entry, the first group lock obtained by the second thread after the first thread obtains the first group lock and prior to the first thread releasing the first group lock, the second entry lock controlling access to the second entry in the cache; and
updating, by the first thread, the first entry and updating, by the second thread, the second entry, wherein the updating by the first thread and the second thread occur at least partially concurrently.
Dependent claims: 24-30
Specification