Single-sided distributed cache system
Abstract
A distributed cache system including a data storage portion, a data control portion, and a cache logic portion in communication with the data storage and data control portions. The data storage portion includes memory hosts, each having non-transitory memory and a network interface controller in communication with the memory for servicing remote direct memory access requests. The data control portion includes a curator in communication with the memory hosts. The curator manages striping of data across the memory hosts. The cache logic portion executes at least one memory access request to implement a cache operation. In response to each memory access request, the curator provides the cache logic portion a file descriptor mapping data stripes and data stripe replications of a file on the memory hosts for remote direct memory access of the file on the memory hosts through the corresponding network interface controllers.
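The single-sided pattern the abstract describes, in which the curator serves only control-plane requests and clients then reach memory hosts directly through their network interface controllers, can be sketched in a few lines. Everything here (class names, the integer key, dictionary-backed "memory") is an illustrative assumption; real RDMA verbs, memory registration, and NIC key checks are far more involved.

```python
# Minimal sketch of the curator / memory-host split described in the abstract.
# Names and structure are illustrative assumptions, not the patented design.
from dataclasses import dataclass, field

@dataclass
class MemoryHost:
    """Data plane: raw memory plus a NIC that serves reads directly."""
    registered_key: int
    memory: dict = field(default_factory=dict)   # stripe_id -> bytes

    def rdma_read(self, stripe_id, key):
        # The NIC checks the key and serves the read from memory,
        # bypassing the host's CPU (the "single-sided" access).
        if key != self.registered_key:
            raise PermissionError("invalid access key")
        return self.memory[stripe_id]

@dataclass
class FileDescriptor:
    """Maps each stripe index to the (host, stripe_id) pairs holding it."""
    stripe_locations: dict
    key: int   # key allowing access through the hosts' NICs

class Curator:
    """Control plane only: stripes files across hosts, hands out descriptors."""
    def __init__(self, hosts, key):
        self.hosts, self.key, self.files = hosts, key, {}

    def store_file(self, name, data, stripe_size=4, replicas=2):
        stripes = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
        locations = {}
        for i, stripe in enumerate(stripes):
            placed = []
            for r in range(replicas):   # replicate each stripe on a distinct host
                host = self.hosts[(i + r) % len(self.hosts)]
                host.memory[(name, i)] = stripe
                placed.append((host, (name, i)))
            locations[i] = placed
        self.files[name] = FileDescriptor(locations, self.key)

    def open(self, name):
        # One control-plane round trip; all subsequent data access is direct.
        return self.files[name]

KEY = 0xCAFE
hosts = [MemoryHost(KEY) for _ in range(3)]
curator = Curator(hosts, KEY)
curator.store_file("cache.0", b"hello distributed cache")
fd = curator.open("cache.0")
# The client reassembles the file by reading the first replica of each stripe.
data = b"".join(
    fd.stripe_locations[i][0][0].rdma_read(fd.stripe_locations[i][0][1], fd.key)
    for i in sorted(fd.stripe_locations)
)
assert data == b"hello distributed cache"
```

Note the division of labor the abstract implies: the curator touches data only at placement time, and every subsequent read or write goes host-to-client without involving the curator at all.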
27 Claims
1. A distributed cache system comprising:
a data storage portion having memory hosts, each memory host comprising:
non-transitory memory; and
a network interface controller in communication with the non-transitory memory for servicing remote direct memory access requests;
a data control portion having a curator separate and remote from the memory hosts and in communication with the memory hosts, the curator managing striping of data across the memory hosts by:
dividing a file into data stripes and replicating each data stripe; and
allocating storage of the data stripes and data stripe replications on the memory hosts; and
a cache logic portion in communication with the data storage and data control portions, the cache logic portion executing at least one memory access request to implement a cache operation, the cache logic portion comprising a cache service having a cache data layer storing cache data in files and a cache indexing layer indexing the cache data stored in the files, the cache service sharding the cache data into the files, each file storing cache entries, each cache entry comprising cache entry data, a cache tag, and a cache fingerprint, at least one of the files comprising a circular data file having a fixed size, a first-in-first-out queue having a front and a back, and a tail pointer providing an offset to the front of the queue;
wherein in response to the at least one memory access request to access the file, the curator provides the cache logic portion a file descriptor mapping location information of the data stripes and the data stripe replications of the file on the memory hosts for remote direct memory access of the file on the memory hosts and a key to allow access to the file on the memory hosts through the corresponding network interface controllers.
Dependent claims: 2-12.
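Claim 1's circular data file combines three pieces: a fixed size, a first-in-first-out queue with a front and a back, and a tail pointer giving an offset to the queue's front. A minimal sketch of how those pieces fit together, with hypothetical field names and a SHA-1 hash standing in for whatever fingerprint the actual system computes:

```python
# Illustrative sketch of the claimed circular data file: a fixed-size store of
# cache entries kept in FIFO order, with a tail pointer giving the offset of
# the front (oldest end) of the queue. Field names, the SHA-1 fingerprint, and
# the eviction detail are assumptions for illustration, not the claimed design.
import hashlib
from collections import deque

class CircularDataFile:
    def __init__(self, capacity):
        self.capacity = capacity   # fixed size, in entries
        self.entries = deque()     # FIFO queue: left = front, right = back
        self.tail = 0              # offset to the front of the queue

    def append(self, tag, data):
        # Each cache entry holds its data, a cache tag, and a fingerprint.
        entry = {"tag": tag, "data": data,
                 "fp": hashlib.sha1(data).hexdigest()[:8]}
        if len(self.entries) == self.capacity:
            self.entries.popleft()                  # overwrite the oldest entry
            self.tail = (self.tail + 1) % self.capacity
        self.entries.append(entry)

    def lookup(self, tag):
        return next((e for e in self.entries if e["tag"] == tag), None)

f = CircularDataFile(capacity=2)
f.append(b"a", b"alpha")
f.append(b"b", b"beta")
f.append(b"c", b"gamma")   # file is full: the front entry (tag b"a") is dropped
assert f.lookup(b"a") is None and f.lookup(b"c") is not None
```

Because the file never grows past its fixed size, new entries simply wrap around and overwrite the oldest ones; the tail pointer is what lets a reader find the current front after any number of wraps.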
13. A method of accessing a distributed cache system comprising memory hosts, the method comprising:
dividing, using a curator separate and remote from the memory hosts, a file into data stripes and replicating each data stripe;
allocating, using the curator, storage of the data stripes and data stripe replications on the memory hosts, each memory host comprising non-transitory memory and a network interface controller in communication with the non-transitory memory for servicing remote direct memory access requests;
storing data in files on the memory hosts and indexing the data stored in the files;
sharding the data into the files, each file storing cache entries, each cache entry comprising cache entry data, a cache tag, and a cache fingerprint, at least one of the files comprising a circular data file having a fixed size, a first-in-first-out queue having a front and a back, and a tail pointer providing an offset to the front of the queue;
receiving a cache operation from a client;
executing at least one memory access request;
for each memory access request to access the file, returning a file descriptor mapping location information of the data stripes and the data stripe replications of the file on the memory hosts for remote direct memory access of the file on the memory hosts and a key to allow access to the file on the memory hosts through the corresponding network interface controllers; and
executing on a computing processor a transaction comprising at least one of a read operation or a write operation on files stored on the memory hosts to implement the cache operation.
Dependent claims: 14-27.
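The sharding step above, splitting cache data across multiple files, is commonly done by hashing the cache tag to pick a file. The sketch below assumes a SHA-256 hash and a fixed shard count; neither is specified by the claim.

```python
# Hypothetical illustration of sharding cache entries into files by tag hash.
# The hash function, shard count, and entry layout are assumptions.
import hashlib

def shard_for(tag: bytes, num_files: int) -> int:
    """Choose which file (shard) holds the entry with this cache tag."""
    digest = hashlib.sha256(tag).digest()
    return int.from_bytes(digest[:8], "big") % num_files

NUM_FILES = 4
files = {i: [] for i in range(NUM_FILES)}
for tag, payload in ((b"user:1", b"p1"), (b"user:2", b"p2"), (b"page:/home", b"p3")):
    entry = {
        "data": payload,                                # cache entry data
        "tag": tag,                                     # cache tag
        "fp": hashlib.sha256(payload).hexdigest()[:8],  # cache fingerprint
    }
    files[shard_for(tag, NUM_FILES)].append(entry)

# A lookup hashes the tag the same way, so it touches exactly one file.
shard = files[shard_for(b"user:1", NUM_FILES)]
assert any(e["tag"] == b"user:1" for e in shard)
```

The point of hashing the tag is that both writers and readers derive the same shard independently, so a cache operation needs at most one file's worth of remote memory accesses.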
Specification