Data path accelerator with variable parity, variable length, and variable extent parity groups
Abstract
A data path accelerator with variable parity, variable length, and variable extent parity groups is described. The data path accelerator includes a network interface for communicating with one or more clients and a storage interface for communicating with one or more disk drives. A metadata processor is configured to queue network transaction requests to the network interface and to queue storage transaction requests to the storage interface. The metadata processor is further configured to manage file system metadata information. The file system metadata information includes disk locations of one or more distributed parity groups on the one or more disk drives. Each distributed parity group includes one or more data blocks and a parity block. The file system metadata information further includes information regarding a length of each distributed parity group. A data engine is configured to communicate with the storage interface to receive data from or to write data to the one or more disk drives in satisfaction of the storage transaction requests. The data engine is further configured to communicate with the network interface to receive data from or to send data to the one or more clients in satisfaction of the network transaction requests. The data engine includes at least one data cache and one or more parity engines to perform parity calculations for cached distributed parity groups.
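The parity engine described in the abstract computes a parity block over a group's data blocks. As a rough illustration (not part of the patent, and independent of any particular hardware), a minimal sketch of XOR parity for a variable-length parity group, including reconstruction of a single lost data block; all function names here are illustrative:

```python
def compute_parity(data_blocks):
    """XOR all data blocks together to produce the group's parity block.

    Works for any group length (variable-length parity groups): the
    parity is simply the running XOR of however many blocks the group has.
    """
    if not data_blocks:
        raise ValueError("a parity group needs at least one data block")
    size = len(data_blocks[0])
    parity = bytearray(size)
    for block in data_blocks:
        if len(block) != size:
            raise ValueError("all blocks in a group must be the same size")
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)


def reconstruct_block(parity_block, surviving_blocks):
    """Recover one lost data block by XOR-ing the parity block with the
    surviving data blocks (the defining property of XOR parity)."""
    return compute_parity([parity_block, *surviving_blocks])
```

Because XOR is its own inverse, the same routine serves both the write path (computing parity) and the degraded read path (rebuilding a missing block).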
35 Claims
1. A data path accelerator comprising:
a network interface for communicating with one or more clients;
a storage interface for communicating with one or more disk drives;
a metadata processor configured to queue network transaction requests to said network interface and storage transaction requests to said storage interface, said metadata processor further configured to manage file system metadata information, said file system metadata information comprising disk locations of one or more distributed parity groups on said one or more disk drives, each distributed parity group comprising one or more data blocks and a parity block, said file system metadata information further comprising information regarding a length of each distributed parity group; and
a data engine configured to communicate with said storage interface to receive data from or write data to said one or more disk drives in satisfaction of said storage transaction requests, said data engine further configured to communicate with said network interface to receive data from or send data to said one or more clients in satisfaction of said network transaction requests, said data engine comprising at least one data cache and one or more parity engines to perform parity calculations for cached distributed parity groups.
(Dependent claims: 2-18 and 20-34)
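Claim 1 requires the file system metadata to record, for each distributed parity group, the disk locations of its blocks and the group's length. A minimal sketch of what such a metadata record might hold; the field and class names (`BlockLocation`, `drive`, `lba`, `extent_blocks`, and so on) are assumptions for illustration, not terms from the patent:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class BlockLocation:
    drive: int          # which disk drive holds the block
    lba: int            # starting logical block address on that drive
    extent_blocks: int  # extent size in disk blocks (variable extent)


@dataclass
class ParityGroupMeta:
    # Disk locations of the group's data blocks and parity block(s),
    # as the claim says the file system metadata must record.
    data_locations: List[BlockLocation]
    parity_locations: List[BlockLocation]  # more than one => variable parity

    @property
    def length(self) -> int:
        # "length of each distributed parity group": here taken to be
        # the number of data blocks, which may vary from group to group.
        return len(self.data_locations)
```

Because each group carries its own locations and length, different groups may span different numbers of drives and different extent sizes, which is what makes the parity groups "variable length" and "variable extent".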
19. A method of providing file services, comprising:
receiving a file request from a client, said request received by a first processing module;
accessing metadata to locate file data corresponding to said file request, said metadata stored in a metadata cache provided to said first processing module;
queuing one or more storage transaction requests to a storage interface, said storage transaction requests queued by said first processing module;
caching a distributed parity group as a cached distributed parity group, said cached distributed parity group retrieved as a result of said storage transaction requests in a data cache operably connected to a data engine, said data engine operating asynchronously with respect to said first processing module;
using a parity engine in said data engine to compute parity for said cached distributed parity group;
queuing one or more network transaction requests to a network interface, said network transaction requests queued by said first processing module upon completion of said one or more storage transaction requests; and
sending at least a portion of said cached distributed parity group to said client according to said network transaction requests, wherein said sending operation is performed asynchronously, with respect to said first processing module, by said data engine and said network interface.
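The steps of claim 19 can be sketched with a queue between a metadata-processor thread and a data-engine thread that runs asynchronously with respect to it. Everything here is an illustrative assumption, not the patent's implementation: the names, the use of a Python thread to model the data engine, the parity check on the cached group, and the concatenation of cached blocks into a client payload:

```python
import queue
import threading

storage_queue = queue.Queue()  # storage transaction requests (step 3)
data_cache = {}                # the data engine's cache (step 4)


def xor_blocks(blocks):
    """XOR data blocks together to produce (or verify) a parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)


def data_engine():
    # Runs asynchronously with respect to the metadata processor:
    # services queued storage transactions, caches each distributed
    # parity group, and uses the parity engine to check its parity.
    while True:
        request = storage_queue.get()
        if request is None:  # shutdown sentinel
            break
        group_id, data_blocks, parity_block = request
        assert xor_blocks(data_blocks) == parity_block, "parity mismatch"
        data_cache[group_id] = data_blocks
        storage_queue.task_done()


engine = threading.Thread(target=data_engine)
engine.start()

# Metadata-processor side: queue one storage transaction request,
# then wait for its completion before the network transaction (step 6).
blocks = [b"\x01\x02", b"\x04\x08"]
storage_queue.put(("group-0", blocks, xor_blocks(blocks)))
storage_queue.join()
storage_queue.put(None)
engine.join()

# Step 7: send (here, just assemble) the cached group for the client.
payload = b"".join(data_cache["group-0"])
```

The essential point the claim captures is the division of labor: the first processing module only queues requests and consults metadata, while the data engine moves and checks the data on its own schedule.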
35. An apparatus, comprising:
means for receiving a file request from a client, accessing metadata to locate disk addresses of data blocks of a distributed parity group and a parity block of said distributed parity group corresponding to said file request, and queuing one or more storage transaction requests, each storage transaction request providing information regarding said disk addresses and cache addresses;
means for caching said distributed parity group as a cached distributed parity group in response to said storage transaction requests; and
means for computing parity for said cached distributed parity group.
Specification