Clustered cache appliance system and methodology
Abstract
A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using clustered cache appliances with packet inspection intelligence. A cache appliance cluster that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files using dynamically adjustable cache policies provides low-latency access and redundancy in responding to both read and write requests for cached files, thereby improving access time to the data stored on the disk-based NAS filer (group).
20 Claims
1. An enterprise cache cluster of two or more cache appliances for caching data operations requested from one or more networked data storage devices by one or more remote clients, the enterprise cache cluster comprising:

a cache memory at each cache appliance for caching data that is requested by a remote client;

a packet processor at each cache appliance coupled to the cache memory in said cache appliance and to one or more I/O ports for splicing connections between the data storage devices and remote clients, where the packet processor inspects network protocol traffic state parameters received on the I/O ports to determine if a request from a remote client can be serviced by the enterprise cache cluster; and

a connection interface for connecting the two or more cache appliances over a cluster bus in a private network to form a cohesive memory pool from the cache memories in the two or more cache appliances.

Dependent claims: 2-11.
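The decision logic claim 1 assigns to the packet processor can be sketched in a few lines: inspect a client request and either service it from the appliance's cache memory or splice the connection through to the storage device. This is a minimal illustrative model, not the patented implementation; the class, method, and request-field names are all assumptions.

```python
class CacheAppliance:
    """Illustrative model of one claim-1 cache appliance (names assumed)."""

    def __init__(self):
        # Stands in for the appliance's cache memory: file handle -> data.
        self.cache = {}

    def can_service(self, request):
        # The packet processor's role: inspect the request's protocol state
        # and decide whether the cluster can service it from cache.
        return request.get("op") == "read" and request.get("handle") in self.cache

    def handle(self, request):
        if self.can_service(request):
            return ("cache", self.cache[request["handle"]])
        # Otherwise splice the connection through to the storage device.
        return ("forward", None)
```

In the claimed system the inspection would parse NFS/CIFS protocol frames arriving on the I/O ports; here a dictionary lookup stands in for both the traffic-state inspection and the cache memory.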
12. A method for caching data operations at an enterprise cache cluster of two or more cache appliances, comprising:
receiving at the enterprise cache cluster a request from a remote client to perform a specified data operation at one or more networked data storage devices;

inspecting packet parameters in each TCP/IP stack layer associated with the request to determine if the request can be serviced by a cache memory located in one of the cache appliances in the enterprise cache cluster;

forwarding the request to the one or more networked data storage devices if the request cannot be serviced by the enterprise cache cluster; and

performing the specified data operation at a selected cache memory located at the enterprise cache cluster if the request can be serviced by the enterprise cache cluster, by forwarding the request to the selected cache memory over a connection interface which connects the two or more cache appliances over a cluster bus in a private network to form a cohesive memory pool from the cache memories in the two or more cache appliances.

Dependent claims: 13-17.
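The routing step of claim 12 (forward to the filer on a miss, or to the cache memory that holds the data on a hit) can be sketched as follows. The hash-based placement of file handles across the cohesive memory pool is an assumption for illustration; the patent does not specify how the selected cache memory is chosen.

```python
import hashlib

# Hypothetical member appliances of the cluster's cohesive memory pool.
APPLIANCES = ["appliance-0", "appliance-1", "appliance-2"]

def owner_of(handle):
    """Map a file handle onto one appliance's cache memory in the pool
    (assumed hash placement, not taken from the patent)."""
    digest = hashlib.sha256(handle.encode()).digest()
    return APPLIANCES[digest[0] % len(APPLIANCES)]

def route(request, cached_handles):
    """Return where the request goes: over the cluster bus to the owning
    appliance if serviceable, otherwise to the backing storage device."""
    handle = request["handle"]
    if handle in cached_handles:
        return ("cluster", owner_of(handle))
    return ("filer", None)
```

A real deployment would also have to keep `cached_handles` (which handles the pool currently holds) coherent across appliances; that bookkeeping is elided here.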
18. A network cache appliance cluster for accelerating read and write requests from one or more storage clients for one or more files residing at one or more networked storage devices, comprising:
a plurality of cache appliances, each comprising:

a tiered memory cache system for providing low-latency access in responding to read and write requests, comprising a first cache storage for storing business critical data under control of a policy engine that is managed independently from the one or more networked storage devices, and

a packet inspection module for inspecting a read or write request sent using an IP-based network protocol to determine if the request should be passed to the tiered memory cache system or forwarded to a networked storage device for further processing; and

a cluster switch for connecting the plurality of cache appliances in a private network to form a cohesive memory pool from the cache memories in the plurality of cache appliances.

Dependent claims: 19 and 20.
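The tiered cache of claim 18 places business-critical data in a first (fast) cache storage under control of a policy engine. The sketch below assumes two tiers and a path-prefix criticality rule; tier names, the rule, and all identifiers are invented for illustration and are not drawn from the patent.

```python
class PolicyEngine:
    """Assumed policy: path prefixes mark data as business critical."""

    def __init__(self, critical_prefixes=("/db/", "/oltp/")):
        self.critical_prefixes = critical_prefixes

    def tier_for(self, path):
        if path.startswith(self.critical_prefixes):
            return "dram"   # first cache storage: business-critical data
        return "ssd"        # capacity tier

class TieredCache:
    """Two-tier cache steered by a policy engine that is managed
    independently of the backing storage devices."""

    def __init__(self, policy):
        self.policy = policy
        self.tiers = {"dram": {}, "ssd": {}}

    def put(self, path, data):
        self.tiers[self.policy.tier_for(path)][path] = data

    def get(self, path):
        for tier in ("dram", "ssd"):   # probe the fastest tier first
            if path in self.tiers[tier]:
                return self.tiers[tier][path]
        return None                    # miss: forward to the filer
```

Because the policy engine is a separate object, its rules can be changed (as the abstract's "dynamically adjustable cache policies" suggests) without touching either the cache structure or the storage devices.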
Specification