Non-disruptive storage caching using spliced cache appliances with packet inspection intelligence
Abstract
A method, system and program are disclosed for accelerating data storage by providing non-disruptive storage caching using spliced cache appliances with packet inspection intelligence. A cache appliance transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files under dynamically adjustable cache policies. By providing low-latency access and redundancy in responding to both read and write requests for cached files, the appliance improves access time to the data stored on the disk-based NAS filer (group).
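The "dynamically adjustable cache policies" mentioned in the abstract can be illustrated with a minimal sketch. The class name, thresholds, and the read/write-ratio heuristic below are assumptions for illustration only, not the patent's actual policy engine:

```python
# Hypothetical sketch of a per-file cache policy that a policy engine could
# adjust dynamically: files seeing mostly reads are cached aggressively
# (long TTL), while write-heavy files get a short TTL so the NAS filer
# stays authoritative. All names and thresholds are illustrative.
class AdaptiveCachePolicy:
    def __init__(self, read_ttl=300.0, write_ttl=5.0, read_bias=0.8):
        self.read_ttl = read_ttl      # TTL (s) for read-dominated files
        self.write_ttl = write_ttl    # TTL (s) for write-heavy files
        self.read_bias = read_bias    # read fraction above which we cache long
        self.stats = {}               # path -> (reads, writes)

    def record(self, path, op):
        reads, writes = self.stats.get(path, (0, 0))
        if op == "read":
            reads += 1
        else:
            writes += 1
        self.stats[path] = (reads, writes)

    def ttl_for(self, path):
        reads, writes = self.stats.get(path, (0, 0))
        total = reads + writes
        if total == 0:
            return self.write_ttl  # unknown file: be conservative
        return self.read_ttl if reads / total >= self.read_bias else self.write_ttl
```

A policy engine of this kind can retune its thresholds at runtime as observed traffic shifts, which is one way caching behavior could be adjusted without disrupting the clients or the filer.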
21 Claims
1. A standalone cache unit for caching data operations requested from one or more networked data storage devices by one or more remote clients, the cache unit comprising:
a cache memory for caching data that is requested by a remote client;
a high-speed packet processor coupled to the cache memory and to one or more I/O ports for splicing connections between the data storage devices and remote clients, where the high-speed packet processor inspects network protocol traffic state parameters received on the I/O ports to determine if a request from a remote client can be serviced by the cache memory.
Dependent claims: 2, 3, 4, 5, 6, 7, 8.
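The spliced read-through behavior claimed above can be sketched in miniature: the unit sits on the connection between client and filer, answers cacheable reads itself, and relays everything else. The `Request`, `SplicedCacheUnit`, and `filer` names below are illustrative assumptions, not the patented implementation:

```python
# Minimal sketch (not the patented implementation) of the packet processor's
# decision step: answer a request from the cache memory when possible,
# otherwise pass it through on the spliced connection to the filer.
from dataclasses import dataclass

@dataclass
class Request:
    op: str        # e.g. "READ" or "WRITE"
    path: str
    offset: int
    length: int

class SplicedCacheUnit:
    def __init__(self):
        self.cache = {}  # (path, offset, length) -> bytes

    def handle(self, req, filer):
        """Serve READs from cache when possible; otherwise consult the
        filer and populate the cache on the way back (read-through)."""
        key = (req.path, req.offset, req.length)
        if req.op == "READ" and key in self.cache:
            return self.cache[key]          # cache hit: low-latency reply
        data = filer(req)                   # miss or write: forward to filer
        if req.op == "READ":
            self.cache[key] = data          # populate cache for next time
        else:
            self.cache.pop(key, None)       # invalidate stale data on write
        return data
```

Because the unit only intercepts traffic it can service and forwards the rest unchanged, neither the clients nor the storage devices need reconfiguration, which is the sense in which the caching is non-disruptive.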
9. A method for caching data operations, comprising:
receiving at a standalone cache unit a request from a remote client to perform a specified data operation at one or more networked data storage devices;
inspecting packet parameters in each TCP/IP stack layer associated with the request to determine if the request can be serviced by a cache memory located at the standalone cache unit;
forwarding the request to the one or more networked data storage devices if the request cannot be serviced by the cache memory located at the standalone cache unit; and
performing the specified data operation at the cache memory located at the standalone cache unit if the request can be serviced by the cache memory located at the standalone cache unit.
Dependent claims: 10, 11, 12, 13, 14, 15, 16, 17.
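The per-layer inspection step of the method above can be sketched as follows. The layer names, subnet check, port numbers, and cacheability rules are illustrative assumptions, not the patent's actual criteria:

```python
# Hedged sketch of the claim-9 inspection step: examine parameters from
# each TCP/IP stack layer of a request and decide whether the cache can
# service it; serve from cache on a hit, else forward to the filer.
def is_cache_serviceable(packet):
    """packet: dict of per-layer parameters, e.g. produced by a parser."""
    ip = packet.get("ip", {})
    tcp = packet.get("tcp", {})
    app = packet.get("app", {})
    # Network layer: only traffic addressed to the storage subnet is considered.
    if not ip.get("dst", "").startswith("10.0."):
        return False
    # Transport layer: only the NFS/CIFS service ports (2049 = NFS, 445 = CIFS).
    if tcp.get("dport") not in (2049, 445):
        return False
    # Application layer: only idempotent read-type operations are candidates.
    return app.get("op") in ("READ", "GETATTR", "LOOKUP")

def handle_request(packet, cache, forward):
    """Claim-9 flow: serve from cache if possible, else forward to the filer."""
    key = (packet["app"].get("path"), packet["app"].get("op"))
    if is_cache_serviceable(packet) and key in cache:
        return cache[key]       # performed at the cache memory
    return forward(packet)      # forwarded to the networked storage device
```

Checking the layers in order lets cheap L3/L4 tests short-circuit before any application-protocol parsing is attempted, which matters at line rate.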
18. A network cache appliance for accelerating read and write requests from one or more storage clients for one or more files residing at one or more networked storage devices, comprising:
a tiered memory cache system for providing low-latency access in responding to read and write requests, comprising a first cache storage for storing business critical data under control of a policy engine that is managed independently from the one or more networked storage devices; and
a packet inspection module for inspecting a read or write request sent using an IP-based network protocol to determine if the request should be passed to the tiered memory cache system or forwarded to a networked storage device for further processing.
Dependent claims: 19, 20, 21.
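The tiered memory cache of claim 18 can be illustrated with a two-tier sketch: a small fast first tier reserved for data the policy engine marks business-critical, backed by a larger, slower second tier. The criticality predicate, tier sizes, and LRU demotion below are illustrative stand-ins; the patent's policy engine is managed independently of the storage devices:

```python
# Illustrative two-tier cache: tier 1 (e.g. DRAM) holds policy-designated
# business-critical data; tier 2 (e.g. SSD) holds everything else and
# entries demoted from tier 1 when it overflows.
from collections import OrderedDict

class TieredCache:
    def __init__(self, tier1_capacity, is_critical):
        self.tier1 = OrderedDict()        # small, lowest latency
        self.tier2 = OrderedDict()        # larger, slower
        self.cap1 = tier1_capacity
        self.is_critical = is_critical    # policy engine hook (a predicate)

    def put(self, path, data):
        if self.is_critical(path):
            self.tier1[path] = data
            self.tier1.move_to_end(path)
            while len(self.tier1) > self.cap1:   # demote LRU entry to tier 2
                old, val = self.tier1.popitem(last=False)
                self.tier2[old] = val
        else:
            self.tier2[path] = data

    def get(self, path):
        if path in self.tier1:
            self.tier1.move_to_end(path)  # refresh LRU position
            return self.tier1[path]
        return self.tier2.get(path)
```

Keeping the placement decision in a separate predicate mirrors the claim's structure: the policy engine can be retuned or replaced without touching the cache tiers or the storage devices behind them.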
Specification