Method of dynamically allocating network node memory's partitions for caching distributed files
First Claim
1. In a distributed file system including high speed random access general purpose memory within a network node coupled to a host computer and a plurality of mass storage devices interconnected via a network for storing data files in disparate locations, a method for caching data files from said mass storage devices using a limited amount of said general purpose memory, said method comprising the steps of:
providing at least one cache area in said general purpose memory for each accessed file;
evaluating a data flow rate over network data paths and direct data paths associated with said each accessed file through said at least one cache area ("file data flow rate"); and
allocating, by means of a processor within said network node, a portion of said general purpose memory to said at least one cache area in an amount proportional to said associated file data flow rate.
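The allocating step of the claim can be sketched as a simple proportional split: given the memory available for caching and a measured flow rate per accessed file, each file's cache area receives a share proportional to its rate. This is a minimal illustration, not the patented implementation; the function and parameter names (`allocate_cache`, `total_bytes`, `flow_rates`) are assumptions for the sketch.

```python
# Hypothetical sketch of the claimed allocation step: the cache memory is
# divided among accessed files in proportion to each file's measured
# data flow rate. Names are illustrative, not taken from the patent.

def allocate_cache(total_bytes, flow_rates):
    """Return a per-file cache allocation proportional to flow rate.

    total_bytes -- general purpose memory available for caching
    flow_rates  -- dict mapping file id -> measured flow rate (bytes/sec)
    """
    total_rate = sum(flow_rates.values())
    if total_rate == 0:
        # No measured traffic yet: split evenly among accessed files.
        share = total_bytes // len(flow_rates)
        return {f: share for f in flow_rates}
    return {f: int(total_bytes * r / total_rate)
            for f, r in flow_rates.items()}
```

For example, with 1024 bytes of cache and flow rates of 30 and 10 bytes/sec for two files, the faster file's cache area gets 768 bytes and the slower one 256, matching the claim's "amount proportional to said associated file data flow rate."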
Abstract
A distributed file system with dedicated nodes that connect to workstations at their bus. The system uses a complementary client-side and server-side file caching method that increases parallelism by issuing multiple server requests to keep the hardware devices busy simultaneously. Most of the node memory is used for file caching and input/output (I/O) device buffering, using dynamic memory organization, reservation, and allocation methods for competing memory-intensive activities.
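The parallelism described in the abstract, issuing multiple server requests at once so the storage devices stay busy simultaneously, can be sketched with a thread pool. This is an illustrative sketch only; the patent does not specify this mechanism, and `fetch_block`, `parallel_read`, and the server/block naming are assumptions.

```python
# Illustrative sketch (not from the patent): dispatch several server read
# requests concurrently instead of serially, so that each mass storage
# device can work on its request at the same time.
from concurrent.futures import ThreadPoolExecutor


def fetch_block(server, block_id):
    # Stand-in for a real network read of one file block from one server.
    return (server, block_id)


def parallel_read(servers, block_ids):
    """Issue one request per (server, block) pair concurrently."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = [pool.submit(fetch_block, s, b)
                   for s, b in zip(servers, block_ids)]
        # Collect results in request order.
        return [f.result() for f in futures]
```

With real network reads inside `fetch_block`, the pool overlaps the requests' I/O waits, which is the effect the abstract attributes to issuing multiple server requests.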
Specification