Method to perform parallel data migration in a clustered storage environment
Abstract
A clustered storage array consists of multiple nodes coupled to one or more storage systems. The nodes provide a LUN-device for access by a client. The LUN-device maps to a source logical unit corresponding to areas of storage on the one or more storage systems. A target logical unit corresponds to different areas of storage on the one or more storage systems. The source logical unit is migrated in parallel by the multiple nodes to the target logical unit. Data to be copied from the source logical unit to the target logical unit are grouped into data chunks. Two or more of the plurality of nodes concurrently attempt to acquire an exclusive lock for a set of data chunks. The node acquiring the exclusive lock migrates the set of data chunks from the source logical unit to the target logical unit, while the exclusive lock is used to prevent other nodes from migrating the set of data chunks.
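The chunk-and-splice bookkeeping described in the abstract can be sketched as follows. This is a hypothetical illustration only: the chunk size, splice width, and function names are assumptions, not values from the patent.

```python
# Sketch of the abstract's bookkeeping: one bit per data chunk, with the
# bit-mask divided into fixed-width "splices", each splice covering one
# set of chunks. CHUNK_SIZE and SPLICE_BITS are assumed values.

CHUNK_SIZE = 1 << 20      # 1 MiB per data chunk (assumption)
SPLICE_BITS = 8           # chunks (bits) per splice (assumption)

def build_bitmask(lun_size_bytes):
    """Return (number of chunks, number of splices) for a source LUN."""
    n_chunks = (lun_size_bytes + CHUNK_SIZE - 1) // CHUNK_SIZE
    n_splices = (n_chunks + SPLICE_BITS - 1) // SPLICE_BITS
    return n_chunks, n_splices

def chunks_in_splice(splice_idx, n_chunks):
    """Chunk indices covered by a given splice."""
    start = splice_idx * SPLICE_BITS
    return range(start, min(start + SPLICE_BITS, n_chunks))
```

For a 10 MiB source logical unit this yields ten chunks in two splices, so at most two nodes could hold splice locks at once under these assumed sizes.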
18 Claims
1. A method of migrating data from a source logical unit to a target logical unit, the source and target logical units corresponding to different areas of storage on one or more storage systems, the method comprising the steps of:
- providing a clustered storage array comprised of a plurality of nodes interconnected with each other and in communication with the one or more storage systems by a network;
- running, by each node of the plurality of nodes, shared file system software to enable multiple clients to concurrently access shared data in the one or more storage systems through any of the nodes, and clustered system software to ensure coherency of the shared data;
- providing by the nodes a LUN-device for shared access by the multiple clients, the LUN-device mapping to the source logical unit;
- grouping data that are to be copied from the source logical unit to the target logical unit into data chunks;
- concurrently attempting, by two or more of the plurality of nodes, to acquire an exclusive lock for a set of data chunks;
- providing a bit-mask having one bit for each data chunk;
- dividing the bit-mask into splices of multiple bits;
- uniquely associating each splice with one set of data chunks;
- wherein the step of concurrently attempting to acquire the exclusive lock for a set of data chunks includes the step of attempting to acquire an exclusive lock on the splice associated with said set of data chunks; and
- migrating, by the node that acquires the exclusive lock, the set of data chunks from the source logical unit to the target logical unit, while the exclusive lock is used to prevent other nodes from migrating the set of data chunks.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
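The claimed race-to-lock migration loop can be sketched in a single process. This is a minimal illustration under stated assumptions: `threading.Lock` stands in for the cluster-wide exclusive lock a real clustered system would take through its lock manager, each worker thread stands in for a node, and `migrate_chunk` merely records ownership instead of copying data.

```python
import threading

N_CHUNKS = 16
SPLICE_BITS = 4                       # chunks per splice (assumption)
N_SPLICES = N_CHUNKS // SPLICE_BITS

# One exclusive lock per splice; a stand-in for the distributed lock.
splice_locks = [threading.Lock() for _ in range(N_SPLICES)]
migrated_by = [None] * N_CHUNKS       # which "node" copied each chunk

def migrate_chunk(node_id, chunk):
    migrated_by[chunk] = node_id      # stand-in for the source->target copy

def node_worker(node_id):
    for s in range(N_SPLICES):
        # Non-blocking attempt: the node that wins the splice lock
        # migrates the whole chunk set; losers move to the next splice.
        if splice_locks[s].acquire(blocking=False):
            for c in range(s * SPLICE_BITS, (s + 1) * SPLICE_BITS):
                migrate_chunk(node_id, c)
            # The lock is held so no other node migrates this set;
            # in this one-shot sketch it is simply never released.

threads = [threading.Thread(target=node_worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every worker visits every splice and exactly one acquisition per lock succeeds, each chunk is migrated by exactly one node, which is the point of the per-splice exclusive lock in the claim.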
10. A clustered system comprising:
- at least one hardware storage system including a source logical unit and a target logical unit, the source and target logical units corresponding to different areas of storage on the one or more storage systems; and
- a clustered storage array comprised of a plurality of nodes interconnected with each other and in communication with the one or more storage systems by a network, each node running shared file system software to enable multiple clients to concurrently access shared data in the one or more storage systems through any of the nodes and clustered system software to ensure coherency of the shared data;
- the nodes providing, for shared access by the multiple clients, a LUN-device mapped to the source logical unit;
- each node including logic for grouping data that are to be copied from the source logical unit to the target logical unit into data chunks and logic for attempting to acquire an exclusive lock for a set of said data chunks;
- the system further comprising a bit-mask having one bit for each data chunk;
- logic for dividing the bit-mask into splices of multiple bits;
- logic for uniquely associating each splice with one set of data chunks;
- wherein the logic for attempting to acquire the exclusive lock for a set of data chunks includes logic for attempting to acquire an exclusive lock on the splice associated with said set of data chunks; and
- wherein two or more of the plurality of nodes concurrently attempt to acquire an exclusive lock for the set of data chunks, with the node that acquires the exclusive lock migrating the set of data chunks from the source logical unit to the target logical unit, while the exclusive lock is used to prevent other nodes from migrating the set of data chunks.

Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18.
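The system claim's bit-mask, with one bit per chunk, can serve as shared migration state: a node sets a chunk's bit once the chunk is copied, and any node can then tell when a splice, or the whole LUN, is done. The encoding below is a hypothetical sketch; the patent does not specify this class or these method names.

```python
class MigrationBitmask:
    """Illustrative one-bit-per-chunk migration state (assumed encoding)."""

    def __init__(self, n_chunks, splice_bits=8):
        self.n_chunks = n_chunks
        self.splice_bits = splice_bits   # bits per splice (assumption)
        self.bits = 0                    # all chunks unmigrated at start

    def mark_migrated(self, chunk):
        """Set the bit for a chunk once its copy to the target completes."""
        self.bits |= 1 << chunk

    def splice_done(self, splice):
        """True when every chunk in the splice's set has been migrated."""
        lo = splice * self.splice_bits
        hi = min(lo + self.splice_bits, self.n_chunks)
        mask = ((1 << (hi - lo)) - 1) << lo
        return (self.bits & mask) == mask

    def complete(self):
        """True when the whole source logical unit has been migrated."""
        return self.bits == (1 << self.n_chunks) - 1
```

In a real clustered array this state would itself need the coherency guarantees of the clustered system software; here a plain Python integer stands in for it.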
Specification