Method and device for managing cluster membership by use of storage area network fabric
Abstract
A method and device for managing cluster membership and for providing and managing locks in the switches forming the interconnecting network. To manage cluster membership, a zone is created, with the indicated members existing in the zone and the zone being managed by the switches. The nodes communicate their membership events, such as alive messages, using an API to work with the switch to which they are attached. The desired membership algorithm is executed by the switches, preferably in a distributed manner. Each switch then enforces the membership policies, including preventing operations from evicted nodes. This greatly simplifies the programs used on the nodes and unburdens them from many time-consuming tasks, thus improving cluster performance. In a like manner, the switches in the fabric manage the resource locks. The nodes send their lock requests, such as creation and ownership requests, to the switch to which they are connected, using an API. The switches then perform the desired lock operation and provide a response to the requesting node. Again, this greatly simplifies the programs used on the nodes and unburdens them from many time-consuming activities, improving cluster performance.
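The switch-side membership handling described in the abstract can be illustrated with a minimal, hypothetical sketch (the class and method names below are invented for illustration and do not appear in the patent): the switch records each node's alive messages, evicts nodes that fall silent past a timeout, and refuses operations from evicted nodes.

```python
import time


class SwitchMembershipZone:
    """Toy sketch of switch-side membership tracking (hypothetical API).

    Nodes report alive events to their attached switch; the switch evicts
    nodes whose last report is older than `timeout` seconds and blocks
    operations from evicted nodes.
    """

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_alive = {}   # node id -> timestamp of last alive message
        self.evicted = set()

    def alive(self, node, now=None):
        # A node's "I am alive" membership event, delivered via the API.
        now = time.monotonic() if now is None else now
        if node not in self.evicted:
            self.last_alive[node] = now

    def sweep(self, now=None):
        # Membership algorithm (simplified): evict nodes that went silent.
        now = time.monotonic() if now is None else now
        for node, t in list(self.last_alive.items()):
            if now - t > self.timeout:
                self.evicted.add(node)
                del self.last_alive[node]

    def allow_operation(self, node):
        # Policy enforcement: only current members may operate.
        return node in self.last_alive and node not in self.evicted


zone = SwitchMembershipZone(timeout=5.0)
zone.alive("node-a", now=0.0)
zone.alive("node-b", now=0.0)
zone.alive("node-a", now=4.0)   # node-b goes silent after t=0
zone.sweep(now=6.0)             # node-b exceeds the 5 s timeout
print(zone.allow_operation("node-a"), zone.allow_operation("node-b"))
# prints: True False
```

The timeout sweep stands in for whatever distributed membership algorithm the switches actually run; the point is that eviction and enforcement happen in the fabric, not on the nodes.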
23 Claims
1. A device, comprising:
- a processor;
- logic coupled to said processor, configured to forward data messages addressed from a node of a plurality of nodes that form a cluster to another of the plurality of nodes, and further configured to transfer to the processor a cluster membership message addressed from the node; and
- memory coupled to said processor and storing at least one program executed by said processor that causes the device to:
  - prepare the cluster membership message for transmission if the device determines that the cluster membership message needs to be forwarded to a principal switch; and
  - transition the device to act as a newly selected principal switch if the device detects that it has been so selected and that the current principal switch has failed;
- wherein the device is separate from the plurality of nodes and is not a portion of the cluster.

(Dependent claims 2, 3, 4, 5, 6 not shown.)
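As an illustration of the device recited in claim 1, the following hypothetical sketch (all identifiers invented for illustration) separates the forwarding path for data messages from the processor path for membership messages, and shows the transition to acting as a newly selected principal switch:

```python
class LocalSwitch:
    """Hedged sketch of the claimed device; names are hypothetical."""

    def __init__(self, ident, principal_id):
        self.ident = ident
        self.principal_id = principal_id
        self.is_principal = ident == principal_id
        self.forwarded = []   # node-to-node data messages passed through
        self.outbox = []      # membership messages prepared for the principal
        self.handled = []     # membership messages handled as principal

    def receive(self, msg):
        # Forwarding logic: data messages pass through toward the
        # destination node; membership messages go to the processor.
        if msg["type"] == "data":
            self.forwarded.append(msg)
        elif msg["type"] == "membership":
            self._process_membership(msg)

    def _process_membership(self, msg):
        # Processor path: prepare the message for transmission to the
        # principal switch, unless this switch is itself the principal.
        if self.is_principal:
            self.handled.append(msg)
        else:
            self.outbox.append({**msg, "forward_to": self.principal_id})

    def on_principal_failure(self, newly_selected_id):
        # Transition to act as the newly selected principal switch
        # if this switch is the one that was selected.
        self.principal_id = newly_selected_id
        self.is_principal = self.ident == newly_selected_id


sw = LocalSwitch("sw2", principal_id="sw1")
sw.receive({"type": "data", "src": "node-a", "dst": "node-b"})
sw.receive({"type": "membership", "src": "node-a", "event": "alive"})
sw.on_principal_failure("sw2")   # sw1 has failed; sw2 was selected
```

After the failover call, `sw.is_principal` is true and subsequent membership messages would be handled locally rather than forwarded.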
7. A device, comprising:
- a processor;
- logic coupled to said processor, configured to forward data messages addressed from a node of a plurality of nodes that form a cluster to another of the plurality of nodes, and further configured to transfer to the processor a cluster membership message addressed from the node and to a principal switch if the device is acting as the principal switch; and
- memory coupled to said processor and storing at least one stored program that causes the device to:
  - prepare the cluster membership message for transmission if the device determines that the cluster membership message needs to be forwarded to the principal switch, when the device acts as a local switch;
  - detect a failure of the principal switch, when the device acts as a local switch;
  - participate in the selection of a new principal switch, when the device acts as a local switch; and
  - control cluster membership based on received cluster membership messages, if the device is selected as the new principal switch;
- wherein the device is separate from the plurality of nodes and is not a portion of the cluster.

(Dependent claims 8, 9, 10, 11 not shown.)
12. A method for managing cluster membership, comprising:
- forwarding, by a local switch, data messages addressed from a node of a plurality of nodes that form a cluster to another of the plurality of nodes;
- transferring, by the local switch, to a processor within the local switch a cluster membership message addressed from the node;
- preparing, by the local switch, the cluster membership message for transmission if the local switch determines that the cluster membership message needs to be forwarded to a principal switch; and
- transitioning, by the local switch, to act as a newly selected principal switch if the local switch detects that it has been so selected and that the current principal switch has failed;
- wherein the local switch is separate from the plurality of nodes and is not a portion of the cluster.

(Dependent claims 13, 14, 15, 16, 17 not shown.)
18. A method for managing cluster membership, comprising:
- forwarding, by a local switch, data messages addressed from a node of a plurality of nodes that form a cluster to another of the plurality of nodes;
- transferring, by the local switch, to a processor within the local switch a cluster membership message addressed from the node;
- preparing, by the local switch, the cluster membership message for transmission if the local switch determines that the cluster membership message needs to be forwarded to the principal switch;
- detecting, by the local switch, a failure of the principal switch;
- participating, by the local switch, in the selection of a new principal switch;
- transferring, by the new principal switch, to the processor forwarded cluster membership messages addressed to the new principal switch, if the local switch is selected as the new principal switch; and
- controlling, by the new principal switch, cluster membership based on the forwarded cluster membership messages, if the local switch is selected as the new principal switch;
- wherein the local switch is separate from the plurality of nodes and is not a portion of the cluster.

(Dependent claims 19, 20, 21, 22, 23 not shown.)
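The failure-detection and re-selection steps recited in the method claims can be sketched with a simple deterministic rule. Real Fibre Channel fabrics select a principal switch by priority and worldwide name; the sketch below instead assumes, purely for illustration, that the surviving switch with the lowest identifier wins.

```python
def select_new_principal(switches, failed):
    """Re-election sketch (a simplification of fabric principal-switch
    selection): the surviving switch with the lowest identifier becomes
    the new principal."""
    survivors = [s for s in switches if s != failed]
    return min(survivors)


fabric = ["sw3", "sw1", "sw2"]
new_principal = select_new_principal(fabric, failed="sw1")
print(new_principal)   # prints: sw2
```

Any deterministic rule works for the sketch, since every surviving switch must independently reach the same answer for exactly one of them to transition into the principal role.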
Specification