For the Hardware Appliance, availability is defined as the ability to maintain service, with full data integrity, for applications running on the Hardware Appliance that use the internal SQL database.
This is a basic single-node installation of the Hardware Appliance. In the event of a node failure, a new Hardware Appliance must be reinstalled from a backup, and all data written between the last backup and the failure is lost. If no cold standby (spare) Hardware Appliance is available, the time needed to deliver a replacement unit must also be factored into the acceptable downtime.
In this setup, two nodes are connected as a cluster, where the first installed node has a higher quorum vote weight than the second node.
If the second node fails, the first node will continue to operate and the second node will be put into maintenance.
If the first node fails, the second node will be taken out of service and put into maintenance. Bringing the second node back into service requires manual interaction via the Hardware Appliance management interface (WebConf).
This manual interaction is required to avoid data loss: the second node should only be forced into Active if the first node has actually failed and is being replaced.
This is a setup of three or more nodes. In the event of a node failure, the remaining nodes can still form a cluster by majority quorum and continue to operate. If the failed Hardware Appliance is still powered on, it will be put into maintenance.
To ensure that quorum votes never result in a tie, each node is assigned a unique quorum vote weight based on its node number (Weight = 128 − NodeNumber).
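As a minimal sketch of how the weighting formula combines with the majority quorum rule, the short Python example below computes the vote weights and majority threshold for a hypothetical three-node cluster; the function name and the example cluster size are illustrative assumptions, not part of the product.

```python
def quorum_weight(node_number: int) -> int:
    """Quorum vote weight of a node, per Weight = 128 - NodeNumber."""
    return 128 - node_number

# Illustrative three-node cluster.
all_nodes = (1, 2, 3)
total_votes = sum(quorum_weight(n) for n in all_nodes)      # 127 + 126 + 125 = 378
majority = total_votes // 2 + 1                              # 190

# If node 1 fails, the two surviving nodes still hold a majority of the
# total votes, so they can form a cluster and continue to operate.
surviving = (2, 3)
surviving_votes = sum(quorum_weight(n) for n in surviving)   # 126 + 125 = 251
print(surviving_votes >= majority)                           # True
```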
In a configuration where N nodes are distributed evenly across two sites, the site that should remain active when the link between the sites fails must have the greater sum of quorum vote weights. Because nodes with lower node numbers have higher weights, deploy nodes 1 through N/2 at the primary site.
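The following worked example illustrates this rule for an assumed four-node cluster split evenly across two sites; the site names and node layout are hypothetical and chosen only to show how the vote sums compare.

```python
def quorum_weight(node_number: int) -> int:
    """Quorum vote weight of a node, per Weight = 128 - NodeNumber."""
    return 128 - node_number

# Hypothetical four-node cluster split evenly across two sites, with the
# lower-numbered nodes deployed at the primary site as recommended above.
primary_site = (1, 2)    # weights 127 + 126 = 253
secondary_site = (3, 4)  # weights 125 + 124 = 249

total = sum(quorum_weight(n) for n in primary_site + secondary_site)  # 502
majority = total // 2 + 1                                             # 252

primary_votes = sum(quorum_weight(n) for n in primary_site)
secondary_votes = sum(quorum_weight(n) for n in secondary_site)

# If the inter-site link fails, only the primary site reaches a majority
# of the quorum votes, so it remains active; the secondary site loses
# quorum and goes out of service.
print(primary_votes >= majority)    # True  (253 >= 252)
print(secondary_votes >= majority)  # False (249 <  252)
```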