Levels of Availability

Stand-alone instance

This is a basic single-node installation of the Hardware Appliance. If the node fails, a replacement Hardware Appliance must be installed and restored from a backup; any data written between the latest backup and the failure is lost. If a cold standby (spare) Hardware Appliance is not available, the delivery time of a new unit must also be factored into the acceptable downtime.

Hot standby with manual failover

In this setup, two nodes are connected as a cluster, where the first installed node has a higher quorum vote weight than the second node.

If the second node fails, the first node continues operating while the second node is set into maintenance. If the first node fails, the second node ceases to operate and is set into maintenance. Bringing the second node back into service requires manual interaction via the Hardware Appliance administrative interface (WebConf).

This manual interaction is required to avoid data loss: the second node should only be Forced into Active if the first node is truly dead and will be replaced.
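
To illustrate why the surviving second node cannot keep the cluster running on its own, the following minimal sketch works through the quorum arithmetic for a two-node cluster. It assumes that the Weight = 128 − NodeNumber scheme described for the high-availability setup below also applies here, and that quorum requires a strict majority of all votes; the helper functions are illustrative and not part of the Hardware Appliance.

    def weight(node_number):
        # Assumed weight scheme: Weight = 128 - NodeNumber
        return 128 - node_number

    total_votes = weight(1) + weight(2)        # 127 + 126 = 253

    def has_quorum(surviving_nodes):
        votes = sum(weight(n) for n in surviving_nodes)
        return votes * 2 > total_votes         # assumed strict-majority rule

    print(has_quorum([1]))   # True  -- node 1 alone (127 votes) keeps quorum
    print(has_quorum([2]))   # False -- node 2 alone (126 votes) halts; it must be Forced into Active manually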

High availability with automatic failover

This is a setup with three or more nodes. If a node fails, the remaining nodes can still form a cluster through a majority quorum vote and continue to operate. If the failed Hardware Appliance is still switched on, it is set into maintenance.

To ensure that quorum votes never result in a tie, each node is assigned a unique quorum vote weight derived from its node number (Weight = 128 − NodeNumber).
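
As a minimal sketch of how these weights behave, the snippet below assigns weights to a three-node cluster and checks that every possible single-node failure leaves the survivors with a majority. The strict-majority rule and the helper code are assumptions for illustration, not the appliance's actual implementation.

    from itertools import combinations

    # Illustrative weight assignment for a three-node cluster: Weight = 128 - NodeNumber.
    nodes = [1, 2, 3]
    weights = {n: 128 - n for n in nodes}            # {1: 127, 2: 126, 3: 125}
    total = sum(weights.values())                    # 378

    # Assumed strict-majority rule: the surviving nodes keep quorum if their
    # combined votes exceed half of the total votes.
    for survivors in combinations(nodes, 2):
        votes = sum(weights[n] for n in survivors)
        print(survivors, votes, votes * 2 > total)   # every two-node subset keeps quorum

    # No subset of {127, 126, 125} sums to exactly half of 378, so a tied
    # vote cannot occur in this example.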

In a setup where an even number of nodes N is distributed equally over two sites, the site that is intended to remain Active if connectivity between the sites fails should have a larger sum of quorum vote weights than that of the other site. Since cluster nodes with lower node numbers have higher weights, deploy nodes 1 to N/2 on the primary site.
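
As an illustration of this placement rule, the sketch below works through N = 4 nodes split over two sites, again assuming Weight = 128 − NodeNumber and a strict-majority quorum rule; the variable names are illustrative only.

    # Two sites, N = 4 nodes, Weight = 128 - NodeNumber (illustrative values).
    primary_site = [1, 2]          # lower node numbers carry higher weights
    secondary_site = [3, 4]

    def weight(node_number):
        return 128 - node_number

    primary_votes = sum(weight(n) for n in primary_site)      # 127 + 126 = 253
    secondary_votes = sum(weight(n) for n in secondary_site)  # 125 + 124 = 249
    total = primary_votes + secondary_votes                   # 502

    # Assumed strict-majority rule: a site keeps quorum only if its votes
    # exceed half of the total votes.
    print(primary_votes * 2 > total)     # True  -- primary site remains Active
    print(secondary_votes * 2 > total)   # False -- secondary site is set into maintenance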