- Hardware Appliance Unboxing
- Initial Set-up
- Restore from Backup
- Connect to Cluster
- Using External CA for Installation
- Basic Hardware Operations
- WebConf - Configurator of Hardware Appliance
- Certificates and Trusted CAs
- Setting up a Validation Authority (VA)
- HA Setup
- PKCS#11 Slot Smart Card Activation
- EJBCA Administration
- Certificate Life Cycle Management
- Creating CA Hierarchy
- Step 1: Create the RootCA
- Step 2: Create Certificate Profile for SubCAs
- Step 3: Create End Entity Profile for SubCAs
- Step 4: Import RootCA as External CA in Node A
- Step 5: Create SignCA as SubCA in Node A
- Step 6: Create AuthCA as SubCA in Node A
- Step 7: Create SSLCA as SubCA in Node A
- Step 8: Create Certificate Profiles for End Entities that use the SubCAs
- Step 9: Create End Entity Profiles for SubCAs
- Step 10: Create End Entities that use the SubCAs
- Managing End Entities
- Creating Java Truststore
- Check for Weak Debian Keys
- Hardware Appliance 3.5.4 Release Notes
- Hardware Appliance 3.5.3 Release Notes
- Hardware Appliance 3.5.2 Release Notes
- Hardware Appliance 3.5.1 Release Notes
- Hardware Appliance 3.5.0 Release Notes
- PKI Appliance 3.4.5 Release Notes
- PKI Appliance 3.4.4 Release Notes
- PKI Appliance 3.4.3 Release Notes
- PKI Appliance 3.4.2 Release Notes
- PKI Appliance 3.4.1 Release Notes
- Release Notes Summary
- Hardware Appliance 3.5.X Upgrade Notes
Levels of Availability
This is a basic single-node installation of the Hardware Appliance. In case of a node failure, a new Hardware Appliance needs to be reinstalled from a backup, and all data between the time of the latest backup and the failure is lost. If a cold standby (spare) Hardware Appliance is not available, the delivery time of a new box must be taken into account when calculating the acceptable downtime.
Hot standby with manual failover
In this setup, two nodes are connected as a cluster where the first installed node has a higher quorum vote than the second node.
If the second node fails, the first node continues operating while the second node is set into maintenance. If the first node fails, the second node ceases to operate and is also set into maintenance. Bringing the second node back into service requires manual interaction via the Hardware Appliance administrative interface (WebConf).
This manual interaction is required to avoid data loss: the second node should only be Forced into Active if the first node really is dead and will be replaced.
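The asymmetric quorum votes described above can be sketched as follows. This is an illustrative model only, not the Appliance's actual implementation; the function name and vote values are assumptions. A partition of the cluster keeps quorum only if it holds a strict majority of the total vote weight, which is why the node with the higher vote survives alone while the other does not.

```python
# Illustrative model of the hot-standby quorum rule (not the real implementation).

def has_quorum(own_weight: int, reachable_weights: list[int], total_weight: int) -> bool:
    """A partition keeps quorum only with a strict majority of the total vote weight."""
    return (own_weight + sum(reachable_weights)) * 2 > total_weight

NODE1, NODE2 = 2, 1            # node 1 carries the higher quorum vote (values illustrative)
TOTAL = NODE1 + NODE2

# Second node fails: node 1 alone still holds a strict majority and keeps serving.
assert has_quorum(NODE1, [], TOTAL)
# First node fails: node 2 alone falls below the majority, so it stops serving
# and stays in maintenance until an operator forces it into Active via WebConf.
assert not has_quorum(NODE2, [], TOTAL)
```

Because the two votes are unequal, a split between the two nodes can never end in a tie: exactly one side of any partition holds the majority.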
High availability with automatic failover
This is a setup with three or more nodes. In case of a node failure, the remaining nodes can still form a cluster through a majority quorum vote and continue to operate. If the failed Hardware Appliance is still switched on, it will be set into maintenance.
To ensure that quorum votes never result in a tie, all nodes are assigned unique quorum vote weights according to their assigned node number (Weight=128−NodeNumber).
In a setup where an even number of nodes N are distributed equally over two sites, the site that is intended to remain Active if connectivity between the sites fails should have a larger sum of quorum vote weights than that of the other site. Since cluster nodes with lower node numbers have higher weights you should deploy nodes 1 to N/2 on the primary site.
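The weight formula and site-placement rule above can be checked with a few lines of arithmetic. This is a sketch based solely on the Weight = 128 − NodeNumber formula stated in the text; the cluster size and helper names are assumptions for illustration.

```python
# Sketch of the vote-weight scheme (Weight = 128 - NodeNumber) for an even
# cluster split equally over two sites, with nodes 1..N/2 on the primary site.

def weight(node_number: int) -> int:
    # Formula from the text: lower node numbers get higher weights.
    return 128 - node_number

N = 4                                                      # e.g. 2 nodes per site
primary = [weight(n) for n in range(1, N // 2 + 1)]        # nodes 1..N/2
secondary = [weight(n) for n in range(N // 2 + 1, N + 1)]  # nodes N/2+1..N

# The primary site's weight sum is strictly larger (127+126 = 253 > 125+124 = 249),
# so if the inter-site link fails, only the primary side holds a strict majority:
total = sum(primary) + sum(secondary)
assert sum(primary) > sum(secondary)
assert 2 * sum(primary) > total and 2 * sum(secondary) < total
```

With this placement, a site-to-site connectivity failure cannot produce a tie: the primary site always retains quorum and the secondary site always enters maintenance.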