Azure Node Clusters
Three Node Cluster Information
The cluster implementation used for Galera replication uses regular network connectivity over the main instance interface for all cluster communication. This means that cluster nodes do not have to be placed physically close to each other, as long as they have good network connectivity between them.
However, this also means that a node cannot distinguish between the failure of another node and broken network connectivity to that node. To avoid a situation where cluster nodes operate independently and end up with diverging data sets (a split-brain situation), the nodes take a vote and cease to operate unless they are part of the majority of connected nodes. This ensures that only one data set is allowed to be updated at a time. In the case of a temporary network failure, disconnected nodes can easily synchronize to the majority's data set and continue to operate.
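Whether a given node is currently part of the majority (Primary) component can be checked through the Galera status variables wsrep_cluster_status and wsrep_cluster_size. The following is a minimal monitoring sketch, assuming a Python environment with the PyMySQL client; the host, user, and password values are placeholders.

# Minimal sketch: query Galera status variables on a node to verify that it is
# part of the Primary (majority) component. Connection details are placeholders.
import pymysql

def galera_node_is_primary(host, user, password):
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'")
            _, cluster_status = cur.fetchone()   # 'Primary' while in the majority component
            cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'")
            _, cluster_size = cur.fetchone()     # number of currently connected nodes
        return cluster_status == 'Primary', int(cluster_size)
    finally:
        conn.close()

if __name__ == '__main__':
    is_primary, size = galera_node_is_primary('10.0.0.11', 'monitor', 'changeit')
    print('Primary component:', is_primary, '- connected nodes:', size)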
This setup requires three or more nodes. In the case of a node failure, the remaining nodes are still able to form a cluster through a majority quorum vote and continue to operate.
The first cluster node always has a slightly higher quorum vote than the rest of the nodes. In a setup with an even number of nodes (4 or more) divided over two sites, the site that holds the first node will continue to operate if the connectivity between the sites fails.
Two Node Clusters
Galera recommends three nodes to avoid a split-brain situation. If only two instances are chosen, which is not recommended, make sure that only one of them is written to and acts as the primary, while the other is kept for disaster recovery (DR) purposes. An arbitrator can also be configured to avoid split-brain. For more information, refer to the Galera Documentation on Galera Arbitrator.
There is no real high availability in a two-node cluster. If one of the nodes leaves the cluster ungracefully, the database on the remaining node is taken offline. Two-node clusters provide redundancy rather than availability and require manual intervention to become functional again after a failure.
For more information, refer to the Galera Documentation on Two-node Clusters.
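To make the quorum behavior described in the sections above concrete, the sketch below applies Galera's rule that a partition only keeps quorum if its combined weight is strictly greater than half of the weight of the previous Primary component. The weight values are illustrative assumptions, not the exact weights used by EJBCA Cloud.

# Illustrative sketch of Galera's weighted quorum rule: a partition remains the
# Primary component only if its combined weight is strictly greater than half of
# the total weight of the previous Primary component. The weights are assumptions.
def has_quorum(partition_weights, previous_total_weight):
    return sum(partition_weights) > previous_total_weight / 2.0

# Three equal nodes, one fails: the two remaining nodes keep quorum.
print(has_quorum([1, 1], previous_total_weight=3))   # True

# Two equal nodes, one leaves ungracefully: the survivor holds exactly half, so no quorum.
print(has_quorum([1], previous_total_weight=2))      # False

# Four nodes split evenly over two sites, with the first node weighted slightly higher:
# the site holding the first node keeps quorum, the other site does not.
print(has_quorum([2, 1], previous_total_weight=5))   # True  (site with the first node)
print(has_quorum([1, 1], previous_total_weight=5))   # False (the other site)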
Continuous Service Availability
To ensure that service clients always connect to an operational node in the cluster, an external load balancer should be used for automatic failover and/or load distribution.
If a custom application is being developed to consume the services provided by EJBCA Enterprise Cloud's external interfaces, this can also be handled by having the custom application connect to any of the nodes that is found to be operational.
If lower availability and manual intervention are acceptable in the event of a node failure, this can also be solved by redirecting a DNS name to the service.
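As an illustration of the custom-application approach mentioned above, the sketch below probes each cluster node over the standard EJBCA health check URL and uses the first node that reports healthy. It assumes a Python environment with the requests library; the node addresses are placeholders, and the health check endpoint is expected to answer ALLOK when the node is operational.

# Minimal sketch: select the first operational node from a list of cluster nodes
# by probing the EJBCA health check URL. Node addresses are placeholders.
import requests

NODES = [
    'https://node1.example.com',
    'https://node2.example.com',
    'https://node3.example.com',
]
HEALTHCHECK_PATH = '/ejbca/publicweb/healthcheck/ejbcahealth'

def first_operational_node(nodes):
    for node in nodes:
        try:
            response = requests.get(node + HEALTHCHECK_PATH, timeout=5)
            if response.status_code == 200 and 'ALLOK' in response.text:
                return node
        except requests.RequestException:
            continue  # node unreachable or not responding, try the next one
    raise RuntimeError('No operational cluster node found')

if __name__ == '__main__':
    print('Using node:', first_operational_node(NODES))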