Demo Deployment Script
The included deploy script (keysafe5-k8s/deploy.sh) provides a quick-start installer for deploying KeySafe 5 to Kubernetes on a Linux machine.
Overview and prerequisites
The following steps provide a quick-start guide to installing KeySafe 5 and its dependencies using the provided deploy script for evaluation purposes. Please refer to Manual Install for full installation instructions.
The script is designed to be run on UNIX/Linux based systems by a non-root user.
The script may call sudo as required.
| These steps install KeySafe 5 and its dependencies. See Hardening The Deployment for steps to harden the deployment. Entrust recommends these steps as a minimum; additional hardening may be required depending on your own requirements. A production deployment should include, at a minimum, the hardening described there. |
The script requires a local installation of Docker or Podman.
When using podman on Red Hat Enterprise Linux, you should install the podman-docker package to provide the Docker alias.
The user executing the deploy script must be able to successfully execute docker info.
If this is not the case, please consult the appropriate documentation for your platform, Docker Documentation or Podman Documentation.
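As a quick pre-check, the docker info requirement can be scripted. The following is a sketch, not part of the release; runtime_ok is a hypothetical helper name:

```shell
# Sketch: check whether the current user can reach the container runtime.
# runtime_ok is a hypothetical helper, not part of the release.
runtime_ok() {
    docker info >/dev/null 2>&1
}

if runtime_ok; then
    echo "container runtime reachable"
else
    echo "cannot run 'docker info'; check your Docker/Podman setup" >&2
fi
```

With podman-docker installed, the same check works unchanged against Podman, since docker resolves to the Podman alias.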
This release includes Docker images that need to be pushed to a Docker registry. If you have a private registry you may push the images from a different machine.
Unpack the release
mkdir keysafe5-1.5.0
tar -xf nshield-keysafe5-1.5.0.tar.gz -C keysafe5-1.5.0
cd keysafe5-1.5.0/keysafe5-k8s
The user executing the deploy.sh script must have permission to read and write files within the keysafe5-1.5.0 directory.
This will automatically be the case if the user extracting the release package is the same user that executes the deploy script.
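A minimal sketch of that permission check (check_rw is a hypothetical helper, not part of the release):

```shell
# Sketch: confirm the current user can read and write a directory.
# check_rw is a hypothetical helper, not part of the release.
check_rw() {
    [ -r "$1" ] && [ -w "$1" ]
}

if check_rw keysafe5-1.5.0 2>/dev/null; then
    echo "keysafe5-1.5.0 is readable and writable"
else
    echo "fix ownership/permissions on keysafe5-1.5.0 before running deploy.sh" >&2
fi
```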
If there is existing infrastructure that you would like to use when installing KeySafe 5 via the deployment script, continue with Existing infrastructure. Otherwise, skip to Authentication and proceed from there.
Existing infrastructure
Docker images
If you have a private registry you may push the images to it like so:
# Load the Docker images to your local Docker
docker load < docker-images/agent-mgmt.tar
docker load < docker-images/codesafe-mgmt.tar
docker load < docker-images/hsm-mgmt.tar
docker load < docker-images/mongodb.tar
docker load < docker-images/nginx.tar
docker load < docker-images/sw-mgmt.tar
docker load < docker-images/ui.tar
| There is no need to load the mongodb or nginx images if an external MongoDB instance is being used. |
# Define the private registry location (the images below are pushed under a keysafe5/ prefix)
export DOCKER_REGISTRY=private.registry.local/my_space
# Tag the Docker images for a private registry
docker tag agent-mgmt:1.5.0 $DOCKER_REGISTRY/keysafe5/agent-mgmt:1.5.0
docker tag codesafe-mgmt:1.5.0 $DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.5.0
docker tag hsm-mgmt:1.5.0 $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.5.0
docker tag mgmt-ui:1.5.0 $DOCKER_REGISTRY/keysafe5/mgmt-ui:1.5.0
docker tag sw-mgmt:1.5.0 $DOCKER_REGISTRY/keysafe5/sw-mgmt:1.5.0
docker tag bitnami/mongodb:8.0.13-debian-12-r0-2025-09-15 $DOCKER_REGISTRY/keysafe5/bitnami/mongodb:8.0.13-debian-12-r0-2025-09-15
docker tag bitnami/nginx:1.29.1-debian-12-r0-2025-09-15 $DOCKER_REGISTRY/keysafe5/bitnami/nginx:1.29.1-debian-12-r0-2025-09-15
# Log in to ensure pushes succeed
docker login private.registry.local
# And push
docker push $DOCKER_REGISTRY/keysafe5/agent-mgmt:1.5.0
docker push $DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.5.0
docker push $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.5.0
docker push $DOCKER_REGISTRY/keysafe5/mgmt-ui:1.5.0
docker push $DOCKER_REGISTRY/keysafe5/sw-mgmt:1.5.0
docker push $DOCKER_REGISTRY/keysafe5/bitnami/mongodb:8.0.13-debian-12-r0-2025-09-15
docker push $DOCKER_REGISTRY/keysafe5/bitnami/nginx:1.29.1-debian-12-r0-2025-09-15
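The tag-and-push sequence above can also be generated with a short loop. The sketch below prints the commands as a dry run (pipe the output to sh to execute); tagpush_cmds is a hypothetical helper, and the Bitnami mongodb and nginx images, which use different tags, are handled separately as shown above:

```shell
# Sketch: print docker tag/push commands for the KeySafe 5 service images.
# Dry run only; pipe the output to 'sh' to execute the commands.
tagpush_cmds() {
    registry="$1"
    for img in agent-mgmt codesafe-mgmt hsm-mgmt mgmt-ui sw-mgmt; do
        echo docker tag "${img}:1.5.0" "${registry}/keysafe5/${img}:1.5.0"
        echo docker push "${registry}/keysafe5/${img}:1.5.0"
    done
}

tagpush_cmds "private.registry.local/my_space"
```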
If the DOCKER_REGISTRY environment variable is set, the deploy script will pull images from that registry.
Otherwise, the deploy script will set up a local insecure Docker registry; in this case, ensure that Docker is installed.
Kubernetes
If you have a Kubernetes cluster available, ensure that kubectl points to it, and that kubectl get pods -A returns a list of pods.
Otherwise the deploy script will install K3s locally to /usr/local/bin and create ${HOME}/.kube/config pointing to it.
Kubernetes tools use the KUBECONFIG environment variable to locate their configuration file; when unset, it defaults to ${HOME}/.kube/config.
For this setting to persist it needs to be added to your shell’s configuration file.
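The fallback described above can be expressed as a one-line default, using standard shell parameter expansion (a sketch, not code from the release):

```shell
# KUBECONFIG falls back to ${HOME}/.kube/config when unset or empty.
KUBECONFIG="${KUBECONFIG:-${HOME}/.kube/config}"
echo "using kubeconfig: ${KUBECONFIG}"
```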
Object Storage
If you do not have an existing Kubernetes cluster, or if your cluster contains only 1 worker node, the deploy script will use local storage on the single worker node.
If you would like to use an NFS share for large object storage, set the environment variable NFS_IP to the NFS server address and NFS_PATH to the path of the directory exported from the NFS server.
To set the user and group IDs used by the KeySafe 5 application when accessing the object storage, configure the podSecurityContext.runAsUser, podSecurityContext.runAsGroup and podSecurityContext.fsGroup Chart parameters.
To do this, specify the environment variable KEYSAFE_BACKEND_CHART_EXTRA_ARGS.
For example, KEYSAFE_BACKEND_CHART_EXTRA_ARGS="--set podSecurityContext.runAsUser=2000 --set podSecurityContext.runAsGroup=3000".
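Putting the object-storage settings together, an example environment might look like the following. All values are hypothetical placeholders; substitute your own NFS address, export path, and IDs:

```shell
# Hypothetical example values; adjust for your environment.
export NFS_IP=192.0.2.10          # example NFS server address
export NFS_PATH=/export/keysafe5  # example exported directory
export KEYSAFE_BACKEND_CHART_EXTRA_ARGS="--set podSecurityContext.runAsUser=2000 --set podSecurityContext.runAsGroup=3000 --set podSecurityContext.fsGroup=3000"
```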
Istio
If you have Istio installed, ensure that istioctl is on your path.
Otherwise the deploy script will download a local copy of istioctl, and install Istio as required.
MongoDB
Entrust recommends that you use your standard secure MongoDB Replica Set installation.
If you have an existing MongoDB deployment:
- Set the environment variable MONGODB to a backslash-comma separated list of servers, along with their port numbers, in the form: mongo-1.example.com:27017\,mongo-2.example.com:27017. The backslash should be visible when running echo $MONGODB. A quick tip: using single quotes (') will prevent the bash command line acting on the backslash you have typed.
- You will also need to create a Kubernetes generic secret in the nshieldkeysafe5 namespace with ca.crt, tls.crt, and tls.key for a user that has readWrite roles on the databases agent-mgmt-db, codesafe-mgmt-db, hsm-mgmt-db, and sw-mgmt-db. Set MONGO_SECRETS to the name of this generic secret.
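The quoting detail can be verified directly. In this sketch, single quotes keep the backslash literal, so echo $MONGODB shows it exactly as typed:

```shell
# Single quotes preserve the literal backslash separating the servers.
export MONGODB='mongo-1.example.com:27017\,mongo-2.example.com:27017'
echo "$MONGODB"
```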
If you do not have an existing MongoDB deployment, the deploy script will set one up on the Kubernetes cluster along with the secrets for both the server and backend services.
| MongoDB 5.0 and newer requires use of the AVX instruction set for processors. For more information, see MongoDB Production Notes |
Authentication
To disable OIDC authentication, set the environment variable DISABLE_AUTHENTICATION to yes, and you may move on to Install KeySafe 5.
To configure authentication for Istio, the environment variable AUTH_ISSUER_URL needs to point at the issuer URL.
Additionally, either AUTH_JWKS (for the payload) or AUTH_JWKS_URL (for the URL) also needs to be set.
AUTH_AUDIENCES should be a comma-delimited list.
The deploy script will automatically add the fully qualified domain name for the host to this list if not already present.
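That append behaviour can be sketched as a small shell function. This is an illustration only; append_audience is a hypothetical name, and the deploy script's actual implementation may differ:

```shell
# Append an FQDN to a comma-delimited audience list if not already present.
# append_audience is a hypothetical helper, not part of the release.
append_audience() {
    list="$1"
    fqdn="$2"
    case ",${list}," in
        *",${fqdn},"*) printf '%s\n' "${list}" ;;
        *)             printf '%s\n' "${list:+${list},}${fqdn}" ;;
    esac
}

append_audience "keysafe5-api" "host.example.com"   # prints keysafe5-api,host.example.com
```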
For UI authentication the deploy script requires an OIDCProviders.json file.
Its location should be set in the environment variable OIDC_PROVIDERS_FILE_LOCATION.
Further details on configuring authentication for KeySafe 5 can be found in the Helm Chart Installation section of the KeySafe 5 Installation Guide.
Legacy KeySafe 5 agent support
By default, this KeySafe 5 central platform deployment will only be able to communicate with version 1.5 or later KeySafe 5 Agents.
If you want your deployment to be able to communicate with legacy (1.4 or earlier) KeySafe 5 Agents then you must set the environment variable AGENT_COMPATIBILITY to 1.
Install KeySafe 5
It is now possible to run the deploy script.
| Do not run the deploy script under sudo. |
The deploy script must be run from inside the directory to which it is extracted.
Running with the -n flag performs a set of pre-flight checks, shows what would happen, and then exits without taking any action.
./deploy.sh -n
To disable authentication, set the environment variable DISABLE_AUTHENTICATION to yes.
Otherwise, follow the instructions in the Authentication section.
You may now perform the deployment with the -y flag.
./deploy.sh -y
The script will take a few minutes to run, showing what actions are taking place.
You may be prompted for your password by sudo, for example when installing K3s.
The script will create a local insecure Certificate Authority, which is used by the agentcert.sh and updateinternalcerts.sh scripts.
Preserve the CA directory so that these scripts can continue to issue certificates.
The script will also produce two archives, agent-config.tar.gz (for Unix) and agent-config.zip (for Windows), that contain the agent configuration file.
The contents are used for configuring nShield client machines below.
K3s
If the deploy.sh script has installed K3s, you should configure kubectl access for the current user account by running:
mkdir -p ${HOME}/.kube
sudo /usr/local/bin/k3s kubectl config view --raw > ${HOME}/.kube/config
chmod 600 ${HOME}/.kube/config
export KUBECONFIG=${HOME}/.kube/config
This step is necessary for any user account on this machine that needs to administer the KeySafe 5 central platform.
You may append the export KUBECONFIG=${HOME}/.kube/config to your shell’s configuration file.
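For example, the following appends the export idempotently (assuming bash; use the equivalent startup file for your shell):

```shell
# Add the KUBECONFIG export to ~/.bashrc only if it is not already there.
line='export KUBECONFIG=${HOME}/.kube/config'
grep -qxF "$line" "${HOME}/.bashrc" 2>/dev/null || echo "$line" >> "${HOME}/.bashrc"
```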
Configure nShield Client Machines
In summary, to configure your nShield client machine to be managed and monitored by this deployment:
- Install the KeySafe 5 agent on the nShield client machine containing the relevant Security World or HSMs.
- Extract the agent-config.tar.gz or agent-config.zip archive on the nShield client machine alongside the KeySafe 5 agent.
- Generate a unique private key and client CSR using the provided ks5agenttls for each individual KeySafe 5 agent.
- Copy the client CSR to the central platform and sign it with the CA.
- Copy the certificate to the nShield client machine, and place it alongside the KeySafe 5 agent configuration.
The steps to install and configure vary depending on the client. See Agent Configuration for more details.
Uninstall
If Kubernetes was not provided and K3s was installed by the deploy script, you can simply uninstall K3s, which will clean up all the installed Helm charts.
/usr/local/bin/k3s-uninstall.sh
This will request sudo permissions.
If a private Docker Registry was not provided, the deploy script will have created a local one and it will be removed when the script finishes. Should this fail, you may uninstall it manually by running:
docker stop registry
docker rm registry
If an existing Kubernetes installation was provided, the Helm charts need to be uninstalled individually.
helm --namespace nshieldkeysafe5 uninstall keysafe5-istio
helm --namespace nshieldkeysafe5 uninstall keysafe5-backend
helm --namespace nshieldkeysafe5 uninstall keysafe5-ui
If an existing MongoDB installation was not provided, then the deploy script will have installed a MongoDB helm chart that should be uninstalled.
helm --namespace mongons uninstall mongo-chart
If Istio was installed by the deploy script, it may be uninstalled by running:
keysafe5-1.5.0/istioctl uninstall --purge
To uninstall the KeySafe 5 agent, run the KeySafe 5 uninstaller:
sudo /opt/nfast/keysafe5/sbin/install -u
Finally, secrets and PVCs can be deleted:
kubectl --namespace=nshieldkeysafe5 delete secret agentcomms-server-certificates
kubectl --namespace=nshieldkeysafe5 delete secret agentcomms-client-certificates
kubectl --namespace=nshieldkeysafe5 delete pvc data-nshield-keysafe5