Manual Install
This section provides a step-by-step guide to installing KeySafe 5 and its dependencies into an existing Kubernetes cluster.
An alternative to this guide is the KeySafe 5 Quick Start Guide, which provides a scripted means of installing KeySafe 5.
These steps install KeySafe 5 and its dependencies to set up a demo environment for evaluation purposes; they should not be used for production environments. See Hardening The Deployment for steps to harden the deployment. Entrust recommends those hardening steps as a minimum; additional hardening may be required depending on your own requirements. A production deployment will have, as a minimum, the following:
These commands are an example of how to install KeySafe 5. They may need modification to suit your environment.
Unpack the release
mkdir ~/keysafe5-1.6.1
tar -xf nshield-keysafe5-1.6.1.tar.gz -C ~/keysafe5-1.6.1
cd ~/keysafe5-1.6.1/keysafe5-k8s
Docker images
The Docker images need to be loaded onto a Docker registry that each node in your Kubernetes cluster can pull the images from.
- Load the Docker images to your local Docker, for example:
docker load < docker-images/agent-mgmt.tar
docker load < docker-images/alert-manager-sidecar.tar
docker load < docker-images/alertmanager.tar
docker load < docker-images/codesafe-mgmt.tar
docker load < docker-images/hsm-mgmt.tar
docker load < docker-images/licence-mgmt.tar
docker load < docker-images/mongodb.tar
docker load < docker-images/monitoring-mgmt.tar
docker load < docker-images/nginx.tar
docker load < docker-images/prometheus.tar
docker load < docker-images/sw-mgmt.tar
docker load < docker-images/ui.tar
- Set the DOCKER_REGISTRY variable to the registry in use, for example:
export DOCKER_REGISTRY=localhost:5000
If you are using a single-machine Kubernetes distribution like K3s, you may be able to create a simple unauthenticated local private Docker registry by following the instructions in Distribution Registry (see the sketch after this list). However, this registry is only accessible by setting the name to localhost, which will not work for other configurations.
- Log in to the registry to ensure that you can push to it:
docker login $DOCKER_REGISTRY
- Tag the Docker images for the registry, for example:
docker tag agent-mgmt:1.6.1 $DOCKER_REGISTRY/keysafe5/agent-mgmt:1.6.1
docker tag alert-manager-sidecar:1.6.1 $DOCKER_REGISTRY/keysafe5/alert-manager-sidecar:1.6.1
docker tag alertmanager:v0.28.1 $DOCKER_REGISTRY/keysafe5/alertmanager:v0.28.1
docker tag codesafe-mgmt:1.6.1 $DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.6.1
docker tag hsm-mgmt:1.6.1 $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.6.1
docker tag licence-mgmt:1.6.1 $DOCKER_REGISTRY/keysafe5/licence-mgmt:1.6.1
docker tag mgmt-ui:1.6.1 $DOCKER_REGISTRY/keysafe5/mgmt-ui:1.6.1
docker tag monitoring-mgmt:1.6.1 $DOCKER_REGISTRY/keysafe5/monitoring-mgmt:1.6.1
docker tag prometheus:v3.6.0-rc.0 $DOCKER_REGISTRY/keysafe5/prometheus:v3.6.0-rc.0
docker tag sw-mgmt:1.6.1 $DOCKER_REGISTRY/keysafe5/sw-mgmt:1.6.1
docker tag bitnami/mongodb:8.0.13-debian-12-r0-2025-09-15 $DOCKER_REGISTRY/keysafe5/bitnami/mongodb:8.0.13-debian-12-r0-2025-09-15
docker tag bitnami/nginx:1.29.1-debian-12-r0-2025-09-15 $DOCKER_REGISTRY/keysafe5/bitnami/nginx:1.29.1-debian-12-r0-2025-09-15
- Push the KeySafe 5 images to the registry, for example:
docker push $DOCKER_REGISTRY/keysafe5/agent-mgmt:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/alertmanager:v0.28.1
docker push $DOCKER_REGISTRY/keysafe5/alert-manager-sidecar:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/licence-mgmt:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/mgmt-ui:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/monitoring-mgmt:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/prometheus:v3.6.0-rc.0
docker push $DOCKER_REGISTRY/keysafe5/sw-mgmt:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/bitnami/mongodb:8.0.13-debian-12-r0-2025-09-15
docker push $DOCKER_REGISTRY/keysafe5/bitnami/nginx:1.29.1-debian-12-r0-2025-09-15
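If you need a throwaway local registry for a single-node evaluation cluster (as mentioned in the DOCKER_REGISTRY step above), a minimal sketch is shown here; running the Distribution Registry image without authentication or TLS on port 5000 is an assumption suitable only for a demo.
# Demo only: run an unauthenticated local Distribution Registry on port 5000.
docker run -d --restart=always --name local-registry -p 5000:5000 registry:2
With this running, export DOCKER_REGISTRY=localhost:5000 as above and push the images to it.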
Install and set up the supporting software
Kubernetes namespace
Create a namespace in Kubernetes for KeySafe 5 installation.
kubectl create namespace nshieldkeysafe5
TLS Secrets
By default the Bitnami MongoDB chart will create its own CA, and generate TLS keys for each of its servers from this CA. As we have an existing CA to use, we will pass the private key and certificate to MongoDB to use as its CA.
First, we need to create a TLS key and certificate for securing communications between the backend services and MongoDB. Here is an example:
export MONGOUSER="ks5-mongo-user"
openssl genrsa -out $MONGOUSER.key 4096
openssl req -new -key $MONGOUSER.key -out $MONGOUSER.csr \
-subj "/CN=${MONGOUSER}" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=clientAuth" \
-addext "subjectAltName=DNS:${MONGOUSER}"
openssl ca -config ~/keysafe5-1.6.1/keysafe5-k8s/internalCA/internalCA.conf \
-out ${MONGOUSER}.crt -in ${MONGOUSER}.csr -batch
rm ${MONGOUSER}.csr
kubectl create secret generic ks5-mongotls \
--namespace nshieldkeysafe5 \
--from-file=ca.crt \
--from-file=tls.crt=${MONGOUSER}.crt \
--from-file=tls.key=${MONGOUSER}.key
We will repeat the process to make the remaining keys, certificates, and Kubernetes secrets.
Start by creating the following key and certificate pairs (a sketch for one of them follows the list):
- mongodb-arbiter
- mongodb-replica-0
- mongodb-replica-1
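The commands are not repeated in full here, but as an illustration a sketch for mongodb-replica-0 follows, using the same internal CA as before. The extendedKeyUsage and subjectAltName values below are assumptions (based on the release name mongo-chart in the mongons namespace) and must match the hostnames your MongoDB pods will actually use.
export REPLICA0=mongodb-replica-0
openssl genrsa -out $REPLICA0.key 4096
openssl req -new -key $REPLICA0.key -out $REPLICA0.csr \
-subj "/CN=${REPLICA0}" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=serverAuth,clientAuth" \
-addext "subjectAltName=DNS:mongo-chart-mongodb-0.mongo-chart-mongodb-headless.mongons.svc.cluster.local,DNS:localhost"
openssl ca -config ~/keysafe5-1.6.1/keysafe5-k8s/internalCA/internalCA.conf \
-out ${REPLICA0}.crt -in ${REPLICA0}.csr -batch
rm ${REPLICA0}.csr
Repeat the process for mongodb-replica-1 and mongodb-arbiter with their own names. The mongodb-ca-cert file referenced in the secrets below is assumed to be a copy of the internal CA certificate (the same ca.crt used earlier).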
Now we use these to create the Kubernetes secrets:
kubectl create namespace mongons
kubectl create secret generic mongodb-certs-arbiter-0 \
--namespace mongons \
--from-file=ca.crt=mongodb-ca-cert \
--from-file=tls.crt=mongodb-arbiter.crt \
--from-file=tls.key=mongodb-arbiter.key
kubectl create secret generic mongodb-certs-0 \
--namespace mongons \
--from-file=ca.crt=mongodb-ca-cert \
--from-file=tls.crt=mongodb-replica-0.crt \
--from-file=tls.key=mongodb-replica-0.key
kubectl create secret generic mongodb-certs-1 \
--namespace mongons \
--from-file=ca.crt=mongodb-ca-cert \
--from-file=tls.crt=mongodb-replica-1.crt \
--from-file=tls.key=mongodb-replica-1.key
Now we install the MongoDB chart, using the CA secrets from above. This may take a few minutes.
helm install mongo-chart \
--namespace=mongons \
--set image.registry=$DOCKER_REGISTRY \
--set image.tag=8.0.13-debian-12-r0-2025-09-15 \
--set image.repository=keysafe5/bitnami/mongodb \
--set tls.image.registry=$DOCKER_REGISTRY \
--set tls.image.tag=1.29.1-debian-12-r0-2025-09-15 \
--set tls.image.repository=keysafe5/bitnami/nginx \
--set architecture=replicaset \
--set auth.enabled=true \
--set auth.usernames={dummyuser} \
--set auth.passwords={dummypassword} \
--set auth.databases={authdb} \
--set tls.enabled=true \
--set tls.mTLS.enabled=true \
--set tls.autoGenerated=false \
--set 'tls.replicaset.existingSecrets={mongodb-certs-0,mongodb-certs-1}' \
--set tls.arbiter.existingSecret=mongodb-certs-arbiter-0 \
--set global.security.allowInsecureImages=true \
--wait --timeout 15m \
helm-charts/bitnami-mongodb-17.0.0.tgz
There will be a message listing the MongoDB server addresses. Save the addresses to environment variables for use later.
export MONGO1=mongo-chart-mongodb-0.mongo-chart-mongodb-headless.mongons.svc.cluster.local:27017
export MONGO2=mongo-chart-mongodb-1.mongo-chart-mongodb-headless.mongons.svc.cluster.local:27017
export MONGODB=${MONGO1},${MONGO2}
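As an optional sanity check (not part of the documented procedure), you can confirm that the replica set pods are running before continuing:
kubectl -n mongons get pods
# mongo-chart-mongodb-0, mongo-chart-mongodb-1 and the arbiter pod should report Running.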
The database roles and access for MONGOUSER also need to be set up.
For this, we will pass commands to mongosh running on the database server itself as the root user.
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongons \
mongo-chart-mongodb -o jsonpath="{.data.mongodb-root-password}" \
| base64 --decode)
export MONGO_TLSUSER=$(openssl x509 -in $MONGOUSER.crt -subject -noout | cut -f2- -d= | tr -d '[:space:]')
echo $MONGO_TLSUSER
Make a note of the MONGO_TLSUSER as you will need it shortly.
We need to start the mongosh command prompt.
kubectl -n mongons exec \
--stdin=true mongo-chart-mongodb-0 -- \
mongosh admin --tls \
--tlsCAFile /certs/mongodb-ca-cert \
--tlsCertificateKeyFile /certs/mongodb.pem \
--host 127.0.0.1 \
--authenticationDatabase admin \
-u root -p $MONGODB_ROOT_PASSWORD
At the command prompt, enter these database commands to create the roles.
db.createRole(
  {
    role: "hsm-mgmt-db-user",
    privileges: [
      {
        "resource": {"db": "hsm-mgmt-db", "collection": ""},
        "actions": ["createIndex", "find", "insert", "remove", "update"]
      },
    ],
    roles: []
  }
)
db.createRole(
  {
    role: "sw-mgmt-db-user",
    privileges: [
      {
        "resource": {"db": "sw-mgmt-db", "collection": ""},
        "actions": ["createIndex", "dropCollection", "find", "insert", "remove", "update"]
      },
    ],
    roles: []
  }
)
db.createRole(
  {
    role: "codesafe-mgmt-db-user",
    privileges: [
      {
        "resource": {"db": "codesafe-mgmt-db", "collection": ""},
        "actions": ["createIndex", "find", "insert", "remove", "update"]
      },
    ],
    roles: []
  }
)
db.createRole(
  {
    role: "agent-mgmt-db-user",
    privileges: [
      {
        "resource": {"db": "agent-mgmt-db", "collection": ""},
        "actions": ["createIndex", "dropCollection", "find", "insert", "remove", "update"]
      },
    ],
    roles: []
  }
)
db.createRole(
  {
    role: "licence-mgmt-db-user",
    privileges: [
      {
        "resource": {"db": "licence-mgmt-db", "collection": ""},
        "actions": ["createIndex", "dropCollection", "find", "insert", "remove", "update"]
      },
    ],
    roles: []
  }
)
db.createRole(
  {
    role: "monitoring-mgmt-db-user",
    privileges: [
      {
        "resource": {"db": "monitoring-mgmt-db", "collection": ""},
        "actions": ["createIndex", "find", "insert", "remove", "update"]
      },
    ],
    roles: []
  }
)
We now need to create the user with access to the database.
Replace CN=ks5-mongo-user with the value you got for MONGO_TLSUSER, and enter it into the prompt.
use $external
x509_user = {
  user : "CN=ks5-mongo-user",
  roles : [
    {"role": "agent-mgmt-db-user", "db": "admin" },
    {"role": "codesafe-mgmt-db-user", "db": "admin" },
    {"role": "hsm-mgmt-db-user", "db": "admin" },
    {"role": "licence-mgmt-db-user", "db": "admin" },
    {"role": "monitoring-mgmt-db-user", "db": "admin" },
    {"role": "sw-mgmt-db-user", "db": "admin" },
  ]
}
db.createUser(x509_user)
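Optionally, while still at the mongosh prompt, you can list the users defined in the $external database to confirm that the roles were attached; this is just a sanity check, not part of the documented procedure.
db.getSiblingDB("$external").getUsers()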
Type exit to exit the mongosh prompt.
Object Storage
For large object storage, create a Persistent Volume Claim in the nshieldkeysafe5 Kubernetes namespace (the same namespace that we will deploy the application to).
Cluster-local Object Storage
If your Kubernetes cluster has only one worker node, you can choose to use local storage.
cat << EOF | kubectl -n nshieldkeysafe5 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-nshield-keysafe5
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
EOF
NFS Object Storage
If your Kubernetes cluster has more than one worker node, you must use a type of storage that supports distributed access, such as NFS. For details on creating a PVC for NFS object storage, see NFS Object Storage Configuration.
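As an illustration only, an NFS-backed PVC might look like the sketch below; the StorageClass name nfs-client is an assumption and depends on the NFS provisioner installed in your cluster, so refer to NFS Object Storage Configuration for the supported settings.
cat << EOF | kubectl -n nshieldkeysafe5 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-nshield-keysafe5
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 2Gi
EOF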
Prometheus Database
Prometheus requires a persistent volume for its database, which must be created before the Prometheus Helm chart is installed. This volume can only be created as local storage because NFS is not supported.
cat << EOF | kubectl -n nshieldkeysafe5 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data-keysafe5
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 4Gi
EOF
Install KeySafe 5
Using the secrets and addresses created above, install KeySafe 5.
The commands below assume that a login is not required to pull from the Docker Registry.
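If your registry does require authentication, a Kubernetes image pull secret can be created in the namespace; the secret name regcred below is an assumption, and you would also need to reference it in the Helm chart values (check the chart documentation for the relevant setting).
kubectl -n nshieldkeysafe5 create secret docker-registry regcred \
--docker-server=$DOCKER_REGISTRY \
--docker-username=<your-username> \
--docker-password=<your-password>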
To send email notifications for alerts, an email server must be configured. Add the following configuration options to the command that installs the KeySafe 5 backend services below.
--set monitoring_mgmt.alertmanager.email.smarthost=email.server.com:port \
--set monitoring_mgmt.alertmanager.email.from=no-reply@yourdomain.com \
--set monitoring_mgmt.alertmanager.email.auth_username=username \
--set monitoring_mgmt.alertmanager.email.auth_password=password \
# Get Ingress IP address
export INGRESS_IP=$(kubectl --namespace istio-system get svc -l app=istio-ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
# Install the KeySafe 5 backend services
helm install keysafe5-backend \
--namespace=nshieldkeysafe5 \
--set agent_mgmt.image=$DOCKER_REGISTRY/keysafe5/agent-mgmt:1.6.1 \
--set codesafe_mgmt.image=$DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.6.1 \
--set hsm_mgmt.image=$DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.6.1 \
--set licence_mgmt.image=$DOCKER_REGISTRY/keysafe5/licence-mgmt:1.6.1 \
--set monitoring_mgmt.image=$DOCKER_REGISTRY/keysafe5/monitoring-mgmt:1.6.1 \
--set sw_mgmt.image=$DOCKER_REGISTRY/keysafe5/sw-mgmt:1.6.1 \
--set database.type=mongo \
--set database.mongo.hosts="$MONGO1\,$MONGO2" \
--set database.mongo.replicaSet=rs0 \
--set database.mongo.auth.type=tls \
--set database.mongo.auth.authDatabase=authdb \
--set database.mongo.tls.enabled=true \
--set database.mongo.tls.existingSecret=ks5-mongotls \
--set messageBus.auth.type=tls \
--set messageBus.tls.enabled=true \
--set messageBus.tls.serverTLS.existingSecret=agentcomms-server-certificates \
--set messageBus.tls.existingSecret=agentcomms-client-certificates \
--set objectStore.pvc=data-nshield-keysafe5 \
--wait --timeout 10m \
helm-charts/nshield-keysafe5-backend-1.6.1.tgz
# Install the KeySafe 5 UI
helm install keysafe5-ui \
--namespace=nshieldkeysafe5 \
--set ui.image=$DOCKER_REGISTRY/keysafe5/mgmt-ui:1.6.1 \
--set svcEndpoint="https://${INGRESS_IP}" \
--set authMethod=none \
--wait --timeout 10m \
helm-charts/nshield-keysafe5-ui-1.6.1.tgz
# Create the TLS secret for the Istio Ingress Gateway
openssl genrsa -out istio.key 4096
openssl req -new -key istio.key -out istio.csr \
-subj "/CN=${HOSTNAME}" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=serverAuth" \
-addext "subjectAltName=DNS:${HOSTNAME},IP:${INGRESS_IP}"
openssl ca -config ~/keysafe5-1.6.1/keysafe5-k8s/internalCA/internalCA.conf \
-out istio.crt -in istio.csr -batch
kubectl -n istio-system create secret tls \
keysafe5-server-credential --cert=istio.crt --key=istio.key
# Configure Istio Ingress Gateway for KeySafe 5
helm install keysafe5-istio \
--namespace=nshieldkeysafe5 \
--set tls.existingSecret=keysafe5-server-credential \
--set requireAuthn=false \
--wait --timeout 1m \
helm-charts/nshield-keysafe5-istio-1.6.1.tgz
# Install the KeySafe 5 Prometheus
helm install keysafe5-prometheus \
--namespace=nshieldkeysafe5 \
--set HostIP= \
--set prometheus.image=$DOCKER_REGISTRY/keysafe5/prometheus:v3.6.0-rc.0 \
--set prometheus.pvc=prometheus-data-keysafe5 \
--set prometheus.sharedpvc=data-nshield-keysafe5 \
--wait --timeout 3m \
helm-charts/nshield-keysafe5-prometheus-1.6.1.tgz
# Install the KeySafe 5 Alertmanager
helm install keysafe5-alertmanager \
--namespace=nshieldkeysafe5 \
--set HostIP= \
--set alertmanager.image=$DOCKER_REGISTRY/keysafe5/alertmanager:v0.28.1 \
--set alertmanager.sharedpvc=data-nshield-keysafe5 \
--set sidecar.image=$DOCKER_REGISTRY/keysafe5/alert-manager-sidecar:1.6.1 \
--set sidecar.configPath=/etc/shared_volume/prometheus \
--wait --timeout 3m \
helm-charts/nshield-keysafe5-alertmanager-1.6.1.tgz
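Before accessing the deployment, you can optionally confirm that all Helm releases deployed successfully and that the pods are running; this is a sanity check, not a required step.
helm --namespace nshieldkeysafe5 list
kubectl --namespace nshieldkeysafe5 get pods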
Access KeySafe 5
You can now access KeySafe 5 at https://$INGRESS_IP.
For example, you could send curl requests as demonstrated below.
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/hsms | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/hosts | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/pools | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/feature-certificates | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/worlds | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/codesafe/v1/images | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/codesafe/v1/certificates | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/licensing/v1/licences | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/monitoring/v1/triggers | jq
You can access the Management UI in a web browser at https://$INGRESS_IP.
Configure KeySafe 5 Agent machines
To configure a host machine to be managed and monitored by this deployment, run the KeySafe 5 agent binary on the machine that contains the relevant Security World or HSMs.
After copying over the agent tar file, extract it and start configuring:
sudo tar -C / -xf keysafe5-1.6.1-Linux-keysafe5-agent.tar.gz
export KS5CONF=/opt/nfast/keysafe5/conf
sudo cp $KS5CONF/config.yaml.example $KS5CONF/config.yaml
Create the messagebus/tls directory and copy into it the ca.crt file from the keysafe5-1.6.1 directory on the demo machine.
mkdir -p $KS5CONF/messagebus/tls
cp ca.crt $KS5CONF/messagebus/tls/
Create the private key and a certificate signing request (CSR) for this specific KeySafe 5 agent.
sudo /opt/nfast/keysafe5/bin/ks5agenttls --keypath=$KS5CONF/messagebus/tls/tls.key --keygen
sudo /opt/nfast/keysafe5/bin/ks5agenttls --keypath=$KS5CONF/messagebus/tls/tls.key --csrgen
For this installation we copy the CSR to the demo machine, into the keysafe5-1.6.1 directory, then sign it using OpenSSL.
openssl ca -config ~/keysafe5-1.6.1/keysafe5-k8s/internalCA/internalCA.conf \
-in ks5agent_demohost.csr \
-out ks5agent_demohost.crt -batch
Transfer the resulting certificate ks5agent_demohost.crt to the nShield Agent machine at /opt/nfast/keysafe5/conf/messagebus/tls/tls.crt.
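The transfer mechanism is not prescribed; as one hedged example, scp could be used, where demo-machine and agent-host are placeholder hostnames and the CSR path on the agent host is an assumption.
# On the demo machine: fetch the CSR from the agent host (path is an assumption).
scp agent-host:/path/to/ks5agent_demohost.csr ~/keysafe5-1.6.1/
# After signing, copy the certificate back to the agent host.
scp ~/keysafe5-1.6.1/ks5agent_demohost.crt agent-host:/tmp/
# On the agent host, move it into place (sudo because the directory is root-owned).
sudo mv /tmp/ks5agent_demohost.crt /opt/nfast/keysafe5/conf/messagebus/tls/tls.crt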
On the nShield Agent machine, if the hardserver is already running, use the KeySafe 5 install script so that the hardserver is not restarted when the KeySafe 5 agent is installed.
sudo /opt/nfast/keysafe5/sbin/install
Otherwise, use the nShield install script, which starts both the nShield Security World software and the KeySafe 5 agent.
sudo /opt/nfast/sbin/install
Uninstall
KeySafe 5 services
helm --namespace nshieldkeysafe5 uninstall keysafe5-istio
helm --namespace nshieldkeysafe5 uninstall keysafe5-backend
helm --namespace nshieldkeysafe5 uninstall keysafe5-ui
helm --namespace nshieldkeysafe5 uninstall keysafe5-prometheus
helm --namespace nshieldkeysafe5 uninstall keysafe5-alertmanager
helm --namespace mongons uninstall mongo-chart
KeySafe 5 Agent
To uninstall the KeySafe 5 agent, run the KeySafe 5 uninstaller, then remove the files manually.
sudo /opt/nfast/keysafe5/sbin/install -u
rm -f /opt/nfast/lib/versions/keysafe5-agent-atv.txt
rm -f /opt/nfast/sbin/keysafe5-agent
rm -f /opt/nfast/scripts/install.d/12keysafe5-agent
rm -f /opt/nfast/keysafe5/sbin/install
rm -f /opt/nfast/keysafe5/bin/ks5agenttls
rm -f /opt/nfast/keysafe5/conf/config.yaml.example
The configuration for the KeySafe 5 agent is stored in the conf directory, which can also be deleted.
rm -rf /opt/nfast/keysafe5/conf