Upgrade
This chapter details how to update an existing KeySafe 5 install to the latest version.
When upgrading KeySafe 5, it is recommended to first update the Helm charts installed in the central platform, and then update all KeySafe 5 Agent installs on the host machines being managed by KeySafe 5.
Upgrading from KeySafe 5 1.4
To upgrade the release of a Helm chart, run a helm upgrade command; see Helm Upgrade.
List all installed releases using helm list -A.
$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
keysafe5-backend nshieldkeysafe5 1 2024-04-25 15:59:40.995994525 +0100 BST deployed nshield-keysafe5-backend-1.4.0 1.4.0
keysafe5-istio nshieldkeysafe5 1 2024-04-25 15:58:09.344300669 +0100 BST deployed nshield-keysafe5-istio-1.4.0 1.4.0
keysafe5-ui nshieldkeysafe5 1 2024-04-25 15:57:42.260802671 +0100 BST deployed nshield-keysafe5-ui-1.4.0 1.4.0
mongo-chart mongons 1 2024-04-25 15:55:05.825098514 +0100 BST deployed mongodb-12.1.31 5.0.10
rabbit-chart rabbitns 1 2024-04-25 15:58:19.881365343 +0100 BST deployed rabbitmq-11.16.2 3.11.18
Ensure all pods are healthy before performing an upgrade; unhealthy pods can prevent Helm from fully completing an upgrade.
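For example, a quick health check across all namespaces before you begin (any pod that is not Running or Completed should be investigated first):

kubectl get pods -A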
The process involves upgrading the charts in the following order:
- mongo-chart
- keysafe5-backend
- keysafe5-ui
- keysafe5-istio
- keysafe5-prometheus
- keysafe5-alertmanager
Unpack the source
mkdir ~/keysafe5-1.6.1
tar -C ~/keysafe5-1.6.1 -xf nshield-keysafe5-1.6.1.tar.gz
cd ~/keysafe5-1.6.1/keysafe5-k8s
Docker Images
The Docker images need to be loaded onto a Docker registry from which each node in your Kubernetes cluster can pull them.
Follow the instructions in the Docker Images section of the install guide.
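As a minimal sketch, assuming a private registry at registry.example.com:5000 (the registry address and tar file name are illustrative; use the image archives supplied in your release package):

# Illustrative only: substitute your own registry and the image archives from the release package
export DOCKER_REGISTRY=registry.example.com:5000
docker load < hsm-mgmt-1.6.1.tar
docker tag keysafe5/hsm-mgmt:1.6.1 $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.6.1
docker push $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.6.1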
Moving the CA
The CA needs to be moved from the KeySafe 5 1.4 directory to the 1.6.1 directory. Depending on your existing setup, this is done in different ways; this guide includes the steps for moving internalCA and externalCA. Both methods use the ~/keysafe5-1.6.1/keysafe5-k8s/updateinternalcerts.sh script.
externalCA
For externalCA, a new directory needs to be created in the 1.6.1 upgrade directory. This directory must contain the server and client keys and certificates in PEM format.
mkdir ~/keysafe5-1.6.1/keysafe5-k8s/externalCA
The following files must be included in this directory:
ca.crt             The certificate of the CA that is to be trusted by the system
agentcomms.key     The key to be used by the Agent Communications Server
agentcomms.crt     Its certificate
ks5agentcomms.key  The key to be used by ks5
ks5agentcomms.crt  Its certificate
mongodb-0.key      The key to be used by mongo-chart-mongodb-0
mongodb-0.crt      Its certificate
mongodb-1.key      The key to be used by mongo-chart-mongodb-1
mongodb-1.crt      Its certificate
mongodb-a.key      The key to be used by mongo-chart-mongodb-arbiter
mongodb-a.crt      Its certificate
ks5mongodb.key     The key to be used by ks5-mongo-user
ks5mongodb.crt     Its certificate
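Once the files are in place, you can optionally sanity-check that each certificate chains back to ca.crt (a minimal sketch, run from inside the externalCA directory):

# Verify each certificate against the CA certificate
for crt in agentcomms ks5agentcomms mongodb-0 mongodb-1 mongodb-a ks5mongodb; do
  openssl verify -CAfile ca.crt "${crt}.crt"
done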
Once this directory has been created, updateinternalcerts.sh can be used to refresh the certificates. The following command refreshes the certificates in the "certs" namespace; for further options, refer to the help output of updateinternalcerts.sh.
./updateinternalcerts.sh -n certs externalCA
internalCA
If you are using internalCA, the CA is contained within a folder called "CA" or "internalCA" in the previous installation. Copy this existing folder into the current upgrade directory, for example:
cp -r ~/existing-ks5-install/internalCA .
Then generate the new certificates using updateinternalcerts.sh; the following example sets the certificate expiry to 1 year (365 days). This command may appear to fail, but if a folder called keysafe5-cert-update-agentcomms has been created then this step was successful.
./updateinternalcerts.sh agentcomms 365
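To confirm the step succeeded, list the generated directory. Based on the file names referenced by the secret-creation commands in the next step, you should see at least the following key pairs:

ls keysafe5-cert-update-agentcomms
# Expect: agentcomms.crt agentcomms.key keysafe5-backend-services.crt keysafe5-backend-services.key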
Create secrets
For the services to communicate with the message bus, secrets need to be created that are referenced during the Helm installation. Now that the CAs have been moved, these secrets can be created. Replace "CA" in the following commands with the name of the CA directory you created in the previous step.
kubectl create secret generic agentcomms-server-certificates \
--namespace=nshieldkeysafe5 \
--from-file=ca.crt=CA/cacert.pem \
--from-file=tls.crt=keysafe5-cert-update-agentcomms/agentcomms.crt \
--from-file=tls.key=keysafe5-cert-update-agentcomms/agentcomms.key
kubectl create secret generic agentcomms-client-certificates \
--namespace=nshieldkeysafe5 \
--from-file=ca.crt=CA/cacert.pem \
--from-file=tls.crt=keysafe5-cert-update-agentcomms/keysafe5-backend-services.crt \
--from-file=tls.key=keysafe5-cert-update-agentcomms/keysafe5-backend-services.key
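You can confirm that both secrets exist and carry the expected entries:

kubectl -n nshieldkeysafe5 get secrets agentcomms-server-certificates agentcomms-client-certificates
# Each secret should contain ca.crt, tls.crt, and tls.key entries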
MongoDB to 8.0.13
To upgrade an existing non-Kubernetes MongoDB install to MongoDB 8.0.13, see the official documentation at Upgrade to the Latest Revision of MongoDB.
To upgrade a MongoDB install deployed via the Bitnami Helm charts to 8.0.13:
First, ensure that MongoDB is running:
helm list -A
Fetch the existing Helm chart’s settings:
helm -n mongons get values --output yaml mongo-chart > mongo-chart-values.yaml
The password and replica set key have not changed, so we can upgrade:
helm upgrade --install mongo-chart \
--namespace mongons \
--set image.registry=$DOCKER_REGISTRY \
--set image.tag=8.0.13-debian-12-r0-2025-09-15 \
--set image.repository=keysafe5/bitnami/mongodb \
--set tls.image.registry=$DOCKER_REGISTRY \
--set tls.image.tag=1.29.1-debian-12-r0-2025-09-15 \
--set tls.image.repository=keysafe5/bitnami/nginx \
--set architecture=replicaset \
--set auth.enabled=true \
--set image.debug=false \
--set systemLogVerbosity=5 \
--set auth.rootPassword=secret \
--set tls.enabled=true \
--set tls.mTLS.enabled=true \
--set tls.autoGenerated=false \
--set 'tls.replicaset.existingSecrets={mongodb-certs-0,mongodb-certs-1}' \
--set tls.arbiter.existingSecret=mongodb-certs-arbiter-0 \
--set featureCompatibilityVersion=7.0 \
--set global.security.allowInsecureImages=true \
--values mongo-chart-values.yaml \
--wait --timeout 3m \
helm-charts/bitnami-mongodb-17.0.0.tgz
There will be some warnings regarding rolling tags and container substitution. This is expected as we are using the container images supplied in the release package. See Release Package - Docker Images for more information.
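Once the upgrade completes, check that the MongoDB pods return to a ready state:

kubectl -n mongons get pods
# Expect mongo-chart-mongodb-0, mongo-chart-mongodb-1, and mongo-chart-mongodb-arbiter-0 to be Running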
Obtain details of the newly deployed Helm charts:
helm list -A
Define and update the new database users.
First, connect to the database:
export MONGO1=mongo-chart-mongodb-0.mongo-chart-mongodb-headless.mongons.svc.cluster.local:27017
export MONGO2=mongo-chart-mongodb-1.mongo-chart-mongodb-headless.mongons.svc.cluster.local:27017
export MONGODB=${MONGO1},${MONGO2}
export MONGO_RUN="kubectl -n mongons exec mongo-chart-mongodb-0 -- "
export TLS_PRIVKEY="$(${MONGO_RUN} bash -c 'cat /certs/mongodb.pem')"
export TLS_CERT="$(${MONGO_RUN} bash -c 'cat /certs/mongodb-ca-cert')"
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongons \
mongo-chart-mongodb -o jsonpath="{.data.mongodb-root-password}" \
| base64 --decode)
kubectl run --namespace mongons mongo-chart-mongodb-client \
--rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" \
--env="TLS_PRIVKEY=$TLS_PRIVKEY" --env="TLS_CERT=$TLS_CERT" --env="MONGODB=$MONGODB" \
--image $DOCKER_REGISTRY/keysafe5/bitnami/mongodb:8.0.13-debian-12-r0-2025-09-15 --command -- bash
Then the new role definitions have to be added:
$ echo "$TLS_CERT" > /tmp/tls.crt
$ echo "$TLS_PRIVKEY" > /tmp/tls.key
$ mongosh admin --tls --tlsCAFile /tmp/tls.crt --tlsCertificateKeyFile /tmp/tls.key \
--host $MONGODB --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
> use admin
> db.createRole(
{
role: "monitoring-mgmt-db-user",
privileges: [
{
"resource": {"db": "monitoring-mgmt-db", "collection": ""},
"actions": ["createIndex", "find", "insert", "remove", "update"]
},
],
roles: []
}
)
> db.createRole(
{
role: "licence-mgmt-db-user",
privileges: [
{
"resource": {"db": "licence-mgmt-db", "collection": ""},
"actions": ["createIndex", "dropCollection", "find", "insert", "remove", "update"]
},
],
roles: []
}
)
> db.createRole(
{
role: "agent-mgmt-db-user",
privileges: [
{
"resource": {"db": "agent-mgmt-db", "collection": ""},
"actions": ["createIndex", "dropCollection", "find", "insert", "remove", "update"]
},
],
roles: []
}
)
> db.updateRole( "hsm-mgmt-db-user",
{
privileges : [
{
"resource": {"db": "hsm-mgmt-db", "collection": ""},
"actions": ["createIndex", "dropIndex", "find", "insert", "remove", "update"]
},
]
}
)
> use $external
> x509_user = {
"roles" : [
{"role": "agent-mgmt-db-user", "db": "admin" },
{"role": "codesafe-mgmt-db-user", "db": "admin" },
{"role": "hsm-mgmt-db-user", "db": "admin" },
{"role": "sw-mgmt-db-user", "db": "admin" },
{"role": "monitoring-mgmt-db-user", "db": "admin" },
{"role": "licence-mgmt-db-user", "db": "admin" },
]
}
> db.updateUser("CN=ks5-mongo-user", x509_user)
> exit
$ exit
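As an optional check, you can start the client pod again (using the same kubectl run command as above) and, from inside it, confirm that CN=ks5-mongo-user now carries all six roles:

# Inside the client pod, after writing /tmp/tls.crt and /tmp/tls.key as shown above
mongosh admin --tls --tlsCAFile /tmp/tls.crt --tlsCertificateKeyFile /tmp/tls.key \
  --host $MONGODB --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD \
  --eval 'db.getSiblingDB("$external").getUser("CN=ks5-mongo-user")'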
Upgrading the KeySafe 5 backend
The parameters used for the running Helm chart need to be retrieved into a file called keysafe5-backend-values.yaml.
helm -n nshieldkeysafe5 get values --all --output yaml keysafe5-backend > keysafe5-backend-values.yaml
The new services (monitoring_mgmt, licence_mgmt, and agent_mgmt) support the same common values as the other services, such as probe thresholds. These can be added to the keysafe5-backend-values.yaml file if desired.
To send email notifications for alerts, an email server must be configured. Add the required configuration options to the KeySafe 5 backend installation command below.
helm upgrade --install keysafe5-backend \
--namespace=nshieldkeysafe5 \
--set hsm_mgmt.image=$DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.6.1 \
--set sw_mgmt.image=$DOCKER_REGISTRY/keysafe5/sw-mgmt:1.6.1 \
--set codesafe_mgmt.image=$DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.6.1 \
--set agent_mgmt.image=$DOCKER_REGISTRY/keysafe5/agent-mgmt:1.6.1 \
--set licence_mgmt.image=$DOCKER_REGISTRY/keysafe5/licence-mgmt:1.6.1 \
--set monitoring_mgmt.image=$DOCKER_REGISTRY/keysafe5/monitoring-mgmt:1.6.1 \
--set messageBus.compatibilityMode=true \
--set messageBus.URL=127.0.0.1:18084 \
--set messageBus.auth.type=tls \
--set messageBus.tls.enabled=true \
--set messageBus.tls.existingSecret=agentcomms-client-certificates \
--set messageBus.serverTLS.existingSecret=agentcomms-server-certificates \
--values keysafe5-backend-values.yaml \
--wait --timeout 3m \
helm-charts/nshield-keysafe5-backend-1.6.1.tgz
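After the upgrade, verify that the backend pods report all of their containers as ready (the pod names here are taken from the Confirm Upgrade output later in this chapter):

kubectl -n nshieldkeysafe5 get pods
# Expect nshield-keysafe5-0, nshield-keysafe5-1, and nshield-keysafe5-2 to be Running with all containers ready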
Upgrading the KeySafe 5 UI
The same process as for the backend is used for the UI:
helm -n nshieldkeysafe5 get values --all --output yaml keysafe5-ui > keysafe5-ui-values.yaml
You may make changes to the YAML files before upgrading, though this is not required.
helm upgrade --install keysafe5-ui \
--namespace=nshieldkeysafe5 \
--set ui.image=$DOCKER_REGISTRY/keysafe5/mgmt-ui:1.6.1 \
--set ui.pullPolicy=Always \
--values keysafe5-ui-values.yaml \
--wait --timeout 3m \
helm-charts/nshield-keysafe5-ui-1.6.1.tgz
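Optionally confirm that the UI rollout has completed (the deployment name nshield-keysafe5-ui is inferred from the pod names in the Confirm Upgrade section and may differ in your cluster):

kubectl -n nshieldkeysafe5 rollout status deployment/nshield-keysafe5-ui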
Upgrading the KeySafe 5 Istio
Check that the installed version of Istio aligns with the software version of istioctl.
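You can compare the client and control-plane versions with:

istioctl version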
Enable the agent-mgmt port and the certificates reference:
helm -n nshieldkeysafe5 get values --all --output yaml keysafe5-istio > keysafe5-istio-values.yaml
istioctl x precheck
istioctl upgrade -y \
--set values.gateways.istio-ingressgateway.ingressPorts[0].name=agent-comms \
--set values.gateways.istio-ingressgateway.ingressPorts[0].port=18084 \
--set values.gateways.istio-ingressgateway.ingressPorts[0].protocol=TCP
helm upgrade --install keysafe5-istio \
--namespace=nshieldkeysafe5 \
--wait --timeout 3m \
--values keysafe5-istio-values.yaml \
helm-charts/nshield-keysafe5-istio-1.6.1.tgz
Prometheus
Create a file named pvc.yaml in your current folder with the following contents:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: prometheus-data-keysafe5
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
Then apply it to the Kubernetes cluster:
kubectl apply -f pvc.yaml --namespace=nshieldkeysafe5
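Confirm that the claim was created; depending on the provisioner, it may remain Pending until Prometheus first mounts it:

kubectl -n nshieldkeysafe5 get pvc prometheus-data-keysafe5
# STATUS should show Bound once the volume is provisioned and in use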
Once the volume claim is created, the Prometheus Helm chart can be installed:
helm install keysafe5-prometheus \
--namespace=nshieldkeysafe5 \
--set HostIP= \
--set prometheus.image=$DOCKER_REGISTRY/keysafe5/prometheus:v3.6.0-rc.0 \
--set prometheus.pvc=prometheus-data-keysafe5 \
--set prometheus.sharedpvc=data-nshield-keysafe5 \
--wait --timeout 3m \
helm-charts/nshield-keysafe5-prometheus-1.6.1.tgz
Prometheus Alertmanager
The Alertmanager Helm chart can be installed with:
helm install keysafe5-alertmanager \
--namespace=nshieldkeysafe5 \
--set HostIP= \
--set alertmanager.image=$DOCKER_REGISTRY/keysafe5/alertmanager:v0.28.1 \
--set alertmanager.sharedpvc=data-nshield-keysafe5 \
--set sidecar.image=$DOCKER_REGISTRY/keysafe5/alert-manager-sidecar:1.6.1 \
--set sidecar.configPath=/etc/shared_volume/prometheus \
--wait --timeout 3m \
helm-charts/nshield-keysafe5-alertmanager-1.6.1.tgz
Agent Upgrade
The following information may be useful when upgrading:
- Agents must be upgraded to 1.6.1 before use; mixing 1.6.1 and 1.4 agents is not supported.
- The TLS certificate does not need to be generated again, as the backed-up directory contains it. However, it can be regenerated if preferred.
- An agent config from 1.4 that is configured with the RabbitMQ (AMQP) message bus cannot be used in 1.6.1, as RabbitMQ (AMQP) is no longer supported.
- Any existing agent configuration files that use the RabbitMQ (AMQP) message bus must be reconfigured to use the NATS message bus. Information on configuration can be found in Agent configuration.
- Agents upgraded to 1.6.1 will be added as new hosts and modules in the UI; the old 1.4 agents will become stale and lose their connection. These stale hosts and modules must be removed manually, one by one. Pools are removed automatically once their hosts and modules have been removed.
To update the KeySafe 5 Agent installed on a machine:
- Take a backup of the Agent config directory located at %NFAST_DATA_HOME%/keysafe5/conf (see the example after this list).
- Uninstall the existing KeySafe 5 Agent as detailed in the KeySafe 5 Installation Guide for the currently installed version of the product.
- Install the new KeySafe 5 Agent as detailed in the KeySafe 5 Agent Installation chapter.
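A minimal backup sketch for a Linux host, assuming NFAST_DATA_HOME is /opt/nfast (adjust the path for your installation; on Windows, use the %NFAST_DATA_HOME% value directly):

# Assumes NFAST_DATA_HOME=/opt/nfast; adjust for your install
sudo cp -r /opt/nfast/keysafe5/conf ~/keysafe5-agent-conf.bak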
Confirm Upgrade
To confirm that the upgrades have been successful, run the following commands and check that their output resembles the examples shown.
First check:
helm list -A
The output should look like:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
keysafe5-alertmanager nshieldkeysafe5 1 2025-09-10 14:26:31.744396244 +0100 BST deployed nshield-keysafe5-alertmanager-1.6.1 1.6.1
keysafe5-backend nshieldkeysafe5 1 2025-09-10 14:18:14.413954531 +0100 BST deployed nshield-keysafe5-backend-1.6.1 1.6.1
keysafe5-istio nshieldkeysafe5 1 2025-09-10 10:53:07.370113525 +0100 BST deployed nshield-keysafe5-istio-1.6.1 1.6.1
keysafe5-prometheus nshieldkeysafe5 1 2025-09-10 14:25:11.814394741 +0100 BST deployed nshield-keysafe5-prometheus-1.6.1 1.6.1
keysafe5-ui nshieldkeysafe5 1 2025-09-10 10:44:59.935963805 +0100 BST deployed nshield-keysafe5-ui-1.6.1 1.6.1
mongo-chart mongons 1 2025-09-10 10:31:07.844261312 +0100 BST deployed mongodb-17.0.0 8.0.13
rabbit-chart rabbitns 1 2025-09-10 09:54:54.077173236 +0100 BST deployed rabbitmq-12.13.1 3.12.13
As shown in the example above, RabbitMQ may still be present; its presence or absence does not affect the installation. Once the upgrade is complete, the old RabbitMQ components can be safely removed.
Second check:
kubectl get pods -A
The output should look like:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6799fbcd5-sflbm 1/1 Running 0 5h39m
kube-system local-path-provisioner-6c86858495-5v8rb 1/1 Running 0 5h39m
kube-system svclb-rabbit-chart-rabbitmq-750352e4-jt2fl 5/5 Running 0 5h38m
nshieldkeysafe5 ratelimit-588cd5944c-85kck 1/1 Running 0 5h32m
nshieldkeysafe5 redis-6d75fc6d74-sr852 1/1 Running 0 5h32m
rabbitns rabbit-chart-rabbitmq-1 1/1 Running 0 5h32m
rabbitns rabbit-chart-rabbitmq-0 1/1 Running 0 5h31m
mongons mongo-chart-mongodb-1 1/1 Running 0 4h56m
mongons mongo-chart-mongodb-arbiter-0 1/1 Running 0 4h56m
mongons mongo-chart-mongodb-0 1/1 Running 0 4h55m
nshieldkeysafe5 nshield-keysafe5-ui-6d9b97c57b-hjvgv 1/1 Running 0 4h42m
nshieldkeysafe5 nshield-keysafe5-ui-6d9b97c57b-wchpd 1/1 Running 0 4h42m
nshieldkeysafe5 nshield-keysafe5-ui-6d9b97c57b-xs9bl 1/1 Running 0 4h41m
istio-system istiod-6f4cc8459f-wn6cp 1/1 Running 0 4h35m
kube-system svclb-istio-ingressgateway-106b0a9e-4lr7w 4/4 Running 0 4h35m
istio-system istio-ingressgateway-c5fbff6d6-q4b9s 1/1 Running 0 4h35m
nshieldkeysafe5 nshield-keysafe5-2 6/6 Running 0 69m
nshieldkeysafe5 nshield-keysafe5-1 6/6 Running 0 69m
nshieldkeysafe5 nshield-keysafe5-0 6/6 Running 0 68m
nshieldkeysafe5 nshield-prometheus-0 1/1 Running 0 62m
nshieldkeysafe5 nshield-alertmanager-0 2/2 Running 0 61m
nshieldkeysafe5 nshield-alertmanager-1 2/2 Running 0 61m
nshieldkeysafe5 nshield-alertmanager-2 2/2 Running 0 61m