Manual Install
This section provides a step-by-step guide to installing KeySafe 5 and its dependencies into an existing Kubernetes cluster.
An alternative to this guide is the KeySafe 5 Quick Start Guide, which provides a scripted means of installing KeySafe 5.
Follow these steps to set up a demo environment for evaluation purposes; they should not be used for production environments. See Hardening The Deployment for steps to harden the deployment: Entrust recommends those steps as a minimum, and additional hardening may be required depending on your own requirements.
This set of commands is an example of how to install KeySafe 5. It may need modification to suit your environment.
Unpack the release
mkdir ~/keysafe5-install
tar -xf nshield-keysafe5-1.3.0.tar.gz -C ~/keysafe5-install
cd ~/keysafe5-install
Docker images
The Docker images need to be loaded onto a Docker registry that each node in your Kubernetes cluster can pull the images from.
- Load the Docker images to your local Docker, for example:
docker load < docker-images/codesafe-mgmt.tar
docker load < docker-images/hsm-mgmt.tar
docker load < docker-images/sw-mgmt.tar
docker load < docker-images/ui.tar
- Set the DOCKER_REGISTRY variable to the registry in use, for example:
export DOCKER_REGISTRY=localhost:5000
If you are using a single-machine Kubernetes distribution such as K3s, you may be able to create a simple unauthenticated local private Docker registry by following the instructions in Distribution Registry. However, this registry is only accessible by setting the name to localhost, which will not work for other configurations.
- Log in to the registry to ensure that you can push to it:
docker login $DOCKER_REGISTRY
- Tag the Docker images for the registry, for example:
docker tag codesafe-mgmt:1.3.0 $DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.3.0
docker tag hsm-mgmt:1.3.0 $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.3.0
docker tag sw-mgmt:1.3.0 $DOCKER_REGISTRY/keysafe5/sw-mgmt:1.3.0
docker tag mgmt-ui:1.3.0 $DOCKER_REGISTRY/keysafe5/mgmt-ui:1.3.0
- Push the KeySafe 5 images to the registry, for example:
docker push $DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.3.0
docker push $DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.3.0
docker push $DOCKER_REGISTRY/keysafe5/sw-mgmt:1.3.0
docker push $DOCKER_REGISTRY/keysafe5/mgmt-ui:1.3.0
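If you pushed to a plain unauthenticated Distribution registry, you can optionally confirm the repositories are present; this check is a sketch and assumes HTTP access to the registry API:
curl -s http://$DOCKER_REGISTRY/v2/_catalog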
Set up a Certificate Authority
For a production system, you should use your existing CA. The CA created here is simply an example for the purposes of having a working demo system.
Either OpenSSL 3.0 or OpenSSL 1.1.1 may be used to create the CA, and the CA may be created in a directory of your choosing.
In these examples, /home/user/keysafe5-install/internalCA is the directory used. In that directory, create the file internalCA.conf with the following contents:
[ ca ]
default_ca = CA_default # The default ca section
[ CA_default ]
dir = /home/user/keysafe5-install/internalCA # The directory of the CA
database = $dir/index.txt # index file.
new_certs_dir = $dir/newcerts # new certs dir
certificate = $dir/cacert.pem # The CA cert
serial = $dir/serial # serial no file
#rand_serial = yes # for random serial#'s
private_key = $dir/private/cakey.pem # CA private key
RANDFILE = $dir/private/.rand # random number file
default_days = 15 # how long to certify for
default_crl_days= 5 # how long before next CRL
default_md = sha256 # Message Digest
policy = test_root_ca_policy
x509_extensions = certificate_extensions
unique_subject = no
# This copy_extensions setting should not be used in a production system.
# It is simply used to simplify the demo system.
copy_extensions = copy
[ test_root_ca_policy ]
commonName = supplied
stateOrProvinceName = optional
countryName = optional
emailAddress = optional
organizationName = optional
organizationalUnitName = optional
domainComponent = optional
[ certificate_extensions ]
basicConstraints = CA:false
[ req ]
default_bits = 4096
default_md = sha256
prompt = yes
distinguished_name = root_ca_distinguished_name
x509_extensions = root_ca_extensions
[ root_ca_distinguished_name ]
commonName = hostname
[ root_ca_extensions ]
keyUsage = keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical,CA:true
Remember to update the dir value to the directory in which internalCA.conf and the other CA files will be stored.
Unless overridden on the command line, the certificates generated will be valid for 15 days, as specified in default_days.
To generate the long-term CA key and random number source, create a directory called private, then place them in that directory:
mkdir ~/keysafe5-install/internalCA/private
openssl genrsa -out ~/keysafe5-install/internalCA/private/cakey.pem 4096
openssl rand -out ~/keysafe5-install/internalCA/private/.rand 1024
The CA needs a self-signed certificate; as this is a short-term demo it will be valid for 90 days:
openssl req -x509 -new -nodes \
-key internalCA/private/cakey.pem \
-subj "/CN=internalCA" -days 90 \
-out internalCA/cacert.pem \
-config internalCA/internalCA.conf
cp internalCA/cacert.pem ca.crt
And finally, to finish off the configuration:
mkdir internalCA/newcerts
echo 01 > internalCA/serial
touch internalCA/index.txt
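Optionally, verify the CA certificate before using it; for example, to confirm the subject, validity period and CA basic constraints:
openssl x509 -in internalCA/cacert.pem -noout -subject -dates
openssl x509 -in internalCA/cacert.pem -noout -ext basicConstraints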
Install and set up the supporting software
Kubernetes namespace
Create a namespace in Kubernetes for KeySafe 5 installation.
kubectl create namespace nshieldkeysafe5
Istio
These instructions assume that only Istio will be used for ingress, and that no other ingress controller is installed.
If Istio is not already installed, you may install a version aligned with the software version of istioctl with:
istioctl install -y
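You can confirm Istio is ready before continuing; a quick check, assuming the default istio-system namespace and ingress gateway service name created by istioctl:
kubectl -n istio-system get pods
kubectl -n istio-system get svc istio-ingressgateway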
RabbitMQ
On a production system, Entrust recommends that you use your standard secure RabbitMQ installation, along with your own policies for authentication and virtual hosts; this is only a demo system.
First, you must generate the TLS keys and the guest password. The network addresses through which RabbitMQ will be accessed must be added to the certificate; these are very dependent on the configuration of the Kubernetes cluster.
openssl genrsa -out ~/keysafe5-install/rabbit.key 4096
export DNS1="*.rabbit-chart-rabbitmq-headless.rabbitns.svc.cluster.local"
export DNS2=rabbit-chart-rabbitmq.rabbitns.svc
export DNS3=rabbitmq.rabbitns.svc.cluster.local
If you know the external IP address that will be allocated by Kubernetes, set HOSTIP to that address. Otherwise set it to a temporary address.
export HOSTIP=127.0.0.1
openssl req -new -key ~/keysafe5-install/rabbit.key \
-out ~/keysafe5-install/rabbitmq.csr \
-subj "/CN=rabbitmq" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=serverAuth" \
-addext "subjectAltName=DNS:rabbitmq,DNS:${DNS1},DNS:${DNS2},DNS:${DNS3},IP:${HOSTIP}"
openssl ca -config ~/keysafe5-install/internalCA/internalCA.conf \
-out rabbit.crt \
-in rabbitmq.csr -batch
rm rabbitmq.csr
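You can optionally inspect the resulting certificate to confirm the subjectAltName entries and the validity period:
openssl x509 -in rabbit.crt -noout -ext subjectAltName
openssl x509 -in rabbit.crt -noout -dates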
This will create a certificate that is valid for the default time period as set in the configuration file.
We now transfer the key and certificate into a Kubernetes secret for the RabbitMQ service:
kubectl create namespace rabbitns
kubectl create secret generic rabbitmq-certificates \
--namespace=rabbitns \
--from-file=ca.crt \
--from-file=tls.crt=rabbit.crt \
--from-file=tls.key=rabbit.key
kubectl -n rabbitns create secret generic rabbitmq-pw \
--from-literal=rabbitmq-password=guest
Then install RabbitMQ. This can take a few minutes.
helm repo add bitnami https://charts.bitnami.com/bitnami && helm repo update
helm install rabbit-chart \
--namespace=rabbitns \
--set image.tag=3.12.13-debian-12-r1 \
--set auth.username=guest \
--set auth.existingPasswordSecret=rabbitmq-pw \
--set auth.tls.enabled=true \
--set auth.tls.existingSecret=rabbitmq-certificates \
--set replicaCount=2 \
--set service.type=LoadBalancer \
--set extraConfiguration='
listeners.ssl.default = 5671
ssl_options.versions.1 = tlsv1.3
ssl_options.depth=0
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
auth_mechanisms.1 = EXTERNAL
ssl_cert_login_from = subject_alternative_name
ssl_cert_login_san_type = dns
ssl_cert_login_san_index = 0' \
--set plugins="" \
--set extraPlugins="rabbitmq_auth_mechanism_ssl" \
--wait --timeout 10m \
bitnami/rabbitmq --version 12.13.1
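You can check that the broker came up cleanly; a quick sketch, assuming the default resource names created by the chart:
kubectl -n rabbitns get pods
kubectl -n rabbitns get svc rabbit-chart-rabbitmq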
Add the virtual host that will be used for KeySafe 5 communication.
export RUN_RABBIT="kubectl -n rabbitns exec rabbit-chart-rabbitmq-0 -c rabbitmq -- "
export RABBIT_VHOST=nshieldvhost
${RUN_RABBIT} rabbitmqctl add_vhost ${RABBIT_VHOST}
Then add and configure the X.509 user for the KeySafe 5 application to communicate with RabbitMQ.
export KS5_USER=ks5
${RUN_RABBIT} rabbitmqctl add_user $KS5_USER "ephemeralpw"
${RUN_RABBIT} rabbitmqctl set_permissions -p $RABBIT_VHOST $KS5_USER ".*" ".*" ".*"
${RUN_RABBIT} rabbitmqctl clear_password $KS5_USER
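You can optionally confirm that the user exists with no password set and has permissions on the virtual host:
${RUN_RABBIT} rabbitmqctl list_users
${RUN_RABBIT} rabbitmqctl list_permissions -p ${RABBIT_VHOST}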
You should then create the X.509 key and certificate (valid for the default time period) for this user.
openssl genrsa -out $KS5_USER.key 4096
openssl req -new -key $KS5_USER.key -out $KS5_USER.csr \
-subj "/CN=${KS5_USER}" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=clientAuth" \
-addext "subjectAltName=DNS:${KS5_USER}"
openssl ca -config ~/keysafe5-install/internalCA/internalCA.conf \
-out ${KS5_USER}.crt -in ${KS5_USER}.csr -batch
rm ${KS5_USER}.csr
kubectl create secret generic ks5-messagebus-tls \
--namespace nshieldkeysafe5 \
--from-file=ca.crt \
--from-file=tls.crt=ks5.crt \
--from-file=tls.key=ks5.key
Now remove access for the default guest user.
${RUN_RABBIT} rabbitmqctl delete_user guest
We need to set up a variable to hold the RABBIT_URL, using the external IP address and port:
ipaddr=$(kubectl get svc -n rabbitns rabbit-chart-rabbitmq -o "jsonpath={.status.loadBalancer.ingress[0].ip}")
port=$(kubectl get service -n rabbitns rabbit-chart-rabbitmq -o "jsonpath={.spec['ports'][?(@.name=='amqp-tls')].port}")
export RABBIT_URL="$ipaddr:$port/${RABBIT_VHOST}"
The variable RABBIT_URL holds the URL required to connect to RabbitMQ from an external client.
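You can test TLS connectivity to the broker from outside the cluster; a sketch, assuming the LoadBalancer address is reachable from your machine (the client key and certificate are required because fail_if_no_peer_cert is set):
openssl s_client -connect ${ipaddr}:${port} -CAfile ca.crt \
-cert ks5.crt -key ks5.key -brief < /dev/null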
If the variable ipaddr contains a different value from HOSTIP above, we will need to create a new certificate for the correct IP address and point RabbitMQ at it. This is the same process used for upgrading the RabbitMQ certificate when it expires.
First, we create a new certificate, then a new secret from it (using the same secret key that we created earlier):
openssl req -new -key ~/keysafe5-install/rabbit.key \
-out ~/keysafe5-install/rabbitmq.csr \
-subj "/CN=rabbitmq" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=serverAuth" \
-addext "subjectAltName=DNS:rabbitmq,DNS:${DNS1},DNS:${DNS2},DNS:${DNS3},IP:${ipaddr}"
openssl ca -config ~/keysafe5-install/internalCA/internalCA.conf \
-out rabbit2.crt \
-in rabbitmq.csr -batch
rm rabbitmq.csr
kubectl create secret generic rabbitmq-certificates-1 \
--namespace=rabbitns \
--from-file=ca.crt \
--from-file=tls.crt=rabbit2.crt \
--from-file=tls.key=rabbit.key
Then we "upgrade" RabbitMQ, pointing it at the new secrets. This can also take a few minutes.
helm -n rabbitns get values --all --output yaml rabbit-chart > rabbit-chart-values.yaml
helm upgrade --install rabbit-chart \
--namespace=rabbitns \
--values rabbit-chart-values.yaml \
--set auth.tls.existingSecret=rabbitmq-certificates-1 \
--wait --timeout 10m \
bitnami/rabbitmq --version 12.13.1
MongoDB
Entrust recommends that you use your standard secure MongoDB Replica Set installation. This is just an example, and not production-ready.
By default the Bitnami MongoDB chart will create its own CA, and generate TLS keys for each of its servers from this CA. As we have an existing CA to use, we will pass the private key and certificate to MongoDB to use as its CA.
kubectl create namespace mongons
kubectl create secret generic mongodb-ca-certificates \
--namespace mongons \
--from-file=mongodb-ca-cert=ca.crt \
--from-file=mongodb-ca-key=internalCA/private/cakey.pem
Now we install the MongoDB chart, using the CA secret above. This may take a few minutes.
helm install mongo-chart \
--set image.tag=7.0.7-debian-12-r0 \
--set architecture=replicaset \
--set auth.enabled=true \
--set auth.usernames={dummyuser}\
--set auth.passwords={dummypassword} \
--set auth.databases={authdb} \
--set tls.enabled=true \
--set tls.mTLS.enabled=true \
--set tls.autoGenerated=false \
--set tls.existingSecret=mongodb-ca-certificates \
--namespace=mongons \
--wait --timeout 10m bitnami/mongodb --version 15.0.2
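As with RabbitMQ, you can confirm the replica set pods are running; this check assumes the statefulset name that the chart derives from the release name:
kubectl -n mongons rollout status statefulset/mongo-chart-mongodb
kubectl -n mongons get pods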
There will be a message listing the MongoDB server addresses. Save the addresses to environment variables for use later.
export MONGO1=mongo-chart-mongodb-0.mongo-chart-mongodb-headless.mongons.svc.cluster.local:27017
export MONGO2=mongo-chart-mongodb-1.mongo-chart-mongodb-headless.mongons.svc.cluster.local:27017
export MONGODB=${MONGO1},${MONGO2}
We now need to create a TLS key and certificate for securing communications between the backend services and MongoDB.
export MONGOUSER="ks5-mongo-user"
openssl genrsa -out $MONGOUSER.key 4096
openssl req -new -key $MONGOUSER.key -out $MONGOUSER.csr \
-subj "/CN=${MONGOUSER}" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=clientAuth" \
-addext "subjectAltName=DNS:${MONGOUSER}"
openssl ca -config ~/keysafe5-install/internalCA/internalCA.conf \
-out ${MONGOUSER}.crt -in ${MONGOUSER}.csr -batch
rm ${MONGOUSER}.csr
kubectl create secret generic ks5-mongotls \
--namespace nshieldkeysafe5 \
--from-file=ca.crt \
--from-file=tls.crt=$MONGOUSER.crt \
--from-file=tls.key=$MONGOUSER.key
The database roles and access for MONGOUSER also need to be set up. For this, we will pass commands to mongosh running on the database server itself as the root user.
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongons \
mongo-chart-mongodb -o jsonpath="{.data.mongodb-root-password}" \
| base64 --decode)
export MONGO_TLSUSER=$(openssl x509 -in $MONGOUSER.crt -subject -noout | cut -f2- -d= | tr -d '[:space:]')
echo $MONGO_TLSUSER
Make a note of the MONGO_TLSUSER as you will need it shortly.
We now need to start the mongosh command prompt.
kubectl -n mongons exec \
--stdin=true mongo-chart-mongodb-0 -- \
mongosh admin --tls \
--tlsCAFile /certs/mongodb-ca-cert \
--tlsCertificateKeyFile /certs/mongodb.pem \
--host 127.0.0.1 \
--authenticationDatabase admin \
-u root -p $MONGODB_ROOT_PASSWORD
At the command prompt, enter these database commands to create the roles.
db.createRole(
{
role: "hsm-mgmt-db-user",
privileges: [
{
"resource": {"db": "hsm-mgmt-db", "collection": ""},
"actions": ["createIndex", "find", "insert", "remove", "update"]
},
],
roles: []
}
)
db.createRole(
{
role: "sw-mgmt-db-user",
privileges: [
{
"resource": {"db": "sw-mgmt-db", "collection": ""},
"actions": ["createIndex", "dropCollection", "find", "insert", "remove", "update"]
},
],
roles: []
}
)
db.createRole(
{
role: "codesafe-mgmt-db-user",
privileges: [
{
"resource": {"db": "codesafe-mgmt-db", "collection": ""},
"actions": ["createIndex", "find", "insert", "remove", "update"]
},
],
roles: []
}
)
We now need to create the user with access to the databases. Replace CN=ks5-mongo-user with the value you got for MONGO_TLSUSER, and enter it into the prompt.
use $external
x509_user = {
user : "CN=ks5-mongo-user",
roles : [
{"role": "codesafe-mgmt-db-user", "db": "admin" },
{"role": "hsm-mgmt-db-user", "db": "admin" },
{"role": "sw-mgmt-db-user", "db": "admin" },
]
}
db.createUser(x509_user)
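Before exiting, you can optionally confirm the user was created; a quick check from the same mongosh prompt (still in the $external database):
db.getUsers()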
Type exit to exit the mongosh prompt.
Object Storage
For large object storage, create a Persistent Volume Claim in the nshieldkeysafe5 Kubernetes namespace (the same namespace that we will deploy the application to).
Cluster-local Object Storage
If your Kubernetes cluster has only one worker node, you can choose to use local storage.
cat << EOF | kubectl -n nshieldkeysafe5 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-nshield-keysafe5
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 2Gi
EOF
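You can check that the claim was created; note that with some storage classes (including K3s local-path, whose volume binding mode is WaitForFirstConsumer) the PVC may report Pending until a pod first mounts it:
kubectl -n nshieldkeysafe5 get pvc data-nshield-keysafe5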
NFS Object Storage
If your Kubernetes cluster has more than one worker node, you must use a type of storage that supports distributed access, such as NFS. For details on creating a PVC for NFS object storage, see NFS Object Storage Configuration.
Install KeySafe 5
Bringing together all the secrets and URLs created above, install KeySafe 5.
The commands below assume that a login is not required to pull from the Docker registry.
# Get Ingress IP address
export INGRESS_IP=$(kubectl --namespace istio-system get svc -l app=istio-ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
# Install the KeySafe 5 backend services
helm install keysafe5-backend \
--namespace=nshieldkeysafe5 \
--set codesafe_mgmt.image=$DOCKER_REGISTRY/keysafe5/codesafe-mgmt:1.3.0 \
--set hsm_mgmt.image=$DOCKER_REGISTRY/keysafe5/hsm-mgmt:1.3.0 \
--set sw_mgmt.image=$DOCKER_REGISTRY/keysafe5/sw-mgmt:1.3.0 \
--set database.type=mongo \
--set database.mongo.hosts="$MONGO1\,$MONGO2" \
--set database.mongo.replicaSet=rs0 \
--set database.mongo.auth.type=tls \
--set database.mongo.auth.authDatabase=authdb \
--set database.mongo.tls.enabled=true \
--set database.mongo.tls.existingSecret=ks5-mongotls \
--set messageBus.type=amqp \
--set messageBus.URL=${RABBIT_URL} \
--set messageBus.auth.type=tls \
--set messageBus.tls.enabled=true \
--set messageBus.tls.existingSecret=ks5-messagebus-tls \
--set objectStore.pvc=data-nshield-keysafe5 \
--wait --timeout 10m \
helm-charts/nshield-keysafe5-backend-1.3.0.tgz
# Install the KeySafe 5 UI
helm install keysafe5-ui \
--namespace=nshieldkeysafe5 \
--set ui.image=$DOCKER_REGISTRY/keysafe5/mgmt-ui:1.3.0 \
--set svcEndpoint="https://${INGRESS_IP}" \
--set authMethod=none \
--wait --timeout 10m \
helm-charts/nshield-keysafe5-ui-1.3.0.tgz
# Create the TLS secret for the Istio Ingress Gateway
openssl genrsa -out istio.key 4096
openssl req -new -key istio.key -out istio.csr \
-subj "/CN=${HOSTNAME}" \
-addext "keyUsage=digitalSignature" \
-addext "extendedKeyUsage=serverAuth" \
-addext "subjectAltName=DNS:${HOSTNAME},IP:${INGRESS_IP}"
openssl ca -config ~/keysafe5-install/internalCA/internalCA.conf \
-out istio.crt -in istio.csr -batch
kubectl -n istio-system create secret tls \
keysafe5-server-credential --cert=istio.crt --key=istio.key
# Configure Istio Ingress Gateway for KeySafe 5
helm install keysafe5-istio \
--namespace=nshieldkeysafe5 \
--set tls.existingSecret=keysafe5-server-credential \
--set requireAuthn=false \
--wait --timeout 1m \
helm-charts/nshield-keysafe5-istio-1.3.0.tgz
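At this point you can confirm that all three releases deployed and that the pods are running:
helm --namespace nshieldkeysafe5 list
kubectl -n nshieldkeysafe5 get pods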
Access KeySafe 5
You can now access KeySafe 5 at https://$INGRESS_IP.
For example, you could send curl requests as demonstrated below.
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/hsms | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/hosts | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/pools | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/feature-certificates | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/worlds | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/codesafe/v1/images | jq
curl -X GET --cacert ca.crt https://${INGRESS_IP}/codesafe/v1/certificates | jq
You can access the Management UI in a web browser at https://$INGRESS_IP.
Configure KeySafe 5 Agent machines
To configure a host machine to be managed and monitored by this deployment, run the KeySafe 5 agent binary on the KeySafe 5 Agent machine containing the relevant Security World or HSMs.
Configure this KeySafe 5 agent to communicate with the RabbitMQ server installed previously. Ensure that no firewall rules block AMQP communication between the machine exposing the AMQP port from Kubernetes and the machine running the agent.
After copying over the agent tar file, extract it and start configuring:
sudo tar -C / -xf keysafe5-1.3.0-Linux-keysafe5-agent.tar.gz
export KS5CONF=/opt/nfast/keysafe5/conf
sudo cp $KS5CONF/config.yaml.example $KS5CONF/config.yaml
Edit config.yaml, replacing the message_bus URL 127.0.0.1:5671 with the value of $RABBIT_URL that was produced when setting up the services. Typically, the 127.0.0.1 would be replaced by the INGRESS_IP, and /nshieldvhost appended to it.
Create the messagebus/tls directory and copy into it the ca.crt file from the keysafe5-install directory on the demo machine.
mkdir -p $KS5CONF/messagebus/tls
cp ca.crt $KS5CONF/messagebus/tls/
Create the private key and a certificate signing request (CSR) for this specific KeySafe 5 agent.
sudo /opt/nfast/keysafe5/bin/ks5agenttls --keypath=$KS5CONF/messagebus/tls/tls.key --keygen
sudo /opt/nfast/keysafe5/bin/ks5agenttls --keypath=$KS5CONF/messagebus/tls/tls.key --csrgen
The CSR should be provided to a KeySafe 5 administrator who, in a secure location/environment, creates a RabbitMQ service client TLS certificate using the CA trusted by the RabbitMQ server.
For this installation we copy the CSR to the demo machine, into the keysafe5-install directory, then sign it using OpenSSL.
openssl ca -config ~/keysafe5-install/internalCA/internalCA.conf \
-in ks5agent_demohost.csr \
-out ks5agent_demohost.crt -batch
RabbitMQ uses the first DNS name provided in the subjectAltName field of the certificate. This is usually the basename of the certificate file, but you may retrieve it using the following command:
export x509user=$(openssl x509 -in ks5agent_demohost.crt -noout -ext subjectAltName | tail -n1 | cut -f2 -d:)
echo $x509user
Using the username printed in the output of the previous command, configure the RabbitMQ server to allow access for this X.509 user in the appropriate virtual host.
export RUN_RABBIT="kubectl -n rabbitns exec rabbit-chart-rabbitmq-0 -c rabbitmq -- "
${RUN_RABBIT} rabbitmqctl add_user $x509user "ephemeralpw"
${RUN_RABBIT} rabbitmqctl set_permissions -p $RABBIT_VHOST $x509user ".*" ".*" ".*"
${RUN_RABBIT} rabbitmqctl clear_password $x509user
Transfer the resulting certificate ks5agent_demohost.crt to the nShield Agent machine at /opt/nfast/keysafe5/conf/messagebus/tls/tls.crt.
On the nShield Agent machine, if the hardserver is already running, use the KeySafe 5 install script so that the hardserver is not restarted when the KeySafe 5 agent is installed.
sudo /opt/nfast/keysafe5/sbin/install
Otherwise, use the nShield install script, which will start both the nShield Security World software and the KeySafe 5 agent.
sudo /opt/nfast/sbin/install
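Once the agent is running, it should register with the deployment. From the demo machine, you can reuse the earlier curl check to confirm the host appears:
curl -X GET --cacert ca.crt https://${INGRESS_IP}/mgmt/v1/hosts | jq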
Uninstall
KeySafe 5 services
helm --namespace nshieldkeysafe5 uninstall keysafe5-istio
helm --namespace nshieldkeysafe5 uninstall keysafe5-backend
helm --namespace nshieldkeysafe5 uninstall keysafe5-ui
helm --namespace rabbitns uninstall rabbit-chart
helm --namespace mongons uninstall mongo-chart
KeySafe 5 Agent
To uninstall the KeySafe 5 agent, run the KeySafe 5 uninstaller, then remove the files manually.
sudo /opt/nfast/keysafe5/sbin/install -u
rm -f /opt/nfast/lib/versions/keysafe5-agent-atv.txt
rm -f /opt/nfast/sbin/keysafe5-agent
rm -f /opt/nfast/scripts/install.d/12keysafe5-agent
rm -f /opt/nfast/keysafe5/sbin/install
rm -f /opt/nfast/keysafe5/bin/ks5agenttls
rm -f /opt/nfast/keysafe5/conf/config.yaml.example
The configuration for the KeySafe 5 agent is stored in the conf directory, which can also be deleted.
rm -rf /opt/nfast/keysafe5/conf