Kubernetes Deployment Procedures
Prerequisites:
- Dedicated Linux server with a working HashiCorp Vault installation for building the container images. This is the same server used above, after completing all the steps previously outlined.
- Container platform. Any Kubernetes platform is supported. The example process in this guide uses an OpenShift Kubernetes installation on a single machine.
Follow these steps to create a HashiCorp Vault image that supports the HSM, generate the containers, and test the Kubernetes integration with the HSM.
Install nShield nCOP
- Create the installation directory:
# sudo mkdir -p /opt/ncop
- Extract the nShield Container Option Pack tarball:
# sudo tar -xvf /root/Downloads/ncop-1.1.1.tar -C /opt/ncop
extend-nshield-application
make-nshield-application
make-nshield-hwsp
make-nshield-hwsp-config
examples/
examples/javaenquiry/
examples/javaenquiry/Dockerfile
examples/javaenquiry/README.md
examples/javaenquiry/cmd
examples/nfkminfo/
examples/nfkminfo/Dockerfile
examples/nfkminfo/README.md
examples/nfkmverify/
examples/nfkmverify/Dockerfile
examples/nfkmverify/README.md
examples/nfweb/
examples/nfweb/Dockerfile
examples/nfweb/README.md
examples/nfweb/nfweb.py
README.md
images/architecture.png
images/java-architecture.png
license.rtf
rnotes.pdf
version.json
Install Docker
- Add the Docker CE repository:
# yum-config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
Updating Subscription Management repositories.
This system is registered to Red Hat Subscription Management, but is not receiving updates.
You can use subscription-manager to assign subscriptions.
Adding repo from: https://download.docker.com/linux/rhel/docker-ce.repo
- Verify the repo contains the stable version of Docker:
# yum repolist
Updating Subscription Management repositories.
This system is registered to Red Hat Subscription Management, but is not receiving updates.
You can use subscription-manager to assign subscriptions.
repo id                            repo name
docker-ce-stable                   Docker CE Stable - x86_64
rhel-8-for-x86_64-appstream-rpms   Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
rhel-8-for-x86_64-baseos-rpms      Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
- Install Docker:
# yum install docker-ce
Updating Subscription Management repositories.
This system is registered to Red Hat Subscription Management, but is not receiving updates.
You can use subscription-manager to assign subscriptions.
Docker CE Stable - x86_64  48 kB/s | 9.5 kB  00:00
See the Troubleshooting section for issues with the installation.
- Start Docker manually, enable the docker service on startup, and verify it is running:
# systemctl start docker
# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-01-06 10:02:30 EST; 37s ago
     Docs: https://docs.docker.com
 Main PID: 30667 (dockerd)
    Tasks: 13
   Memory: 172.9M
   CGroup: /system.slice/docker.service
           └─30667 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Jan 06 10:02:29 Red_Hat_8.3_HashiCorp_Containers dockerd[30667]: time="2022-01-06T10:02:29.01936988>
Jan 06 10:02:29 Red_Hat_8.3_HashiCorp_Containers dockerd[30667]: time="2022-01-06T10:02:29.41964695>
...
# docker --version
Docker version 20.10.3, build 48d30b5
Build the nShield hardserver container
- Stop the hardserver:
# /opt/nfast/sbin/init.d-ncipher stop
-- Running shutdown script 90ncsnmpd
-- Running shutdown script 60raserv
-- Running shutdown script 50hardserver
-- Running shutdown script 46exard
-- Running shutdown script 45drivers
- Mount the Security World ISO file:
# mount -t iso9660 -o loop /root/Downloads/SecWorld_Lin64-12.80.4.iso /mnt/iso
mount: /mnt/iso: WARNING: device write-protected, mounted read-only.
- Change directory:
# cd /opt/ncop
- Build the nShield hardserver container:
# ./make-nshield-hwsp --from registry.access.redhat.com/ubi8/ubi --tag nshield-hwsp-pkcs11-redhat /mnt/iso
Detecting nShield software version
Version is 12.80.4
Unpacking hwsp...
Extracting tools from ctls...
Removing redundant files...
Creating files...
Building image...
Sending build context to Docker daemon  234.6MB
Step 1/24 : FROM registry.access.redhat.com/ubi8/ubi
latest: Pulling from ubi8/ubi
26f1167feaf7: Pull complete
adffa6963146: Pull complete
Digest: sha256:228824aa581f3b31bf79411f8448b798291c667a37155bdea61cfa128b2833f2
Status: Downloaded newer image for registry.access.redhat.com/ubi8/ubi:latest
 ---> fca12da1dc30
Step 2/24 : RUN if [ -x /usr/bin/microdnf ]; then microdnf update && microdnf install shadow-utils libcap findutils && microdnf clean all; fi
 ---> Running in 8caca115e9f9
Removing intermediate container 8caca115e9f9
 ---> 61a560b15baf
...
Successfully built 21fc2835b68e
Successfully tagged nshield-hwsp-pkcs11-redhat:latest
- Unmount the Security World ISO:
# umount /mnt/iso
- Verify the built container:
# docker images
REPOSITORY                   TAG      IMAGE ID       CREATED          SIZE
nshield-hwsp-pkcs11-redhat   latest   21fc2835b68e   30 minutes ago   484MB
...
Build the HashiCorp Vault container
- Create the working directory for Docker:
# mkdir ~/working
- Create a /root/working/Dockerfile. Notice the nShield files that are copied.
FROM registry.access.redhat.com/ubi8

# Working directory.
WORKDIR /root/working

# nShield files.
RUN mkdir -p /opt/nfast
COPY cknfastrc /opt/nfast/

# Create Vault user and group.
RUN groupadd --system nfast && \
    groupadd --system vault && \
    useradd --system --shell /sbin/nologin --gid vault vault && \
    usermod --append --groups nfast vault

# Download the Vault package from HashiCorp at https://releases.hashicorp.com/vault/.
# Unzip the binary file and extract it to the working directory.
RUN yum install -y wget && \
    yum install -y unzip && \
    wget https://releases.hashicorp.com/vault/1.9.2+ent.hsm/vault_1.9.2+ent.hsm_linux_amd64.zip && \
    unzip vault_1.9.2+ent.hsm_linux_amd64.zip -d /usr/local/bin && \
    yum remove -y wget unzip && \
    rm -r *.zip

# Set Vault permissions.
RUN chmod 755 /usr/local/bin/vault && \
    setcap cap_ipc_lock=+ep /usr/local/bin/vault

# Create the Vault data directories.
RUN mkdir --parents /opt/vault/data && \
    mkdir --parents /opt/vault/logs && \
    chmod --recursive 750 /opt/vault && \
    chown --recursive vault:vault /opt/vault

# Create a vault file in sysconfig.
RUN touch /etc/sysconfig/vault

# Expose the data and logs directories as volumes.
VOLUME /opt/vault/data
VOLUME /opt/vault/logs

# 8200/tcp is the primary interface that applications use to interact with Vault.
EXPOSE 8200

# Enable Vault.
RUN export VAULT_ADDR=http://127.0.0.1:8200

# Starting Vault as follows fails with error "System has not been booted with systemd as init system (PID 1)".
# ENTRYPOINT systemctl start vault.service
# Instead use the parameter ExecStart at /etc/systemd/system/vault.service.
ENTRYPOINT /usr/local/bin/vault server -config=/etc/vault/config.hcl
- Build the container:
# cd /root/working
# docker build . --no-cache -t hashicorp-vault-enterprise-hsm
Sending build context to Docker daemon  121.3kB
Step 1/12 : FROM registry.access.redhat.com/ubi8
 ---> fca12da1dc30
Step 2/12 : WORKDIR /root/working
 ---> Running in 247a31ee2196
Removing intermediate container 247a31ee2196
 ---> ef51adc3a657
Step 3/12 : RUN groupadd --system nfast && groupadd --system vault && useradd --system --shell /sbin/nologin --gid vault vault && usermod --append --groups nfast vault
...
Successfully built 58a728b21930
Successfully tagged hashicorp-vault-enterprise-hsm:latest
- Verify the built container:
# docker images
REPOSITORY                       TAG      IMAGE ID       CREATED          SIZE
hashicorp-vault-enterprise-hsm   latest   58a728b21930   2 minutes ago    635MB
nshield-hwsp-pkcs11-redhat       latest   21fc2835b68e   2 weeks ago      484MB
...
Build the nShield Vault application container
- Stop the hardserver:
# /opt/nfast/sbin/init.d-ncipher stop
-- Running shutdown script 90ncsnmpd
-- Running shutdown script 60raserv
-- Running shutdown script 50hardserver
-- Running shutdown script 46exard
-- Running shutdown script 45drivers
- Mount the Security World ISO file:
# mount -t iso9660 -o loop /root/Downloads/SecWorld_Lin64-12.80.4.iso /mnt/iso
mount: /mnt/iso: WARNING: device write-protected, mounted read-only.
- Build the container:
# cd /opt/ncop
# ./extend-nshield-application --from hashicorp-vault-enterprise-hsm --pkcs11 --tag nshield-vault-app-pkcs11-redhat /mnt/iso
Detecting nShield software version
Version is 12.80.4
NOTICE: --pkcs11 included by default with 12.60 ISO. Flag ignored
Unpacking /mnt/iso/linux/amd64/hwsp.tar.gz ...
Unpacking /mnt/iso/linux/amd64/ctls.tar.gz ...
Adding files...
Building image...
Sending build context to Docker daemon  702.7MB
Step 1/4 : FROM hashicorp-vault-enterprise-hsm
 ---> 58a728b21930
Step 2/4 : COPY opt /opt
 ---> 47a71ac5f1b0
Step 3/4 : RUN mkdir -p /opt/nfast/kmdata /opt/nfast/sockets && mkdir -m 1755 /opt/nfast/kmdata/tmp
 ---> Running in be7ad7b82bb5
Removing intermediate container be7ad7b82bb5
 ---> 147827a9fc16
Step 4/4 : VOLUME [ "/opt/nfast/kmdata", "/opt/nfast/sockets" ]
 ---> Running in 4b1d7f697f36
Removing intermediate container 4b1d7f697f36
 ---> 363024ec103d
Successfully built 363024ec103d
Successfully tagged nshield-vault-app-pkcs11-redhat:latest
- Unmount the Security World ISO:
# umount /mnt/iso
- Verify the built container:
# docker images
REPOSITORY                        TAG      IMAGE ID       CREATED          SIZE
nshield-vault-app-pkcs11-redhat   latest   363024ec103d   2 minutes ago    1.33GB
hashicorp-vault-enterprise-hsm    latest   58a728b21930   15 minutes ago   635MB
nshield-hwsp-pkcs11-redhat        latest   21fc2835b68e   2 weeks ago      484MB
...
Run the containers locally
This test is performed on the Linux server with the HashiCorp Vault installation used in all the steps outlined above.
- Create a /root/working/nfast/kmdata-config directory for the nShield configuration file and cardlist file (if using OCS protection), then populate the directory:
# mkdir -p /root/working/nfast/kmdata-config
# cp /opt/nfast/kmdata/config/* /root/working/nfast/kmdata-config/.
- Create a /root/working/nfast/kmdata-local directory for the nShield world and module files, and the keys created for the Vault, then populate the directory:
# mkdir /root/working/nfast/kmdata-local
# cp /opt/nfast/kmdata/local/* /root/working/nfast/kmdata-local/.
- Create a /root/working/vault-config directory for the license, config, and other Vault files, then populate the directory:
# mkdir /root/working/vault-config
# cp /etc/profile.d/vault.sh /root/working/vault-config/vault.sh
# cp /etc/vault/license.hclic /root/working/vault-config/license.hclic
# cp /etc/vault/config.hcl /root/working/vault-config/config.hcl
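The guide references /etc/vault/config.hcl throughout but does not show its contents. The sketch below is a hypothetical example of what such a file might look like for a Vault Enterprise HSM build using the nShield PKCS#11 library; the slot, pin, and key labels are illustrative and must match your own Security World and protection method.

```hcl
# Hypothetical /etc/vault/config.hcl -- values are illustrative only.
storage "file" {
  path = "/opt/vault/data"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"
}

# Auto-unseal via the nShield PKCS#11 library; adjust slot, pin,
# and key labels to your environment.
seal "pkcs11" {
  lib            = "/opt/nfast/toolkits/pkcs11/libcknfast.so"
  slot           = "1"
  pin            = "example-pin"
  key_label      = "vault-seal-key"
  hmac_key_label = "vault-hmac-key"
  generate_key   = "true"
}

license_path  = "/etc/vault/license.hclic"
disable_mlock = "true"
```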
- Stop the nShield hardserver and the Vault service; otherwise the test will pass regardless of whether the container is running:
# /opt/nfast/sbin/init.d-ncipher stop
# systemctl stop vault.service
- Open a new command window and run the nshield-hwsp-pkcs11-redhat image:
# cd /root/working
# docker volume create socket1
socket1
# docker run --rm -it -v socket1:/opt/nfast/sockets -v $PWD/nfast/kmdata-config:/opt/nfast/kmdata/config nshield-hwsp-pkcs11-redhat
- Open a new command window and run the nshield-vault-app-pkcs11-redhat image. Notice all the parameters passed:
# cd /root/working
# docker run --rm -it --privileged -v socket1:/opt/nfast/sockets -v $PWD/vault-config/license.hclic:/etc/vault/license.hclic -v $PWD/vault-config/config.hcl:/etc/vault/config.hcl -v $PWD/nfast/kmdata-local:/opt/nfast/kmdata/local --env VAULT_ADDR=http://127.0.0.1:8200 -p 8200:8200 nshield-vault-app-pkcs11-redhat
- Verify the containers are running:
# docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                    NAMES
b466aeabe786   nshield-vault-app-pkcs11-redhat   "/bin/sh -c '/usr/lo…"   56 seconds ago   Up 56 seconds   0.0.0.0:8200->8200/tcp   nice_fermi
b1229bd35efb   nshield-hwsp-pkcs11-redhat        "/opt/nfast/sbin/nsh…"   22 hours ago     Up 22 hours
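The two docker run commands above can also be captured in a Compose file, which keeps the volume mounts and environment in one place. This is a sketch, not part of the original procedure; the paths assume the /root/working layout created earlier.

```yaml
# Hypothetical docker-compose.yml equivalent of the two "docker run"
# commands above; run from /root/working.
services:
  hwsp:
    image: nshield-hwsp-pkcs11-redhat
    volumes:
      - socket1:/opt/nfast/sockets
      - ./nfast/kmdata-config:/opt/nfast/kmdata/config
  vault:
    image: nshield-vault-app-pkcs11-redhat
    privileged: true
    environment:
      VAULT_ADDR: http://127.0.0.1:8200
    ports:
      - "8200:8200"
    volumes:
      - socket1:/opt/nfast/sockets
      - ./vault-config/license.hclic:/etc/vault/license.hclic
      - ./vault-config/config.hcl:/etc/vault/config.hcl
      - ./nfast/kmdata-local:/opt/nfast/kmdata/local
    depends_on:
      - hwsp
volumes:
  socket1:
```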
Test containers running locally
- Check the Vault status:
# docker exec -it b466aeabe786 vault status
Key                      Value
---                      -----
Recovery Seal Type       pkcs11
Initialized              false
Sealed                   true
Total Recovery Shares    0
Threshold                0
Unseal Progress          0/0
Unseal Nonce             n/a
Version                  1.9.2+ent.hsm
Storage Type             file
HA Enabled               false
- Initialize the Vault:
# docker exec -it b466aeabe786 vault operator init -recovery-shares=1 -recovery-threshold=1
Recovery Key 1: 0ZJVfhRWIzFVU8aMqEe8I05yGVuV4SsgsdZu63fM0ts=
Initial Root Token: s.Yy4pFr03KdfuV9ZKxUB9AZDv
Success! Vault is initialized
Recovery key initialized with 1 key shares and a key threshold of 1. Please
securely distribute the key shares printed above.
- Log in to the Vault:
# docker exec -it b466aeabe786 vault login s.Yy4pFr03KdfuV9ZKxUB9AZDv
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                  Value
---                  -----
token                s.Yy4pFr03KdfuV9ZKxUB9AZDv
token_accessor       fu745qP8XmKeuN1OeMEGgXW8
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
- Examine Vault secrets:
# docker exec -it b466aeabe786 vault list secrets
No value found at secrets
Push the container images to your registry
- Log in to your remote registry:
# docker swarm init
# docker login -u <your_user_id> https://registry.eselab.net
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
- Tag and push the images:
# docker tag nshield-hwsp-pkcs11-redhat:latest registry.eselab.net/hashicorp-vault-nshield-hwsp:latest
# docker push registry.eselab.net/hashicorp-vault-nshield-hwsp:latest
The push refers to repository [registry.eselab.net/hashicorp-vault-nshield-hwsp]
7124856b02e0: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
d04672a1fe0b: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
2336267e5c4c: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
25965ddea3b0: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
a6a76cb66da3: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
3c94a72beb86: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
945bd9297bdc: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
1d3ad63a37b6: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
1d212b6142a1: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
e4612491fc7d: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
5202f468f3c9: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
2a8d12e1343e: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
07728d479b42: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
7175fef03a4b: Mounted from hashicorp-nshield-hwsp-pkcs11-redhat
3ba8c926eef9: Mounted from hashicorp-nshield-app-pkcs11-redhat
352ba846236b: Mounted from hashicorp-nshield-app-pkcs11-redhat
latest: digest: sha256:21f5cd82310012e6a16c6989425569ba2c2707485342e470210f6e0958c0d39d size: 3650
# docker tag nshield-vault-app-pkcs11-redhat:latest registry.eselab.net/hashicorp-vault-nshield-app:latest
# docker push registry.eselab.net/hashicorp-vault-nshield-app:latest
The push refers to repository [registry.eselab.net/hashicorp-vault-nshield-app]
11436f7c7418: Pushed
7516f7bce24a: Pushed
efc7c4a8cc00: Pushed
20d2b70c4819: Pushed
f7064b7a4790: Pushed
424bf5904064: Pushed
fdb2287e8ad0: Pushed
144ac96160c8: Pushed
3ba8c926eef9: Layer already exists
352ba846236b: Layer already exists
latest: digest: sha256:05bdba96ddc9409a9695ad0e864ede1f3c7b780d4916a7c20fb869064e399a0a size: 2410
- Log out from the registry:
# docker logout https://registry.eselab.net
Removing login credentials for registry.eselab.net
- Notice the config.json file that was created during the login process:
# cat /root/.docker/config.json
{
  "auths": {
    "registry.eselab.net": {
      "auth": "..."
    }
  }
}
Create the project in the container platform
Red Hat OpenShift is the container platform used in this integration.
- Log in to the OpenShift container platform server. If logging in as root, change to another user.
- Add the nShield HSM as a client on the OpenShift server. Refer to the section Install the HSM above.
- Create the pull-secret file with the pull secret copied from https://cloud.redhat.com/openshift/create/local:
# cat /home/testuser/Documents/pull-secret
{"auths":{"cloud.openshift.com":{"auth":"b3Blbn...
- Copy the config.json file created above on the dedicated Linux server to this OpenShift environment:
$ ls -al /home/testuser/Documents/config.json
-rw-rw-r--. 1 testuser testuser 147 Jan 21 10:49 /home/testuser/Documents/config.json
- Start the Red Hat CodeReady Containers environment:
$ crc start
- Log in to the OpenShift environment:
$ eval $(crc oc-env)
$ oc login -u kubeadmin https://api.crc.testing:6443
Logged into "https://api.crc.testing:6443" as "kubeadmin" using existing credentials.
You have access to 64 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
- Create the project:
$ oc create -f /home/testuser/Documents/project.yaml
project.project.openshift.io/hashicorpvault created
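The contents of project.yaml are not shown in this guide. A minimal manifest consistent with the output above might look like the following sketch (display name and description are illustrative):

```yaml
# Hypothetical /home/testuser/Documents/project.yaml
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: hashicorpvault
  annotations:
    openshift.io/display-name: "HashiCorp Vault with nShield HSM"
```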
- Change from the current project to the newly created project:
$ oc project hashicorpvault
Now using project "hashicorpvault" on server "https://api.crc.testing:6443".
$ oc get namespaces
NAME             STATUS   AGE
default          Active   78d
hashicorpvault   Active   7m55s
...
- Create and retrieve the secret:
$ oc create secret generic hashicorpvault --from-file=.dockerconfigjson=/home/testuser/Documents/config.json --type=kubernetes.io/dockerconfigjson
secret/hashicorpvault created
$ oc get secret
NAME             TYPE                             DATA   AGE
...
hashicorpvault   kubernetes.io/dockerconfigjson   1      0s
- Create the config map with the nShield Connect details:
$ oc create -f /home/testuser/Documents/cm.yaml
configmap/config created
- Verify the nShield Connect configuration:
$ oc get configmap
NAME                       DATA   AGE
config                     1      0s
kube-root-ca.crt           1      1s
openshift-service-ca.crt   1      1s
$ oc describe configmap/config
Name:         config
Namespace:    hashicorpvault
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
syntax-version=1

[nethsm_imports]
local_module=0
remote_ip=10.194.148.33
remote_port=9004
remote_esn=201E-03E0-D947
keyhash=84800d1bfff6515ed5806fe443bbaca812d73733
privileged=0

BinaryData
====

Events:  <none>
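For reference, a cm.yaml consistent with the describe output above could be sketched as follows; substitute the IP address, ESN, and key hash of your own nShield Connect:

```yaml
# Hypothetical /home/testuser/Documents/cm.yaml, reconstructed from the
# configmap data shown above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: hashicorpvault
data:
  config: |
    syntax-version=1

    [nethsm_imports]
    local_module=0
    remote_ip=10.194.148.33
    remote_port=9004
    remote_esn=201E-03E0-D947
    keyhash=84800d1bfff6515ed5806fe443bbaca812d73733
    privileged=0
```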
Create the persistent volumes
The following persistent volumes will be created. The first is for the communication between the Vault application and the nShield hardserver. The others are for configuration, keys, and license files.
- /opt/nfast/sockets
- /opt/nfast/kmdata
- /etc/vault
- /opt/vault/data
To create the persistent volumes:
- Create the /opt/nfast/sockets persistent volume for the nShield hardserver communication with the Vault application. See the Sample YAML files appendix for the YAML files.
$ oc create -f /home/testuser/Documents/pv_nfast_sockets_definition.yaml
persistentvolume/nfast-sockets created
- Create the /opt/nfast/kmdata persistent volume for the nShield configuration files:
$ oc create -f /home/testuser/Documents/pv_nfast_kmdata_definition.yaml
persistentvolume/nfast-kmdata created
- Create the /etc/vault persistent volume for the Vault configuration files:
$ oc create -f /home/testuser/Documents/pv_vault_config_definition.yaml
persistentvolume/vault-config created
- Create the /opt/vault/data persistent volume for the Vault storage backend:
$ oc create -f /home/testuser/Documents/pv_vault_data_definition.yaml
persistentvolume/vault-data created
- Verify the persistent volumes created:
$ oc get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfast-kmdata    1G         RWO            Retain           Available           manual                  0s
nfast-sockets   1G         RWO            Retain           Available           manual                  0s
vault-config    10M        RWO            Retain           Available           manual                  0s
vault-data      10M        RWO            Retain           Available           manual                  0s
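The PV definition files themselves appear only in the Sample YAML files appendix. As an illustration, a definition consistent with the nfast-sockets row above might look like this; the hostPath backing is an assumption suited to a single-node CRC cluster, and the other three volumes would follow the same pattern with their own names, capacities, and paths:

```yaml
# Hypothetical pv_nfast_sockets_definition.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfast-sockets
spec:
  capacity:
    storage: 1G
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /opt/nfast/sockets
```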
Claim the persistent volumes
- Create the /opt/nfast/sockets volume claim:
$ oc create -f /home/testuser/Documents/pv_nfast_sockets_claim.yaml
persistentvolumeclaim/nfast-sockets created
- Create the /opt/nfast/kmdata volume claim:
$ oc create -f /home/testuser/Documents/pv_nfast_kmdata_claim.yaml
persistentvolumeclaim/nfast-kmdata created
- Create the /etc/vault volume claim:
$ oc create -f /home/testuser/Documents/pv_vault_config_claim.yaml
persistentvolumeclaim/vault-config created
- Create the /opt/vault/data volume claim:
$ oc create -f /home/testuser/Documents/pv_vault_data_claim.yaml
persistentvolumeclaim/vault-data created
- Verify the persistent volumes claimed:
$ oc get pvc
NAME            STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfast-kmdata    Bound    nfast-kmdata    1G         RWO            manual         1s
nfast-sockets   Bound    nfast-sockets   1G         RWO            manual         1s
vault-config    Bound    vault-config    10M        RWO            manual         1s
vault-data      Bound    vault-data      10M        RWO            manual         0s
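A matching claim, again using nfast-sockets as the example, could be sketched as below; the claim binds to its PV through the matching storageClassName and requested capacity:

```yaml
# Hypothetical pv_nfast_sockets_claim.yaml; the other three claims
# follow the same pattern.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfast-sockets
  namespace: hashicorpvault
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1G
```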
Copy the configuration files to the cluster persistent volumes
- Copy the required files under /root/working/nfast and /root/working/vault-config from the dedicated Linux server above to the OpenShift server, then perform a chmod 775 on all copied files:
$ ls -al /home/testuser/Documents/nfast/kmdata/local
total 96
...
-rwxrwxr-x. 1 testuser testuser  7180 Jan 18 21:27 key_pkcs11_ucb5df6e12703825562ce731e3286a4fb9f46e767a-ebc7da3d8e2f9aa86377dc4e5269157557cddd1c
-rwxrwxr-x. 1 testuser testuser  5000 Jan 27 13:48 module_201E-03E0-D947
-rwxrwxr-x. 1 testuser testuser  1428 Jan 18 21:27 softcard_b5df6e12703825562ce731e3286a4fb9f46e767a
-rwxrwxr-x. 1 testuser testuser 39968 Jan 18 21:27 world
$ ls -al /home/testuser/Documents/vaultconfig
total 4
...
drwxrwxr-x. 2 testuser testuser    6 Jan 21 12:53 config.hcl
-rw-rw-r--. 1 testuser testuser 1202 Jan 21 12:50 license.hclic
- Show the nodes:
$ oc get nodes --show-labels
NAME                 STATUS   ROLES           AGE   VERSION                LABELS
crc-ktfxm-master-0   Ready    master,worker   78d   v1.22.0-rc.0+a44d0f0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=crc-ktfxm-master-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
- Label the nodes:
$ oc label node crc-ktfxm-master-0 nodeName=master-0
node/crc-ktfxm-master-0 labeled
$ oc get nodes --show-labels
NAME                 STATUS   ROLES           AGE   VERSION                LABELS
crc-ktfxm-master-0   Ready    master,worker   78d   v1.22.0-rc.0+a44d0f0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=crc-ktfxm-master-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,nodeName=master-0
- Create a pod_dummy.yaml application container for the purpose of populating the persistent volumes:
$ oc create -f /home/testuser/Documents/pod_dummy.yaml
pod/ncop-test-dummy-svhnn created
- Verify the pods are running. This might take several minutes while the images download from the remote registry:
$ oc get pods
NAME                    READY   STATUS    RESTARTS   AGE
ncop-test-dummy-kw5mv   2/2     Running   0          3m
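The pod_dummy.yaml manifest is not reproduced in this guide. A sketch of such a throwaway pod is shown below; it only needs to mount the claims so that oc cp can populate them, then sleep. The single-container layout, image choice, and node selector are assumptions (the 2/2 READY output above suggests the actual manifest runs two containers):

```yaml
# Hypothetical pod_dummy.yaml -- a placeholder pod used only to
# populate the persistent volume claims via "oc cp".
apiVersion: v1
kind: Pod
metadata:
  generateName: ncop-test-dummy-
spec:
  nodeSelector:
    nodeName: master-0
  imagePullSecrets:
    - name: hashicorpvault
  containers:
    - name: dummy
      image: registry.eselab.net/hashicorp-vault-nshield-app:latest
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: nfast-kmdata
          mountPath: /opt/nfast/kmdata
        - name: vault-config
          mountPath: /etc/vault
  volumes:
    - name: nfast-kmdata
      persistentVolumeClaim:
        claimName: nfast-kmdata
    - name: vault-config
      persistentVolumeClaim:
        claimName: vault-config
```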
- Populate the persistent storage with configuration files:
$ oc cp /home/testuser/Documents/nfast/kmdata-config/config ncop-test-dummy-kw5mv:/opt/nfast/kmdata/config/config
$ oc cp /home/testuser/Documents/nfast/kmdata-config/cardlist ncop-test-dummy-kw5mv:/opt/nfast/kmdata/config/cardlist
$ oc cp /home/testuser/Documents/nfast/kmdata-local/world ncop-test-dummy-kw5mv:/opt/nfast/kmdata/local/world
$ oc cp /home/testuser/Documents/nfast/kmdata-local/module_201E-03E0-D947 ncop-test-dummy-kw5mv:/opt/nfast/kmdata/local/module_201E-03E0-D947
$ oc cp /home/testuser/Documents/vaultconfig/config.hcl ncop-test-dummy-kw5mv:/etc/vault/config.hcl
$ oc cp /home/testuser/Documents/vaultconfig/license.hclic ncop-test-dummy-kw5mv:/etc/vault/license.hclic
- Populate the persistent storage with keys:
$ oc cp /home/testuser/Documents/nfast/kmdata-local/card_124902ced9e45399cfa993eabd5d5d9e5c7b5a7f_1 ncop-test-dummy-kw5mv:/opt/nfast/kmdata/local/card_124902ced9e45399cfa993eabd5d5d9e5c7b5a7f_1
...
- Spot-check the copied files:
$ oc debug pod/ncop-test-dummy-kw5mv
Starting pod/ncop-test-dummy-kw5mv-debug, command was: sh -c sleep 3600
Pod IP: 10.217.1.13
If you don't see a command prompt, try pressing enter.
sh-4.4# ls -al /opt/nfast/kmdata/local
total 92
drwxr-xr-x. 2 root root  4096 Jan 26 02:16 .
drwxr-xr-x. 4 root root    33 Jan 25 04:37 ..
-rwxrwxr-x. 1 1004 1004   904 Jan 26 02:07 card_124902ced9e45399cfa993eabd5d5d9e5c7b5a7f_1
-rwxrwxr-x. 1 1004 1004   112 Jan 26 02:07 cards_124902ced9e45399cfa993eabd5d5d9e5c7b5a7f
-rwxrwxr-x. 1 1004 1004  7176 Jan 26 02:13 key_pkcs11_uc124902ced9e45399cfa993eabd5d5d9e5c7b5a7f-0ee55ef5b9b6c3d42cee681e3b8c056f2df00a8f
-rwxrwxr-x. 1 1004 1004  7216 Jan 26 02:13 key_pkcs11_uc124902ced9e45399cfa993eabd5d5d9e5c7b5a7f-bfa1988aed796d05cbf852abccf5380ff90f4f91
-rwxrwxr-x. 1 1004 1004  7216 Jan 26 02:14 key_pkcs11_ucb5df6e12703825562ce731e3286a4fb9f46e767a-376268c6c89c1657fb22ca1f08fe4f20b58b1c07
-rwxrwxr-x. 1 1004 1004  7180 Jan 26 02:14 key_pkcs11_ucb5df6e12703825562ce731e3286a4fb9f46e767a-ebc7da3d8e2f9aa86377dc4e5269157557cddd1c
-rwxrwxr-x. 1 1004 1004  3488 Jan 26 01:27 module_201E-03E0-D947
-rwxrwxr-x. 1 1004 1004  1428 Jan 26 02:16 softcard_b5df6e12703825562ce731e3286a4fb9f46e767a
-rwxrwxr-x. 1 1004 1004 39968 Jan 26 01:27 world
sh-4.4# exit
exit
Removing debug pod ...
Deploy the HashiCorp Vault nShield application
- Create the pod_hashicorpvault_nshield.yaml pod running the HashiCorp Vault and nShield application:
$ oc create -f pod_hashicorpvault_nshield.yaml
pod/hashicorpvault-status-fglgv created
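The pod_hashicorpvault_nshield.yaml manifest is not shown in this section. The sketch below illustrates the expected shape: the hardserver and the Vault application run as two containers in one pod, sharing the sockets volume, with the config map providing the nShield Connect configuration. Container names, mounts, and the exact layout are assumptions; consult the Sample YAML files appendix for the actual manifest.

```yaml
# Hypothetical pod_hashicorpvault_nshield.yaml -- two containers
# sharing the nfast-sockets volume.
apiVersion: v1
kind: Pod
metadata:
  generateName: hashicorpvault-nshield-
spec:
  imagePullSecrets:
    - name: hashicorpvault
  containers:
    - name: nshield-hwsp
      image: registry.eselab.net/hashicorp-vault-nshield-hwsp:latest
      volumeMounts:
        - name: nfast-sockets
          mountPath: /opt/nfast/sockets
        - name: config
          mountPath: /opt/nfast/kmdata/config
    - name: hashicorp-app
      image: registry.eselab.net/hashicorp-vault-nshield-app:latest
      env:
        - name: VAULT_ADDR
          value: http://127.0.0.1:8200
      ports:
        - containerPort: 8200
      volumeMounts:
        - name: nfast-sockets
          mountPath: /opt/nfast/sockets
        - name: nfast-kmdata
          mountPath: /opt/nfast/kmdata
        - name: vault-config
          mountPath: /etc/vault
        - name: vault-data
          mountPath: /opt/vault/data
  volumes:
    - name: nfast-sockets
      persistentVolumeClaim:
        claimName: nfast-sockets
    - name: nfast-kmdata
      persistentVolumeClaim:
        claimName: nfast-kmdata
    - name: vault-config
      persistentVolumeClaim:
        claimName: vault-config
    - name: vault-data
      persistentVolumeClaim:
        claimName: vault-data
    - name: config
      configMap:
        name: config
```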
- Verify the Vault is available:
$ oc logs -f pod/hashicorpvault-status-fglgv hashicorp-app
Vault v1.9.2+ent.hsm (f7be55269a69543aedae108588e63688e6490b44) (cgo)
- Start the Vault server:
$ oc debug pod/hashicorpvault-nshield-fglgv -c hashicorp-app
Starting pod/hashicorpvault-nshield-fglgv-debug ...
Pod IP: 10.217.0.79
If you don't see a command prompt, try pressing enter.
sh-4.4# /usr/local/bin/vault server -config=/etc/vault/config.hcl
- Verify the Vault status:
  - Open a second window and log in to the OpenShift environment.
  - Set the project, and execute the following command:
$ oc exec hashicorpvault-nshield-fglgv -c hashicorp-app -- /usr/local/bin/vault status
Key                      Value
---                      -----
Recovery Seal Type       pkcs11
Initialized              false
Sealed                   true
Total Recovery Shares    0
Threshold                0
Unseal Progress          0/0
Unseal Nonce             n/a
Version                  1.9.2+ent.hsm
Storage Type             file
HA Enabled               false
command terminated with exit code 2
Note that vault status exits with code 2 when the Vault is sealed, so this exit code is expected here.