Vault implementation for static secrets
Authored by Abhayjit Kumar
Let’s say you have multiple databases (SQL or NoSQL) and your application runs on Kubernetes. Your credentials should be stored and managed securely. In such situations, a Kubernetes Secret, which holds a base64-encoded value, is commonly used if you are not using an encrypted etcd storage solution. A Secret is not the ideal solution because it is not actually secured: the value is only encoded, not encrypted, and your credentials could be leaked if the etcd backend were compromised.
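To illustrate the point: a Kubernetes Secret value is only base64-encoded, so anyone who can read the Secret object (or the etcd backend) can recover the plaintext with a single command. The encoded value below is a made-up example:

```shell
# A Secret's data field only base64-encodes the value; it does not encrypt it.
# Decoding it back to plaintext takes one command:
echo 'c3VwZXJzZWNyZXQ=' | base64 --decode
# prints: supersecret
```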
Vault from HashiCorp is a secure and reliable solution to this problem.
The above is a sample use case for static secrets.
This blog, the first of two parts, explores managing static secrets with Vault.
Prerequisites are:
Tech Stack:
What is Vault?
Vault is an identity-based secrets and encryption management system from HashiCorp.
Architecture:
Installation
Install Vault using the HashiCorp Helm chart.
Edit the Helm chart values:
1. We are not going with the default gp2 type of PersistentVolume; we will use DynamoDB as the storage backend for HA. The chart's default values can be found in your user's Helm cache folder, .cache/helm/repository.
2. Disable the default backend storage (dataStorage).
3. Enable HA, for high availability.
4. Under “ha”, provide the DynamoDB config details. Also enable the UI so we can access the Vault server from the browser.
*The AWS credentials we pass must have DynamoDB full access.
Then install and verify:
helm install vault hashicorp/vault
kubectl get pods -n monitoring | grep vault
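For reference, the edits described in steps 1–4 might look like the sketch below in a values override file. The table name, region, and listener settings here are illustrative assumptions, not the exact values used in this setup; check the chart's documented values before applying:

```yaml
server:
  dataStorage:
    enabled: false              # step 2: disable the default PVC-backed storage
  ha:
    enabled: true               # step 3: enable HA mode
    config: |
      ui = true
      listener "tcp" {
        address         = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "dynamodb" {      # step 4: DynamoDB as the HA storage backend
        ha_enabled = "true"
        region     = "us-east-1"       # assumed region
        table      = "vault-backend"   # assumed table name
      }
ui:
  enabled: true                 # expose the Vault UI
```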
Initialize and unseal one Vault pod
Vault starts uninitialized and in a sealed state. Before initialization, the storage backend is not prepared to receive data.
kubectl exec vault-0 -- vault operator init \
-key-shares=1 \
-key-threshold=1 \
-format=json > cluster-keys.json
The operator init command generates a root key and splits it into key shares:
-key-shares=n sets the number of key shares the root key is split into.
-key-threshold=x sets the minimum number of key shares required to unseal the Vault server.
-format=json prints the output as JSON, so you can redirect it to a file at any path you like.
Here the output is redirected to a file named cluster-keys.json.
VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
kubectl exec vault-0 -- vault status
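Besides the unseal keys, cluster-keys.json also contains the initial root token, which you will need to log in before enabling secrets engines. The file below uses dummy values purely to illustrate the structure and the jq queries; a real init output should be stored far more carefully than a local file:

```shell
# Dummy stand-in for cluster-keys.json (illustrative values only).
cat > cluster-keys.json <<'EOF'
{
  "unseal_keys_b64": ["dummyUnsealKey"],
  "root_token": "hvs.dummyRootToken"
}
EOF

# The same jq queries work against the real file:
jq -r '.unseal_keys_b64[]' cluster-keys.json   # -> dummyUnsealKey
jq -r '.root_token' cluster-keys.json          # -> hvs.dummyRootToken
```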
Setting a static secret in Vault
kubectl exec -it vault-0 -- /bin/sh
vault login   # authenticate with the root token from cluster-keys.json
vault secrets enable -path=internal kv-v2
vault kv put internal/database/clops-app DATABASE_USERNAME="clops" DATABASE_PASSWORD="random_password" DATABASE_HOST="database_endpoint"
vault kv get internal/database/clops-app
Enabling Kubernetes authentication
Vault provides a Kubernetes authentication method, which uses a pod's service account token to authenticate clients. Enable it and point it at the Kubernetes API server:
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
For a client that wants to read the secret from Vault, the read capability should be enabled. We will create a policy in Vault granting read on the secret's path, i.e., internal/data/database/clops-app (the kv-v2 engine inserts a data/ segment into the API path).
We then need a role on the policy, bound to a service account and namespace.
Write out the policy named internal-app that enables the read capability for secrets at the path internal/data/database/clops-app.
Create a Kubernetes authentication role named internal-app
vault policy write internal-app - <<EOF
path "internal/data/database/clops-app" {
capabilities = ["read"]
}
EOF
vault write auth/kubernetes/role/internal-app \
bound_service_account_names=internal-app \
bound_service_account_namespaces=apis \
policies=internal-app \
ttl=24h
The role connects the Kubernetes service account internal-app and namespace apis with the Vault policy internal-app. The tokens returned after authentication are valid for 24 hours (we can decide the TTL based on our use case).
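To sanity-check the role, you can authenticate manually from a pod that runs under the bound service account; the Kubernetes auth method's login endpoint accepts the pod's projected service account token:

```shell
# Run from inside a pod in the apis namespace using the internal-app service account.
# A successful login returns a Vault token carrying the internal-app policy.
vault write auth/kubernetes/login \
    role=internal-app \
    jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
```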
Define a Kubernetes service account
The Vault Kubernetes authentication role defined a Kubernetes service account named internal-app.
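That service account does not exist yet, so create it in the apis namespace (assuming the namespace itself already exists):

```shell
# Create the service account the Vault role is bound to.
kubectl create sa internal-app -n apis
```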
Launch an application
Here is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clops-app
  namespace: apis
  labels:
    app: clops-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clops-app
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-status: 'update'
        vault.hashicorp.com/role: 'internal-app'
        vault.hashicorp.com/agent-inject-secret-database-clops-app.env: 'internal/data/database/clops-app'
        vault.hashicorp.com/agent-inject-template-database-clops-app.env: |
          {{- with secret "internal/data/database/clops-app" -}}
          export SPRING_DATA_MONGODB_URI="mongodb+srv://{{ .Data.data.DATABASE_USERNAME }}:{{ .Data.data.DATABASE_PASSWORD }}@{{ .Data.data.DATABASE_HOST }}"
          {{- end -}}
      labels:
        app: clops-app
    spec:
      serviceAccountName: internal-app
      containers:
        - name: clops-app
          image: registry.gitlab.com/cloudifyops/engineering/clops-app:3.1.21
          imagePullPolicy: Always
          command: ["/bin/sh"]
          args:
            ['-c', 'source /vault/secrets/database-clops-app.env && env > /vault/secrets/testenv && java "$MAX_HEAP_SIZE" -jar /opt/clops-app/clops-app.jar']
          ports:
            - name: clops-app
              containerPort: 7400
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: clops-app
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: clops-app
                  key: AWS_SECRET_ACCESS_KEY
            - name: GIT_ACCESSTOKEN
              valueFrom:
                secretKeyRef:
                  name: clops-app
                  key: GIT_ACCESSTOKEN
          volumeMounts:
            - name: logs
              mountPath: /var/log
        - name: promtail-container
          image: grafana/promtail
          args:
            - -config.file=/etc/promtail/promtail.yaml
          volumeMounts:
            - name: logs
              mountPath: /var/log
            - name: promtail-config
              mountPath: /etc/promtail
      volumes:
        - name: logs
          emptyDir: {}
        - name: promtail-config
          configMap:
            name: promtail-config-clops-app
      imagePullSecrets:
        - name: gitlab-auth
The vault.hashicorp.com annotations instruct the Vault Agent injector to write the secret into the pod under the /vault/secrets path.
We then source the secret file from there before starting the jar file.
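With the template above, the injected file /vault/secrets/database-clops-app.env contains a single export line. The values below are illustrative, matching the placeholder credentials stored in the KV engine earlier:

```shell
# Rendered by the Vault Agent from the agent-inject-template annotation
# (illustrative values); sourcing this file sets the connection string.
export SPRING_DATA_MONGODB_URI="mongodb+srv://clops:random_password@database_endpoint"
```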
The logs of the application confirm that the connection with MongoDB has been established and the JVM has started.
Bonus tip
If you want to store files that contain sensitive information in Vault, they can also be stored in the KV engine.
Let’s say the promtail config file contains the UID and password of the Loki database; we can store that file at the same Vault path by passing it with the @file syntax:
vault kv put internal/database/clops-app DATABASE_USERNAME="clops" DATABASE_PASSWORD="random_password" DATABASE_HOST="database_endpoint" PROMTAIL=@promtail.yaml
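To get the file back out later, the -field flag of vault kv get prints just that one value, which can be redirected into a file:

```shell
# Retrieve the stored promtail config from the KV engine (run inside vault-0).
vault kv get -field=PROMTAIL internal/database/clops-app > promtail.yaml
```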
Follow us on our LinkedIn Page. To know more about our services, visit our website.