
Passing Secrets: Using Vault with Kubernetes

Jeff Vincent
November 3, 2022

Use Vault to store secrets as an alternative to less secure options like committing the secrets directly into version control or hard-coding a secret in a config file that is passed to an application at startup.


Vault is an open source secret management service developed by HashiCorp that allows developers to store secrets like database credentials in a secure, central location. It encrypts data at rest (i.e., when it is being stored in Vault) and in transit (i.e., when it is being delivered to your application). Vault ships with various plugins that allow it to integrate with external systems, like Kubernetes, which we’ll be using today.

Vault also encrypts data as a service and manages the associated encryption keys. This means that an application can send Vault data, via high-level APIs, that is generated during runtime – such as user-specific data. Vault then encrypts that data and returns it to the running application, which can then store the encrypted data in a local database.
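This encryption-as-a-service workflow is exposed by Vault's transit secrets engine. A minimal sketch of the flow (the key name and input data here are illustrative, not part of this tutorial's setup):

```shell
# Enable the transit secrets engine at its default path
vault secrets enable transit

# Create a named encryption key; Vault holds and manages the key material
vault write -f transit/keys/my-app-key

# Encrypt application data (the plaintext must be base64-encoded).
# Vault returns a ciphertext like "vault:v1:..." that the application
# can safely store in its own database.
vault write transit/encrypt/my-app-key \
  plaintext="$(echo -n 'user-specific data' | base64)"
```

The application never sees the encryption key; to read the data back, it sends the ciphertext to `transit/decrypt/my-app-key`.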

Why use Vault?

As a follow-up to our first secrets blog post, in this example we’ll use Vault to store secrets like database credentials. It’s an alternative to less secure options like committing secrets directly into version control, where they are stored as plain text, or hard-coding a secret in a config file that is passed to an application at startup – which only moves the problem, since the config file itself must then be stored securely somewhere.

Secret management services like Vault provide a secure location to store secrets in a single, centralized location, which solves the problem of “secret sprawl” – or the storing of secrets in multiple locations that each need to be updated any time a given credential changes. Moreover, Vault offers RBAC for both users and machine accounts, and it can rotate secrets dynamically as an added security measure.

Platform-specific solutions

AWS Secrets Manager, GCP Secret Manager and Azure Key Vault provide similar functionality to Vault, but they are each limited to their respective platforms, which means that if you decide to host your application with a different cloud provider, you’ll have to go through the additional step of updating your secret management as well.

Prepare our environment

Today, we’ll be using Vault to store encrypted database credentials. Specifically, we’ll be working with sample applications running in Kubernetes via Minikube for demonstration’s sake. In order to keep things focused and straightforward, we will be running Vault in the same K8s cluster and in development mode, which means that it doesn’t need to be “unsealed,” or provided a key to decrypt the information it stores, by an administrator. While this is handy for learning about Vault on Kubernetes, it is not secure for production deployments.

Once we have a K8s cluster running, we’ll walk through a manual and an automated process that you can use to securely transfer secrets from a running instance of Vault to an application deployed in K8s.


You’ll need to install Minikube and Helm.

First, we’ll start our Minikube cluster and install Vault with Helm, like so:

minikube start
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --set "server.dev.enabled=true" --set "ui.enabled=true"
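Before moving on, it’s worth confirming that the chart’s pods came up (the label selectors below assume the chart’s default labels):

```shell
# The chart deploys the Vault server (vault-0) and the Vault Agent Injector
kubectl get pods -l app.kubernetes.io/name=vault
kubectl get pods -l app.kubernetes.io/name=vault-agent-injector
```

Both pods should reach the Running state before you continue.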

Vault UI

Vault offers a user-friendly UI (which we enabled in the above command). If you prefer using the UI to the command line, you can access it with the following commands:

kubectl exec vault-0 -- vault login root

Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                root
token_accessor       gHmzcPZ33ZGZE4RQD96P61R0
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

kubectl port-forward vault-0 8200:8200

Then open http://localhost:8200 in a browser and sign in with the token root.
Vault sign-in screen

Vault from the command line

Next, we’ll need to exec into the Pod running Vault in order to configure it and add our secrets, like so:

kubectl exec -it vault-0 -- /bin/sh

With an active terminal session in the Vault pod, we’ll first need to enable Vault’s Key Value v2 Secrets Engine, which will allow us to create and store a simple key-value secret, like so:

vault secrets enable -path=internal kv-v2
vault kv put internal/database/config username="db-readonly-username" password="db-secret-password"
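To confirm the secret was written, you can read it back (still inside the Vault pod):

```shell
# Read back the kv-v2 secret we just stored
vault kv get internal/database/config
```

The output should show the username and password fields under the secret's data.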

Next, we’ll need to enable Vault’s built-in Kubernetes auth method, so that our application will be able to authenticate with Vault using a Kubernetes Service Account Token from our running cluster, like so:

vault auth enable kubernetes

Then we’ll need to add the K8s host IP to the Vault auth config:

vault write auth/kubernetes/config \
  kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"

Next, we’ll need to create a Vault policy called internal-app that will allow read access to our defined path within Vault:

vault policy write internal-app - <<EOF
path "internal/data/database/config" {
  capabilities = ["read"]
}
EOF

Then we’ll create a Vault role that ties together the Vault policy we just created, the K8s service account we’ll create in the next step, and the K8s namespace it will have access to within our Kubernetes cluster, like so:

vault write auth/kubernetes/role/internal-app \
  bound_service_account_names=internal-app \
  bound_service_account_namespaces=default \
  policies=internal-app \
  ttl=24h
After exiting our exec session, we’ll need to create the Service Account our Vault role refers to above in Minikube – also called internal-app:

kubectl create sa internal-app

Using a manual init container to load secrets

To demonstrate the manual approach, we’ll deploy our main application along with a K8s init container and the ConfigMap it needs to read secrets from Vault. As shown in the diagram below, we’ll be deploying a Pod with two containers defined in a YAML file – our main application and an init container that runs the Vault Agent.

The Vault Agent container’s configuration – including how it authenticates and which secrets it may access – is defined in the associated ConfigMap. Once authenticated, the agent retrieves the designated secrets from Vault and writes them to a shared volume within our Pod that our main application can read from.

init container map

Create the config map that the init container will read from:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-vault-agent-config
  namespace: default
data:
  vault-agent-config.hcl: |
    # Comment this out if running as sidecar instead of initContainer
    exit_after_auth = true

    pid_file = "/home/vault/pidfile"

    auto_auth {
        method "kubernetes" {
            mount_path = "auth/kubernetes"
            config = {
                role = "internal-app"
            }
        }

        sink "file" {
            config = {
                path = "/home/vault/.vault-token"
            }
        }
    }

    template {
      destination = "/etc/secrets/index.html"
      contents = <<EOT
      <p>DB Connection String:</p>
      {{- with secret "internal/data/database/config" -}}
      postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/wizard
      {{- end -}}
      EOT
    }

Define our example app running in a Pod along with the init container:

apiVersion: v1
kind: Pod
metadata:
  name: vault-agent-example
  namespace: default
spec:
  serviceAccountName: internal-app

  volumes:
    - configMap:
        items:
          - key: vault-agent-config.hcl
            path: vault-agent-config.hcl
        name: example-vault-agent-config
      name: config
    - emptyDir: {}
      name: shared-data

  initContainers:
    - args:
        - agent
        - -config=/etc/vault/vault-agent-config.hcl
        - -log-level=debug
      env:
        - name: VAULT_ADDR
          value: http://vault-internal:8200
      image: vault
      name: vault-agent
      volumeMounts:
        - mountPath: /etc/vault
          name: config
        - mountPath: /etc/secrets
          name: shared-data

  containers:
    - image: nginx
      name: nginx-container
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: shared-data

Confirm that the Vault secret is available within the running Pod:

kubectl exec vault-agent-example --container nginx-container -- cat /usr/share/nginx/html/index.html
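If everything worked, the command above should print the connection string that the Vault Agent template rendered. As a quick sanity check, the expected value can be simulated locally from the credentials we stored in Vault earlier (a sketch only – not part of the deployment):

```shell
# Simulate the Agent template's substitution using the credentials
# we wrote to internal/database/config above
username="db-readonly-username"
password="db-secret-password"
echo "postgresql://${username}:${password}@postgres:5432/wizard"
# → postgresql://db-readonly-username:db-secret-password@postgres:5432/wizard
```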

Using the Agent Injector to make our life easier

Next, we’ll look at an automated way to do essentially the same thing. This time, instead of manually defining a ConfigMap and an init container, we’ll have the Vault Agent Injector create those for us.

As shown below, when we deploy our app, the Vault Agent Injector (which we also deployed with Helm above) will augment our deployment with the additional K8s resources required for authenticating to Vault, getting the secret, and writing it to local storage.

The Vault Agent Injector will determine what secrets it needs to add, and where to write them, according to the Vault annotations included in the resource definition, as shown below.

Agent injector graphic

Create a deployment that includes Vault annotations:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orgchart
  labels:
    app: orgchart
spec:
  selector:
    matchLabels:
      app: orgchart
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'internal-app'
        vault.hashicorp.com/agent-inject-secret-database-config.txt: 'internal/data/database/config'
        vault.hashicorp.com/agent-inject-template-database-config.txt: |
          {{- with secret "internal/data/database/config" -}}
          postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/wizard
          {{- end -}}
      labels:
        app: orgchart
    spec:
      serviceAccountName: internal-app
      containers:
        - name: orgchart
          image: jweissig/app:0.0.1

Confirm that the Vault secrets are available in the running Pod:

kubectl exec \
  $(kubectl get pod -l app=orgchart -o jsonpath="{.items[0].metadata.name}") \
  --container orgchart -- cat /vault/secrets/database-config.txt


Vault can be used to securely inject secrets like database credentials into running Pods in Kubernetes so that your application can access them. Above, we looked at two ways to do this – manually and in an automated fashion. In both cases, an init container spins up a Vault Agent that authenticates with Vault, gets the secrets, and writes them to a local storage volume that your application can access during runtime.

Join the discussion!

Have any questions or comments about this post? Maybe you have a similar project or an extension to this one that you'd like to showcase? Join the Velocity Discord server to ask away, or just stop by to talk K8s development with the community.
