Passing Secrets in Kubernetes: Techniques for Accessing Secrets

Jeff Vincent
  
October 17, 2022

In this post, we discuss two options to consume secrets: directly from the secrets manager (via code in the app), or as a configuration provided by the infrastructure (via configuration file/environment variables).

Our applications run in various environments (local, on-demand, staging, production, etc.), and we are used to decoupling their configuration from the code itself. Some parts of that configuration are considered secrets - values we want to keep private, because they typically grant access to data and to other components of our system. As we add more parts to our environment, we face a new challenge: how do we distribute, manage, and rotate secrets to avoid unintended exposure?

In this series, we will discuss two techniques to consume secrets in our applications and suggest practical integrations with popular secrets managers such as HashiCorp Vault, AWS Secrets Manager, AWS SSM Parameter Store, and GCP Secret Manager.

A note about secret distribution and management: in this series of articles we will not discuss secrets management and distribution techniques. Examples may refer to specific secrets managers, but you should read the documentation of your chosen solution to ensure you are following the vendor’s best practices when applying it.

What is a Secret anyway?

secret: something kept from the knowledge of others or shared only confidentially with a few (Merriam-Webster)

In an engineering context, we see secrets as values that must be kept confidential and are used to authenticate against other systems. Values such as passwords, API keys, and certificates are usually considered secrets.

We expect secrets to be set during an integration (e.g., adding a connection to a database or consuming information from a 3rd party API) and rotated frequently to avoid unintended long exposure. These tasks are usually handled by the integrator - the administrator or engineer responsible for the integration - but the secrets themselves are consumed by our application regularly (on every startup or request).

Consuming secrets

Applications have two options to consume secrets:

  1. Directly from the secrets manager (via code in the app)
  2. As a configuration provided by the infrastructure (via configuration file/environment variables)

Each technique has its pros and cons, so we will describe and compare them below.

Applications pull secrets

Secrets managers have APIs and usually well-documented SDKs. As developers, we are used to consuming APIs, and it makes sense to initialize our application by pulling the latest secrets directly from the secrets manager.

Workflow: Secrets Manager

As an example, this snippet uses boto3 to retrieve a secret from AWS SSM:

import boto3

# Create an SSM client and fetch the decrypted database password
ssm_client = boto3.client('ssm')
db_password_param = ssm_client.get_parameter(Name='/Prod/Db/Password', WithDecryption=True)
db_password = db_password_param['Parameter']['Value']

Although the above snippet is quite simple, it lacks essential elements such as error handling and configurable secret names (Prod vs. Dev …).
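
For illustration, a slightly more robust version might read the environment name from an environment variable and handle a missing parameter explicitly. This is a minimal sketch, not the post’s reference implementation; the APP_ENV variable and the /Dev/Db/Password path layout are assumptions:

import os

import boto3
from botocore.exceptions import ClientError

# APP_ENV and the /<Env>/Db/Password path layout are assumed for this example
env = os.environ.get('APP_ENV', 'Dev')
parameter_name = f'/{env}/Db/Password'

ssm_client = boto3.client('ssm')

try:
    db_password_param = ssm_client.get_parameter(Name=parameter_name, WithDecryption=True)
    db_password = db_password_param['Parameter']['Value']
except ClientError as error:
    # Fail fast with a clear message if the parameter does not exist
    if error.response['Error']['Code'] == 'ParameterNotFound':
        raise RuntimeError(f'Missing SSM parameter: {parameter_name}') from error
    raise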

When should I use this option?

  • You don’t control who runs your application, and you want to ensure only the application can see the secret.
  • You have an advanced use case that requires you to fetch secrets from different dynamic paths.
  • You are ok with writing implementation-specific code, as your choice of a secrets manager will not be changing anytime soon.

When should I NOT use this option?

  • You don’t want an additional dependency in the code - pulling secrets directly introduces a new chunk of code that increases complexity, requires proper error handling, and increases the bundle size.
  • Permissions - you can’t ensure the app itself will have the right permissions to access the secrets manager (e.g., cloud credentials when using AWS/GCP secrets managers).
  • Local development issues - running the application on your local machine will require the proper credentials to access the secrets manager.

Infrastructure provides secrets

Going back to the Config section of The Twelve-Factor App, we can treat secrets as config values and let the infrastructure handle their source. Whether via a configuration file or environment variables, that config can be supplied to the application separately from the code.

Workflow: Secrets Manager

In Kubernetes, there’s a built-in object named Secret, which can be exposed to a Pod as files in a mounted volume or as environment variables.

The following snippet uses a Secret to configure a MySQL database’s root password and share it with our “application”:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: root-password
          image: mysql:8.0
          name: mysql
          ports:
            - containerPort: 3306
              protocol: TCP
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials
type: Opaque
stringData:
  root-password: mypassword
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  restartPolicy: OnFailure
  containers:
    - name: alpine
      image: alpine:3
      command: [/bin/sh, -c]
      args:
        - printenv | grep MYSQL_
      env:
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-credentials
              key: root-password
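
On the application side, consuming the secret is then just a matter of reading an environment variable - no secrets-manager SDK involved. A minimal sketch in Python, assuming the MYSQL_PASSWORD variable injected by the manifest above:

import os

# MYSQL_PASSWORD is injected by Kubernetes from the mysql-credentials Secret
db_password = os.environ.get('MYSQL_PASSWORD')
if db_password is None:
    raise RuntimeError('MYSQL_PASSWORD is not set - check the Secret and the Pod spec')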

When should I use this option?

  • You don’t want to couple your code with the secrets manager’s API or SDK.
  • You run your application in different environments - passing different values is easy through an environment variable.
  • You manage the infrastructure separately from your application and don’t want the application to depend on infrastructure changes, such as the choice of secrets manager or the secret’s path within that secrets manager.

When should I NOT use this option?

  • In Kubernetes, Secrets are stored unencrypted in etcd by default (see the Kubernetes documentation on information security for Secrets). If you don’t define good RBAC around them, other users of your cluster might read them without your knowledge.
  • Secrets distribution - with the previous option, every user worked against the same secrets manager and could access its values. This option requires you to pass the values around yourself (especially when running locally).

Considerations for either approach

Regardless of which option you choose, there are still a few important things to keep in mind:

  • A secret is a concept. In our application, secrets are assigned to variables and might appear in log messages or be transmitted over unencrypted connections. It’s your job to make sure they are kept safe (see the redaction sketch after this list).
  • We are not trying to hide the secret within our application’s memory space. If someone gains access to the process’s memory or can run code inside our application, the secret will probably be visible to them (e.g., the Heartbleed bug).
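
As one example of keeping secrets out of logs, here is a minimal, hypothetical sketch of a logging filter that redacts known secret values before they are emitted. The RedactSecretsFilter name and the surrounding setup are assumptions for illustration, not part of any particular library:

import logging
import os

# In practice db_password comes from the environment or the secrets manager, as shown earlier
db_password = os.environ.get('MYSQL_PASSWORD', 'mypassword')

class RedactSecretsFilter(logging.Filter):
    """Replace known secret values with '***' before a log record is emitted."""

    def __init__(self, secrets):
        super().__init__()
        self._secrets = [s for s in secrets if s]

    def filter(self, record):
        message = record.getMessage()
        for secret in self._secrets:
            message = message.replace(secret, '***')
        record.msg = message
        record.args = ()
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('app')
logger.addFilter(RedactSecretsFilter([db_password]))

# The password value is masked in the emitted log line
logger.info('connecting to the database with password %s', db_password)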

When to use each option

Generally, I would suggest always passing the configuration to the application rather than pulling it from within the code. It enables greater flexibility when running the application in different environments (production, development, and local) and allows developers to focus on using the value rather than fetching it.

It also allows the infrastructure to evolve independently - changing the secret’s location in the secrets manager, managing permissions, and even replacing the secrets manager solution entirely - without requiring any application changes.

Join the discussion!

Have any questions or comments about this post? Maybe you have a similar project or an extension to this one that you'd like to showcase? Join the Velocity Discord server to ask away, or just stop by to talk K8s development with the community.
