Three Best Practices for Configuring Environments On-Demand

Public cloud infrastructure brought with it the promise of on-demand infrastructure at the click of a button. Combined with CI/CD methodologies, now commonplace across R&D teams, that means developers ship code to production daily, even several times a day, and release frequency is only increasing.

To support shipping at this scale, developers need the right kind of environment setup. We researched environment setups across industries, and in this post we share what we learned, together with some of our own best practices that help configure environments for Velocity.

We found two setups that are particularly common.

1. The shared environment

In a shared environment, each developer works locally on their own machine to code a new feature, fix a bug, or test existing code. At some point it becomes impossible for a dev to keep working in a purely local environment, whether because of a lack of local compute resources or because of third-party dependencies. So devs offload some of their microservices into the shared environment and continue their work there.

(Diagram: a typical shared environment, with multiple developers offloading services into one common environment.)

Because each team or group shares a single environment, the total number of environments stays small. That is an advantage: maintenance is much easier, and the setup is far cheaper in terms of compute and cloud resources.

However, when multiple people use the same environment, there's a risk that one developer will overwrite another's work, undo a fix, or push new features on top of someone else's. Developers can end up working at cross-purposes and struggling to complete their own tasks.

2. The on-demand environment

With an on-demand environment, each developer can spin up an environment whenever they like. The developer works within their customized environment, tests their code, and then commits the changes and pushes them to version control. Once that's done, they can delete the environment, and the workflow starts over for the next task.

(Diagram: a typical on-demand environment setup, with each developer provisioning and tearing down their own isolated environment.)

The benefit of on-demand environments is that each developer gets their own production-like environment. They can work in isolation to code new features or fix bugs and then deploy them to the actual production space, but they can also collaborate on the same feature without getting in each other's way. For example, frontend and backend engineers can work on the same feature concurrently, shortening feature-to-market time.

The drawback is that this requires you to provision and maintain a large number of environments, which can be challenging in and of itself.

On-demand dev environment design considerations

If you’re ready to try it, here are three best practices to keep in mind. All of them are based on the Twelve-Factor App methodology for building software-as-a-service applications with a scalable application architecture.

1. Environment variables

Hard-coded values pose a problem when you’re provisioning on-demand environments. Consider an application with a URL hard-coded into the code before it’s used to make an HTTP request, followed by a hard-coded access key and secret key, which is a security problem in and of itself, and then an s3_bucket_name variable whose value is determined by the environment type.
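
A minimal sketch of what that kind of code might look like (all names and values here are illustrative, and we assume a recent Node.js where fetch is built in):

```javascript
// Anti-pattern: configuration baked into the application code.
const ORDERS_API_URL = "https://api.staging.example.com/orders"; // hard-coded URL
const ACCESS_KEY = "AKIA...";  // hard-coded credentials: a security
const SECRET_KEY = "wJalr..."; // problem in and of itself
const S3_BUCKET_NAME = "my-app-staging-uploads"; // depends on the environment type

async function listOrders() {
  // every HTTP request is tied to the hard-coded URL above
  const res = await fetch(ORDERS_API_URL);
  return res.json();
}
```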

There are a number of problems with this setup:

  • It can’t scale for on-demand environments, because there will be too many values to control across too many environments
  • There’s a risk that you might accidentally commit a URL you defined for your local machine and push it to production, which would cause production downtime
  • The setup violates twelve-factor app best practices, which require complete isolation between application configuration and application code

To decide whether a specific value should be defined as a constant inside your code or externalized through an environment variable, ask yourself: what changes do I need to make to deploy my code in a different environment? If the answer is many, you should rethink how to decouple service configuration from the code.

If you want to unlock the real potential of on-demand environments, you’ll need to externalize environment-specific configuration through environment variables. The benefits of using environment variables include:

  • Being able to manage your application's configuration externally, such as loading a different configuration per environment.
  • Increasing security through secret rotation and by keeping sensitive information out of version control.
  • Fewer production errors, like a customer receiving the wrong email or notification, because you won’t be using production configuration in the testing environment.
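
Using the illustrative names from the earlier sketch, the same code with configuration externalized might look like this:

```javascript
// Configuration now comes from the environment, not from the code:
const ORDERS_API_URL = process.env.ORDERS_API_URL;
const S3_BUCKET_NAME = process.env.S3_BUCKET_NAME;
// Credentials are supplied by the runtime (e.g. an IAM role), never hard-coded.

async function listOrders() {
  const res = await fetch(ORDERS_API_URL);
  return res.json();
}
```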

2. Decouple configuration retrieval from the underlying application

There are a number of ways you can store configuration, including Kubernetes ConfigMaps or Secrets, Vault by HashiCorp, or whatever solution your cloud provider offers. The problems begin when you need to fetch configuration from a third party into the application.

Here’s a common scenario: a server-side application fetches config from a third party as part of service startup, within the same code base as the application, and passes the values on to the application. In this example, the server uses AWS SSM as the configuration store.
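
A sketch of that kind of startup code (the parameter name and bootstrap function are illustrative):

```javascript
// Startup code tightly coupled to AWS SSM: the service can only ever
// load its configuration from this one vendor.
const { SSMClient, GetParameterCommand } = require("@aws-sdk/client-ssm");

async function loadConfig() {
  const ssm = new SSMClient({ region: process.env.AWS_REGION });
  const { Parameter } = await ssm.send(
    new GetParameterCommand({ Name: "/my-app/api/key", WithDecryption: true })
  );
  return { apiKey: Parameter.Value };
}

function startServer(config) {
  // placeholder for the app's real bootstrap logic
  console.log("starting with an API key of length", config.apiKey.length);
}

// The application cannot start without talking to this specific third party.
loadConfig().then(startServer);
```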

The problem is that this approach tightly couples the application to the underlying infrastructure: you’ll always have to use that specific third party to load configuration, so switching configuration vendors means changing application code. There are also scenarios where you might want to avoid a third party altogether, like loading environment variables from a .env file when working on your local machine, or using a different third party in production. When you decouple config retrieval from the application, you’re free to choose whichever loading mechanism you prefer.

What’s more, this config retrieval adds code that isn’t part of your application logic, but that you’ll have to write and maintain. It could contain bugs or unexpected behavior, adding to your workload.

One common solution is to use init containers, which are containers that run before the application containers start. Init containers can be used for a whole range of tasks, like registering the pod with a remote server, cloning a Git repository into a specific volume, or, in our case, fetching configuration from a third party and passing the values on to the application container.
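
As a rough sketch, a pod that uses an init container to fetch configuration into a volume shared with the application container might look like this (the image names and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: config
      emptyDir: {}          # shared scratch volume for the fetched config
  initContainers:
    - name: fetch-config
      image: example/config-fetcher:latest   # hypothetical image that pulls config from your store
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: example/my-app:latest
      volumeMounts:
        - name: config
          mountPath: /config   # the app reads its configuration from here
```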

kube-secrets-init is a cool open source project for cloud secret injection that supports AWS and GCP managed secrets services. Here’s an example of how it works.

Imagine you have an environment variable called MY_API_KEY whose sensitive value you want to inject into the application. With kube-secrets-init integrated with AWS SSM Parameter Store, you can simply pass MY_API_KEY=arn:aws:ssm:$AWS_REGION:$AWS_ACCOUNT_ID:parameter/api/key to kube-secrets-init, and at runtime it will resolve that reference into the actual secret value.
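
Based on the project’s documented convention, the pod spec might look roughly like this (region, account ID, and names are placeholders); the kube-secrets-init webhook mutates the pod so the reference is resolved before the app starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app   # needs IAM permissions to read the SSM parameter
  containers:
    - name: app
      image: example/my-app:latest
      env:
        - name: MY_API_KEY
          value: arn:aws:ssm:us-east-1:123456789012:parameter/api/key
```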

The coolest part of the whole thing is that the project takes a Kubernetes-native approach, using an init container and an admission webhook, which is an HTTP callback that receives and acts upon admission requests. There’s a similar solution for Azure too.

3. Inject runtime environment variables

create-react-app is a simple environment for creating a single-page React application. Officially, environment variables are embedded at build time, so you have to do a new build any time you want to change them. That simply isn’t scalable for on-demand environments, where values change frequently and a new build for every change isn’t viable.

Instead, you want to run the React application as a Docker container that’s built only once. In this example, there’s a React app with an environment variable called API_URL, whose value you can control from outside the application, which makes it very flexible.

We’ve seen many companies do this by bundling a JavaScript config file into index.html as a script tag.
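
One way this can look (the _env_ name and placeholder convention are illustrative): a config.js file ships with placeholder values and is loaded from index.html with a script tag.

```javascript
// public/config.js -- loaded from index.html with:
//   <script src="%PUBLIC_URL%/config.js"></script>
// Shipped with placeholder values that are replaced when the container starts.
window._env_ = {
  API_URL: "__API_URL__",
};
```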

Before starting the container, you can read the environment variables set on it and use a Bash script to inject their values into the configuration file. The values are assigned to the global window object of the application, which makes them available across the application, replacing the environment-variable placeholders with the injected values.
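
A minimal entrypoint sketch, assuming the container serves the build output with nginx and uses the placeholder convention above:

```bash
#!/bin/sh
# entrypoint.sh -- replace the placeholders in config.js with the values of
# the container's actual environment variables, then start the web server.
sed -i "s|__API_URL__|${API_URL}|g" /usr/share/nginx/html/config.js
exec nginx -g "daemon off;"
```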

Here’s how an injected runtime environment variable is actually used.
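
Assuming the window._env_ convention from the sketches above, application code reads the value at runtime rather than at build time:

```javascript
// Read the runtime-injected value instead of a build-time constant:
const apiUrl = window._env_.API_URL;

fetch(`${apiUrl}/users`)
  .then((res) => res.json())
  .then(console.log);
```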

However, take care never to inject secrets this way, because all the injected values are visible to anyone who loads the page. There can be other downsides to this method too, so it’s important to evaluate it against your particular tech stack before trying it in production.

On-demand environments can be a reality

On-demand environments are a blessing for developers, helping them shift bug detection left and work together more productively. These three best practices make them a reality: use environment variables, decouple config retrieval from the underlying application, and inject environment variables at runtime.

Alternatively, you can use Velocity to create on-demand production-like environments in just a few clicks. Want to learn how? See a demo of Velocity in action.