Learn how Velocity's ephemeral, production-like development environments can help speed up your application development process, so you can deliver new features and fix bugs faster.
Traditionally, application development lifecycles rely on shared staging environments to integrate features and services, developed in isolation, into the context of the larger application. Teams that use this approach can then test a new feature in an approximation of their production environment. While common, this approach is suboptimal for several reasons, each of which creates a significant bottleneck in the development process.
When your development team is using shared staging environments, there will inevitably be times when multiple developers need to access the same environment at the same time. This means they will have to wait until a shared environment is available.
This time spent waiting is costly for a variety of reasons. Time spent waiting for an environment is time not spent actively developing new features or fixing bugs. Moreover, the longer developers have to wait to test their code in the context of the larger application, the more context switching they incur, and they lose even more time as they reengage with the logic of the specific problem or bug they were working on.
If multiple developers share a single, static staging environment, it is very likely that at some point one of them will change the environment's configuration and forget either to revert the change or to notify the larger team when they've finished. As more developers use the staging environment, unknown variations accumulate, which can make integration tests unreliable. So even if a new addition to the codebase passes tests run in staging, it may well not work in production.
Both of the above results of shared staging environments combine to dramatically lengthen feedback loops in the development process. That is, the longer it takes for a developer to get meaningful test results related to a given feature or bug fix, the longer it will take to iterate on the next solution when needed. Slow feedback loops are widely recognized as a major bottleneck in the development process.
Velocity solves the problems of shared staging environments by allowing developers to easily spin up any number of self-served, ephemeral dev environments that are based on your production infrastructure as code (IaC) definitions.
Developers no longer have to wait for unreliable, shared staging environments to integrate their code into the larger application. Instead, they can spin up a Velocity environment that mirrors your production environment whenever they need one. And your team can spin up any number of environments at the same time in your K8s cluster.
Velocity environments are defined by your production K8s manifests, augmented with Velocity-specific annotations and templates (similar to Helm templates) that don't interfere with the manifests' ability to run in any K8s environment.
This way, your DevOps team doesn’t have to learn a new configuration language, and they only have to maintain a single set of familiar K8s manifests for dev, staging and production environments.
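To make the idea concrete, here is a sketch of what such an augmented manifest could look like: a standard Kubernetes Deployment with extra annotations layered on top. The annotation keys, template placeholder, and all names below are invented for illustration and are not Velocity's actual schema. Because annotations are free-form key–value strings that vanilla Kubernetes ignores, a manifest like this remains deployable in any cluster.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
  annotations:
    # Hypothetical annotation keys, for illustration only.
    example.velocity.tech/depends-on: "postgres,users"
    # A Helm-like template can live in an annotation value without
    # affecting how a vanilla cluster schedules the Deployment.
    example.velocity.tech/dev-image: "{{ .DevRegistry }}/orders:dev"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.4.2
          ports:
            - containerPort: 8080
```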
Whether your application has been microservice-based from the start, or you’re currently modernizing your monolithic application for optimal cloud-native performance, Velocity provides the tools your entire dev team needs to efficiently debug and develop new and existing features throughout your entire application.
For example, if your frontend devs need to debug a new UI feature that requires multiple internal APIs and databases, they can spin up an environment that includes a running version of each of those elements in your Kubernetes cluster – whether it’s hosted in the cloud, on-prem or even in a local Minikube cluster. And Velocity even allows you to seed your databases with production-like data, so you can develop with confidence that data-specific edge cases are covered from the start.
These same features lend themselves equally well to backend development, because in modern, cloud-native, distributed architectures, backend services are developed and interlinked in much the same way that a traditional frontend relates to a traditional, monolithic backend.
Once a Velocity environment is spun up, all of your developers, whether they work on the UI, the backend, DataOps, or anything else, can develop and debug any running service within your app, and their code changes are reflected immediately in the remote cluster, either via Velocity's active forward and reverse proxies networked to locally running code or via hot reloading of the fully remote environment.
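The proxying idea can be illustrated with plain TCP plumbing: traffic addressed to a service in the cluster is piped to an instance running on the developer's machine, and responses flow back the same way. The sketch below is generic, illustrative code, not Velocity's actual implementation.

```python
# Minimal sketch of forwarding cluster traffic to a locally running
# service. Generic TCP plumbing for illustration only.
import queue
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src reaches EOF."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # signal EOF downstream
        except OSError:
            pass

def serve_proxy(target_host, target_port, port_q):
    """Listen on an ephemeral local port (published via port_q) and
    forward one client connection to the locally running target."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)
        port_q.put(srv.getsockname()[1])
        client, _ = srv.accept()
        with client, socket.create_connection((target_host, target_port)) as target:
            up = threading.Thread(target=pipe, args=(client, target))
            down = threading.Thread(target=pipe, args=(target, client))
            up.start(); down.start()
            up.join(); down.join()
```

A real implementation also has to handle many concurrent connections, TLS, and service discovery inside the cluster, but the core mechanic is the same bidirectional byte-copying shown here.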
Spinning up a Velocity environment is simple. Once an application is onboarded to Velocity, development environments can be spun up with a simple CLI command, from a JetBrains IDE, or even directly through your web browser.
For example, let’s walk through the process of creating a Velocity environment in the browser.
Once we’re logged in, we can simply click the “Create a New Environment” button, select the specific microservices we’d like to work on, and voila! We’ve got a working environment.
This is the really cool part of Velocity! The code runs in your Kubernetes cluster – which may be on premises, in the cloud (via AWS or GCP), or even in a local cluster on your laptop. All the services in the “Cloud” box in the image above are running on the cluster – except for the “Worker” service, which is being developed locally on a developer’s personal machine.
This means that developers can continue working the way they're used to, with the added benefit of developing in the context of the larger app, rather than having to wait for a shared staging environment to see if their new code works. In fact, they get to see if it works in real time – the ultimate in shortening the feedback loop!
And notice those arrows between microservices? They represent a dependency graph that denotes which services need to be running for another given service to work. This way, you can reduce your cloud footprint by only spinning up the services that you need at a given time, rather than the entire app every time.
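The selection logic behind such a graph amounts to a transitive walk of the dependencies: start from the service you want to work on and collect everything it needs, directly or indirectly. Here's a sketch of that idea (the service names and graph are invented for illustration, not taken from Velocity):

```python
# Sketch: given a dependency graph, compute only the services that must
# be spun up for a target service, leaving the rest of the app down.
def services_to_start(target, deps):
    """Return the set of services needed to run `target`,
    following the dependency graph transitively."""
    needed, stack = set(), [target]
    while stack:
        svc = stack.pop()
        if svc not in needed:
            needed.add(svc)
            stack.extend(deps.get(svc, []))
    return needed

# Example graph: each service maps to the services it depends on.
deps = {
    "frontend": ["api-gateway"],
    "api-gateway": ["orders", "users"],
    "orders": ["postgres"],
    "users": ["postgres"],
    "worker": ["queue"],
}

print(sorted(services_to_start("frontend", deps)))
# → ['api-gateway', 'frontend', 'orders', 'postgres', 'users']
```

Note that the unrelated "worker" branch (and its queue) is never started, which is exactly how a dependency-aware environment keeps its cloud footprint small.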
Try Velocity today in the free Velocity Labs.
Reach out to set up a live demo.