kubectl port-forward vs Velocity

Velocity is working to simplify keeping complex development environments aligned with their associated production environments, which run on cloud-hosted orchestration engines such as Kubernetes (K8s).

The vision was to run and develop a portion of an application locally – such as a front-end or a given microservice – while offloading the remainder of the application’s services to the cloud. This enables lightweight, continuously up-to-date local dev environments that let teams collaborate in new ways and simply work faster and more efficiently as they develop new application features. The catch, of course, was that the remote services needed to be reachable locally for the locally running service to, well, run.

In our quest to solve this problem, we identified two relatively straightforward ways to let local services talk to remotely running services. The first was to port-forward from the remote cluster to specific ports on your local machine; the second was to create an ingress for each remotely running service, exposing it to the public internet so that your locally running services could access it over a persistent connection.

Velocity’s Path to a Solution

Path 1 – Use Port-Forwarding to Access Remote Services

When we researched the possibilities for local development, we first tried kubectl port-forward, as it’s an out-of-the-box solution, but we ran into some difficulties. Specifically, there were latency issues that proved to be blockers for end-users with time-sensitive requests.
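
For a single remote dependency, that looks roughly like the following (the service name, namespace, and ports here are illustrative):

    # Forward local port 8080 to port 80 of the remote "backend" service.
    kubectl port-forward --namespace my-namespace svc/backend 8080:80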

Additionally, we had to make sure that the local proxies were always up, and, for some reason, the local tunnels failed quite often. When this happened, there was no automated way to re-establish the connection.
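
The kubectl port-forward process simply exits when its tunnel drops, so keeping a dependency reachable means wrapping it in something like the following watchdog (a sketch, with illustrative names):

    #!/usr/bin/env bash
    # Restart the tunnel whenever it dies; kubectl port-forward does not
    # reconnect on its own.
    while true; do
      kubectl port-forward svc/backend 8080:80
      echo "port-forward exited; reconnecting in 2s..." >&2
      sleep 2
    done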

There was a similar problem whenever a pod crashed. As developers worked locally, the new behavior of their locally running services would sometimes cause the remote dependencies to crash. When a pod crashed, the port-forwarding to it failed as well, which meant we also had to monitor pod crashes – for all dependencies.

And finally, an end-user wasn’t able to develop a service that was triggered by another service running remotely – i.e., serviceA (being developed locally) is triggered by serviceB (running remotely in the cluster). Port-forwarding only tunnels traffic initiated from the local machine into the cluster, not the other way around.

Path 2 – Create an Ingress for Remote Services

The second option, creating an ingress for each of the remotely running services, requires setting up Traefik, getting a certificate to encrypt the communication (i.e. using HTTPS rather than HTTP), and buying a domain.
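
As a rough sketch, each remote service needs something like the following (the hostname, names, and ports are illustrative; TLS configuration is omitted):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: backend
    spec:
      ingressClassName: traefik
      rules:
        - host: backend.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: backend
                    port:
                      number: 80
    EOF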

Ultimately, we chose this second path because of both the reduced latency and the superior user experience. The result was veloctl, Velocity’s CLI. veloctl automatically creates forward proxies to all of the services running in the cloud that the local service – referred to in Velocity speak as the “development candidate” – relies upon.

It also generates all the configuration needed for the service, either as an .env file or by injecting all the environment variables directly into the service.
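
For instance, a generated .env might look something like this (the variable names and ports are hypothetical):

    # Dependencies resolve to local forward-proxy ports instead of
    # in-cluster addresses.
    BACKEND_URL=http://localhost:3002
    REDIS_HOST=localhost
    REDIS_PORT=6379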

veloctl also creates all the reverse proxies needed by services that depend on the debugged service (utilizing K8s Services for this), creating a “flat” network model in which the local service communicates exactly the way it did in the cluster.

Benchmarking

In order to benchmark the given solutions in an effective way, I set up a deployment in K8s running ComplexHTTPServer (Python) on port 21458. The ingress pointed to this deployment through a service with the mapping “port: 3002, targetPort: 21458”.
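
In Kubernetes terms, that service mapping looks like this (the selector label is illustrative; the ports match those above):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: benchmark-http
    spec:
      selector:
        app: benchmark-http
      ports:
        - port: 3002        # the port the ingress targets
          targetPort: 21458 # the port ComplexHTTPServer listens on
    EOF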

The baseline for the test is just an ingress pointing directly at that service – the most network-optimized path. Local port 3002 represents developing a dependent service locally using Velocity, and local port 21458 represents developing a dependent service locally using kubectl port-forward.

To run the test, I pointed wrk at a 1-kilobyte file through each of the different solutions.
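
The runs looked roughly like this (the thread and connection counts are assumptions, and the URL is illustrative; the one-minute duration matches the results below):

    wrk -t2 -c10 -d60s https://benchmark.example.com/file-1kb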

You can see that with the most optimized solution, we were able to download the file 3406 times in one minute with an average latency of 185.71 ms. The Velocity solution, on the other hand, downloaded the file 1469 times with an average latency of 421.04 ms, and the kubectl port-forward solution was able to download the file only 5 times with an average latency of 751.36 ms.


[Benchmark latency graphs: Ingress (optimized solution) · kubectl port-forward · Velocity]

In the kubectl port-forward graph, you can also see that the latency curve reaches its peak at a much lower percentile and lingers at the higher latencies.

What’s Possible with Velocity That’s Not with Native Kubernetes

Upsides of our solution

The following image represents my environment:

For demonstration’s sake, we’ll dive into website-container, which is an application running in my environment serving my frontend. Its ingress is https://website-container-benchmarking.virondev.com/, which looks like so:

I want to run my service (website-container) locally and debug its connectivity to the backend, which is inside the same namespace, inside the same cluster.

If I run the previous test on this endpoint, I’ll get the following:

And here’s what happens if I run the website locally and access it through the same ingress – possible ONLY because Velocity sets up a reverse proxy tunneling cluster traffic to the local machine, as well as a forward proxy tunneling local traffic to the remote cluster. Take a look at the network graph below to visualize it.

The whole ingress → service → local-machine path during local development performs a lot like the path with only forward proxies, but it’s very responsive – allowing effective collaboration between a UX designer and an engineer, or the running of UI tests against a real, disposable internet endpoint. Fundamentally, you can run a service “locally” and run the same UI tests on that URL from whatever system you’re using.

Network Graph

Network Flow Explanation

The big picture is this:

  • Ingress pointing to a service
  • Service pointing to a pod
  • Deployment creates the pod the service is pointing to
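
In manifest terms, the link between these three is just matching labels – a minimal sketch (the names, image, and label are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: website-container
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: website-container
      template:
        metadata:
          labels:
            app: website-container   # the Service selects pods by this label
        spec:
          containers:
            - name: website
              image: example/website:latest
              ports:
                - containerPort: 8080
    EOF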

To read further for a deeper understanding, have a look at: https://kubernetes.io/docs/concepts/cluster-administration/networking/

Before developing the service locally

  • Ingress pointing to website-container
  • website-container service is pointing to website-container pod
  • website-container deployment creates a single replica of the website-container pod

During local development

  • Ingress pointing to website-container
  • website-container service is pointing to traffic-proxy pod
    This is done by changing the Service’s pod selectors, keeping the entire network flow inside Kubernetes coherent instead of replicating the whole network and tunneling it locally (see the sketch after this list).
  • traffic-proxy deployment creates a single replica of traffic-proxy pod.
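
A sketch of that selector swap (the label values and ports are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: website-container
    spec:
      selector:
        app: traffic-proxy   # previously: app: website-container
      ports:
        - port: 80
          targetPort: 8080
    EOF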

Note: when developing locally, veloctl, Velocity’s CLI, initializes forward and reverse proxies to ensure the network graph is the same as it is in the cluster.

  • Reverse proxies: each of the service’s ports → the corresponding local port (the container port).
  • Forward proxies: a local port → each service port of the service’s dependencies.

Network graph - kubectl port-forward vs veloctl