
Getting Started with K8s Init Containers

Jeff Vincent
November 10, 2022

Kubernetes init containers provide a means of configuring the environment an application runs in without changing the application’s source code. In this post, we discuss how init containers work, when you would use one, and walk through an example in a sample app.


Init containers in Kubernetes can be used for a variety of purposes, but they all center around the idea of getting things ready for the main process – i.e. the application container – to run as desired. That is, they initialize the environment for the main process in some way.

This generally involves accessing some data outside of the Pod and moving it to a location within the Pod that the application container can read from. However, this is only one of many possible use cases. For example, an init container can also serve as a sidecar that runs alongside the application container.
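As a sketch of the sidecar pattern mentioned above: in Kubernetes 1.28 and later, a “native sidecar” is declared as an init container with restartPolicy: Always, which starts before the application container and then keeps running alongside it. The image, names, and log path below are illustrative, not from the original post.

```yaml
# Sketch of a native sidecar (Kubernetes 1.28+): an init container with
# restartPolicy: Always keeps running alongside the application container.
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-example
spec:
  initContainers:
    - name: log-shipper          # starts before the app, then runs alongside it
      image: busybox:1.36
      restartPolicy: Always      # this is what makes it a sidecar
      command: ["sh", "-c", "touch /var/log/app/app.log && tail -F /var/log/app/app.log"]
      volumeMounts:
        - mountPath: /var/log/app
          name: app-logs
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /var/log/app
          name: app-logs
  volumes:
    - emptyDir: {}
      name: app-logs
```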

Init containers are – generally speaking – very similar to application containers; however, there are a few important differences. Specifically, the processes they run must complete successfully before any other processes in a given Pod can run. Because of this, init containers also provide a means of ensuring that all required preconditions within the Pod are met before any application-specific processes start.
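A common precondition check, for instance, is an init container that blocks until a dependent service is resolvable in cluster DNS. The Service name and namespace below are hypothetical:

```yaml
# Hypothetical example: block Pod startup until the "database" Service resolves.
apiVersion: v1
kind: Pod
metadata:
  name: wait-for-db-example
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      # Retry the DNS lookup every 2s; the app container cannot start
      # until this command exits successfully.
      command: ["sh", "-c", "until nslookup database.default.svc.cluster.local; do sleep 2; done"]
  containers:
    - name: app
      image: nginx
```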

Why use an init container?

Init containers allow you to separate configuration-specific tasks from your application’s source code. For example, suppose your application needs a set of database credentials that are stored in some external resource. You could write code within the application that queries this external resource directly, but then you would need to change the source code itself any time something about that resource changes – say, the IP address it is hosted at.

In the same vein, init containers also allow you to separate configuration-specific processes from the application container itself. You could, for example, have the application container query that same external resource for your database credentials, but then the credentials required for querying the resource would remain available during the application’s runtime, which constitutes an unnecessary security risk.

Because init containers must run to completion before any other processes can start within a Pod, using an init container to query an external resource will allow you to keep the application container as small as possible, which will ultimately make the application more efficient and secure during runtime.
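As a sketch of that pattern (the endpoint and file paths are hypothetical): the init container fetches the credentials into a shared emptyDir volume before the application starts, and the application container mounts that volume read-only, so any fetch-time token never exists inside the app container.

```yaml
# Hypothetical sketch: fetch credentials once at startup, before the app runs.
apiVersion: v1
kind: Pod
metadata:
  name: fetch-credentials-example
spec:
  volumes:
    - emptyDir: {}
      name: creds
  initContainers:
    - name: fetch-creds
      image: curlimages/curl:7.85.0
      # The URL is illustrative; a real setup would authenticate to a secrets store.
      args: ["https://secrets.example.com/db-creds", "-o", "/creds/db.json"]
      volumeMounts:
        - mountPath: /creds
          name: creds
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /creds
          name: creds
          readOnly: true   # the app can read, but not modify, the credentials
```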

How do init containers work?

Init containers run after all networking and storage have been provisioned for the Pod, which means they have access to local volumes at runtime. Once these peripheral elements of the Pod are created, init containers run to completion, one at a time, in the order they are defined in the K8s resource definition. If an init container doesn’t complete successfully, Kubernetes retries it according to the Pod’s restart policy; with restartPolicy: Never, a failed init container marks the whole Pod as failed. Either way, the Pod will not be in a ready state until every init container has completed successfully.
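To observe this failure behavior, a deliberately failing init container can be combined with restartPolicy: Never; rather than retrying, the Pod ends up in an Init:Error state and the application container never starts. The names below are illustrative:

```yaml
# Illustrative: with restartPolicy: Never, a failed init container fails the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: failing-init-example
spec:
  restartPolicy: Never
  initContainers:
    - name: always-fails
      image: busybox:1.36
      command: ["sh", "-c", "exit 1"]   # simulate a failed precondition
  containers:
    - name: app
      image: nginx                      # never starts, because init failed
```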

Init container example

Below, we’ll look at a very simple example of an init container that is accessing some dynamically generated data outside of the Pod and storing it locally for the application container to access at runtime.

Specifically, our init container will send a curl request to a free weather-forecasting API to get the current weather at the Velocity offices. This is a deliberately simple example of retrieving data from an external resource that doesn’t require any form of authentication, so we can focus on the high-level process being carried out.

The result of the curl request is then written to a local emptyDir volume called shared-data that is mounted into both the init container and the application container, so both can read from and write to it.

Finally, the NGINX application container spins up and serves the downloaded file via HTTP on port 80. Once this container has started, we can use Kubernetes port-forwarding to view the downloaded data in the browser.

[Diagram: init container example]

The three high-level steps in the above diagram and the following K8s resource definition are as follows:

  1. The local volume shared-data is created.
  2. The init container sends a curl request to an external resource and writes the response to the local volume.
  3. The application container starts and reads the data it needs from that same local volume.

Run it locally

To run this process locally in minikube, copy the following into a file called example.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: init-container-example
  namespace: default
spec:
  volumes:
    - emptyDir: {}
      name: shared-data
  initContainers:
    - name: download-config
      image: curlimages/curl:7.85.0
      args: ["https://api.open-meteo.com/v1/forecast?latitude=32.068227&longitude=34.794876&current_weather=true", "-o", "/usr/share/nginx/html/index.html"]
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: shared-data
  containers:
    - image: nginx
      name: nginx-container
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: shared-data

Next, start a minikube cluster, like so:

minikube start

Then, create the defined resources by running:

kubectl apply -f example.yaml

Finally, to see the weather data that has been dynamically retrieved from the external API, run the following, and navigate to the full address provided in the terminal output (e.g. 127.0.0.1:50706 in the example output below).

kubectl port-forward init-container-example :80

Example output:

Forwarding from 127.0.0.1:50706 -> 80
Forwarding from [::1]:50706 -> 80
Handling connection for 50706
Handling connection for 50706

Conclusion

Kubernetes init containers provide a means of configuring the environment an application runs in without changing the application’s source code. They run after networking and storage resources have been provisioned, but before any other containers in the Pod, and they must run to completion before any other processes, such as the application container itself, can begin.

Above, we demonstrated this process with a single Pod that creates a local emptyDir volume and starts an init container, which writes data to that volume; an NGINX application container then serves that data by reading from the same volume.

Join the discussion!

Have any questions or comments about this post? Maybe you have a similar project or an extension to this one that you'd like to showcase? Join the Velocity Discord server to ask away, or just stop by to talk K8s development with the community.
