Argo Workflows is a Kubernetes-native workflow engine that executes the individual steps of a workflow using native resources, such as Pods. Workflows can be triggered in many different ways, which – along with the enormous range of customizations possible – makes Argo Workflows a versatile tool for running cloud-native workflows.
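As a minimal illustration of the idea (not taken from the post itself), a hello-world Workflow manifest defines an entrypoint template whose step runs as a container inside a Kubernetes Pod; the names below are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-     # Argo appends a random suffix per run
spec:
  entrypoint: main
  templates:
    - name: main
      container:                 # this step executes as a K8s Pod
        image: alpine:3.18
        command: [echo, "hello from Argo Workflows"]
```

Submitting this manifest (for example with `kubectl create` or the `argo` CLI) schedules the step as an ordinary Pod, which is what makes the engine "Kubernetes-native."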
The machine learning components of an application often require significant processing time. Learn how to speed up your microservice-based ML app with asynchronous networking and in-memory data caching in this post.
Golang is arguably the best choice of language for writing cloud-native applications. In this post, we quickly cover Golang’s strengths and then build a sample blog application that combines synchronous and asynchronous networking for distinct workflows within the app.
Goroutines provide a means of concurrency in a Golang application, allowing multiple functions to run simultaneously. In addition, channels and WaitGroups enable passing data between goroutines or blocking one goroutine until others complete. In this post, we explore examples of each.
As part of the LF Live series, Velocity recently teamed up with the Linux Foundation to host a webinar about different methods and tools to build and manage multi-tenant Kubernetes clusters. Watch the recording here.
Request-response networking in microservice architectures can introduce unwanted latency in your cloud-native application. Learn the basics of event-driven architectures built with Redis as a way to increase your application’s processing speed in this post.
Kubernetes init containers provide a way to prepare the environment an application runs in without changing the application’s source code. In this post, we discuss how init containers work and when to use one, and walk through an example in a sample app.
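To sketch the pattern (names here are hypothetical, not from the post): an init container listed under `initContainers` runs to completion before the app container starts, so it can gate startup on an external dependency without any change to the app image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                 # illustrative name
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      # Block until the (hypothetical) db-service DNS name resolves,
      # so the app container only starts once its dependency exists.
      command: ['sh', '-c', 'until nslookup db-service; do sleep 2; done']
  containers:
    - name: app
      image: nginx:1.25
```

The app container is untouched; all of the environment setup lives in the Pod spec.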
Often, in order to keep an application responsive at scale, you’ll need to hand long-running jobs off to another process that can handle them asynchronously in the background. In this post, we’ll look at a simple way to achieve this with Python, Redis, and Redis Queue.
This is part two in a multi-part series on using OpenTelemetry and distributed tracing to level up your system observability.
In this post, we discuss two options for consuming secrets: directly from the secrets manager (via code in the app), or as configuration provided by the infrastructure (via a configuration file or environment variables).
This is the third and final part in a multi-part series on using OpenTelemetry and distributed tracing to level up your system observability.