DragonflyDB is a drop-in replacement for Redis that supercharges your Redis workloads. It works with any Redis SDK and handles the same use cases — caching, task queues, you name it — with a fraction of the compute overhead. As a result, you can handle much higher throughput without scaling out your Redis deployment, reducing your cloud footprint and, with it, your overall deployment costs.
Today, we'll demonstrate this with a microservice app written in Go with a React frontend that leverages DragonflyDB to send async status notifications to the React frontend as shown in the diagram below.
Specifically, the React frontend accepts a file upload, which is sent to the Web-API service; the frontend also opens a websocket connection with a separate Notifications service. When the Web-API service receives the file, it creates a task queue in DragonflyDB for the given userID with the status “processing.” It then sends the file to a File-Processing service, which simulates a file-processing workflow and then sends a status of “completed” to the same userID-based task queue in DragonflyDB.
As this is happening, the Notifications service is listening to the various task queues in DragonflyDB, and — via the existing websocket connection(s) with the browser — it relays the file processing status in real time to the user.
The full project is available on GitHub.
In the above file, we connect to DragonflyDB and define our Gin router. Then we create a single endpoint that handles a file upload and a userID sent as form data. From that, we parse the incoming file and send it, along with the userID, to the File-Processing service via an HTTP POST request. We also create a DragonflyDB task queue, the key of which is specific to the incoming userID.
We then push a “processing” status to the task queue and log the success or failure of each of the above steps.
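To make the flow above concrete, here is a self-contained sketch of the Web-API's upload handler. The real service uses Gin and a Redis client pointed at DragonflyDB; this sketch uses only the standard library, with an in-memory `TaskQueue` standing in for DragonflyDB's `RPUSH`, and the `tasks:<userID>` key format is an illustrative assumption.

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
	"net/http/httptest"
	"sync"
)

// TaskQueue stands in for DragonflyDB in this sketch; the real
// service issues RPUSH via a Redis client instead.
type TaskQueue struct {
	mu     sync.Mutex
	queues map[string][]string
}

func NewTaskQueue() *TaskQueue {
	return &TaskQueue{queues: map[string][]string{}}
}

// RPush appends a status message to the per-user queue.
func (q *TaskQueue) RPush(key, val string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.queues[key] = append(q.queues[key], val)
}

// Items returns a copy of the queue's contents (for inspection).
func (q *TaskQueue) Items(key string) []string {
	q.mu.Lock()
	defer q.mu.Unlock()
	return append([]string(nil), q.queues[key]...)
}

// queueKey derives the per-user task-queue key; the naming scheme
// is an assumption for illustration.
func queueKey(userID string) string { return "tasks:" + userID }

// uploadHandler accepts a file and userID as form data, pushes a
// "processing" status to the user's queue, and would then forward
// the file to the File-Processing service via HTTP POST (omitted).
func uploadHandler(q *TaskQueue) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		userID := r.FormValue("userID")
		file, header, err := r.FormFile("file")
		if err != nil {
			http.Error(w, "missing file", http.StatusBadRequest)
			return
		}
		defer file.Close()
		q.RPush(queueKey(userID), "processing")
		fmt.Fprintf(w, "accepted %s for user %s", header.Filename, userID)
	}
}

func main() {
	q := NewTaskQueue()
	// Simulate the frontend's multipart upload.
	var body bytes.Buffer
	mw := multipart.NewWriter(&body)
	fw, _ := mw.CreateFormFile("file", "data.csv")
	fw.Write([]byte("a,b\n"))
	mw.WriteField("userID", "u1")
	mw.Close()

	req := httptest.NewRequest(http.MethodPost, "/upload", &body)
	req.Header.Set("Content-Type", mw.FormDataContentType())
	rec := httptest.NewRecorder()
	uploadHandler(q)(rec, req)

	fmt.Println(rec.Body.String())
	fmt.Println("queue:", q.Items(queueKey("u1")))
}
```

Keeping the queue behind a small interface like this also makes the handler easy to exercise with `httptest`, as `main` demonstrates.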
Above, we again connect to DragonflyDB and define a Gin API with a single endpoint. This time, though, it is written to handle an incoming websocket request from the React frontend. Once the websocket connection is established, the Notifications service starts a goroutine — a lightweight concurrent process — in which it listens to the DragonflyDB task queue that was created when the Web-API service received the file upload. When it detects a message in the queue, it “pops” it from the queue, i.e. removes the first item in a first-in, first-out fashion.
This “popped” item is then relayed to the frontend via the existing websocket connection. These items include a “processing” status, which is sent by the Web-API, and a second “completed” or “failed” status, which is sent by the File-Processing service.
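The listen-and-relay loop can be sketched as follows. In the real service the blocking pop is a `BLPOP` against DragonflyDB and the relay target is the websocket connection; here a channel models each of those so the example runs standalone.

```go
package main

import "fmt"

// listenAndRelay stands in for the goroutine the Notifications service
// starts per websocket connection: it blocks on the user's task queue
// (a BLPOP against DragonflyDB in the real service) and relays each
// status over the connection, in FIFO order, until the queue is closed.
func listenAndRelay(queue <-chan string, conn chan<- string) {
	for status := range queue {
		// Forward each popped status ("processing", "completed",
		// "failed") to the browser.
		conn <- status
	}
	close(conn)
}

func main() {
	queue := make(chan string, 2)
	conn := make(chan string, 2)

	// The Web-API pushes "processing"; File-Processing later
	// pushes "completed".
	queue <- "processing"
	queue <- "completed"
	close(queue)

	go listenAndRelay(queue, conn)
	for msg := range conn {
		fmt.Println("relayed to frontend:", msg)
	}
}
```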
Finally, we have the File-Processing service, which again connects to DragonflyDB, and exposes a file upload endpoint, which receives the file from the Web-API. After simulating the file processing workflow, it sends a status update to the DragonflyDB task queue. It is worth noting that any async process could be executed here — for example, a file could just as easily be uploaded to an S3 bucket.
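A minimal sketch of that workflow, with a short sleep in place of real work and a plain function standing in for the Redis client's `RPUSH` (the `tasks:<userID>` key format is again an illustrative assumption):

```go
package main

import (
	"fmt"
	"time"
)

// QueueWriter is whatever pushes statuses to the per-user task queue;
// in the real service this is an RPUSH against DragonflyDB.
type QueueWriter func(key, status string)

// processFile simulates the file-processing workflow (here just a
// short sleep; any async work, e.g. an S3 upload, could go here) and
// then reports the final status to the user's task queue.
func processFile(userID string, data []byte, push QueueWriter) {
	time.Sleep(100 * time.Millisecond) // simulated work
	if len(data) == 0 {
		push("tasks:"+userID, "failed")
		return
	}
	push("tasks:"+userID, "completed")
}

func main() {
	push := func(key, status string) { fmt.Println(key, "<-", status) }
	processFile("u1", []byte("file contents"), push)
}
```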
The Frontend is a simple React app with a single component — fileUpload.js. When the userID is entered via the UI, the websocket request is sent to the Notifications service. Likewise, when the upload button is clicked, the file and userID are sent as form data to the Web-API.
To deploy our application in Kubernetes, we'll first need to containerize it. For this we'll use Docker, which will require a Dockerfile for each service. We'll build and then push the resulting image from each Dockerfile to a remote registry, so that they can then be pulled by Kubernetes when the various services start up.
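The build-and-push step looks roughly like the following for each service; the registry and image names are placeholders, so substitute your own.

```shell
# Illustrative names — substitute your own registry and paths.
docker build -t myregistry/web-api:latest ./web-api
docker push myregistry/web-api:latest
# Repeat for the Notifications, File-Processing and frontend images.
```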
Note that all of the above services are available in the full project on GitHub.
The frontend container will consist of an Nginx base image into which we copy the build artifacts generated by running `npm run build` locally; it will also include an `nginx.conf` file that allows the Nginx web server to route traffic to our React app.
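A minimal `nginx.conf` for serving a React build might look like the following; the document root and the `try_files` fallback for client-side routing are common defaults, assumed here rather than taken from the project.

```nginx
server {
    listen 80;
    # Serve the React build artifacts copied into the image.
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # Fall back to index.html so client-side routing works.
        try_files $uri $uri/ /index.html;
    }
}
```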
As the backend services are all built similarly, the following example is representative of all three. It is a multi-stage build in which the Go binary is compiled in a “builder” stage and then copied into an Alpine container to be run directly. Add this file to the root directories of the Web-API, Notifications and File-Processing services.
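A representative multi-stage Dockerfile is sketched below; the Go version, binary name and exposed port are assumptions, so adjust them to match each service.

```dockerfile
# Builder stage: compile a static Go binary.
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o service .

# Final stage: copy the binary into a minimal Alpine image.
FROM alpine:3.19
WORKDIR /app
COPY --from=builder /app/service .
EXPOSE 8080
CMD ["./service"]
```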
To deploy the application to Kubernetes, we'll use Helm, a popular package manager for K8s.
You can read more about Helm charts in the Helm docs, but the gist is that there are two core components involved: templates, and a values.yaml file.
The templates are fundamentally standard Kubernetes resource definitions, but they are templates in the sense that they dynamically resolve to include values defined in a separate values file — the values.yaml file. This way, you can change the configuration values associated with a given application via the values file, rather than having to update the resource definitions directly.
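For example, a template can reference values like this (the `webApi` key and image name below are illustrative, not taken from the project):

```yaml
# values.yaml (excerpt)
webApi:
  image: myregistry/web-api:latest
  replicas: 2

# templates/web-api-deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: {{ .Values.webApi.replicas }}
  template:
    spec:
      containers:
        - name: web-api
          image: {{ .Values.webApi.image }}
```

Changing the image tag or replica count then only requires editing values.yaml, not the template itself.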
Next, we will deploy the resources we defined above. Below, we have an example of a Kubernetes manifest for the frontend service. It contains definitions for three Kubernetes resources — a deployment, a service, and an ingress. Each of the other microservices — i.e. the Web-API, Notifications and File-Processing services — will require only a Kubernetes deployment and service definition, which will be very similar to that shown below. Again, the full application resources are available for download on GitHub.
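A sketch of such a frontend manifest follows; the names, labels, ports and ingress path are illustrative assumptions rather than the project's exact values.

```yaml
# Illustrative frontend manifest — names and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: myregistry/frontend:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```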
And we can deploy these resources to the cluster with the following command:
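With Helm, that deployment is a single install of the chart; the release and chart-directory names below are placeholders.

```shell
# Release name and chart path are illustrative.
helm install file-notifications ./helm-chart
```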
Currently, the File-Processing service just sleeps for 10 seconds to simulate a file-processing workflow. Historically, updating it would have meant repeating most of the deployment steps above — building and pushing the images and deploying to Kubernetes — just to update the running microservice.
But with Velocity, we can simply start a development session and update our local source code, and the changes will be reflected in the cluster.
As an example, let's update the File-Processing service to parse a CSV file that we upload and print the lines of the file to our Velocity console.
To do so, simply start a Velocity development session as follows:
In your GoLand IDE, navigate to the JetBrains Marketplace, search for Velocity, and click “Install.”
Next, click “Login” to log in with either a Google or a GitHub account. Finally, click the debug icon with the default run configuration, “Setup Velocity.”
Check to make sure that the auto-populated fields are correct, click “Next” and then click “Create.”
Next, with the Velocity development session running, update the File-Processing service with the following changes:
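The CSV-printing change might look like the following sketch, using the standard library's `encoding/csv`; the function name and sample data are illustrative.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// printCSV parses the uploaded file's contents as CSV and prints
// each record — the change we want reflected live via Velocity.
func printCSV(contents string) error {
	reader := csv.NewReader(strings.NewReader(contents))
	records, err := reader.ReadAll()
	if err != nil {
		return err
	}
	for i, record := range records {
		fmt.Printf("line %d: %v\n", i+1, record)
	}
	return nil
}

func main() {
	// In the service, contents would come from the uploaded file.
	sample := "name,score\nalice,10\nbob,7\n"
	if err := printCSV(sample); err != nil {
		fmt.Println("parse error:", err)
	}
}
```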
Then, save your changes, and upload a CSV file, and you'll see output like the following in your Velocity console!
DragonflyDB is a high-performance, drop-in replacement for Redis. It works with any Redis SDK and does everything Redis does — caching, task queues and more. Above, we looked at one use case for DragonflyDB: an async notifications service that relays status messages to the frontend for async backend processes, such as file upload processing.
And we also saw how Velocity can dramatically simplify and accelerate development and debugging of microservice applications running in Kubernetes! Before Velocity, in order to update the File-Processing service, we would have had to update our local source code, delete and then rebuild the file-processing image in Minikube, delete the file-processing Kubernetes deployment, and then redeploy our updated code.
But with Velocity, we were able to simply update our code as needed, and our changes were immediately reflected in the remote environment!