
Build a Microservice-based App in Golang with Gin, Redis and MongoDB and Deploy it in K8s

Jeff Vincent
December 14, 2022

Golang is arguably the best choice of language for writing cloud native applications. In this post we quickly cover Golang’s strengths, and then create a sample blog application that combines synchronous and asynchronous networking for distinct workflows within the app.


Golang is one of the best choices for writing cloud-native applications for several reasons: it is remarkably fast, it is statically typed, and it compiles to a self-contained, cross-platform binary that can run in any container without an interpreter or separate runtime.

These factors combine to allow for lightweight, highly responsive containers that are more efficient to deploy to cloud environments, as they require fewer compute resources and can handle higher request volumes before they need to be scaled out with additional replicas.

Additionally, many cloud-native, open-source projects – including Kubernetes itself – are written in Go, which makes the language a natural fit for anything in the cloud-native ecosystem.

Today, we’ll look at writing Golang services of two varieties as we build a microservice-based blog with an analytics feature that reports the number of times each blog post has been viewed.

The app will include a Redis instance for queueing asynchronous background operations and two MongoDB instances – one for storing the blog posts themselves and another for recording our analytics data – in order to illustrate the best practice of data isolation across microservices.

One of our two varieties of microservices will expose an HTTP API built with the Gin web framework for request/response-based networking. The second variety will be asynchronous worker services that subscribe to Redis queues (via RPush and BLPop) and execute a given workload whenever they receive data from Redis.

Finally, we’ll deploy the application illustrated below to Kubernetes with an ingress to handle inbound web traffic.

Microservice-Based Application

Topics we’ll cover

  • Combining synchronous and asynchronous networking
  • Golang’s Gin HTTP API framework
  • Golang and MongoDB (MongoDB Go Driver)
  • Golang and Redis (Go-Redis)
  • Redis queues defined with RPush and BLPop
  • Multi-stage Docker builds for Golang
  • Deploying the app in K8s

What we’ll end up with

Our blog app will be a practical example of a K8s deployment of Golang microservices that mix request/response-based networking via HTTP and asynchronous, event-based networking via Redis queues.

The full project is available on GitHub.

Why mix and match networking approaches?

Some operations in an application need to return data directly to the end-user client – e.g., a browser or a mobile app. These operations require synchronous processing of requests, so that the result can be returned. This is the most common approach to microservice networking, and it is generally implemented with HTTP traffic between services.

However, when all processing between microservices is synchronous, each service added to a given data flow effectively adds another “link to the chain,” which can introduce significant latency as that chain grows. A data flow is only as fast as the sum of its components, so if one service gets bogged down, the whole app slows down with it.

To mitigate this increased latency, we can make operations that don’t return data directly to the user asynchronous, so they can run independently of the synchronous data flows that return application data directly to the user. This means that the application will be able to return data to the user as quickly as possible, while carrying out “peripheral” tasks in a separate data flow.

For example, in our app when a user requests a blog post, that post will need to be returned to the browser for the user to view. However, our app will also be tracking the number of times a given blog post has been viewed.


If this analytics process were synchronous, the user would have to wait for that additional workflow to complete before the blog post could be returned to the browser. While this is a trivial amount of time in this example app, in a more fully developed app handling real web traffic, such a process could significantly slow response times during periods of high request volume, resulting in an unresponsive and frustrating customer experience.

So, in order to return the blog post as quickly as possible, we’ll handle that analytics process independently, so it won’t slow down the app from the user’s perspective.

Web API

NOTE: each of the files detailed here is an excerpt. For a full working example, clone the repo.

Our Web API will serve as an entry point for web requests coming into the app.

First, let’s define our main function in which we will connect to Redis, define our Gin router, and define three routes for Gin to handle.

Redis

We’ll use the Go module Go-Redis to manage our interactions with Redis. In the first block of code within our main function, we’ll create our connection with the ParseURL and NewClient functions Go-Redis provides.
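
The redis_uri used in the snippets below isn’t defined in these excerpts; one plausible way to build it – assuming it is assembled from the REDIS_HOST and REDIS_PORT environment variables set in the K8s manifests later in this post – looks like this:

// Sketch: build the Redis address from environment variables (variable names
// assumed from the deployment manifests below) and create the shared client.
redis_uri := fmt.Sprintf("redis://%s:%s", os.Getenv("REDIS_HOST"), os.Getenv("REDIS_PORT"))

opt, err := redis.ParseURL(redis_uri)
if err != nil {
    panic(err)
}
rdb := redis.NewClient(opt)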

With our client defined, we can add new blog posts to our Redis queue, which will allow for the asynchronous upload operations illustrated above.

Specifically, in the body of the HTTP POST /posts route handler – with our context defined – we use this new client to push data to a Redis queue via the RPush method. To do so, we pass the key name that we will use to access our queue – “queue:new-post” – along with our context and our marshaled JSON payload.

router.POST("/posts", func(ctx *gin.Context) {
    title := ctx.PostForm("title")
    author := ctx.PostForm("author")
    body := ctx.PostForm("body")
    new_post := BlogPost{Title: title, Author: author, Body: body}
    payload, err := json.Marshal(new_post)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while marshaling new post")
        ctx.JSON(http.StatusUnprocessableEntity, gin.H{"error": "Upload failed"})
        return
    }
    if err := rdb.RPush(ctx, "queue:new-post", payload).Err(); err != nil {
        log.Error().Err(err).Msg("error occurred while pushing new post to redis")
        ctx.JSON(http.StatusInternalServerError, gin.H{"error": "Upload failed"})
        return
    }
    // Acknowledge the upload; the DB Worker will persist it asynchronously.
    ctx.JSON(http.StatusOK, gin.H{"status": "Post queued"})
})
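
The BlogPost struct marshaled in this handler isn’t shown in these excerpts; a minimal sketch based on the form fields above (the exact definition and any field tags may differ in the repo) could be:

// BlogPost mirrors the form fields posted to /posts.
type BlogPost struct {
    Title  string
    Author string
    Body   string
}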

The DB Worker instance will be listening to this queue:new-post “Upload” queue and will insert the data it receives into MongoDB, as shown in the architecture diagram above; a sketch of that worker follows below.
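
The DB Worker itself isn’t excerpted in this post, but it follows the same pattern as the Analytics Worker shown later: block on the queue with BLPop, unmarshal the payload, and write it to the posts database. A minimal sketch of that loop – with the mongoClient and rdb setup assumed to match the Analytics Worker, and the "blog"/"posts" database and collection names assumed – might look like this:

for {
    // Block until a new post is pushed onto the queue; result[1] holds the JSON payload.
    result, err := rdb.BLPop(ctx, 0, "queue:new-post").Result()
    if err != nil {
        log.Error().Err(err).Msg("error occurred while reading from queue:new-post")
        continue
    }
    var post BlogPost
    if err := json.Unmarshal([]byte(result[1]), &post); err != nil {
        log.Error().Err(err).Msg("error occurred while unmarshaling new post")
        continue
    }
    // Database and collection names are assumptions for this sketch.
    coll := mongoClient.Database("blog").Collection("posts")
    if _, err := coll.InsertOne(ctx, post); err != nil {
        log.Error().Err(err).Msg("error occurred while inserting new post into mongo")
    }
}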

Route handlers with Gin

After defining our gin.Default() router, we can handle incoming HTTP requests according to their method with router.POST(...), router.GET(...) and so on, to which we pass the route to be handled and the handler function to call when that route is hit; Gin supplies each handler with a *gin.Context for the request.

Note that below, in order to avoid creating a new Redis client for each incoming request to the HTTP POST /posts handler, we define that route handler inline, as a closure passed to router.POST, so it can capture the single rdb client created in main.

Alternatively, we can simply refer to the handler function by name, as shown in the GET /posts/:title and /views/:title routes included below.

Parsing HTTP requests with Gin

Gin makes it simple to access request data via the provided gin.Context. For example, we can access form values such as our blog title, author and body from an incoming request with the Context’s PostForm method. Likewise, we can access path parameters with the Param method, shown in the getPost function defined below.

func main() {
    opt, err := redis.ParseURL(redis_uri)
    if err != nil {
        panic(err)
    }
    rdb := redis.NewClient(opt)
    router := gin.Default()
    router.LoadHTMLGlob("templates/*.html")
    router.GET("/", index)
    router.POST("/posts", func(ctx *gin.Context) {
        title := ctx.PostForm("title")
        author := ctx.PostForm("author")
        body := ctx.PostForm("body")
        new_post := BlogPost{Title: title, Author: author, Body: body}
        payload, err := json.Marshal(new_post)
        if err != nil {
            log.Error().Err(err).Msg("error occurred while marshaling new post")
            ctx.JSON(http.StatusUnprocessableEntity, gin.H{"error": "Upload failed"})
            return
        }
        if err := rdb.RPush(ctx, "queue:new-post", payload).Err(); err != nil {
            log.Error().Err(err).Msg("error occurred while pushing new post to redis")
            ctx.JSON(http.StatusInternalServerError, gin.H{"error": "Upload failed"})
            return
        }
        // Acknowledge the upload; the DB Worker will persist it asynchronously.
        ctx.JSON(http.StatusOK, gin.H{"status": "Post queued"})
    })
    router.GET("/posts/:title", getPost)
    router.GET("/posts", getAllPosts)
    router.GET("/views/:title", getPostViews)
    router.GET("/views", getAllViews)
    router.Run()
}

Next, let’s define the additional functions that we are using as route handlers in the above snippet. Specifically, we’ll define a function called getPost, and one called getPostViews.

func getPost(ctx *gin.Context) {
    title := ctx.Param("title")
    address := fmt.Sprintf("http://%s:%s/posts/%s", blog_service_host, blog_service_port, title)
    resp, err := http.Get(address)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while fetching post from the blog service")
        ctx.JSON(http.StatusInternalServerError, gin.H{"error": "Get post failed"})
        return
    }
    defer resp.Body.Close()
    val := &Doc{}
    decoder := json.NewDecoder(resp.Body)
    if err := decoder.Decode(val); err != nil {
        log.Error().Err(err).Msg("error occurred while decoding response into Doc object")
        ctx.JSON(http.StatusUnprocessableEntity, gin.H{"error": "Get post failed"})
        return
    }
    ctx.JSON(http.StatusOK, val)
}

func getPostViews(ctx *gin.Context) {
    title := ctx.Param("title")
    address := fmt.Sprintf("http://%s:%s/views/%s", analytics_service_host, analytics_service_port, title)
    resp, err := http.Get(address)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while fetching views from the analytics service")
        ctx.JSON(http.StatusInternalServerError, gin.H{"error": "Get views failed"})
        return
    }
    defer resp.Body.Close()
    val := &Doc{}
    decoder := json.NewDecoder(resp.Body)
    if err := decoder.Decode(val); err != nil {
        log.Error().Err(err).Msg("error occurred while decoding response into Doc object")
        ctx.JSON(http.StatusUnprocessableEntity, gin.H{"error": "Get views failed"})
        return
    }
    ctx.JSON(http.StatusOK, val)
}
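
The Doc type these handlers decode into also isn’t shown here. Since the downstream services respond with gin.H{"Data": ...}, a plausible (assumed) definition is simply a wrapper around that field:

// Doc wraps the "Data" field returned by the blog and analytics services.
// The concrete field type is an assumption; see the repo for the exact definition.
type Doc struct {
    Data interface{} `json:"Data"`
}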

Blog Service

Next, let’s create another HTTP API with Gin that our Web API will be sending requests to with the getPost handler defined above. Again, let’s first define our main function. Here, we will connect to MongoDB as well as define a Gin router in order to facilitate the synchronous networking approach described and illustrated above.

That is, this service will be listening for an HTTP request from the Web API service. When it receives one, it will query MongoDB for the requested blog post and return it to the browser synchronously.

func main() {
    mongoClient, err := mongo.NewClient(options.Client().ApplyURI(mongo_uri))
    if err != nil {
        log.Error().Err(err).Msg("error occurred while creating mongo client")
    }
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    err = mongoClient.Connect(ctx)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while connecting to mongo")
    }
    defer mongoClient.Disconnect(ctx)
    router := gin.Default()

    router.GET("/posts/:title", func(ctx *gin.Context) {
        title := ctx.Param("title")
        result, err := getPost(ctx, mongoClient, title)
        if err != nil {
            // getPost has already written an error response to the context.
            log.Error().Err(err).Msg("error occurred while fetching post from mongo")
            return
        }
        ctx.JSON(http.StatusOK, gin.H{
            "Data": result,
        })
    })
    router.Run()
}

Querying MongoDB with Go

Next, let’s define the getPost function called above. Here, we will query MongoDB for a given blog post by its title, and if it is found, we’ll publish the blog title to our “Analytics” queue in Redis, so that the Analytics worker can asynchronously update the associated record in our second MongoDB instance.

func getPost(ctx *gin.Context, mongoClient *mongo.Client, title string) (bson.D, error) {
    coll := mongoClient.Database(databaseName).Collection(collectionName)
    var result bson.D
    err := coll.FindOne(ctx, bson.D{{"title", title}}).Decode(&result)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while fetching post from mongo")
        ctx.JSON(http.StatusInternalServerError, gin.H{"error": "Get post failed"})
        return nil, err
    }
    // Record the view asynchronously via the analytics queue.
    Publish(ctx, title)
    return result, err
}

func Publish(ctx *gin.Context, payload string) {
    opt, err := redis.ParseURL(redis_uri)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while connecting to redis")
        ctx.JSON(http.StatusInternalServerError, gin.H{"error": "Analytics error"})
        return
    }
    rdb := redis.NewClient(opt)
    if err := rdb.RPush(ctx, "queue:blog-view", payload).Err(); err != nil {
        log.Error().Err(err).Msg("error occurred while publishing to redis")
        ctx.JSON(http.StatusInternalServerError, gin.H{"error": "Analytics error"})
    }
}

Publish to Redis with RPush

As described above, our Publish function – which is called in the body of the above getPost function – uses the Redis RPush functionality to store data in Redis with a known key that can later be accessed by one of our worker instances.

Analytics Worker

Now for the other variety of microservice described above – one of our two worker instances. Inside of our main function, we’ll wrap a Redis BLPop call in a for loop to continuously listen for data that has been pushed onto a Redis queue with a given key – queue:blog-view in this case.

Listening to a Redis queue with BLPop

Redis' BLPop functionality lets us read data stored under this key and “pop” it out of Redis once it has been read. This way, a given item in the queue is handled exactly once – as opposed to a “fan-out” approach, such as a Pub/Sub channel, in which every worker listening to Redis would perform the same task in parallel, duplicating work across worker instances.

func main() {
    ctx := context.Background()
    mongoClient, err := mongo.NewClient(options.Client().ApplyURI(mongo_uri))
    if err != nil {
        log.Error().Err(err).Msg("error occurred while creating mongo client")
    }
    err = mongoClient.Connect(ctx)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while connecting to mongo")
    }
    defer mongoClient.Disconnect(ctx)
    opt, err := redis.ParseURL(redis_uri)
    if err != nil {
        log.Error().Err(err).Msg("error occurred while connecting to redis")
    }
    rdb := redis.NewClient(opt)
    for {
        // BLPop blocks until an item is available; result[0] is the queue key,
        // result[1] is the payload (the blog post title).
        result, err := rdb.BLPop(ctx, 0, "queue:blog-view").Result()
        if err != nil {
            log.Error().Err(err).Msg("error occurred while reading from queue:blog-view")
            continue
        }
        updateAnalytics(mongoClient, result[1])
        fmt.Println(result[0], result[1])
    }
}

Notice that our worker performs a short sequence of tasks each time it reads a given blog post title from the Redis queue. First, it checks MongoDB via the getDoc function to see whether an analytics record already exists for that title. If no record exists, it creates one with a Views value of 1, because this will have been the first time this particular blog post was requested.

Alternatively, if an analytics record is found, the associated Views value is incremented by 1 via the updateAnalytics function defined below.

type AnalyticsData struct {
   Title string
   Views int
}

func getDoc(mongoClient *mongo.Client, title string) (AnalyticsData, error) {
   coll := mongoClient.Database(databaseName).Collection(collectionName)
   var result AnalyticsData
   // mongo.ErrNoDocuments is expected the first time a given post is viewed.
   err := coll.FindOne(context.TODO(), bson.D{{"title", title}}).Decode(&result)
   if err != nil {
       log.Error().Err(err).Msg("error occurred while fetching analytics record from mongo")
       return result, err
   }
   return result, err
}

func insertDoc(mongoClient *mongo.Client, title string) (*mongo.InsertOneResult, error) {
   coll := mongoClient.Database(databaseName).Collection(collectionName)
   data := AnalyticsData{Title: title, Views: 1}
   result, err := coll.InsertOne(context.TODO(), data)

   if err != nil {
       log.Error().Err(err).Msg("error occurred while inserting analytics record into mongo")
       return result, err
   }
   return result, err
}

func updateAnalytics(mongoClient *mongo.Client, title string) {
   existingDoc, err := getDoc(mongoClient, title)
   if err != nil {
       log.Error().Err(err).Msg("error occurred while fetching analytics record from mongo")
   }
   if existingDoc.Title == "" {
       // First view of this post: create a new analytics record.
       insertDoc(mongoClient, title)
   } else {
       // Otherwise, increment the existing view count.
       views := existingDoc.Views + 1
       coll := mongoClient.Database(databaseName).Collection(collectionName)
       _, err := coll.UpdateOne(
           context.TODO(),
           bson.M{"title": existingDoc.Title},
           bson.D{
               {"$set", bson.D{{"views", views}}},
           },
       )
       if err != nil {
           log.Error().Err(err).Msg("error occurred while updating analytics record")
       }
   }
}

Building our Docker images

One of the key benefits of writing microservices in Golang is the fact that Go can be compiled to binary, which can then run in a very lightweight container, such as Alpine. In order to achieve this, we’ll leverage a multi-stage build of our container image.

This will allow us to install all the dependencies required to compile the application code into binary, and then transfer the resulting binary to a “fresh” container that contains only the required elements for the application to run.

# first (build) stage
FROM golang:1.18 AS builder

WORKDIR /app
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 go build -v -o app .

# final (target) stage
FROM alpine:3.10
WORKDIR /root/
COPY --from=builder /app ./
CMD ["./app"]

Notice that we start from a golang:1.18 base image, in which we install all dependencies and compile the app to a binary. We then define a second, Alpine-based stage into which we copy the build output from the first stage, so the final container image stays as small as possible.
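
To build and push one of these images – the Web API, for example – the standard Docker workflow applies; the registry prefix below is a placeholder for your own registry and repository:

docker build -t <your-registry>/gin-redis-web-api:latest .
docker push <your-registry>/gin-redis-web-api:latest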

Deploying the app to K8s

Once all of our services are containerized and pushed to a registry, we can begin work on spinning up the full application in K8s. To do so, we’ll need to create one or more Kubernetes resource definitions for each of our application services – i.e., our Web API, Blog Service, Analytics Worker etc.

These resources will consist of three varieties – an ingress to handle incoming, external web traffic to our Web API, a K8s Cluster IP Service for all application services that need to accept internal network connections within our K8s cluster, and finally a K8s Deployment for each of our application services.

Below, you’ll find examples of each of these resource types for the Web API. For a complete, working example, you can clone the repo.

Ingress

The following K8s manifest defines an ingress that routes all inbound web traffic to our Web API service.

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-api
spec:
  ingressClassName: {{ .Values.webAPI.ingress.ingressClassName | quote }}
  rules:
    - host: {{ .Values.webAPI.ingress.host | quote }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-api
                port:
                  number: 8081

Deployment

The following deployment manifest defines the container we wish to run – i.e., the containerized Web API – along with the environment variables that container needs and other details, such as the number of container replicas we’d like to spin up (three in this case) and the ports the container exposes for the application running inside it.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
  labels:
    app: web-api
spec:
  selector:
    matchLabels:
      api: web-api
  replicas: 3
  template:
    metadata:
      labels:
        app: web-api
        api: web-api
    spec:
      containers:
        - name: web-api
          image: {{ .Values.webAPI.containers.image | quote }}
          env:
            - name: REDIS_HOST
              value: {{ .Values.webAPI.envVars.REDIS_HOST | quote }}
            - name: REDIS_PORT
              value: {{ .Values.webAPI.envVars.REDIS_PORT | quote }}
            - name: ANALYTICS_SERVICE_HOST
              value: {{ .Values.webAPI.envVars.ANALYTICS_SERVICE_HOST | quote }}
            - name: ANALYTICS_SERVICE_PORT
              value: {{ .Values.webAPI.envVars.ANALYTICS_SERVICE_PORT | quote }}
            - name: BLOG_SERVICE_HOST
              value: {{ .Values.webAPI.envVars.BLOG_SERVICE_HOST | quote }}
            - name: BLOG_SERVICE_PORT
              value: {{ .Values.webAPI.envVars.BLOG_SERVICE_PORT | quote }}
          ports:
            - name: web-api
              containerPort: 8081
              protocol: TCP

Cluster IP Service

Finally, the associated Cluster IP service allows the above ingress to make an internal network connection to the deployment, also defined above.

---
apiVersion: v1
kind: Service
metadata:
  name: web-api
spec:
  ports:
    - port: 8081
      targetPort: 8081
      name: web-api
  selector:
    app: web-api
  type: ClusterIP

Creating a Helm Chart

Helm is a package manager and templating tool for Kubernetes that lets us define a single set of K8s manifests and populate them with different values according to the specific needs of a given environment. For example, we can use the following file structure to deploy our app locally on Minikube, or to any other K8s environment, simply by passing an environment-specific values file, such as the values.yaml excerpt provided below.

├── k8s
│   ├── Chart.yaml
│   ├── templates
│   │   ├── analytics_service.yml
│   │   ├── analytics_worker.yml
│   │   ├── blog_service.yml
│   │   ├── db_worker.yml
│   │   ├── mongo1.yml
│   │   ├── mongo2.yml
│   │   ├── redis.yml
│   │   └── web_api.yml
│   ├── values.yaml
│   └── velocity-values.yml
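
Because the templates only reference values, deploying to a different environment is just a matter of rendering the chart with that environment’s values file – for example, the velocity-values.yml included in the tree above (assuming the command is run from the k8s chart directory):

helm template . --values velocity-values.yml | kubectl apply -f -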

values.yaml

This is an excerpt from the complete file:

webAPI:
  containers:
    image: jdvincent/gin-redis-web-api:latest

  envVars:
    REDIS_HOST: redis
    REDIS_PORT: "6379"
    ANALYTICS_SERVICE_HOST: analytics-ser
    ANALYTICS_SERVICE_PORT: "8080"
    BLOG_SERVICE_HOST: blog-service
    BLOG_SERVICE_PORT: "8080"

  ingress:
    ingressClassName: kong
    host: null

Run it in Minikube

To run the app locally in Minikube, you’ll first need to start a cluster and enable the Kong ingress controller add-on, like so (note that minikube tunnel runs in the foreground, so give it its own terminal):

minikube start
minikube addons enable kong
minikube tunnel

Then you can run the following from the root directory of the cloned repository to start the app:

helm template . --values values.yaml | kubectl apply -f -
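
Once the manifests are applied, a quick way to confirm that everything came up (assuming kubectl is pointed at your Minikube cluster):

kubectl get pods,svc,ingress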

Conclusion

Golang is an excellent choice for writing microservices, because it allows for lightweight, highly responsive containers that perform well when deployed to the cloud. Moreover, combining synchronous and asynchronous networking will further improve our application’s performance by allowing us to process and return data to the user as quickly as possible.

Above, we walked through the process of writing such an application and deploying it to Kubernetes both locally and in Velocity’s rapid development environments.

Join the discussion!

Have any questions or comments about this post? Maybe you have a similar project or an extension to this one that you'd like to showcase? Join the Velocity Discord server to ask away, or just stop by to talk K8s development with the community.
