
Connect to CockroachDB with SQLAlchemy and Add Slack Webhooks

Jeff Vincent
January 1, 2024

CockroachDB offers fully managed, distributed SQL databases for data-intensive applications. SQLAlchemy is a popular object-relational mapper (ORM) for Python applications that store data in a SQL database.

In this post, we'll walk through creating a free Postgres-compatible database with CockroachDB and connecting to it from a Kubernetes deployment running a simple Python application built with FastAPI and SQLAlchemy. I went with CockroachDB’s serverless database because I wanted to focus on the application itself, and I didn’t know in advance exactly what resources the database would need were it deployed in the cluster itself.

After we deploy the application and connect it to our Postgres instance, we'll add a Slack notification feature to the application while it is running in the Kubernetes cluster, using Velocity, a development tool that lets you write and debug code locally and automatically apply those changes to a running deployment in Kubernetes without time-intensive manual build processes.

Create a CockroachDB Account and Database

First, go to cockroachlabs.com and click “Start Instantly” to create a free account.

After creating your account and logging in, you can create a free cluster by clicking “Create Cluster.”

Next, select “Serverless,” and click “Create Cluster.”

You'll then be prompted to create your SQL user with a generated password. Copy this password and store it locally, as we'll need it to connect to the database.

After clicking “Next,” you'll see a curl command that you need to run in your terminal to download the root.crt file we'll also need to connect to the database, as CockroachDB requires an encrypted connection. The file will be written to $HOME/.postgresql/root.crt, and we'll use this local file to create a Kubernetes ConfigMap that injects it into our application's container at startup.
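You can confirm that the certificate landed where the later steps expect it with a couple of lines of Python (the path below assumes the default download location from the curl command):

```python
import os

# CockroachDB's curl command writes the certificate under ~/.postgresql by default.
cert_path = os.path.expanduser('~/.postgresql/root.crt')
print(cert_path, os.path.exists(cert_path))
```

If this prints False, re-run the curl command before moving on.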

Start a Kubernetes Cluster

With the database created, we can now start a Kubernetes cluster to run our application. For this example, we'll use Minikube to run a local development cluster.

On Mac, you can run the following to install and start Minikube (with Docker installed):

brew install minikube
minikube start
minikube addons enable kong
minikube tunnel

Add Credentials and Certificates Securely to Kubernetes

Now that our cluster is up and running, we can securely add the CockroachDB credentials and certificate file required to connect to our newly created Postgres database. For this, we'll use a Kubernetes Secret and a ConfigMap to store the password and the .crt file, respectively.

To create these resources from the command line, you can run the following in a second terminal:

kubectl create configmap certificate-configmap --from-file=root.crt=$HOME/.postgresql/root.crt -n default
kubectl create secret generic cockroachdb-credentials --from-literal=password=<your-cockroachdb-password> -n default

NOTE: Both of the above values (your root.crt file and your CockroachDB password) were generated above when we created our CockroachDB cluster.

Define the FastAPI Application

In a backend directory, create a file called main.py and add the following:

import json
import os
from fastapi import FastAPI, Form, Request, Depends
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

POSTGRES_USERNAME = os.environ.get('POSTGRES_USERNAME')
POSTGRES_PASSWORD = os.environ.get('POSTGRES_PASSWORD')

DATABASE_URI = f'cockroachdb://{POSTGRES_USERNAME}:{POSTGRES_PASSWORD}@mobile-feline-2840.g95.cockroachlabs.cloud:26257/postgres?sslmode=verify-full'
engine = create_engine(DATABASE_URI)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    device_type = Column(String)

# Create the table in the database
Base.metadata.create_all(bind=engine)

app = FastAPI()

@app.get("/api/sign-up")
def create_user(request: Request):
    device_type = request.headers.get('user-agent')
    with SessionLocal() as session:
        user = User(device_type=device_type)
        session.add(user)
        session.commit()
        return user

@app.get("/api/users")
def get_users():
    with SessionLocal() as session:
        r = session.query(User).all()
        return r

Above, we connect to the database with the environment variable POSTGRES_PASSWORD, which we will access from the Kubernetes Secret created above. Then, we define our database table with SQLAlchemy as a Python class that inherits from Base, an instance of declarative_base.

Next, we call Base.metadata.create_all(bind=engine) to create the table in Postgres. Finally, we define a simple API with two endpoints, “sign-up” and “users.” Both accept GET requests: “sign-up” reads the device type from the incoming request's headers and creates a new database record for each request, while “users” simply queries the database for all records in the users table.
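If you want to sanity-check the model and session pattern before wiring up CockroachDB, you can point the same code at an in-memory SQLite database. This is purely a local stand-in, not part of the deployment:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, declarative_base

# An in-memory SQLite database stands in for CockroachDB for a quick local check.
engine = create_engine('sqlite://')
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    device_type = Column(String)

Base.metadata.create_all(bind=engine)

# The same pattern the "sign-up" and "users" endpoints use:
# add a row, commit it, then query everything back.
with SessionLocal() as session:
    session.add(User(device_type='Mozilla/5.0'))
    session.commit()
    users = session.query(User).all()
    print(len(users))  # 1
```

Swapping the SQLite URI for the CockroachDB one is the only change needed to run this against the real database.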

requirements.txt

anyio==3.7.1
certifi==2023.5.7
charset-normalizer==3.1.0
fastapi==0.99.1
idna==3.4
psycopg2-binary==2.9.6
pydantic==1.10.11
python-multipart==0.0.6
sniffio==1.3.0
SQLAlchemy==2.0.18
starlette==0.27.0
typing_extensions==4.7.1
urllib3==2.0.3
uvicorn==0.22.0
sqlalchemy-cockroachdb==2.0.1

NOTE: For SQLAlchemy to connect to CockroachDB, you must install sqlalchemy-cockroachdb, as there are some differences between standard Postgres and CockroachDB.
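Since the password is interpolated directly into the connection string, it's also worth URL-encoding the credentials in case they contain special characters. A small sketch (the username and password below are illustrative; in the app they come from environment variables):

```python
from urllib.parse import quote_plus

# Illustrative credentials; in the app these come from environment variables.
username = 'app_user'
password = 'p@ss/word!'

# URL-encode the credentials so characters like '@' or '/' can't break the URI.
DATABASE_URI = (
    f'cockroachdb://{quote_plus(username)}:{quote_plus(password)}'
    '@mobile-feline-2840.g95.cockroachlabs.cloud:26257/postgres'
    '?sslmode=verify-full'
)
print(DATABASE_URI)
```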

Define the React Frontend

In order to interact with the backend we just defined, we'll create a simple React app with a single button that, when clicked, will send a GET request to the “sign-up” endpoint.

In the root directory of your project, run the following to create a React app:

npx create-react-app frontend

Next, create a components directory inside the src directory of the newly created frontend:

cd frontend/src && mkdir components

Then, in the components directory, create a file called SignUp.js and add the following:

import React, { useState } from 'react';
import './components.css';

const SignUp = () => {
  const handleSignUpClick = async () => {
    await recordSignUpEvent();
  };

  const recordSignUpEvent = async () => {
    const request = new Request(`/api/sign-up`, {
      method: 'GET',
    });
    try {
      const response = await fetch(request);
      if (!response.ok) {
        throw new Error('Failed to record signup event');
      }
    } catch (error) {
      console.log(error.message);
    }
  };

  return (
    <div className="service-container">
      <div className="service">
        <h2 className="service-title">My Cool Service</h2>
        <p className="service-description">
          Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        </p>
        <button className="sign-up-button" onClick={handleSignUpClick}>
          Sign Up
        </button>
      </div>
    </div>
  );
};

export default SignUp;

Next, we'll need to include this new component in the frontend/App.js file, as follows:

import SignUp from "./components/SignUp";

function App() {
  return (
    <div>
      <SignUp/>
    </div>
  );
}

export default App;

Containerize and Deploy the Application

Next, we'll need to create Dockerfiles for each of our services — the frontend, and the backend.

Backend - /backend/Dockerfile

# Use a lightweight Python image
FROM python:3.11-slim-buster

# Set the working directory
WORKDIR /app

# Copy the requirements file
COPY requirements.txt .

# Install the Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the source code
COPY main.py .

# Start the application using uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

Frontend - /frontend/nginx.conf

server {
  listen 80;
  server_name localhost;

  location / {
    root /usr/share/nginx/html;
    try_files $uri $uri/ /index.html;
  }
}

/frontend/Dockerfile

# Use an official Nginx image as the base
FROM nginx:alpine

# Remove default Nginx configuration
RUN rm -rf /etc/nginx/conf.d

# Copy custom Nginx configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf

# Copy the built React app from the local machine to the container
COPY build /usr/share/nginx/html

# Expose a port for the container
EXPOSE 80

# Start the Nginx web server
CMD ["nginx", "-g", "daemon off;"]

Build the Docker Images

With the Dockerfiles defined, build each image directly in Minikube's image cache, running each build from the corresponding service directory.

Frontend

npm run build \
&& minikube image build -t frontend:latest .

Backend

minikube image build -t backend:latest .

Create Kubernetes Deployments

Now that the above Docker images have been built and loaded into Minikube, we need to define the Kubernetes objects (Deployments, Services, and an Ingress) that will run the images, allow in-cluster HTTP connectivity, and finally allow HTTP connectivity from outside of the cluster as well.

backend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  selector:
    matchLabels:
      app: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cockroachdb-credentials
                  key: password
          volumeMounts:
            - name: cert-volume
              mountPath: /root/.postgresql
              readOnly: true
      volumes:
        - name: cert-volume
          configMap:
            name: certificate-configmap
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  ports:
    - port: 8000
      targetPort: 8000
      name: backend
  selector:
    app: backend
  type: ClusterIP

Notice that in the above Kubernetes Deployment definition, we inject our Postgres password from the Kubernetes Secret we created above, and we mount our root.crt file as a Kubernetes volume populated by the ConfigMap we also created above. We also created a ClusterIP Service, so the backend will be accessible via HTTP inside the cluster.
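Inside the container, those two pieces of configuration surface as an environment variable and a mounted file. A sketch of how the application sees them (paths match the Deployment above; the mount path works because libpq looks for root.crt under ~/.postgresql by default, and this container runs as root):

```python
import os

# The Secret arrives as an env var; the ConfigMap as a read-only mounted file.
password = os.environ.get('POSTGRES_PASSWORD')
cert_path = '/root/.postgresql/root.crt'

# With sslmode=verify-full, the Postgres driver picks the cert up from
# ~/.postgresql/root.crt automatically, which is /root/.postgresql in this image.
print(password is not None, cert_path)
```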

frontend.yaml

For the frontend, we'll need to create a Deployment and a Service as we did with the backend, and we'll also have to create an Ingress, which will allow HTTP traffic from outside of the cluster.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: frontend
              containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  ports:
    - port: 80
      targetPort: 80
      name: frontend
  selector:
    app: frontend
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: frontend
              port:
                number: 80
        - path: /api
          pathType: Prefix
          backend:
            service:
              name: backend
              port:
                number: 8000

Notice that we have routes for both of our services defined in our Ingress, with distinct path prefixes. This is required because the frontend actually runs in the browser, which means that requests from the frontend to the backend originate outside of the cluster, so the backend needs its own Ingress rule.
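Conceptually, the Ingress controller resolves each request by most-specific prefix. A simplified illustration (not Kong's actual matching logic):

```python
# Simplified sketch of prefix routing: the most specific matching
# prefix decides which Service receives the request.
ROUTES = {'/api': ('backend', 8000), '/': ('frontend', 80)}

def route(path):
    # Check longer prefixes first so '/api/users' matches '/api', not '/'.
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if prefix == '/' or path == prefix or path.startswith(prefix + '/'):
            return ROUTES[prefix]
    return None

print(route('/api/users'))  # ('backend', 8000)
print(route('/'))           # ('frontend', 80)
```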

Deploy the application

With all of the above in place, you can run the following to deploy the application to Kubernetes:

kubectl apply -f backend.yaml -n default
kubectl apply -f frontend.yaml -n default

Test the Application

Navigate to http://localhost in your browser, and you'll see the frontend's sign-up page.

When you click the “Sign Up” button, it will send a GET request to the “sign-up” endpoint we defined in the backend, which records the device type parsed from the incoming HTTP request's headers.
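The device type is parsed from the request headers (typically the User-Agent). A quick sketch of the kind of value the backend ends up recording; the header string below is illustrative:

```python
# An illustrative header set, similar to what a browser sends.
headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)'}

# Mirrors what the endpoint stores for each sign-up request.
device_type = headers.get('user-agent')
print(device_type)
```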

Add a Slack Webhook

Next, let's create a Slack Workflow that will send a notification every time the “Sign Up” button is clicked. To do so, select the “Workflow Builder” in the “Tools” section of the dropdown menu shown below:

Next, select a Webhook workflow:

Click “Set Up Variables.” 

Create a variable called “sign_up_from,” and click “Continue.”

Select the user or channel that will receive the notification:

Publish the workflow, and click on the “Starts with a webhook” tile to get the webhook's URL.
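Workflow webhooks expect a JSON body whose keys match the variables you defined. A quick sketch of the payload the backend will send (the commented-out POST requires the real webhook URL from your workflow):

```python
import json

def build_payload(device_type):
    # The key must match the workflow variable created above: 'sign_up_from'.
    return json.dumps({'sign_up_from': device_type})

payload = build_payload('Mozilla/5.0')
print(payload)  # {"sign_up_from": "Mozilla/5.0"}

# Sending it requires the real webhook URL from the workflow:
# requests.post(SLACK_WEBHOOK_URL, data=payload)
```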

Start a Velocity Development Session

Next, we’ll start a Velocity development session, so that we can continue to develop the service as it is deployed, with all relevant configuration for our webhook and database in place already.

And since we're developing in Python, I went with the PyCharm IDE, but Velocity works with VS Code as well. In either case, navigate to your IDE's marketplace, search for Velocity, and click “Install.”

Next, click “Login” to log in with either a Google or a GitHub account. Finally, click the debug icon with the default run configuration, “Setup Velocity.”

Add the Slack URL as an Env Var

Stop the Velocity session, and select “Edit Configurations…” from the Run Configurations dropdown. Add the Slack webhook URL that you copied above as an env var called SLACK_WEBHOOK_URL, click “OK,” and restart the Velocity development session.

Update the FastAPI Backend

Update the requirements.txt

Add the requests library, and save your change:

...
typing_extensions==4.7.1
urllib3==2.0.3
uvicorn==0.22.0
sqlalchemy-cockroachdb==2.0.1
requests

Update main.py

With your Velocity development session running, paste the following into your backend and save the change.

import json
import os
import requests
from fastapi import FastAPI, Form, Request, Depends
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

POSTGRES_PASSWORD = os.environ.get('POSTGRES_PASSWORD')
POSTGRES_USERNAME = os.environ.get('POSTGRES_USERNAME')
SLACK_WEBHOOK_URL = os.environ.get('SLACK_WEBHOOK_URL')

DATABASE_URI = f'cockroachdb://{POSTGRES_USERNAME}:{POSTGRES_PASSWORD}@mobile-feline-2840.g95.cockroachlabs.cloud:26257/postgres?sslmode=verify-full'
engine = create_engine(DATABASE_URI)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    device_type = Column(String)

# Create the table in the database
Base.metadata.create_all(bind=engine)

app = FastAPI()

@app.get("/api/sign-up")
def create_user(request: Request):
    device_type = request.headers.get('user-agent')
    payload = json.dumps({'sign_up_from': device_type})
    requests.post(SLACK_WEBHOOK_URL, data=payload)
    with SessionLocal() as session:
        user = User(device_type=device_type)
        session.add(user)
        session.commit()
        return user

@app.get("/api/users")
def get_users():
    with SessionLocal() as session:
        r = session.query(User).all()
        return r

When you do, Velocity will rebuild the image with your new code and deploy it directly to your Kubernetes cluster without any of the manual build and push steps outlined above. If we hadn’t used Velocity, we would have had to commit our changes, likely wait for the relevant CI processes to complete, and then finally build and deploy a new image.

Now, click the “Sign Up” button in the browser again, and you'll see a notification in Slack!

Conclusion

CockroachDB is a highly available, fully managed SQL database service. Above, we wrote a simple application that includes a frontend and a backend that connects to our CockroachDB instance, and we deployed that application in Kubernetes.

Then, after the application was up and running in the cluster, we used Velocity to update the running image with a call to a Slack webhook endpoint, so that our application would not only store the sign-up event in the database, but would also send a Slack notification. Velocity made this possible without the time-intensive steps of building and pushing the image — instead, we were able to simply write code, and it was automatically deployed to the cluster.
