CockroachDB offers fully managed, distributed SQL databases for data-intensive applications. SQLAlchemy is a popular object-relational mapper (ORM) for Python applications that store data in a SQL database.
In this post, we'll walk through creating a free Postgres-compatible database with CockroachDB and connecting to it from a Kubernetes deployment running a simple Python application built with FastAPI and SQLAlchemy. I went with CockroachDB's serverless offering because I wanted to focus on the application itself, and I didn't know in advance the exact resources the database would need if it were deployed in the cluster itself.
After we deploy the application and connect it to our Postgres instance, we'll add a Slack notification feature to the application while it's running in the Kubernetes cluster. For that, we'll use Velocity, a development tool that lets you write and debug code locally and automatically applies those changes to a running deployment in Kubernetes, without time-intensive manual build processes.
First, go to cockroachlabs.com and click “Start Instantly” to create a free account.
After creating your account and logging in, you can create a free cluster by clicking “Create Cluster.”
Next, select “Serverless,” and click “Create Cluster.”
You'll then be prompted to create your SQL user with a generated password. Copy this password and store it locally, as we'll need it to connect to the Postgres database.
After clicking “Next,” you'll see a curl command to run in your terminal, which downloads the root.crt file we'll also need to connect to the database, as CockroachDB requires an encrypted connection. The file will be written to $HOME/.postgresql/root.crt, and we'll use this local file to create a Kubernetes ConfigMap that injects it into our application's container at startup.
With the database created, we can now start a Kubernetes cluster to run our application. For this example, we'll use Minikube to run a local development cluster.
On Mac, you can run the following to install and start Minikube (with Docker installed):
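A minimal sketch of the install and start commands, assuming Homebrew is available and Docker Desktop is already running:

```shell
# Assumes Homebrew and Docker are already installed.
brew install minikube
minikube start --driver=docker
```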
Now that our cluster is up and running, we can securely add our CockroachDB credentials and certificate file that are required to connect to our newly created Postgres database. For this, we'll use a Kubernetes Secret and ConfigMap to store our password and .crt files, respectively.
To create these resources from the command line, you can run the following in a second terminal:
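A sketch of those commands; the resource names crdb-secret and crdb-cert are assumptions (any names work, as long as the Deployment references them consistently), and the password placeholder should be replaced with the one you saved earlier:

```shell
# Store the database password in a Secret...
kubectl create secret generic crdb-secret \
  --from-literal=POSTGRES_PASSWORD='<your-cockroachdb-password>'

# ...and the TLS certificate in a ConfigMap.
kubectl create configmap crdb-cert \
  --from-file=root.crt=$HOME/.postgresql/root.crt
```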
NOTE: Both of the above values (your root.crt file and your CockroachDB password) were generated above when we created our CockroachDB cluster.
Above, we connect to the database using the POSTGRES_PASSWORD environment variable, which we will access from the Kubernetes Secret created above. Then, we define our database table with SQLAlchemy as a Python class that inherits from Base, an instance of the declarative_base import. Next, we call Base.metadata.create_all(bind=engine) to create the table in Postgres. Finally, we have a simple API defined with two endpoints: “sign-up” and “users.” Both endpoints accept a GET request; “sign-up” parses the headers of the incoming request and creates a new db record for each request containing the device type that made the request, while the “users” endpoint simply queries the db for all records in the User table.
NOTE: for SQLAlchemy to connect to CockroachDB, you must install sqlalchemy-cockroachdb as there are some differences between standard Postgres and CockroachDB.
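For example, the adapter can be installed alongside a Postgres driver (the driver choice here is an assumption; any psycopg-compatible driver works):

```shell
pip install sqlalchemy-cockroachdb psycopg2-binary
```

With the adapter installed, the connection URL uses the cockroachdb:// scheme instead of postgresql://.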
In order to interact with the backend we just defined, we'll create a simple React app with a single button that, when clicked, will send a GET request to the “sign-up” endpoint.
In the root directory of your project, run the following to create a React app:
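```shell
npx create-react-app frontend
```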
Next, create a components directory in the src directory in the newly created frontend:
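```shell
mkdir frontend/src/components
```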
Then, in the components directory, create a file called SignUp.js and add the following:
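A sketch of the component; the "/api" path prefix is an assumption and must match the backend path defined in the Ingress:

```jsx
// frontend/src/components/SignUp.js
function SignUp() {
  const handleClick = () => {
    // The "/api" prefix must match the Ingress rule for the backend.
    fetch("/api/sign-up")
      .then((res) => res.json())
      .then((data) => console.log("Signed up:", data));
  };

  return <button onClick={handleClick}>Sign Up</button>;
}

export default SignUp;
```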
Next, we'll need to include this new component in the frontend/src/App.js file, as follows:
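One way to wire it in, assuming the default create-react-app layout:

```jsx
// frontend/src/App.js
import SignUp from "./components/SignUp";

function App() {
  return (
    <div className="App">
      <SignUp />
    </div>
  );
}

export default App;
```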
Next, we'll need to create Dockerfiles for each of our services — the frontend, and the backend.
Backend - /backend/Dockerfile
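A sketch, assuming the FastAPI app above lives in backend/main.py and its dependencies are listed in requirements.txt:

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```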
Frontend - /frontend/nginx.conf
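The frontend image serves the React production build with NGINX (its Dockerfile typically runs npm run build in a build stage and copies the output into NGINX's html directory). A minimal nginx.conf sketch, assuming the default image layout:

```nginx
server {
    listen 80;

    location / {
        root   /usr/share/nginx/html;
        index  index.html;
        # Serve index.html for client-side routes.
        try_files $uri /index.html;
    }
}
```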
With the Dockerfiles defined, build the images for each service in Minikube.
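Building inside Minikube's Docker daemon lets the cluster run the images without pushing to an external registry; the image names here are assumptions that must match the Deployment specs:

```shell
# Point this shell's Docker CLI at Minikube's Docker daemon.
eval $(minikube docker-env)

docker build -t backend:latest ./backend
docker build -t frontend:latest ./frontend
```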
Now that the above Docker images have been built, we need to define the Kubernetes objects: Deployments, Services and an Ingress. These will run the images, allow in-cluster HTTP connectivity, and finally allow HTTP connectivity from outside of the cluster as well.
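A sketch of the backend manifests; the Secret and ConfigMap names (crdb-secret, crdb-cert) and the /certs mount path are assumptions that must match what was created earlier:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend:latest
          imagePullPolicy: Never   # use the image built inside Minikube
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: crdb-secret
                  key: POSTGRES_PASSWORD
          volumeMounts:
            - name: crdb-cert
              mountPath: /certs
      volumes:
        - name: crdb-cert
          configMap:
            name: crdb-cert
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - port: 8000
      targetPort: 8000
```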
Notice that in the above Kubernetes Deployment definition, we include our Postgres password from the Kubernetes Secret we created above, and we add our root.crt file as a Kubernetes volume populated by the ConfigMap we also created. We also define a ClusterIP Service, so the backend is accessible via HTTP from inside the cluster.
For the frontend, we'll need to create a Deployment and a Service as we did with the backend, and we'll also have to create an Ingress, which will allow HTTP traffic from outside of the cluster.
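A sketch of the frontend manifests and the Ingress, assuming the NGINX ingress controller; the rewrite annotation strips the assumed /api prefix so requests reach the FastAPI routes unprefixed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          # Strip the /api prefix before forwarding to the backend.
          - path: /api/(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: backend
                port:
                  number: 8000
          - path: /(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: frontend
                port:
                  number: 80
```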
Notice that we have routes for both of our services defined in our Ingress with distinct path prefixes. This is required because when the frontend is running, it is actually running in the browser, which means that requests from the frontend to the backend will actually be coming from outside of the cluster, so this requires a specific Ingress rule for access to the backend.
With all of the above in place, you can run the following to deploy the application to Kubernetes:
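Assuming the manifests above are saved under a k8s/ directory:

```shell
# Enable the NGINX ingress controller in Minikube, then apply the manifests.
minikube addons enable ingress
kubectl apply -f k8s/

# On Mac, run this in a separate terminal to route localhost to the Ingress
# (it may prompt for your password).
minikube tunnel
```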
Navigate to http://localhost in your browser, and you will see the following:
When you click the “Sign Up” button, it sends a GET request to the “sign-up” endpoint we defined in the backend, which creates a record containing the device type parsed from the incoming HTTP request.
Next, let's create a Slack Workflow that will send a notification every time the “Sign Up” button is clicked. To do so, select the “Workflow Builder” in the “Tools” section of the dropdown menu shown below:
Next, select a Webhook workflow:
Click “Set Up Variables.”
Create a variable called “sign_up_from,” and click “Continue.”
Select the user or channel that will receive the notification:
Publish the workflow, and click on the “Starts with a webhook” tile to get the webhook's URL.
Next, we’ll start a Velocity development session, so that we can continue to develop the service as it is deployed, with all relevant configuration for our webhook and database in place already.
Since we are developing in Python, I went with the PyCharm IDE, but Velocity works with VSCode as well. In either case, navigate to your IDE's marketplace, search for Velocity, and click “Install.”
Next, click “Login” to log in with either a Google or a GitHub account. Finally, click the debug icon with the default run configuration, “Setup Velocity.”
Add the Slack URL as an Env Var

Stop the Velocity session, and select “Edit Configurations…” from the Run Configurations dropdown. Add the Slack webhook URL that you copied above as an env var called SLACK_WEBHOOK_URL, click “OK,” and restart the Velocity development session.
Update the requirements.txt
Add the requests library, and save your change:
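The file might now look like this (the existing entries are assumptions about what the backend already uses; requests is the new addition):

```text
fastapi
uvicorn
sqlalchemy
sqlalchemy-cockroachdb
psycopg2-binary
requests
```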
With your Velocity development session running, paste the following into your backend and save the change.
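A sketch of the change, assuming a helper function added to the backend module; the payload key "sign_up_from" matches the variable created in the Workflow Builder above:

```python
import os

import requests  # provided by the entry added to requirements.txt

SLACK_WEBHOOK_URL = os.environ.get("SLACK_WEBHOOK_URL")


def notify_slack(device: str) -> None:
    """Post the sign-up event to the Slack workflow webhook."""
    # Skip silently if no webhook is configured (e.g. local runs).
    if SLACK_WEBHOOK_URL:
        # "sign_up_from" is the variable name defined in the Slack workflow.
        requests.post(SLACK_WEBHOOK_URL, json={"sign_up_from": device})
```

Then call notify_slack(device) from the “sign-up” endpoint, right after committing the new record.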
When you do, Velocity will rebuild the image with your new code and deploy it directly to your Kubernetes cluster without any of the manual build and push steps outlined above. If we hadn't used Velocity, we would have had to save and commit our changes, probably wait for the relevant CI processes to complete, and then finally build and deploy a new image.
Now, click the “Sign Up” button in the browser again, and you'll see a notification in Slack!
CockroachDB is a highly available, fully managed SQL database service. Above, we wrote a simple application with a frontend and a backend that connects to our CockroachDB instance, and we deployed that application to Kubernetes.
Then, after the application was up and running in the cluster, we used Velocity to update the running image with a call to a Slack webhook endpoint, so that our application would not only store the sign-up event in the database, but would also send a Slack notification. Velocity made this possible without the time-intensive steps of building and pushing the image — instead, we were able to simply write code, and it was automatically deployed to the cluster.