GrubHub saves time and enhances collaboration, developing, debugging, and delivering multiple features in parallel, faster.
More than 33 million diners across over 4,000 U.S. cities have GrubHub delivery people to thank for bringing them the food they love. But they should also thank the software developers working behind the scenes, delivering the seamless experience that makes sure those orders arrive in the first place.
GrubHub, which recently merged with European food delivery leader Just Eat Takeaway, runs a state-of-the-art engineering team that nonetheless faced a problem recognized by developers worldwide: the tools needed to test and fully bake software before it reaches production were not keeping pace with the demand for innovation and accelerated feature delivery.
GrubHub’s infrastructure is vast, made up of many components, technologies, and languages, which adds complexity when developing across multiple microservices. “Every new employee who was onboarded saw how complex and burdensome the environment’s infrastructure was,” says Elran Shefer, Fullstack Engineer at GrubHub. It was impossible to work locally with all the dependencies needed, which meant developers couldn’t be as productive as they needed to be.
As a developer, it’s important to find a productive way to ship new features while preventing the bugs that could disrupt live business operations, along with other failures, complicated rollbacks, and slow recoveries. “It became difficult to spin up an environment locally on my computer,” says Shefer. “We didn’t have all the dependencies and resources needed. Every engineer was limited to working on a single specific service connected to the shared QA environment, which was not good enough for our needs.”
One of the main services they work on is the “core server,” for which spinning up a development environment was especially heavy. At first, the GrubHub team looked internally for ways to overcome this challenge, which resulted in a lengthy process just to try to set something up. The team also evaluated solutions that aimed to ease local development against remote Kubernetes clusters and make their development environment as close as possible to production.
“We wanted every developer to have their own isolated environment so they could see each change as their own, without it affecting someone else,” Shefer commented. They found some improvement with these short-term solutions, but each engineer was still limited to a single microservice connected to the shared QA environment. Sharing a staging environment created bottlenecks, but it was a step up from working purely locally.
There was also a slower feedback loop, which made things difficult while everyone was working from home due to COVID-19. The makeshift solution only synced Shefer’s local files to a remote container rather than letting him truly work locally, which made for a slow and tedious process.
That’s when Shefer reached out to Velocity.
“One of the strong points that Velocity offers is the ability to develop and debug locally while connecting to everything I need in the cloud,” says Shefer. “It’s super intuitive and straightforward.”
With Velocity, Shefer was able to run and debug his microservice locally while Velocity’s tunnel connected it to its dependencies in his remote cloud environment. Even if Shefer needs to restart his computer, he can pick up from the same point he left off, including the same state of his databases and storage. And when Shefer finishes work on a specific feature, he can use Velocity’s automatic sync with production to create a new, fully updated remote environment to work with.
One of the main problems with the solutions they had evaluated before Velocity was synchronization with the production environment and infrastructure. In one case, Shefer was working with a container whose file sync repeatedly lost its connection to his local code files, causing regular delays and wasting one to two hours at a time just trying to fix it.
The Velocity onboarding process was seamless; the team was working with its first environment in under an hour. “I’m not wasting time,” says Shefer.
“As I work more and more with Velocity, I see more and more use cases that I can use it for, such as developing multiple services or parts in parallel until I have one full feature that works, which is an incredible feature. It’s the first time I can do something like this.”
Before, Shefer and his team had to worry about which environment they were working in, who had to maintain it, and who might be affected if they made any changes. With Velocity’s on-demand environments, he can destroy an environment when he’s finished and create another fully up-to-date environment from scratch for his next task.
“Even if my computer crashes for whatever reason, it’s no big deal. Spinning up an environment is as easy as a click, and I can create an environment from nothing,” says Shefer. “It’s amazing, all these capabilities I have thanks to Velocity.”
As a full-stack engineer, Shefer also develops the frontend, so there is often a ping pong between himself and the product team to make sure a feature is good enough and the UI/UX matches what was defined in the specs. With a Velocity isolated, production-like environment, he automatically gets a custom URL he can share with stakeholders to get feedback faster and improve collaboration.
“Before, I didn’t have a way to show anything to the product team until it was released to a shared QA staging environment. With Velocity, I have my own isolated environment, with my own URL to give to the product team. They can just click the link and see the isolated environment with the feature I added. It’s a game-changer.”
Today, more and more of the GrubHub team are using Velocity regularly to spin up isolated environments with minimal effort, and they plan to use it for their upcoming project: mobile ordering for resorts in Las Vegas.
“This is the tool we were looking for,” says Shefer. “One CLI action and you’re done. That’s amazing in my opinion.”