This is the first post in a series exploring the changing landscape of modern applications.
When we talk about modern apps, we often talk about cloud-native. But cloud-native isn’t just about cloud. It’s about moving faster, being more agile, and getting more from your data. But how did we get here? What’s the history behind the technologies powering cloud-native apps, like containers and Kubernetes? This series will explore the challenges the IT industry has experienced and responded to over the decades.
To kick off this series, I’m diving into the history of DevOps so we can get a better picture of how and why the industry has taken this new approach to developing and managing software—and how that approach has ushered in other changes.
How DevOps Got Its Start
Originally, there were developers and operators, and they pretty much functioned in separate worlds. Developers wrote software. They were computer programmers first and eventually became software engineers.
Ops people were administrators. Software engineers wrote code while administrators wrote scripts. This setup created a kind of hierarchy: Developers were higher up on the ladder, and administrators were lower down. That obviously isn’t the best way to foster a collaborative work environment. And, as you might expect, it created inefficiencies.
In fact, it meant that dev folks and ops folks were walled off from one another. Software developers would write code and basically throw it over the wall: they would take the code they had written and tested and hand it to ops, where it became someone else's responsibility. If there was a problem, the ops team had to deal with it.
But that revealed yet another weakness in the structure. Often, the ops team wouldn’t have the knowledge of the code base or computer programming to solve the problem programmatically. That basically turned them into firefighters, expecting calls in the middle of the night to put out a fire.
So one group created the code, another group was entirely responsible for its operation, and never would the two meet. It’s not the best way to build trust—or mutual admiration—among teams. The result was dev and ops looking a bit like the Hatfields and the McCoys. And that’s our starting point—a sort of long-running feud.
Responding to the Culture War
The cultural divide between dev and ops was a major source of reliability problems. Software built this way broke often and, when it did, the root cause was hard to pin down. And if you can't find the root cause, you can't fix it.
What was the solution everyone reached for first? Bigger and bigger hardware. If memory or CPU spiked through some confluence of events, the next move was to upgrade the server. But often the real fix was tweaking an algorithm or a library that handled a particular function.
As organizations began moving from monolithic applications to microservices and seeing the benefits of breaking applications into smaller components, some thought leaders advanced the idea of doing the same with teams—but with a twist. They floated the idea of breaking their large engineering and ops organizations into smaller teams that had both ops people and developers.
The result of this experiment: Teams could more easily identify the root cause of production issues and increasingly solve them with software instead of hardware. That translated into decreased costs and increased reliability—and maybe an end to the long-running feud. It also fostered mutual admiration and respect within teams, creating a sense of camaraderie. In fact, the nickname for these agile DevOps teams is “Two-Pizza Teams.” It indicates their size—you’d need to order two pizzas to feed the team—and their more collegial atmosphere, as they’re willing to have a meal together.
Ultimately, that model—while not yet ubiquitous—is becoming increasingly popular. The people who write the software are those who operate it. So now, if they happen to get a call in the middle of the night, they look for the root cause and solve it with automation and code so it doesn’t happen again.
The DevOps Revolution
There’s an obvious connection between DevOps and microservices since they both are focused on agility and efficiency. But other developments are happening around the DevOps model as well.
One of those developments is continuous integration and continuous delivery (CI-CD). Organizations have all these microservices now, and they’re constantly updating them. They need to get the new code into their user-facing apps and deploy it to their production servers, which means they have to continuously integrate and deploy the code.
I like to imagine this process as one of those moving walkways at the airport. They’re constantly moving and delivering people from one spot to the next. It’s the same thing with CI-CD—it’s constantly bringing new code into the application and deploying it into production. The DevOps model obviously makes this process much easier.
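The moving walkway maps onto a pipeline with the same recurring stages: integrate, test, deliver. Here's a minimal sketch in shell; the build, test, and deploy steps are stand-in functions for whatever tooling a real project would actually call.

```shell
#!/usr/bin/env sh
# Minimal sketch of the CI-CD loop: every merged change rides the same
# walkway through the same stages. The three stage functions below are
# illustrative placeholders, not real tools.
set -e  # stop the pipeline the moment any stage fails

build()     { echo "compiling the latest commit"; }
run_tests() { echo "running the automated test suite"; }
deploy()    { echo "shipping the artifact to production"; }

build
run_tests
deploy
status="deployed"
echo "pipeline: $status"
```

In a real setup these stages live in a CI system's config and trigger automatically on every merge; the shape of the loop is the same.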
Another development in DevOps is the move toward DevSecOps—short for development, security, and operations. Everyone is beginning to see that they can’t just bolt on security at the end and say they’ve done a good job where security is concerned. Instead, it needs to be an integral part of the process. This, too, came about as a result of the collaborative DevOps environment. If writing, deploying, and updating code are all part of the same team’s responsibilities, then building in security also has to be a responsibility. And that means focusing on all aspects of security.
Increasingly, DevSecOps is related to the term “shift left,” which has historically applied to testing. If you think about developing an application on a timeline, the right of the timeline is the end, and the left is the beginning. The goal is to focus on security from the start, looking for and fixing security risks along the entire timeline, rather than bolting it on at the end. The approach delivers additional efficiencies to the process. With CI-CD, there isn’t really a beginning and end to the process. You still have the loop of the moving walkway, but the point is to integrate security from the get-go.
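In pipeline terms, shifting left just means the security check becomes one of the earliest recurring stages rather than a one-time audit at the end. A hedged sketch, where the scan itself is a placeholder for a real dependency or static-analysis scanner:

```shell
#!/usr/bin/env sh
# Shift-left sketch: the security scan runs on every pass through the
# pipeline, before build and deploy, instead of once at the very end.
# scan_dependencies is an illustrative placeholder, not a real tool.
set -e

scan_dependencies() {
  echo "scanning for known vulnerabilities"
  return 0  # a nonzero return here would stop the whole pipeline early
}

scan_dependencies
scan_result="clean"
echo "security gate passed: $scan_result"
```

Because the gate runs on every loop of the walkway, a risky change is caught minutes after it's written, not weeks later at release time.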
A third development worth mentioning is version control, which goes by a lot of names: source control, version history, and others. The basic idea is that once you've committed a version, that version itself never changes. You can create a new version of the app that differs from the previous one, but each recorded version stays fixed. It's the model behind platforms like GitHub and GitLab.
The purpose of this is to allow teams to maintain a record of changes over time and give them the ability to branch off and test new code without impacting the current version. It also means that if you want to test a few new options, each team member could tackle an option without having to wait for someone else to finish their work. And once everyone is happy with whatever test version you’ve been working on, you can just merge it into the main line of versions.
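The branch-and-merge flow described above can be walked through with git itself in a throwaway directory. The branch name, file name, and commit messages here are purely illustrative.

```shell
#!/usr/bin/env sh
# Walk through branch-and-merge with git in a temporary directory.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb main                      # work on an explicit main line
git config user.email "dev@example.com"    # throwaway identity for the demo
git config user.name "Dev"

echo "stable feature set" > app.txt
git add app.txt
git commit -qm "current version"           # this version is now fixed

# Branch off to test new code without impacting the current version
git checkout -qb feature/new-option
echo "experimental feature set" > app.txt
git commit -qam "try a new option"

# Once everyone is happy, merge it into the main line of versions
git checkout -q main
git merge -q feature/new-option
cat app.txt
```

Each teammate can create their own branch the same way, so nobody waits on anyone else's work before testing an idea.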
Again, it’s all about being agile and efficient—and that’s what the microservices revolution has brought about.
Next in the Series: We’ll dive into the history of containers and how they’ve solved another set of problems.