This is the second post in a series exploring the changing landscape of modern applications. In Part 1, I discussed how organizations started knocking down the wall between software engineers and administrators. By creating agile DevOps teams, organizations can increase reliability, decrease costs, and, as a bonus, end the war between the two cultures.

In this post, I’m diving into the history of containers and looking at how they solved problems for developers and enterprise IT. 

Virtualization: How Containers Got Their Start

In the good old days, we ran one application on a server: one application, one server. If we needed two apps, we needed two servers. And so on.

Who liked this setup? The server vendors, and that’s about it. It was highly inefficient. After all, chances were pretty good that none of those applications consumed all the resources on each of the servers. Solving that problem is where the story of containers begins. 

A bunch of smart people in Silicon Valley started figuring out how to virtualize each server so it could be sliced into multiple virtual machines. Each virtual machine would have its own operating system and be completely independent of the others running on that server.

As you might imagine, this became the engine that drove data center consolidation. It also drove the success of VMware and eventually fueled the rise of the public cloud. The public cloud is essentially a virtualized data center that’s available for rent.

It’s difficult to overstate virtualization’s impact on enterprise IT.

Containerization is another form of virtualization, but one in which the virtualized environments share a single operating system kernel. Instead of each workload booting its own full operating system, independent operating environments run side by side on one shared kernel. That’s what makes containerization a lighter-weight form of virtualization. It matters for data center consolidation, but containers don’t just reduce costs or increase efficiency: they fuel competitiveness, helping you move faster and get more from your data.

Moving from Virtualization to Containers

Think of containers as a lightweight form of virtualization. Virtualization itself dates back to the 1960s and ’70s, when IBM pioneered it on mainframes. Container-style isolation started to take shape in the early 2000s with FreeBSD jails. (FreeBSD is a Unix-like operating system that was eventually overtaken in popularity by Linux.) Linux then introduced its own containerization building blocks, notably cgroups and namespaces.

In 2013–2014, Docker popularized cgroups and namespaces by doing something pretty cool: it took these primitives, which had been sitting in the Linux kernel, and wrapped them in a friendly API. That helped developers solve a lot of complex problems they had been struggling with for a while.
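To make the kernel side of this concrete, here’s a minimal sketch (it assumes a Linux machine; these paths don’t exist on macOS or Windows) that lists the namespaces the current process belongs to. These are the same isolation primitives a container runtime configures on your behalf:

```python
import os

# On Linux, /proc/<pid>/ns holds one symlink per kernel namespace the
# process belongs to (pid, net, mnt, uts, ipc, user, ...). Container
# runtimes like Docker create fresh namespaces for each container.
NS_DIR = "/proc/self/ns"

for name in sorted(os.listdir(NS_DIR)):
    # Each link target looks like "pid:[4026531836]"; the number is the
    # namespace's inode, its unique identity inside the kernel.
    print(name, "->", os.readlink(os.path.join(NS_DIR, name)))
```

Two processes in the same container see identical inode numbers here; processes in different containers see different ones for whichever namespaces were unshared between them.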

That’s why containers started to become popular: Containers solved problems. 

Relieving Headaches with Containers

What problems inspired Docker and this containerization 2.0 movement that has brought us things like Kubernetes and Portworx®? If further data center consolidation wasn’t the reason containers became popular, what problems did containers solve?

One of the big headaches of being a developer in 2012–2014 came up whenever you started a new job: it could take up to three weeks to get your development machine set up with everything you needed to code the application you’d be working on.

For example, let’s say your project was to build a feature for an e-commerce platform. Your headache would start almost immediately, because the platform ran on Amazon Web Services (AWS), an environment very different from your laptop.

You would have to install a bunch of libraries, fetch various binaries, get access to this software and that software, run a VM on your machine… you get the idea.

And the headache would continue even after all that preliminary work. After you developed on your laptop and deployed your software to production, you would often find bugs. Or things would work on your laptop but not in production. Then, when someone else looked at the bug in their environment, they couldn’t reproduce it.

These differences in environments made it difficult to develop software. Docker created a packaging format that would allow you to take your code and bundle it with all of the dependencies needed to run that application in any environment. That includes the various libraries, binaries, and other things required to run an application. Docker packaged it all together into a container, and that container could then run anywhere, almost without modification. 
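As a concrete sketch of that packaging idea (the base image, file names, and app layout here are hypothetical illustrations, not from the original post), a minimal Dockerfile for a Python app might look like this:

```dockerfile
# Hypothetical Dockerfile: bundle the app with all of its dependencies
FROM python:3.12-slim            # base image: OS libraries plus the runtime
WORKDIR /app
COPY requirements.txt .          # declare the app's library dependencies
RUN pip install -r requirements.txt
COPY . .                         # bundle the application code itself
CMD ["python", "app.py"]         # how to start the app, wherever it runs
```

Building this with `docker build` produces an image that carries the code, libraries, and runtime together, so `docker run` behaves the same on a laptop, a server, or a cloud VM.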

That didn’t just cure developers’ headaches—it also solved an application distribution problem. Enterprises wanted to run apps in multiple environments—not just in a development environment. For example, some e-commerce providers might have wanted to migrate from Amazon to Microsoft Azure because they didn’t want to put their data on their biggest competitor anymore. Or, an organization might have wanted to move apps to the cloud from on-prem without having to make any modifications (or only minor ones) to them between those environments. 

Containers are a packaging mechanism. They let us construct applications in a way that makes them easy to run in multiple environments. One of the things developers and ops people loved about Docker was that a single command could run their container, and the app would behave the same in any environment. That mobility and portability ended up being far more valuable than the lightweight-virtualization angle of containers.

Containers were a game-changer, but they weren’t without some controversy. There was a well-grounded fear that malicious users could break out of a container and access other containers running on that same host. If you think about doing that in an untrusted multi-tenant environment like a cloud, that’s obviously scary, but these problems have been largely resolved. 

Most containers run inside a VMware environment on-prem or inside VMs in the public cloud. Because each container is encapsulated in a VM, a breakout can’t reach the rest of the physical host. And even in environments without virtualization, kernel hardening mechanisms such as SELinux, seccomp, and AppArmor have largely addressed those security concerns.

Where We Go from Here

To move faster, organizations started organizing their teams with a DevOps mindset. They broke their applications into microservices, then packaged and ran those microservices as containers. That brought a new concern: managing it all, keeping the application always available and scalable, and making sure it stays accessible even if something unimaginable happens, like a 100x increase in traffic overnight.

And that’s where Kubernetes comes in. 

Next in the Series: We’ll look at Kubernetes as a technology platform and a business driver in the next part of this series.