Jamie Perkins

containerize the pain away: part 1

Docker

Like all sexy, young and beautiful companies we are looking at how Docker can help us simplify our infrastructure and delivery.

This will be a two-part post covering what we currently have, what we think we want and how we are getting it.

what do we have?

So we’re hosted entirely on the Amazon Web Services cloud. We like Amazon and are comfortable committing ourselves to their offerings. In fact, we’ve come to the opinion that the more stuff we can pay Amazon to deal with on our behalf, the better. Read more about why here.

Our instances run in Auto Scaling Groups so that they can heal themselves on failure. We run Puppet on all our hosts to deploy our services, which it does by inspecting instance user data.

Puppet manifests describe our services, Hieradata configures them and Upstart/SysV or a script runs them. Chuck in some AMIs, tags, more scripts, a little bit of luck and still more scripts for a near-complete description of how our infrastructure is set up and maintained.

This generally sort of works most of the time, but we think we can do much better.

what do we want?

Or, what don’t we want?

We don’t want to have underused instances for one service and overused ones for another. Docker solves this problem by being able to pack containers onto hosts as densely as is sensible.

By defining the resources a container needs, Docker can pick a host that can provide them. Because each container has an entirely isolated file system, there is no need to worry about two services conflicting on the same host.
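As a small illustration (the image name and the limits here are hypothetical), resource needs can be declared when a container is started:

```shell
# Run a hypothetical service image with explicit resource limits.
# --memory caps RAM and --cpus caps CPU share, so a scheduler knows
# how densely it can pack this container alongside others on a host.
docker run -d \
  --name orders-api \
  --memory=512m \
  --cpus=0.5 \
  example/orders-api:latest
```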

We don’t want our development, testing and production environments to differ as much as they do now. Dockerfiles force one to think up front about what is on the file system, what is in the environment and how a service will connect with the world. This makes it easier to reason about the environment our code will be running in and to test code in that environment.
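A minimal sketch of what that up-front thinking looks like (the base image, paths and port below are made-up examples, not our actual setup):

```dockerfile
# Everything the service needs on the file system is stated explicitly.
FROM openjdk:8-jre

# Configuration arrives through the environment, not hidden host state.
ENV SERVICE_PORT=8080

# The service's artefact is copied in; nothing else is on the image.
COPY build/service.jar /opt/service/service.jar

# How the service connects with the world is declared up front.
EXPOSE 8080
CMD ["java", "-jar", "/opt/service/service.jar"]
```

The same image then runs unchanged in development, testing and production.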

With the likes of Docker Compose we can define all of the dependencies of our code in code. If a service needs Postgres, Elasticsearch, Nginx and a JVM, we can express that in a Docker Compose file. It then comes down to running a single command on a dev machine to spin up the entire architecture of an application.
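A hypothetical docker-compose.yml for a stack like the one just described might look something like this (service names and image versions are illustrative):

```yaml
# docker-compose.yml -- a sketch, not our real configuration
version: "2"
services:
  app:
    build: .          # the JVM service, built from the Dockerfile
    depends_on:
      - postgres
      - elasticsearch
  postgres:
    image: postgres:9.4
  elasticsearch:
    image: elasticsearch:1.7
  nginx:
    image: nginx:1.9
    ports:
      - "80:80"
    depends_on:
      - app
```

A single `docker-compose up` then brings up the whole stack on a dev machine.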

We don’t want to have to wait minutes for instances to spin up and configure themselves in response to high volumes of traffic. Docker containers can be started much more quickly than an entire instance. Launching a VM means emulating an entire host, starting up virtual hardware and running its own kernel. Docker containers share the kernel of the host they are running on and have far less baggage than a whole virtual machine to deal with at startup.

These are some of the problems we are trying to solve with our move to Docker. Stick around for part two where I’ll cover the tools we’re using to make life easier for ourselves.

If you enjoyed the read, drop us a comment below or share the article, follow us on Twitter or subscribe to our #MetaBeers newsletter. Before you go, grab a PDF of the article, and let us know if it’s time we worked together.
