Jamie Perkins

containerize the pain away: part 2


This is the second half of a two-part series on our move to Docker. You can read the other half here. Last time we briefly covered our current infrastructure setup and the way we’d like it to look in the future. This post goes over the tools we’re using to get there.

consul

Consul is a key/value store with service discovery built on top. It exposes both HTTP and DNS APIs for discovering services. This allows us to query Consul for something simple like elasticsearch.service.consul and in return receive the IP address of any healthy Elasticsearch node registered with Consul.
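For example, querying the DNS API against a local agent (8600 is Consul's default DNS port; the address in the answer is whichever healthy node Consul picks, illustrated here):

    $ dig @127.0.0.1 -p 8600 elasticsearch.service.consul +short
    10.0.1.12

The same information is available over HTTP from the /v1/health/service/elasticsearch endpoint.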

How services get registered in Consul in the first place is covered later.

vault

Vault is another piece of software by Hashicorp, who also make Consul. It is a secrets store for securely storing and retrieving encrypted configuration. It supports many different storage backends, of which we are using Consul.
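The day-to-day workflow looks something like this, assuming the generic secret backend mounted at Vault's default secret/ path (the key and value here are illustrative):

    $ vault write secret/infra-prod/db password=s3cr3t
    $ vault read secret/infra-prod/db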

envconsul

Envconsul is responsible for populating our container environments with configuration stored in Consul. Every service that needs its environment populated from Consul uses envconsul as its startup command. Envconsul first pulls the relevant config for the container from Consul using a key prefix, then calls into the entry point for your service.

Envconsul also handles pulling in secret configuration from Vault and exposes it the same way as config pulled from Consul.
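Put together, a container's startup command ends up looking something like this (the prefix, secret path and jar name here are illustrative):

    $ envconsul -prefix infra-prod \
                -secret secret/infra-prod \
                java -jar our-service.jar

Each key under the prefix becomes an environment variable before envconsul hands over to the java process.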

Combining this with the environment configuration overriding in Dropwizard helps to simplify the way our services load configuration. For some services that predate Dropwizard, we have JVM-system-properties-mangling scripts to decide which configuration should be overridden. For Dropwizard services that predate Docker, we have Hiera merging YAML files. Neither of these approaches makes it particularly easy to see where configuration came from at runtime.

With Consul and Envconsul, we have a single set of configuration, defined before any container is started, that can be inspected at any time.

This means no baked-in configuration at build time and no rebuilding images for configuration changes.

registrator

Registrator performs an important bit of magic for us. It runs in a container on all our container hosts (EC2 instances) and listens for new containers spinning up. When it detects a new container starting, it looks for the ports the container exposes and automatically registers that container in Consul. Add a little bit of configuration to each container to tell Registrator which service it should be registered under and you have automatic service discovery of any containers that expose ports.
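As a sketch, following the gliderlabs/registrator README (the Elasticsearch container is illustrative):

    # one registrator per host, watching the local Docker daemon
    $ docker run -d --name registrator --net host \
        -v /var/run/docker.sock:/tmp/docker.sock \
        gliderlabs/registrator consul://localhost:8500

    # containers opt in to a service name via an environment variable
    $ docker run -d -p 9200:9200 -e SERVICE_NAME=elasticsearch elasticsearch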

git2consul

git2consul handles another important bit of magic. Most of our service configuration lives in YAML files kept in a Git repository. If the name isn’t clue enough, git2consul takes this Git repository and flattens all its YAML files into keys and values in Consul.

For example, say we have a file infra-prod.yaml containing:
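    service:
      port: 9090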


This becomes INFRA_PROD_SERVICE_PORT=9090 when loaded into a container’s environment. This saves us the headache of changing how we store and load configuration for our services whilst in that tricky migration phase.

It also allows us to keep our versioned configuration in Git while also having the latest version of that config available from Consul. We use a CI job triggered by a Git hook to run git2consul whenever a commit is pushed.

nginx and consul template

Nginx is a web server and reverse proxy. Consul Template renders template files based on values in Consul. These two things go together very nicely: Nginx configuration can be generated and updated in response to Consul changes.

This means we can update the routing our Nginx instances perform in real time as containers come and go, being registered and unregistered from Consul in the process.
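As a sketch, a template like this renders an upstream block from the healthy instances of a service (the service name and file paths are illustrative):

    upstream my-service {
    {{ range service "my-service" }}
      server {{ .Address }}:{{ .Port }};
    {{ end }}
    }

    $ consul-template \
        -template "my-service.ctmpl:/etc/nginx/conf.d/my-service.conf:nginx -s reload"

Whenever the set of healthy my-service instances changes, the file is re-rendered and Nginx reloaded.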

The combination means we can have a layer of Nginx containers that perform routing for all our services, thus keeping all that logic in one place, not spread out across various hosts each with their own Nginx/Apache.

We can simply point all metabroadcast.com subdomains at our Nginx routing layer and let Nginx figure out what to do with each request.

but wait, forget all that

After figuring out how these tools fit together and how we might use them, we also gave Kubernetes a try. Kubernetes is Google’s container orchestration software, although they say it’s more akin to choreography. Kubernetes does many of the things the above set of tools combine to do, out of the box. We were initially hesitant to tie ourselves to the Google way of doing things. Unlike simply tying ourselves to AWS’s infrastructure, this would be tying ourselves to Google’s design patterns and abstractions, essentially dictating how we deploy, operate and scale our software.

After having played around with it, though, we have become comfortable with this idea. It turns out Google, with their decade and a half of experience running production systems at the largest scale, have figured out most of this stuff and made those solutions available to us all.

how does Kubernetes compare?

In comparison to our set of tools above, Kubernetes uses etcd where we used Consul. It has its own service discovery and auto-registration, using SkyDNS on top of etcd.
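The equivalent of the elasticsearch.service.consul lookup earlier, run from inside a pod and assuming a Service named elasticsearch in the default namespace:

    $ dig +short elasticsearch.default.svc.cluster.local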

It has built-in secrets management to replace Vault, again using etcd to store these secrets. Where we were using git2consul to store configuration in Consul, Kubernetes has its own ConfigMap resource that can store configuration in etcd and magically inject that configuration into a container at runtime, with no extra tools needed.
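A minimal sketch of what that looks like, assuming a ConfigMap named infra-prod mirroring the example above:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: infra-prod
    data:
      service.port: "9090"

A container in a pod spec can then pull that key straight into its environment:

    env:
      - name: INFRA_PROD_SERVICE_PORT
        valueFrom:
          configMapKeyRef:
            name: infra-prod
            key: service.port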

While we no longer need Consul Template and a layer of Nginx for services whose routing is as simple as finding a DNS record, we do need them for more complex URI-based routing. For this, we intend to use confd, which appears to be a complete alternative to Consul Template that also supports etcd.
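A sketch of a confd template resource, assuming etcd as the backend (the paths and keys are illustrative):

    # /etc/confd/conf.d/my-service.toml
    [template]
    src = "my-service.conf.tmpl"
    dest = "/etc/nginx/conf.d/my-service.conf"
    keys = ["/services/my-service"]
    reload_cmd = "nginx -s reload"

Running confd -watch -backend etcd -node http://127.0.0.1:2379 then re-renders the Nginx config whenever those keys change.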

It’ll also automatically set up log handling and metric recording for your containers using Fluentd, Kibana and Grafana right out of the box. So on the software and features side, Kubernetes seems to lack nothing so far.

Kubernetes has freed us from having to build out our own tooling on top of ECS and it’s saved us from having to define and design all of these abstractions ourselves.

Furthermore, Kubernetes’s ethos is that you should be able to write cluster- and container-aware applications. In practice this means Kubernetes exposes a ton of information through its APIs about the state of the cluster and so on.

If you’re thinking you probably need the flexibility of ECS to implement complex workflows, it’s worth checking Kubernetes doesn’t already offer everything you need, letting you have your cake and eat it.

The community is highly active; I recommend lurking in the Kubernetes Slack channel as a good way of picking up details and learning more.

wrapping up

Hopefully this is a useful roundup and comparison of tools to those looking to do a similar thing. We’re still just starting out and getting our heads around everything, so this may change for us again. If you’ve got any experience migrating to Docker or you want to hear from us, why not get in touch?

If you enjoyed the read, drop us a comment below or share the article, follow us on Twitter or subscribe to our #MetaBeers newsletter. Before you go, grab a PDF of the article, and let us know if it’s time we worked together.
