Garry Wilson

peeling off the varnish

I’ve written a few times now about Varnish, and how we have been using it to provide a caching layer for any part of MetaBroadcast that might receive a lot of traffic in a short space of time.

Until recently, we were using Varnish for three main jobs: serving widgets, serving resized images, and caching some traffic to reduce load on the API servers.

Widgets and images are considered immutable once created, and are cached long-term based on the headers we provide. When we need to change a widget, we publish versioned directories containing the updated JavaScript and CSS; the new version of each widget’s files is then cached in the same long-term way.
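As a rough sketch of what that looks like in practice (the bucket name, paths and one-year age below are purely illustrative), each new version of a widget is uploaded under its own directory, with a long-lived Cache-Control header attached to the objects:

```
# Publish a new version of a widget's assets with a long cache age.
# Bucket name, path and version number are hypothetical examples.
aws s3 cp ./build/widget.js \
    s3://example-widgets-bucket/my-widget/v2/widget.js \
    --cache-control "public, max-age=31536000"
```

Because the path changes with every version, old cached copies never need to be invalidated; clients simply start requesting the new path.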

As for the API traffic, we use Varnish to provide a level of protection by caching responses for only a short time, ensuring the app servers don’t get overloaded by a rush of requests.

so what’s the problem?

Whilst we like Varnish and it serves an important role, it would be nice to rely on it less for critical traffic, and to separate the three roles we have given it more cleanly. The Varnish Configuration Language is also not exactly fun to follow or update; it would be great to move the business logic that has accumulated there somewhere we can manage it more easily.

We have been using CloudFront for some of our front-end projects for a while. It gives us the benefit of caching, plus the ability to add SSL easily, which we wouldn’t get by just hosting from S3 directly. It seems like a great candidate to take over the widgets and images from Varnish.

caching all the things

Using CloudFront with widgets is straightforward, as we already have those in an S3 bucket. We can simply set up a CloudFront distribution that will cache those S3 assets.
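For a bucket-backed distribution, the setup can be as simple as something like the following (the bucket name is a placeholder, and CloudFront’s defaults cover the rest):

```
# Create a CloudFront distribution with the S3 bucket as its origin.
aws cloudfront create-distribution \
    --origin-domain-name example-widgets-bucket.s3.amazonaws.com
```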

Images are a little trickier, because we provide the option to resize or transform images with querystring parameters. Images are likewise stored in an S3 bucket, but requests for them are routed by Varnish through the resizer rather than served directly.

To make images work with CloudFront, we will use our nginx proxy as the CloudFront origin. Nginx will proxy requests to the resizers, just as Varnish used to, and will also add the headers we use to set the cache age. Nginx is already where we concentrate all our other header and cache logic, so it makes sense to continue that here, and it means the resizers themselves don’t need to be updated.
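A minimal sketch of the relevant nginx configuration, assuming a hypothetical upstream of resizer instances (all names, addresses and paths here are illustrative):

```
# Pool of image resizer instances (hosts are placeholders).
upstream image_resizers {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}

server {
    listen 80;

    location /images/ {
        # Hand the request to the resizers, as Varnish used to.
        proxy_pass http://image_resizers;
        proxy_set_header Host $host;

        # A long cache age, so CloudFront (and browsers) can keep
        # each rendered variant for up to a year.
        add_header Cache-Control "public, max-age=31536000";
    }
}
```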

One important thing to note here is that our CloudFront distribution is set to cache images separately based on all querystring parameters. So, as we can append “width” and “height” to an image request to have it resized, each variant of each image will be cached within CloudFront for up to 1 year. Typically, anyone requesting images from us will make many requests to a small number of image variations, depending on the dimensions they need for where the image will be placed in their app or site.
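Inside the distribution’s configuration, that behaviour boils down to forwarding the query string, which makes it part of the cache key, and allowing a TTL of up to a year. A fragment of the cache behaviour might look something like this (the origin ID is a made-up example):

```
"DefaultCacheBehavior": {
    "TargetOriginId": "nginx-image-proxy",
    "ViewerProtocolPolicy": "redirect-to-https",
    "ForwardedValues": {
        "QueryString": true,
        "Cookies": { "Forward": "none" }
    },
    "MinTTL": 0,
    "DefaultTTL": 86400,
    "MaxTTL": 31536000
}
```

With QueryString forwarding enabled, ?width=300&height=200 and ?width=600&height=400 are cached as separate objects, which is exactly what we want for image variants.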

extra benefits

With widgets and images out of the picture (no pun intended), we can greatly simplify our Varnish configuration. Instead of separate logic for those three cases, each with its own caching and header concerns, we can now use Varnish as a simple cache layer in front of our nginx proxy, caching for 10 seconds to maintain that safety buffer against a surge in requests.
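The remaining Varnish logic then shrinks to something close to this sketch (assuming Varnish 4 syntax; the backend host and port are placeholders):

```
vcl 4.0;

# The nginx proxy is now the only backend Varnish needs to know about.
backend nginx_proxy {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # One uniform, short TTL: long enough to absorb a burst of
    # identical requests, short enough that nothing stays stale.
    set beresp.ttl = 10s;
}
```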

Even better, CloudFront means we can now offer both SSL and HTTP/2 across our widgets and images. Combined with the edge locations that CloudFront provides as standard, this should greatly reduce load times for the images and widgets used by our clients and their users.
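Both are simple switches on the distribution itself; in the configuration they look something like this (the certificate ARN is a placeholder):

```
"HttpVersion": "http2",
"ViewerCertificate": {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example",
    "SSLSupportMethod": "sni-only"
}
```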

