While in general for our larger (mostly Java-based) projects we do our deploys to multiple instances, which pull complete (and versioned) binaries straight out of our Artifactory repo, there are some more lightweight projects, mostly in Node.js, where deploys are done directly from source, straight out of our Git repositories.
In either case, we maintain a logical directory structure on each machine for each service, containing all the binaries/source, logs, (links to) data storage, an init script, and a deploy script. These are all put in place automatically by Puppet, and stay consistent across machines and over time.
For example, in the case of the simple Node projects I’m talking about, the structure is something like this:
- dr-x repo — local checkout of git repo
- -r-x service — init script that sorts out all the other paths… (linked to from /etc/init.d)
- drwx logs — folder for logs
- drwx data — optional folder (or link to data partition) for working storage
- -r-x postCheckout — optional script to modify the repo with custom config and correct permissions
- -r-x deploy — deploy script
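The Puppet side of laying this structure down might look roughly like the following sketch; the module name, variable names, and modes here are assumptions for illustration, not taken from the post:

```puppet
# One directory tree per service; the deploy script is rendered from ERB.
$base = "/srv/${service_name}"

file { [$base, "${base}/logs", "${base}/data"]:
  ensure => directory,
  owner  => 'deployer',
}

file { "${base}/deploy":
  ensure  => file,
  mode    => '0555',
  content => template('myservices/deploy.sh.erb'),
}
```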
the deploy script
The most frequently used thing from an ops perspective is the deploy script. This performs a combined restart/update, “bringing live” the latest version (or, in the case of versioned binaries, the one configured through an AWS instance tag).
The Node one’s Puppet template looks like this:
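The listing itself isn’t reproduced here, so this is a hedged sketch of roughly what such an ERB-templated bash deploy script could look like; the parameter names, the service restart handling, and the overall layout are assumptions rather than the original template:

```erb
#!/bin/bash
# deploy script for <%= @service_name %> -- rendered by Puppet from ERB

# re-run ourselves as deployer, the user that owns the Git checkout
if [ "$(whoami)" != 'deployer' ]; then
  exec sudo -u deployer "$0" "$@"
fi

# instance parameters, filled in by Puppet as plain string declarations
GIT_REPO='<%= @git_repo %>'
GIT_BRANCH='<%= @git_branch %>'
REPO_DIR='<%= @repo_dir %>'

# idempotent clone-or-update of the configured remote branch
mkdir -p "$REPO_DIR"
cd "$REPO_DIR"
git init
git remote add origin "$GIT_REPO" || true
git remote set-url origin "$GIT_REPO" || true
git fetch origin "$GIT_BRANCH"
git reset --hard FETCH_HEAD

# apply custom config / permissions, then bounce the service
[ -x ../postCheckout ] && ../postCheckout
service <%= @service_name %> restart
```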
what does it do?
Lines 9-12 make sure the script runs as deployer—the user that owns the Git checkout. Every engineer has permission to sudo as deployer.
This is distinct from the service’s own user, which has write permissions only on the appropriate data folders. These are either externally kept in data and linked to, or chmodded specially, in either case by the service’s own postCheckout script.
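The post doesn’t show a postCheckout script, but a minimal sketch might look like this; the user and path parameters are hypothetical illustrations:

```shell
#!/usr/bin/env bash
# Hypothetical postCheckout: link working storage into the checkout and
# give only the service's own user write access to it. The checkout
# itself stays owned by deployer.
set -e

post_checkout() {
  service_user=$1
  base_dir=$2
  # expose the data partition inside the repo via a symlink
  ln -sfn "$base_dir/data" "$base_dir/repo/data"
  # write access for the service user on its working storage only
  chown -R "$service_user" "$base_dir/data"
  chmod -R u+w "$base_dir/data"
}
```

In real use this would run as root via the deploy process, so the chown can target an arbitrary service user.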
The variable declarations in lines 16-18 are filled in by Puppet (it’s an ERB template of a bash script…), ending up as straightforward string declarations. Separating the parameters in this way makes the differences between each instance of a deploy script obvious.
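For example, a parameter line in the template versus the rendered script on disk might look like this (the parameter name and URL are hypothetical):

```erb
GIT_REPO='<%= @git_repo %>'                            # in the ERB template
GIT_REPO='git@github.com:example/some-service.git'     # after Puppet renders it
```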
Lines 24-28 are basically just responsible for checking out a remote Git branch locally. This is a five-line process for multiple reasons:
- We want the deploy script to work consistently for the first creation of the service folders / first clone as well as for any subsequent updates.
- We want to do updates properly through a pull-type operation rather than re-cloning the whole repo each time.
- We want the script to smoothly replace the folder contents if we alter the configured remote or branch name in Puppet.
This is actually quite straightforward—running git init in an empty folder turns it into an empty repo, while running it in an existing repo does nothing. Likewise, setting the origin remote, fetching from it, and then resetting the working directory to the fetched branch consistently has the desired effect regardless of any changes.
The only complexity comes from an annoying property of the git remote commands. While most git commands have an idempotent form, “remote add” only works if the named remote doesn’t already exist, and “remote set-url” only works if it does. Since we can just do both and safely ignore any failure, we deal with this using the old “|| true” fix.
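The steps above can be sketched as a runnable function; the `checkout_branch` name and its parameters are illustrative, not from the original script:

```shell
#!/usr/bin/env bash
# Idempotent "clone or update" of a remote branch, along the lines the
# post describes.
set -e

checkout_branch() (              # subshell body, so the cd doesn't leak out
  repo_url=$1 branch=$2 target_dir=$3
  mkdir -p "$target_dir"         # works for the first deploy and for updates
  cd "$target_dir"
  git init -q                    # makes an empty folder a repo; no-op otherwise
  # "remote add" fails if origin already exists, "set-url" fails if it
  # doesn't -- so do both and ignore failures with the old "|| true" fix
  git remote add origin "$repo_url" 2>/dev/null || true
  git remote set-url origin "$repo_url" 2>/dev/null || true
  git fetch -q origin "$branch"
  git reset -q --hard FETCH_HEAD # force the working tree onto the fetched branch
)
```

Running it a second time against the same folder simply fast-forwards the checkout, and changing the URL or branch arguments repoints it cleanly.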
While deploying from the source repo is probably not a great idea technically, it is quite easy and works well enough for smaller/simpler projects. The trade-offs:
- App server holds the complete Git repo, which isn’t technically necessary.
- No versioning – we just pull the latest head of a named branch (although we could potentially use an AWS tag, just as we do with Maven versions, but with the versions denoted by Git tags).
- No need to define a complex build process apart from the postCheckout script.
- No need to separately store versioned “binaries” for deployment (e.g. .zip files in the case of Node projects).
Anyway, hopefully some of the stuff I’ve mentioned is helpful, and if you have any other relevant tips, please tweet them at us or leave a comment. Cheers!