More and more I keep running into assertions that Git is a version control tool, and that if you use it for deployment, you're doing it wrong.

Why?

At Freelock we find it to be a very effective deployment tool, and I'm not seeing a solution that meets our needs any better.

Two recent presentations in particular caught my attention with this message:

... but those are just two that come to mind. I feel like I've heard this message from lots of different directions, particularly across the broader PHP community.

But again, why? What is wrong with using git as a deployment tool?

Reasons to not use git as a deployment tool

As best I can tell, there are three main arguments against using git for deployment. These boil down to:

  1. Storing environment-specific information, including security risks around database credentials, API keys, etc.
  2. Storing object files, generated files, code unrelated to the development of the project.
  3. Disk usage, wasted space taken up by having the full history of a project in a production deployment.

I think the main thrust of this line of thinking is that "git is for source code management, you don't need that in delivered production code, and storing all of this other binary/environment stuff gets in the way of developing the upstream code for a project."

When we started using git for deployment, we ran into these issues, and lots of challenges particularly around managing multiple environments. But they are entirely solvable, and git brings with it a number of big positives as one part of a deployment toolchain.

Why I think git is a great deployment tool

Git brings several big benefits to deployment:

  1. It's extremely fast: it can merge in a huge number of changes in a few seconds.
  2. It's very fast and easy to roll back changes that go awry, especially if you use tags effectively.
  3. For managing lots of related websites, it's an extremely effective way to apply patches across a bunch of different sites.
  4. With a cryptographic hash of every code file, it's extremely simple to detect unauthorized modifications of a file -- e.g. to detect if you've been hacked.

Git does not provide these benefits just by using it. For it to be an effective deployment tool, you need to have a very clear organization, relatively strict control over your processes, and some guiding principles. But with those in place, I haven't seen any tool that does a better job for deployment than git.

How we use git for deployment

We've spent years refining our git deployment process, and it actually feels very dialed in at this point. It's not the only tool we use -- we use a bunch of other tools and scripts to help automate and streamline this process, but here are a few of the ways we've addressed the problems with using git for deployment, and leveraged its benefits:

Keep environment-specific settings out of git

Acquia does this. So does Pantheon. Most hosts that rely on git either provide their own environments entirely, or leverage some sort of include structure.

We commit the settings.php file, and can add $conf variables there that need to be available in all environments. At the bottom of settings.php, we use a PHP include statement to pull in a local settings file that is excluded from git. That file stores database credentials, environment indicator settings, API keys (for production), enable/disable variables for things like securepages and reroute_email, and anything else that only applies to a single environment.
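A sketch of what the bottom of that committed settings.php can look like -- the filename settings.local.php and the variables shown are illustrative, not our exact file:

```php
<?php
// ... the committed parts of settings.php, including $conf values
// that should apply in every environment ...
$conf['site_name'] = 'Example Site';

// Pull in per-environment overrides from a file excluded via .gitignore.
// That file holds $databases credentials, API keys, environment
// indicators, and similar single-environment settings.
$local = dirname(__FILE__) . '/settings.local.php';
if (file_exists($local)) {
  include $local;
}
```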

We catalog environments in drush alias files, one for each client site. These aliases include modules to enable/disable whenever a database sync is performed into an appropriate environment (e.g. into staging, sanitize emails, enable dev modules, disable payment modules, etc).
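A drush alias file for one client site might look roughly like this. The site names, paths, and module lists are invented, and the sync-enable/sync-disable keys are custom options our sync scripts read -- not built-in drush behavior:

```php
<?php
// example.aliases.drushrc.php -- one alias file per client site.
$aliases['dev'] = array(
  'root' => '/var/www/example/dev/htdocs',
  'uri' => 'dev.example.com',
);
$aliases['stage'] = array(
  'root' => '/var/www/example/stage/htdocs',
  'uri' => 'stage.example.com',
  // Read by our sync scripts after a db sync into this environment:
  'sync-enable' => array('stage_file_proxy', 'reroute_email'),
  'sync-disable' => array('securepages', 'commerce_payment'),
);
```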

We distribute drush alias files using Salt Stack, a configuration management tool similar to Puppet, Chef, or Ansible. We're working on moving more environment-specific settings (mainly for provisioning Docker containers or kicking off test scripts) into Salt.

Add generated code to git

You'll find all sorts of guidance to keep things out of git: vendor code, things that can be pulled down with make files, things generated with Sass/Compass/other tools.

Bullshit.

Coming from an operations perspective, why on earth would you want build tools present on a production server? Setting up tools like Compass and Composer can put you into dependency hell, even with composer.lock and Gemfile.lock files. Docker seems to be what's in vogue these days for managing that -- run a bunch of build scripts in a Docker container, and then copy the result up to production -- except that nobody who does that dares to run these Docker containers in production. And you've added a bunch of steps to getting something you can actually run on a server, requiring re-compiling and rebuilding everything for the tiniest change.

Git handles this way, way better. For any scenario where you need tools to build something you're actually delivering, why not commit the working CSS (or whatever you're generating) into your build and push that out? No interim container builds necessary. No risky build tools lying around on your production server. No downtime waiting for builds to run on production -- just deploy and clear some caches.

Buy more disk space and grab a coffee

Having a copy of every single Drupal core commit ever in every single production copy may be unnecessary. It does make a clone of the git tree take a few minutes, and take up a couple hundred megabytes of disk.

While that doesn't really add any significant value to a deployment process, and while there's really no benefit to keeping all that around in so many copies, we've found that it's just not worth cleaning all that out.

First of all, these penalties really only make any difference the very first time you create a new copy of the site. After that, updates are really, really fast.

But the bigger reasons are all related to our core business: keeping dozens of Drupal sites up-to-date.

We've written about our branching and updating strategy before. In short, we maintain our own cloned copy of the main Drupal.org git repository, with branches for Drupal 6, 7, and Pressflow. Upgrading to a new point release of Drupal means merging the new tag into our branch. We've also written a script to update all the contrib modules we curate across most of our sites. We apply patches to both Drupal core and contrib (for example, patching this core issue, and an ever-changing set of patches for Media 2.x and other modules).
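The mechanics of that merge are simple enough to show with two tiny stand-in repos -- one playing the part of drupal.org core, one playing our patched clone. The repo names, file contents, and branch name freelock-7.x are all invented for the demo:

```shell
# Sketch: merging a new point-release tag into a curated, patched branch.
set -e
work=$(mktemp -d); cd "$work"

# Stand-in for the drupal.org core repository:
git init -q drupal-core && cd drupal-core
git config user.email demo@example.com
git config user.name demo
echo "7.37" > VERSION
git add . && git commit -qm "Drupal 7.37"
git checkout -qB 7.x
echo "7.38" > VERSION
git commit -qam "Drupal 7.38"
git tag 7.38
cd ..

# Our "upstream" clone carries local patches on top of the previous release:
git clone -q drupal-core upstream && cd upstream
git config user.email demo@example.com
git config user.name demo
git checkout -qb freelock-7.x origin/7.x~1   # branch from the 7.37 commit
echo "core fix" > core-issue.patch.txt
git add . && git commit -qm "our curated core patches"

# Upgrading to the new point release is just a merge of the new tag:
git fetch -q --tags origin
git merge -q -m "Merge Drupal 7.38" 7.38
cat VERSION
```

Because the patches live in regular commits, the merge carries them forward automatically; git only asks for help when a new release actually conflicts with one of our patches.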

We call this clone our "upstream" and from there we can simply merge these updates into all of our clients' dev sites, run update scripts, and test.

Trying to remove or squash the commit history of Drupal always seems to cause a lot more problems in this deployment process than it solves, so we just live with the extra disk usage and go refill the coffee cup.

Git is a "distributed version control system"

... This seems to be the biggest point that these "purists" who think git should only be used for version control miss: it's extremely cheap and easy to clone and maintain different branches with git. That means you can keep your upstream pristine clone for developing a project that you want to publish on Drupal.org, and very easily use a different git clone to manage it for deployment in a particular site. There is no reason you can't do both with git.

On to the stuff we leverage...

Git post-update hooks for automated deployment

We use a central git server running gitolite. On our gitolite server, we've added a post-update hook that looks for pushed branches with a name matching "release/*" or "hotfix/*". If such a branch is pushed, it passes the repository name, branch name, user who pushed it, and a couple other details to Jenkins.
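Here's a sketch of what such a hook can look like. The Jenkins call is reduced to an echo and the parameter names are invented; gitolite does provide $GL_REPO and $GL_USER, and post-update receives each updated ref as an argument:

```shell
#!/bin/sh
# Sketch of a gitolite post-update hook that triggers a stage deploy
# for release/* and hotfix/* branches only.

notify_jenkins() {
  # The real hook would curl Jenkins' buildWithParameters endpoint;
  # here we just print what would be sent.
  echo "deploy: repo=$1 branch=$2 pusher=$3"
}

handle_ref() {
  case "$1" in
    refs/heads/release/*|refs/heads/hotfix/*)
      notify_jenkins "${GL_REPO:-demo-site}" "${1#refs/heads/}" "${GL_USER:-demo}"
      ;;
  esac
}

# Demo: only the first ref matches the deploy pattern.
handle_ref refs/heads/release/2015-06-17
handle_ref refs/heads/master
```

Pushes to ordinary feature branches fall through the case statement untouched, so developers can share work-in-progress without ever triggering a deployment.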

Jenkins runs a shell script that makes heavy use of drush to connect to the site's stage copy and automatically deploy there. If the branch already exists on the stage site, it skips the database copy; otherwise it imports a fresh copy of the production database, runs through some sanitization, checks out the new code, runs updates, reverts features, and notifies us in chat when it's done. Then we can review the Jenkins log and the state of the stage instance to determine whether there are manual steps we've overlooked, how long the deployment will take (e.g. to run update scripts, apply features), and whether or not we should plan to take the production site offline for the upgrade (we lean towards keeping it online unless there will be many minutes of entirely broken user experience...)

Production tags and deployment

We don't automatically deploy to production, but we have written scripts to support our process. Our stage cleanup script merges the current release code into the master and develop branches, cleans up the central repository, and leaves the stage copy on master and the development copy on develop. It also creates a git tag for the particular release.
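The git moves in that cleanup script boil down to a handful of commands. This is a throwaway-repo sketch (our real script also drives drush and the central repository; the file contents and git-flow branch names are illustrative):

```shell
# Sketch of the "stage cleanup" git moves, in a throwaway repo.
set -e
work=$(mktemp -d); cd "$work"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

echo v1.0 > app.txt
git add . && git commit -qm "initial release"
git checkout -qB master          # normalize the branch name for the demo
git branch develop
git checkout -qb release/1.1
echo v1.1 > app.txt
git commit -qam "release prep work"

# The cleanup: merge the release to master and develop, tag it,
# and delete the release branch.
release=release/1.1
tag=1.1
git checkout -q master
git merge -q --no-ff -m "Merge $release" "$release"
git tag "$tag"                   # easy restore point for production rollbacks
git checkout -q develop
git merge -q --no-ff -m "Merge $release" "$release"
git branch -d "$release"
```

The --no-ff merges keep each release visible as a single bubble in the history, and the tag is what production later pulls down.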

On production, we have nightly backups of the database, and the ability to easily take ad-hoc database snapshots. Before deploying we do a few quick checks:

  • Is the database snapshot current enough? Take a new one if there are any update scripts.
  • git status - Is there any uncommitted code? (Might indicate a bigger problem).
  • drush fl - any features that are not up-to-date? If so, might need to update/pull back to development and re-roll the release.
  • git log/git tag -- identify the current state of the code base -- should already be tagged with the previous release version. If not, create a new tag so we have an easy restore point.

Once ready, we simply pull down the new tag from master, run the updates, apply the features, and then follow any manual deployment steps that need to be done.

Roll back when hell breaks loose

With git, deployment of new code is fast. Once you've fetched the code updates, the merge process only changes files that have changed in the new release, and this process takes fractions of a second for small updates, a few seconds for large ones.

Rolling back is just as fast. Find a bunch of substantial problems? Simply git checkout release-x-1. Find some individual problem in a particular module? git checkout <myoldrelease> path/to/module, and then you can use other git tools to bring that back to development for a proper fix.

What other tool solves these deployment challenges?

Rolling back seems to be the most critical capability, and it's something you want to be able to do immediately.

What other tools are out there that can handle these challenges?

The main ones I hear about: Docker, build scripts/make files, alternative images.

Docker

We've actually started using Docker in a pretty big way, but not for managing deployments of site code -- git does a far better job for that, as we've already seen. At least if you're managing a bunch of different but similar sites -- if you are trying to deploy exactly the same site to a bunch of different servers to handle high loads, Docker may well have some advantages.

But the main way to roll back with Docker is to stop the container running the new site, destroy it, then run a new container based on the old image. You end up with some downtime, and if you're using Docker to isolate processes, you now need all the other linked containers to get connected to the new one, possibly triggering a cascade of Docker restarts and far more downtime.

Build Scripts/Make Files

What, are you crazy? Since your site isn't in git, you now need to go back to your development machine and check out an old copy of your site with the old versions of the make files -- if those had any version tags to begin with. If not, now you've got to figure out what version to go back to in order to get to a working state.

Utter madness. You might as well go back to your previous night's backup, but now you risk losing a bunch of data, too!

Alternative production images

I don't know of anyone using this deployment strategy with Drupal. But while doing some consulting for Microsoft, I ran into this pattern as the preferred way of releasing on Azure. With this pattern, you have not one but two copies of your production site, with essentially a load balancer in front of them. You deploy to the offline one, get it all up-to-date, and then switch all traffic to it. If anything goes wrong, you simply switch back to the other one.

Database schema changes were similarly captured as they were made, using tools that could reverse those schema/data changes if necessary.

Now that sounds like a very intriguing deployment strategy, especially the database schema management. But that's one tool the Drupal community does not (to my knowledge) have at its fingertips... yet.

And even so, the git deployment strategy we use complements this approach very well -- you still need to get the code onto the production instances somehow, and git seems far better than FTP/SFTP... and all the build-script/make file security issues still apply.

What's your take?

Our approach to deployment comes from a Dev-Ops, risk management point of view. We've developed our practices through thousands of releases on hundreds of different sites, with no major issues. We strive to make our production servers extremely secure, easily recoverable from bad releases, and running with as little downtime as possible. From this perspective, I'm not seeing any viable alternative to git for deployment that does as good a job...

What do you use for deployment? Why shouldn't we use git this way? Is there some magical deployment tool we're missing from our arsenal?


Comments (12)

I also use git as a deployment tool, in spite of all the naysayers. As long as your project fits git and its strengths and weaknesses - no big binary blobs, and so on - and you know git well enough not to get yourself in a mess, git is great for deployment.

16 Jun, 2015

How do you keep track of the local settings files?
Do you have a separate git repository with all the settings files and aliases for all your projects?
Or one for each project?
An then why use Salt to deploy those and not git? Or why not use Salt to deploy your sites?

17 Jun, 2015

Hi,

Right now the local settings files are untracked, but we do have a backup system that snapshots them (along with everything else), keeping particular snapshots for 16 months. We use BackupPC for that, so it is easy to detect changes.

We do keep all the drush aliases in a separate git repository, and that one is automatically deployed to all our dev machines using Salt...

Salt uses git to deploy our Drush aliases. Right now, Jenkins handles code deployments (for now only to a stage copy) using... git.

We could just as easily use Salt to deploy instead of Jenkins -- it's an accident of how we developed our systems that makes Jenkins the current job runner there. In either case, it's still git...

As to why we're looking at deploying local settings files with Salt, we've been moving more of our infrastructure into Docker. So provisioning a new production site means creating a database and account, creating a Docker PHP container, mounting the code read-only, mounting the assets read-write, and dropping in an Nginx site configuration that points to the correct Docker container. That's all environment stuff... and maybe I'm dense, but I haven't figured out how to get Docker to manage that, especially across upgrades with minimal downtime... If we're doing all that other provisioning, it becomes trivial to have Salt write out a local settings file.

17 Jun, 2015

"except that nobody who does that dares to run these Docker containers in production"

Actually in this paragraph you describe something very similar to Platform.sh (it uses containers, but not Docker itself). An advantage of this process is that - because the container is rebuilt every time - the actual infrastructure can be managed in Git with just as much precision as the application code.

17 Jun, 2015

Hi,

Do you use these same containers for development/site updates?

It sounds like the trend in the Docker world is to use it to bottle up/isolate processes in containers with minimal stuff installed. This makes sense to me, because I see these containers primarily as process isolation/security boundaries. This lets us run old PHP versions in an extremely locked down way.

But to add all the stuff we need to actually do development opens up a huge number of security holes. Root is root, even in a container, and even if you can't see much that you can damage, if you can specify/mount a volume inside a container that has anything running as root, your attacker now potentially has root on the box...

So I am liking containers a lot, I'm just very, very careful about how I put them in production... and those goals seem entirely counter to the "put all my dev tools in a container so I can make it easy for developers". Still trying to resolve that!

And we're finding containers don't streamline deployment, they complicate it. Now if something in the container needs to be updated, we have to stop the container, delete it, and start a new container. While that's not much slower than restarting a service, it is slower... and harder to manage dozens. Salt to the rescue!

17 Jun, 2015
mradcliffe

I found the comment about make files and build processes to be aggressive and it made me question the value of the blog post as a whole. That section is full of assumptions. That entire section is complete hyperbole and should be ignored.

Firstly I think that git and VCS are tools, and that we can use these with other tools in wildly different ways. There are many methodologies that use a VCS, but the one that is described here is what I call the "site tree" methodology. I will make the assumption that the article is a "The case for storing a site tree into git as a deployment tool" instead and address some risks of this approach.

Additionally, another benefit of storing the entire site tree in a VCS that was not stated in the article is that it helps with auditing what's actually on production.

However a risk of storing the site tree without also tracking versions is that there is nothing built in that makes it easy to audit or track patches that have been applied to the code base. And let's face it, despite years of our evangelizing, people still try to hack core or put things into places they should not. This is a solved problem when using a make file.

At the moment I also work in an environment that stores both the site tree AND a make file into a VCS. The make file is used to build the site tree and then that is committed.

I still find that this presents problems for a multi-developer team, in that in order to work locally, I need to blow away the changes in the VCS anyway in order to re-clone modules and themes that I need to work on. This is somewhat solved by using git sub-modules in a site tree VCS, but sub-modules are finicky and hard to get rid of. Once again a make file solves the developer problem with a "--working-copy" flag.

Finally I also subscribe to the "wasting space" argument and try to be a git purist, but it is a weak argument compared to the other risks I mentioned and the benefits of a make file build system. I prefer storing a make file in a VCS, and then having a build process that builds the site in parallel. I've been doing this for years since Drush 3.

N.B.: I tried typing this on my phone on the way to work, but the captcha validation kept failing despite saying that I was a human.

17 Jun, 2015

Thanks for your comment!

I found the comment about make files and build processes to be aggressive and it made me question the value of the blog post as a whole. That section is full of assumptions. That entire section is complete hyperbole and should be ignored.

Really? It was certainly not meant to denigrate make files and build processes, just to point out that they are far worse as deployment tools than git.

Those are build tools, not deployment tools. Git, in spite of lots of people claiming the contrary, is a very fine deployment tool -- that's the point of my post... do you have any specifics about what makes one of these tools better for that task?

Additionally, another benefit of storing the entire site tree in a VCS that was not stated in the article is that it helps with auditing what's actually on production.

Completely agreed ... and that's my point #4 on why git is good for this...

For the rest of your comment, I don't disagree at all. We choose to clone the Drupal git tree because it's easy to manage patches. We do occasionally use make files for that, but find the downsides of completely blowing away the site as you describe outweigh the benefits for our needs, so we mainly use them to configure new functionality. I see no downside to storing make files in the VCS -- what I am arguing for is storing the generated tree in git for deployment purposes. That's the practice I keep hearing dismissed everywhere as "bad git practice -- it's a VCS," and that's what I'm standing up to call bullshit on. And it sounds like you agree and practice that too!

17 Jun, 2015
matthewn

Thank you for posting this. I have been using git this way for years. A couple of post-receive hooks take care of variations from environment to environment. Works like a charm.

17 Jun, 2015

We also tried using git to deploy but switched to rsync after it blew up a few sites due to deleted/renamed folders.
With rsync we push the changes first without --delete flag, then call updb and fra and such and then run rsync with the --delete flag.

18 Jun, 2015

Hi,

Thanks for commenting...

That seems like a pretty risky strategy to me! There have been many contrib module updates over the years that break if a file is still present in the tree that has been removed by the new version...

How do you roll back if you have a problem?

What "blew up" on your sites? Git handles moved/deleted files and directories just fine, if you use it right... If you are actually moving modules to different locations, you might need a registry rebuild, but that's nothing to do with git.

Rsync is a great tool, we use it for backups and lots of different copy operations. But anything code-related, we want the exact desired state of the entire tree, so we can roll forward/back or even revert individual commits if there's a problem with just one module we updated...

The other critical thing that might be missing from your environment is a test of the deployment process itself. Whenever rolling a release, we automatically deploy it to stage with a fresh copy of the production db. That gives us a preview of what will happen when we go live, and gives us an opportunity to do some final QA tests before launching. Just yesterday, some automated screenshot comparison tools we run as part of this release process caught some unintended layout changes to pages unrelated to some work we were doing on a couple specific pages...

18 Jun, 2015

As you can see we run rsync twice. The 2nd time with --delete so that the remote server is in the exact state we want (minus the excluded .git etc.) We also specify the backup flag so all changes are also stored in a separate backup dir. And a drush archive-dump is also made of course. And the sync is done from a local clone of the required tag, so rollback can also be performed by checking out the previous tag and redoing the deploy.

We found that doing the changes all at once would break the Drupal bootstrap when features or modules were moved or renamed, due to missing files. And no bootstrap means no drush (to do a registry rebuild) either.
That's why we do the
1. sync changes and additions to server
2. run updates, deploy-module, feature reverts, registry rebuild, etc.
3. sync deletions to server

And since rsync uses a similar algorithm to git (using deltas) it's fast too. And no git repo overhead on the prod server.

You are correct in stating that our pre-release testing is incomplete however. But due to regulatory restrictions for many of our clients we cannot simply take a copy of the production database. Thanks for reminding me to work on this :)

19 Jun, 2015

Rsync still sounds like extra work to me, with no benefits over git deployment and missing some nice features! If you're using git and tags on a clone to get to the right state on another copy, and then doing an additional sync to a production server...

Check out https://www.drupal.org/project/registry_rebuild ... after installing in your profile, a drush rr runs before bootstrap, so it can resolve file moves, etc.

We used to hit a lot more upgrade issues before drush removed all module files before installing new ones. May not be that frequent an issue, but we've definitely had upgrade problems with stale files leftover from previous module versions... Perhaps that's not an issue for upgrade, only for execution... but we've never had problems with module upgrades that cleanly replace the older version, the only time we need drush rr is if we move a module to a different path. And that would still cause issues in your process.

Rsync is fast for transfer, but you still have to go through a network, which is a lot slower than extracting out of the local disk (git pull does the network fetch first, then it does a local merge). And that git repo overhead to us brings with it the benefit of built-in file integrity checking.

Regarding getting a copy of the production database, take a look at drush sql-sanitize. We haven't had to deal with this issue directly yet, but we have talked with prospects who need that. Our thinking is to set up a backup copy of the database on the production server/other governed environment, and run sql-sanitize there with some additions to clean out or replace all sensitive data as appropriate. And then use a dump of that database for deployment testing.

19 Jun, 2015
