Panacea, or disaster? Drupal 8 Configuration Management was supposed to solve all our woes when it came to dealing with deploying configuration. In many ways it's a vast improvement, but in some ways it has almost made matters worse.

Configuration Management has made it possible to store all Drupal configuration in code. This is a huge win, of course. But there are several gotchas that make it challenging to work with:

  • If you don't track individual changes, a configuration export/import is an all-or-nothing proposition. If you have people making changes in multiple environments, merging these changes takes manual effort and coordination.
  • It's easy to clobber changes made in one environment, when you synchronize changes.
  • It takes extra effort to maintain different settings in different environments.
  • Some configuration relies upon content entities that may only be present in one environment (for example, custom block placement, panels). This configuration does not deploy successfully, and can break sites if you're not careful.

The extra complexity of working with Drupal configuration may prove to be one more factor raising the bar for amateur web developers. Configuration Management works best when you have other tools available to help manage it all, and a strong process that enforces best practices.

Gotchas

We've had a couple of issues catch us by surprise and affect sites in production, but for the most part, we've managed to avoid disaster. Here are some scenarios we see as challenges around configuration management. Most of these we've caught before they break stuff, but a couple have been "learning experiences".

Client makes a configuration change on production

Many of the larger enterprise users of Drupal actually remove admin interfaces and lock down configuration so administrators can't make quick changes on production. That's too cumbersome for many of our clients -- one of the advantages of Drupal is how easy it is for non-programmers to make quick changes. We have clients who edit views headers or footers, add blocks, or update panels as part of the day-to-day operations of their organizations.

Then, when we go to roll out the things we've been working on, if our dev database has gotten out of sync with production, deploying and importing the changed configuration clobbers all the client's work. Gone, gone, gone.

That does not make for a happy client.

Content does not deploy with new functionality

On a dev site, you might build out screens for new functionality, and drop in panes of content to support it. In Drupal 8, every block you create, or any other entity for that matter, gets created with a UUID. If you create a block and show it on a panel, it exports the panel configuration to code just fine -- but with the UUID of the block you created.

As it stands now, there's no way to deploy that block to production. If you try to create the content blocks on production first, they have different UUIDs, and so panels won't find them.
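For a sense of why this breaks, here's a rough sketch (illustrative only, not our code, and the UUID is made up) of what happens at render time: the exported placement config embeds the content block's UUID, and rendering looks the entity up by that UUID.

<?php
// Illustration only: the exported config references something like
// "block_content:<uuid>", so rendering depends on an entity with that
// exact UUID existing in the local database.
$uuid = '2f9a1b34-1111-2222-3333-444444444444'; // hypothetical UUID from the exported config

$block = \Drupal::service('entity.repository')
  ->loadEntityByUuid('block_content', $uuid);

if (!$block) {
  // On an environment where that entity was never created (or was recreated
  // with a different UUID), the lookup fails and you see the "Block not
  // found" placeholder instead of your content.
}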

This one caught us on one site -- we deployed some changes to a panels page, and were suddenly greeted on production with "Block not found" errors visible right on the home page of the site!

Development modules and settings enabled on production

Deploying a new module to production has become quite easy in Drupal 8 -- turn it on on a development site, configure it, export the config, then roll out and import the config on production, and presto, it's live. However, this also means that unless you take some extra measures, ALL of your configuration gets synchronized between production and development. That means production sites end up with development modules deployed, enabled, and configured -- consuming resources, potentially opening up attack vectors, and letting clients muck around with things they should be leaving alone.

So how do we deal with these gotchas?

Strategies for managing configuration

We've largely lifted our process from managing dozens of Drupal 6 and 7 sites using features, and applied it to Drupal 8. The same basic conflicts and issues arise there -- the key difference is that with Drupal 8, ALL configuration gets exported/imported, so you need to be far more disciplined in following a process.

Here are some of our strategies.

Keep all configuration in sync with code

The challenge here is that nothing in Drupal complains loudly if you change any configuration in a site. You have to go looking for these changes.

We run a nightly check on all of our production and dev (integration) sites. If this check finds any configuration that is out of sync with the code of the currently checked out branch, it nags us in the project's chat room.

This check alerts us when a client has made a configuration change on production that we need to bring back to dev and merge into the next release. Our developers learn to commit all of their work, including configuration changes, each night, and keep the dev configurations clean.

Configuration that is stored in code can be tracked, much more easily merged, reverted, or managed -- but this takes discipline.

Create content entities on production, and pull down database

This can lead to a multi-step deployment process. For example, to set up some memberships on a new site, first we have to deploy code to support the new membership entity types and entities. Then we need to create some of the content that supports those screens on production, hidden from public view. Then we need to copy down the database to our development server, and then finally we can put the pieces together and do the final deployment manually.

It sounds like a lot of shops just build everything up on development and copy the database up to production. The only time we allow this is for an entirely new site; otherwise, our cardinal rule is that the production database never gets overwritten, and all production data gets sanitized when we copy it down to development.

Use local configuration overrides, and exclusions

The D8 settings.php file comes with a commented-out include that loads a settings.local.php file meant for environment-specific configuration. A great feature of the configuration management system is that you can override any configuration value in a settings file, hard-coding it to whatever value you want in a given environment without affecting the actual configuration that will get deployed. This is the most powerful, and most used, way to manage environment-specific configuration.
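For example (these values are just illustrative), a settings.local.php on a development copy might force verbose error logging and turn off aggregation; none of these overrides ever show up in a config export:

<?php
// settings.local.php -- example overrides only; adjust per environment.
// These take effect at runtime but never appear in exported configuration.
$config['system.logging']['error_level'] = 'verbose';
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
// Environment-specific values, such as a sandbox API key for a hypothetical
// module, can be pinned the same way:
// $config['some_module.settings']['api_key'] = 'test-key';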

Drush also supports excluding specific modules when exporting or importing configuration.
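With Drush 8, for instance, that exclusion can live in a drushrc.php so nobody has to remember the command-line flag. A hedged sketch (the module list is just an example):

<?php
// drushrc.php -- sketch only; pick your own list of development modules.
// Keeps these modules out of config-export, and ignores them on
// config-import, so a sync never enables them on production.
$command_specific['config-export']['skip-modules'] = ['devel', 'kint'];
$command_specific['config-import']['skip-modules'] = ['devel', 'kint'];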

Our setup

To support using configuration management effectively, you need to do a bit of setup on each site up front. After installing the site, we do the following (a rough sketch of the settings-file steps follows the list):

  1. Edit the settings.php file to include settings.local.php
  2. Move the $databases array and other environment-specific configuration into the settings.local.php file
  3. Set the $config_directories['sync'] variable to a directory inside sites/default but outside of sites/default/files, so that we can manage the site configuration in git
  4. Set up .gitignore to ignore settings.local.php, keeping settings.php tracked in git
  5. Set all the production settings for twig debugging, aggregation, etc., using "drupal site:mode prod" and commit the services.yml file
  6. Set .gitignore to ignore the services.yml file -- this keeps the file in git but ignores any changes to it, because development settings change this file
  7. Add the list of modules we want to designate as "development" to an appropriate drushrc file's command-specific "skip-modules" list
  8. Export the site configuration and commit.
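For reference, here is roughly what steps 1 through 3 look like in the settings files (paths and credentials are illustrative, not prescriptive):

<?php
// settings.php (committed) -- step 1: include the per-environment file.
if (file_exists(__DIR__ . '/settings.local.php')) {
  include __DIR__ . '/settings.local.php';
}

// settings.php -- step 3: a sync directory inside sites/default but outside
// of files/, so the exported YAML can live in git.
$config_directories['sync'] = 'sites/default/config/sync';

// settings.local.php (ignored by git) -- step 2: environment specifics.
$databases['default']['default'] = [
  'driver' => 'mysql',
  'database' => 'example_db',   // hypothetical credentials
  'username' => 'example_user',
  'password' => 'example_pass',
  'host' => 'localhost',
  'prefix' => '',
];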

Supporting workflows

When you want to actually use the configuration management system, you end up having several different workflows to support different scenarios.

Handle nightly check warnings

Our Matrix bot will alert us each night to sites that have configurations that are out of sync.

If it's the production site, and we haven't deployed there in the past day, we can be pretty certain that it's a change from a customer (or possibly an attacker! Side benefit of a nightly check -- it detects unexpected changes!). We export the production configuration, evaluate the nature of the changes, and if they are expected, commit and push back up to the master branch. Then, after verifying that the development site is clean, we merge those changes into development and import the configuration there.

If it's the dev site, we essentially do the same thing -- export the configuration, review for expected/desired changes, and commit.

If it's both sites, we export/commit both sites before attempting to merge -- this generally prevents us from clobbering any work done on development and makes the merges as simple as possible.

The longer a site stays out of sync, the harder it is to merge changes like these, as people forget why a change was made. We've found these nightly checks to be a huge benefit!

Deploy configuration

Our process is pretty similar to what is written elsewhere. We export the configuration on dev, commit it, deploy the code, and import it on the target environment. Easy peasy.

In our case, we have a few extra steps that are mostly automated -- all having to do with quality checks. First we commit on a develop branch and push to our central repo. This triggers our Behat test runs, which let us know what passes/fails when testing the development site with all our "Behavior Driven Development" (BDD) tests.

Then we check out the "release" branch, merge in the develop code, and push. This triggers a deployment to our stage environment and kicks off our visual regression testing, which highlights every pixel difference between stage and production.

Finally, when all that looks good, we ask our bot to deploy to production. It grabs a snapshot of the current production database, tags the release, pushes the code to the production environment, and applies the configuration.

On the next nightly check, our bot automatically does a fresh database copy from production to stage, to clean the decks and get ready for the next release.

Deploy a complex feature that depends upon content

We treat our stage site as an ephemeral testing location, routinely blown away and recreated from a fresh, sanitized copy of the production database, and the code/configuration to be rolled out in the next release. Our bot also maintains release notes, where we can add any manual steps needed for a successful deployment or to complete the job, as well as a list of commits in the release.

So when something needs to be deployed that will include content entities with unique UUIDs, it becomes a multi-release process.

To do this, we stage a release as usual with all the supporting code necessary to build up the content. Then after verifying that the changes are not "leaking" to people who shouldn't see them yet, we roll out to production. We create all the necessary supporting content, and then copy the database back down to stage, and then to development. Then we build out the remaining functionality and roll another release.

CMI is great, but watch out for dragons

All in all, we really like the new configuration management in Drupal 8, but we've found it essential to use other tools to manage the actual process, and to make sure we're not destroying work along the way. Your team needs to understand how configuration management works, what it's capable of, where its shortcomings are, and how to merge and resolve conflicts. And if you don't enforce a "clean environment" policy, you will eventually end up spending more time sorting out what has changed where, and that's a nasty place where the wrong move will burn you.

What would be the ideal way to deal with Code/Config/Content dilemmas?

Does anyone have a comparison of how other CMS products deal with this?

Hi,

Other CMSs don't deal with this... in general, they don't put any complex configuration in the database, beyond things like some basic settings and strings that don't ever get committed to code.

Drupal's great flexibility of being able to arbitrarily add fields to any entity, create arbitrary views, panels, layouts of various kinds, all through an admin interface, is really unusual.

WordPress (and many frameworks) make you define more structural elements in code. While there are site builders out there, they usually either edit code files, or don't effectively support deploying changes across environments.

It's a huge strength of Drupal, but also a big complication for effectively managing configuration...

Cheers, John

We ran into all this stuff quite quickly but couldn't find any honest treatment, like this one, that wasn't a "configuration management is awesome!!1" post. I appreciate the solutions proposed, as some of the others I've heard of involve a ton of git magic to automate. This is more manual, but I think it makes more sense in our current environment.

I'd love to see the code that checks config state and notifies you in the chat channel! Can it be gisted or something?

Thanks again.

Hi Chris,

Ha, well, this is not quite so easy!

In our system, the nightly checks are kicked off by our custom bot, named Watney. It wakes up, builds a list of sites to check, and for each site, wakes up the corresponding Concourse CI pipeline and triggers the "check clean" job. This job runs in a Docker container, obtains credentials for the appropriate server, and then runs these checks:

#!/bin/bash

. pipelines/scripts/setup-container.sh
drush use @${PROJECT_ALIAS}.prod

# Things to check:
# - All files committed
# - if features is enabled, all features other than excluded are Default
# - Site is on appropriate branch -- develop for dev, master for master
# - branch commit id matches upstream

OUT=0

# snip - checks for correct branch/commit, uncommitted files

## Feature status
if [ "$version" == "8" ]; then
  if drush ssh drupal config:diff sites/default/config/sync -n |grep "no changes"; then
    set_data prod_config OK
    echo "Config is clean."
  else
    ((OUT++))
    cd=$(drush @${PROJECT_ALIAS}.prod ssh drupal config:diff sites/default/config/sync -n)
    add_message "Prod has config overrides."
    set_data prod_config "$cd"
  fi
  # Not sure yet
else
# snip - d6, 7 stuff
fi

if [ "$OUT" -gt "0" ]; then
  error_exit "${SITE_ENV} is NOT CLEAN."
fi

When this script exits, Concourse then passes the data created using the add_message, set_data, and error_exit functions to the Matrix Notification Resource, which drops it into the configured Matrix room.

In our model, we pass a lot of data back to Matrix in a JSON blob attached to a message event. We put these all into a deployment room that acts as a log of success and failure. Watney listens to this room, parses the JSON into various state checks, and updates the room state for a room specific to this website that our devs are in. This state can then be used to block deployment if there's something not clean, and helps us easily find out the state of the website. If there is an issue, Watney sends an actual message and the room pops up to the top of our list with an unread message we can then take action upon. If it's clean, the state is updated but no message is sent.

Very complicated, and it took a long time to get all the pieces up and running well. But it's working fantastically for us, and has already caught several production changes we might've clobbered otherwise! And it's making us deal with configuration changes as they are made, so merging config has gotten a whole lot simpler.


Thanks so much! This post will help me a lot to start working with D8; this kind of real-world knowledge is highly appreciated!


Between the features module and a responsible VCS, I'm not sure there is an issue to address here. We use features to create and modify content types and various configuration points within our dev environment. We use git to control deployment of those feature-based modules, maintain our version history, and track and enforce differences between environments. Merging config changes, even between environments, is a cake-walk.

Hi, Steve,

So... does anyone ever change things on your production sites? How do you handle changes made on production?

Do you prevent all changes on production, using config read-only or something similar? If so, how do you add new block content?

Do you ignore all configurations that you don't export to a feature? Or do you import only specific configuration changes when you deploy? If so, how do you keep track of which configurations need to be imported?

I agree that merging config changes is so much easier with straight YAML files, and making small commits that contain only the relevant changes makes it possible to merge everything effectively. But in our case we've really had to get disciplined about exporting and reviewing configuration changes pretty much as they are made, with the nightly reminders if we fail to do so.

How do you "track and enforce differences between environments"? I think that's the crux of all of this, and the challenge is evaluating which differences are actually desired, and what is the result of experimentation or mistakes. As you get more people involved in managing site configurations, this challenge gets hugely more complicated.


Had a few "brown pants moments" with config management and D8!

Especially when also using Composer, DrupalVM environments, and git, and trying to find a decent workflow to merge all of this together...

Still not satisfied. Everyone handles these things differently; there's no "safe and easy path" to follow.


Well! Many people have realized that Drupal 8 runs slower than Drupal 7. In my view, it would be useful to also see memory consumption alongside the benchmark results. If memory use is too high (e.g. > 2-3 MB), then optimization should go into the autoloader/bootstrapper and fixing lazy loading. If memory is OK, then it's too much scanning of files in directories or some other application-level reason. But I am prone to believe that Drupal suffers mostly from taking on overengineered Symfony modules, which are known for poor performance.

Um, what are you talking about?

Drupal 8 is far snappier than Drupal 7, in our experience. Sites we've moved to D8 have had page rendering times drop by 1/3 to 1/2. Memory footprint is far lower (but way above 2-3MB -- most Drupal sites need at least 128MB to not have memory issues). Lazy-loading is what makes Drupal 8 so much better than Drupal 7, which loaded every single enabled .module file on every single request, whereas D8 only loads the code necessary to deliver a particular request.

And that's all before you get to discussing the granular caching systems, intelligent cache invalidation, and more...

We've always used update hooks in custom modules/profiles on deployments, and run them before any config imports. This allows us to script content changes, if any are needed for the config. For example, creating a block with \Drupal\block\Entity\Block::create($values) - and you can just hardcode a UUID inside the values there, to match the value in the config. (Though, yes, that feels icky!)
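For illustration, a hedged sketch of an update hook along those lines (the module name, UUID, and values are hypothetical; it uses BlockContent::create, since that's the content entity whose UUID the exported config references):

<?php
// mymodule.install -- hypothetical example of the technique described above.
use Drupal\block_content\Entity\BlockContent;

/**
 * Create the promo content block with a fixed UUID so exported config finds it.
 */
function mymodule_update_8001() {
  BlockContent::create([
    'type' => 'basic',
    'info' => 'Homepage promo',
    // Must match the UUID referenced by the exported block/panels config.
    'uuid' => '2f9a1b34-1111-2222-3333-444444444444',
    'body' => [
      'value' => '<p>Placeholder content, to be edited on production.</p>',
      'format' => 'basic_html',
    ],
  ])->save();
}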

As you say, part of the issues about the new config workflows are that it's often all or nothing, when using core's configuration synchronisation functionality (or drush commands). Well, there is a --partial argument for drush config-import, which means you don't have to clobber 'allowed' changes, and several other tools for doing similar. We've developed our own (https://www.drupal.org/project/cm_config_tools) which uses the /config/install directory that install profiles & modules can have to store configuration (a bit like Features, which itself is still an option), so we only ever import/export the config we actually want to 'manage', allowing anything else to be changed by the client on the live site. (Obviously some of that might still be prone to the issues you've outlined, like views headers, but it reduces the likelihood - especially as anything the clients add can't get clobbered.) The config_update project is invaluable for any approach using the /config/install directories too. There are other tools for whitelisting or blacklisting specific config for imports/exports too, such as https://github.com/previousnext/drush_cmi_tools .

Meanwhile, https://www.drupal.org/project/config_tools is handy for automatically committing any changes made on production back to your VCS, which might be lighter than managing nightly database snapshots, when all you need is the config changes?

Sorry if you've already used / read about any of these tools - I'd be interested to know your experiences and why you use your workflow ahead of them?

Hi, James,

Thanks for your comment! Sounds like you've been working on your own solutions... and some great alternatives.

We do use update hooks pretty regularly, but I had not considered using them to create content entities, especially not with a hardcoded UUID -- thanks for the suggestion, that looks like a very useful technique, even if it's a bit hacky...

And I've started creating some config-only modules, one-off modules that have starter configs in config/install -- particularly for functionality that is taking a while to develop and that we don't want merged into production.

All or nothing... we're finding with our tooling that "all" is actually working great, but we have to stay on top of keeping the config clean. With a bot and nightly scripts running, this technique is working very well. Much like the approach in config_tools -- we export all production config each night when we detect a change, commit, and pull down/apply on dev.

For long-running development that isn't ready to merge, we've started using a new technique -- a "config-only" module. The process looks roughly like this:

  1. "Dev" site (our integration copy) exports/commits config.
  2. Developer pulls latest tip of dev to her working copy, and grabs a db snapshot of dev.
  3. Developer enables the config-only module she's working from, which imports all the config in its config/install, enables modules being worked on, etc.
  4. Work, work, work.
  5. Developer exports config, and moves ONLY THE CONFIG RELATED TO WHAT SHE'S WORKING ON into the config-only module's config/install.
  6. Commit code and push.
  7. Repeat as appropriate until done.
  8. When ready to deploy, enable the module, and then it can be uninstalled/deleted after the config is imported on production.

I've found this technique very useful as it allows me to pick up basically where I left off when switching development machines (home vs office), and it's a fast way to get right back to where I was without messing up any config on the dev server. The code's getting pushed to the development site, but nothing is enabled/imported until we're ready to put it in the full testing/deployment pipeline...

I think this is a very similar approach to the tools you're developing, especially that drush cexy command... We're just managing the config changes by being careful with what we commit, using git as our primary tool, instead of what config we export/import, using drush as the primary tool.

In either case, it takes some careful attention by the developers...

Thanks for the reply John!

The config-only module is a nice idea; I guess it could act almost like a holding place for in-progress or experimental functionality. I like that. It's really good to hear your experiences of doing the 'all' approach, and working around its challenges. I've been a bit worried about doing so on a sizeable project to be honest, because of the likelihood of clobbering changes made on production. How long would you say it takes a dev running your workflow to get through the steps (other than the actual 'work, work, work' step in the middle!)? I'd be interested to compare what sort of overhead there is, and what kind of resources are needed for getting the snapshots & notifications working too.

"In either case, it takes some careful attention by the developers" ... hehe, I think that goes for all of us regardless of workflow ;-) I lose count of the number of times we're encouraging people to check their diffs before committing!

Hi, James,

"It's really good to hear your experiences of doing the 'all' approach, and working around its challenges. I've been a bit worried about doing so on a sizeable project to be honest, because of the likelihood of clobbering changes made on production."

Yes, this is a bigger issue in Drupal 8 than it was in previous releases. That's why we've set up our CI system to check the config status on production every night, and notify us (via our Matrix chat) if there's an issue.

"How long would you say it takes a dev running your workflow to get through the steps (other than the actual 'work, work, work' step in the middle!)?"

Well, we're just getting going with this methodology, but I'm finding it takes about 10 minutes to synchronize with our upstream "dev" server (which is where we integrate things like this).

I'm guessing the work of selectively choosing what to move to the config module is probably equivalent to the way you're handling things, same sort of attention to what config to export/manage, whatever you do with it...

"I'd be interested to compare what sort of overhead there is, and what kind of resources are needed for getting the snapshots & notifications working too."

I think we're at a point where the developer resources involved are minimal -- there's definitely some training necessary to get everybody on the same page, but I can't think of how we might improve that much further. We're now spending more of our time and attention on creating relevant Behat tests to go with the feature development process...

However, we've invested quite a lot of time in getting our CI systems set up to do the nightly checks across environments, getting Behat and Visual Regression Testing running automatically and acting as gatekeepers to releases, automating our release process to ensure there are database backups and tags for every release... I think without this automation support, we would risk a lot more mistakes.

As it is, there is some overhead every day going through and committing any production changes and pulling those back down to dev -- across our ~30 maintenance sites this is currently 10 - 20 minutes per day, usually two or three sites getting flagged for config changes each day.

Cheers,

John

Thanks for the wonderfully helpful & detailed reply :-) Yes, we probably have a similar day-to-day time cost for devs with either of our methods; the main difference is the up-front overhead of getting the setup & tests in place. I look forward to experimenting further, thanks for the help!


An approach I'm looking at is to catch configuration changes as they are made. Using an event subscriber like the one in \Drupal\Core\Config\ConfigFactory, you can implement onConfigSave and onConfigDelete methods to log changes and who made them.

Since we can set up Splunk alerts, I can get daily changes to the prod configuration.
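A bare-bones sketch of such a subscriber (module name and logger channel are hypothetical; it would also need to be registered as a tagged 'event_subscriber' service in the module's services.yml):

<?php
// src/EventSubscriber/ConfigChangeLogger.php in a hypothetical config_audit module.

namespace Drupal\config_audit\EventSubscriber;

use Drupal\Core\Config\ConfigCrudEvent;
use Drupal\Core\Config\ConfigEvents;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Logs every configuration save/delete along with the acting user.
 */
class ConfigChangeLogger implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    return [
      ConfigEvents::SAVE => 'onConfigSave',
      ConfigEvents::DELETE => 'onConfigDelete',
    ];
  }

  public function onConfigSave(ConfigCrudEvent $event) {
    \Drupal::logger('config_audit')->notice('@name saved by @user', [
      '@name' => $event->getConfig()->getName(),
      '@user' => \Drupal::currentUser()->getAccountName(),
    ]);
  }

  public function onConfigDelete(ConfigCrudEvent $event) {
    \Drupal::logger('config_audit')->notice('@name deleted by @user', [
      '@name' => $event->getConfig()->getName(),
      '@user' => \Drupal::currentUser()->getAccountName(),
    ]);
  }

}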

Dear Sir,
I should first apologize for my limited English.
Thank you for your article, but I'm not sure I can apply your advice to my own case.
Please let me describe it.
I manage what I call a site factory: the core, the modules, and the themes are shared between all the instances.
I create a new instance by:
- rsyncing the /sites/base content to a new directory named /sites/newInstance,
- creating a new database (newInstance) with the right privileges,
- adding one symbolic link from /var/www/drupal to newInstance,
- adding another symbolic link from /var/www/sites/newInstance to myHostName.ecp.fr.newInstance
The last two steps allow me to reach my site at this URL: myHostName.ecp.fr/newInstance, temporarily avoiding the creation of a DNS entry.

I give this instance to some authors, who use it to create content.
Sometimes I add functionality to the base instance, and I'm looking for a way to migrate these new features (new content types, modules installed after the creation of an instance) to already-running instances.
I tried to export the configuration of the base instance, then import it into the other instances, without success.
The upload succeeds, but the import fails, complaining about content types that already exist and entities that must be deleted.
Please, could you give me some advice?
Thanks a lot.
Sincerely yours.
