A client asks about yet another hosting option:

The VPS-2000HA-S includes the following resources:

  • 6GB RAM (burstable)
  • 150GB SSD disk space
  • 5TB monthly bandwidth
  • 4 free dedicated IPs
  • Options to configure the server for particular versions of PHP
  • 2 hours of Launch Assist to help migrate and configure the server with the Managed Hosting team (one-on-one Tier 3 support)

... what do you think?

Sounds good, right? This is at what I'm going to call a "traditional" host. Cost for this package, if you pay for a year up front, is just over $40/month. Seems reasonable. But digging a little deeper, I'm skeptical that this is a good value. The "Choice of OS" offered is CentOS 7, Ubuntu 16.04, or Debian 8. While CentOS 7 is current, Ubuntu 16.04 is 3 years old and Debian 8 is 4 years old -- why don't they support Ubuntu 18.04 or Debian 9, both of which are far newer and have far more service life remaining?

And, how exactly can you manage backups on this hosting platform?

My sense is this client is really attached to having cPanel or a similar control panel so they can add/remove sites through a point-and-click web interface. Sounds great, right? Not when DevOps enters the picture.

DevOps vs Hand Built

Control panels are great for learning what options and functionality are available. They are great for people who don't spend their days managing sites and servers. They are great for sites that are disposable and not that important. But when I go in to figure out yet another control panel, I cringe and reach for another cup of coffee.

What's the alternative? Automation, and configuration management. The tools of DevOps.

Yesterday I spun up a brand new server and moved over a large existing Drupal site to it in under an hour. And most of that time I wasn't paying much attention to the process.

The steps were basically:

  • Start up a bare server on a cloud provider
  • Install the configuration management client, and point it at our master
  • Accept its key on our master
  • Create a new server configuration file from our template, filling in the blanks with things like API keys for the backup service, which PHP version to deploy, and how to route their outgoing email
  • Create a new site configuration file from a different template, specifying the git repository path, all the domain name variations, and the specific platform (see the sketch after this list)
  • Tell it GO!
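
To make that a bit more concrete, here is a minimal, hypothetical sketch of the "fill in the blanks" step. The field names (platform, php_version, git_repo, domains, backup_api_key), the file path, and the values are illustrative only, not our actual schema -- the real work happens once the configuration management system picks the file up and applies it.

    # Hypothetical sketch: fill in a per-site configuration template.
    # Field names, paths, and values are illustrative, not our actual schema.
    import os
    from string import Template

    SITE_TEMPLATE = Template("""\
    site: $name
    platform: $platform
    php_version: "$php_version"
    git_repo: $git_repo
    domains: [$domains]
    backup_api_key: $backup_api_key
    """)

    def write_site_config(name, platform, php_version, git_repo, domains, backup_api_key):
        """Fill in the blanks and write the file the config management system will apply."""
        rendered = SITE_TEMPLATE.substitute(
            name=name,
            platform=platform,
            php_version=php_version,
            git_repo=git_repo,
            domains=", ".join(domains),
            backup_api_key=backup_api_key,
        )
        os.makedirs("sites", exist_ok=True)          # directory name is made up
        with open(f"sites/{name}.conf", "w") as fh:  # so is the file extension
            fh.write(rendered)

    write_site_config(
        name="examplesite",
        platform="drupal",
        php_version="7.2",
        git_repo="git@git.example.com:client/examplesite.git",
        domains=["example.com", "www.example.com"],
        backup_api_key="REDACTED",
    )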

That was 5 or 6 minutes of actual attention, and it churned away for another 25-30 minutes. When it was done, the site code was on the server and most everything was configured -- all that was left was to import the database and copy over the image assets, and it was ready for the DNS changeover.

"Ok," you say. "I can do all that in my favorite control panel in less than 30 minutes."

That may well be true -- but we haven't gotten to the point yet.

The point is, my config is completely self-documented, completely reproducible, and easily portable to other services. Did you keep track of every change you made in the control panel? Can you save those control panel settings and apply them on an entirely different service if you need to? How fast can you recover if the hardware your site is running on fails? Or worse, your site gets hacked?

When you're using DevOps tools and practices, you treat your configuration like code -- it is managed, versioned, constantly improved.

If you want to change the PHP version, it's a single line in a config file (the php_version value in the sketch above) and an apply.

If something drastic changes on the server, the configuration management alerts you and changes it back.

If the server dies, you spin up a new one and tell it to use the previous config -- and, bonus, this can work extremely well even across new operating system versions! That means that under our server maintenance plan, we cover upgrades to new Ubuntu Long Term Support (LTS) versions as part of our regular service, no extra charge -- we just point your config at the new server and restore your content.

If your host isn't reliable enough, pick a new one -- similar effort on our end, we just point your config at a new one and restore your content.

Evergreen vs Set it and Forget it

This all largely boils down to attitude and world view. Is your website critical to your business, something that should be tended to, kept current, and constantly changed to keep delivering value to your business? Or is it something you create once and then ignore for years until your marketing consultant persuades you to update it?

If you don't think your website is important, then sticking it in a control-panel-driven host may be fine. But if it's any kind of application that, you know, does anything, then if you don't pay attention to it, an attacker might.

Drupal has now had 3 quite serious security vulnerabilities over the past year. WordPress in many ways is even worse -- its huge ecosystem of plugins gets very little review or coverage, and we're getting more and more business cleaning up sites that have been hacked. Leaving a CMS untouched for any length of time is asking for somebody to come mess with it.

Now, I'm not saying you need to make your website the full focus of your business. Different businesses have different goals, and for many, "set it and forget it" is all they want to do -- have a contact page so people can reach them, but otherwise do nothing with it. That's totally fine -- but I might suggest creating a static website that has nothing to attack, instead of a CMS that can be attacked.

So really, we're talking about a spectrum here. At one end, a static site is a collection of files sitting on a server somewhere. If the server is relatively secure, an attacker can't really do anything.

In the middle, you have these control panels, and the scads of people running WordPress in them. That's really asking for trouble -- what I find appalling is how many sites we're seeing where the site owners get hacked and they have no backup of the site, nothing telling them they have been hacked, no warnings, no nothing. We end up using Archive.org to try to at least recover some of their content.

And then you have the DevOps end of the scale -- what I'm calling Evergreen. We are constantly applying updates -- not just to your site, but to your servers, Docker containers, the overall environment. We are constantly solving tiny problems as minor upgrades break little things that can be easily solved one at a time.

This means we keep you at the forefront across the board -- you're never so far behind that you need to do a big, risky, expensive upgrade project.

This also gives you the opportunity to try out different things on your website, things that might drive big changes in your business, things that you can't do if your server is too old -- we make sure your server is never too old.

For the past year or so, we've been seeing a big generational change in PHP, the language that powers a lot of popular platforms, including both Drupal and WordPress. PHP 5 has been obsolete since January 2019, no longer getting any security coverage after a 15-year run. PHP 7.2 is the current release, but there are lots of incompatibilities between PHP 7.0 and 7.2 -- these have proved to be some of the stickiest upgrade issues we've had to resolve. And a huge number of hosts still only support PHP 5 or PHP 7.0!

Cloud provider vs Traditional host

... which brings me back to the original question. Why is a VPS from a traditional host not the same as a Cloud provider?

Here are some of our favored cloud providers:

  • DigitalOcean
  • UpCloud
  • Linode
  • Amazon Web Services (AWS)
  • Google Compute Engine (GCE)
  • Microsoft Azure

... those examples break down into roughly two categories: flat-rate packaged services, and pure infrastructure-as-a-service. With the first 3 on the list, you pretty much pay a flat rate for a server of a certain size, by the hour or month, and it includes disk and a certain amount of bandwidth. The others are more à la carte, with a vast range of server sizes, and you pay per GB for disk space and bandwidth on top of that. Right now, I think of Amazon, Google, and Microsoft as "the big 3", which all have similar services and similar pricing -- more expensive than the flat-rate services, but all of them offer deep discounts if you commit to 1 or 3 years of "reserved instances", especially if you pay some or all up front.

There are many, many other providers that fall into one or both of these "cloud provider" categories. However, we see lots of "traditional hosts" that offer "Virtual Private Servers" that do not stack up. Here are some of the warning signs, deficiencies, and drawbacks of these traditional hosts, compared to any of these cloud hosts.

  • Higher costs. A 2GB server for a basic site costs around $10/month. An 8GB server runs around $40 at the packaged places, and around $80-$100 before discounts at the big 3 (which can be brought down to ~$35/month or less with 3 years paid up front). If you are seeing prices higher than that for equivalent hardware, you have to ask why.
  • Only older operating system versions. If you can't get a Long Term Support version of a major Linux distribution within a month of its release, I would look elsewhere.
  • Lack of entire-disk snapshots, or the ability to spin up new servers from a snapshot. This is the killer feature of cloud servers -- if something goes wrong with your server, spin up a new one from last night's snapshot and you're back up in minutes, without waiting for their support to do it for you (see the sketch after this list).
  • "Elastic" or "Floating" IP addresses -- reserve an IP address you can point at a replacement server and not have to update your DNS.
  • No ability to attach extra disk. Cloud providers pretty much all offer block storage volumes (and usually object storage too) -- you can create a new volume and attach it to your existing server if you need more space.
  • Restricted kernels, or limits on software installation. We have a client with a full VPS and root access at a major, reputable host, but we cannot install Docker on it due to the way they manage the kernel. Docker is a must-have tool for us -- it is what gives us the ability to reliably manage software at different versions on pretty much any host -- and yet we've found several hosts where it's not possible to run Docker. That takes us back to hand-built servers instead of letting us use our DevOps tools.
  • No console access. AWS lacks this too, but most cloud providers will let you see the console and boot screens through a web interface, to help recover from corrupt disks or mistakes installing additional software. (AWS does give you logs, and you can easily attach the disk from one VPS inside another to repair it.)
  • cPanel or Plesk -- if you see these, run away! They actively interfere with our ability to manage configuration in code, and often prevent us from locking down and properly securing a server. And they usually depend on specific, ancient software versions, making it impossible to stay evergreen.
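
As an aside on that snapshot point: at a cloud provider, restoring a dead server from last night's snapshot is essentially one API call. Here is a minimal sketch assuming the python-digitalocean client -- the token, snapshot name, region, and size are all made up for illustration:

    # Minimal sketch, assuming the python-digitalocean client
    # (pip install python-digitalocean). Token, names, region, and size
    # are placeholders for illustration only.
    import digitalocean

    TOKEN = "YOUR_API_TOKEN"
    manager = digitalocean.Manager(token=TOKEN)

    # Find last night's snapshot of the dead server (name is made up).
    snapshot = next(s for s in manager.get_all_snapshots()
                    if s.name.startswith("examplesite-nightly"))

    # Spin up a replacement droplet from that snapshot.
    replacement = digitalocean.Droplet(
        token=TOKEN,
        name="examplesite-restored",
        region="sfo2",
        image=snapshot.id,
        size_slug="s-2vcpu-4gb",
    )
    replacement.create()
    print("Replacement requested -- point your floating IP at it once it boots.")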

Doing Evergreen Safely

There are a couple more pieces to this puzzle: monitoring, and change management.

How do we know that a server upgrade didn't break your site?

How do we know that a site update succeeded?

How do we know whether you got hacked?

How do we know if you made a change in production that might get overwritten by an update?

What do we do if something goes wrong?

Those are questions we can answer with our site and server maintenance plans -- you'll have a very hard time getting answers to these anywhere else!

No hosting company does this kind of site monitoring or management for you, and very few services do. This is where we excel.

We use industry-leading monitoring software and the various DevOps tools mentioned above, mostly the same ones that cutting-edge technology startups and leading-edge enterprises use -- but we use them for mid-market companies and organizations that don't otherwise have the expertise to put them in place.

Every minute or two, our monitoring systems check every site we manage, measuring response times, checking for error codes, warning about upcoming certificate expirations, and checking for specific text on your page.
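
Conceptually, each of those checks is just a handful of assertions against your site. This is not our actual monitoring stack -- we use dedicated tools for that -- but a stripped-down sketch of the kinds of checks being run, with an example URL, expected text, and thresholds:

    # Illustration only: the kinds of assertions a monitoring check makes.
    # URL, expected text, and thresholds here are examples, not real config.
    import datetime
    import socket
    import ssl

    import requests

    def check_site(url, must_contain, hostname, warn_days=14):
        resp = requests.get(url, timeout=10)
        assert resp.status_code == 200, f"Got HTTP {resp.status_code}"
        assert resp.elapsed.total_seconds() < 2, "Response took longer than 2 seconds"
        assert must_contain in resp.text, f"Expected text not found: {must_contain!r}"

        # Warn if the TLS certificate is close to expiring.
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(socket.create_connection((hostname, 443)),
                             server_hostname=hostname) as sock:
            cert = sock.getpeercert()
        expires = datetime.datetime.utcfromtimestamp(
            ssl.cert_time_to_seconds(cert["notAfter"]))
        days_left = (expires - datetime.datetime.utcnow()).days
        assert days_left > warn_days, f"Certificate expires in {days_left} days"

    check_site("https://www.example.com/", "Contact us", "www.example.com")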

Every night our bots check every site to see whether any code has changed, whether the code on the server exactly matches what is tracked in git, and whether there have been any configuration changes on the site that could get clobbered or reverted by an update.
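
The heart of that nightly code check is as simple as asking git whether anything on disk has drifted from what's tracked. A bare-bones sketch -- the paths are examples, and the real bots check more than this (exported site configuration, for instance):

    # Bare-bones sketch: flag code that has drifted from what git tracks.
    # Site paths are examples only.
    import subprocess

    def code_has_drifted(site_path):
        """Return True if the working tree no longer matches what git tracks."""
        result = subprocess.run(
            ["git", "-C", site_path, "status", "--porcelain"],
            capture_output=True, text=True, check=True,
        )
        return bool(result.stdout.strip())

    for site in ["/var/www/example.com", "/var/www/another-client.org"]:
        if code_has_drifted(site):
            print(f"WARNING: unexpected changes on {site} -- investigate before updating")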

Every time we release updates to a website, we take a full backup of the site before and after, send a notification of exactly what's going to change before we release, and send another notification when it has actually gone out.
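
As a rough illustration of that workflow -- not our actual release tooling -- here's what such a wrapper might look like for a Drush-managed Drupal site. The site alias, backup paths, changelog text, and the notify() helper are all hypothetical stand-ins:

    # Hypothetical release wrapper for a Drush-managed Drupal site.
    # Aliases, paths, and notify() are stand-ins, not our real tooling.
    import subprocess
    from datetime import datetime

    def notify(message):
        print(message)  # stand-in: send this to email or chat in real life

    def drush(alias, *args):
        subprocess.run(["drush", f"@{alias}", *args], check=True)

    def backup(alias, label):
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        drush(alias, "sql-dump", f"--result-file=backups/{alias}-{label}-{stamp}.sql")

    def release(alias, changelog):
        # New code is assumed to already be deployed to the server via git.
        notify(f"About to release to {alias}:\n{changelog}")
        backup(alias, "pre")
        drush(alias, "updb", "-y")   # run pending database updates
        drush(alias, "cr")           # rebuild caches
        backup(alias, "post")
        notify(f"Release to {alias} is live.")

    release("examplesite.prod", "Core security update plus two contrib module updates")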

If something goes wrong, we have a full trail of everything that has changed, and a process for recovering -- worst case, we can roll you back to the previous day's backup, but more often we can identify the single thing causing trouble and revert just that. And for many of our clients who keep historical backups, we can even go back and retrieve data that was deleted weeks ago.

And one final point here. We are far from perfect. We have made lots and lots, and lots, and lots of mistakes over the years. Every one of these things we do is something we've put in place because we made a mistake, or saw somebody else make one. We will continue to make mistakes -- but what all these systems, tools, and processes do is make it so a mistake doesn't hurt very much.

And you can use our mistake-rectifying service at a very affordable rate! Go make all the mistakes you want, we've got you covered!

Feel free to reach out to us via email at computing@freelock.com or use our contact form.
