It’s 3 AM. You’ve finally finished rolling out all the upgrades and patches for your application. The QA team is busily testing functionality, and you’re looking forward to finally hitting the hay after a long evening. Then the message comes in: the whole application is down, the upgrade had unforeseen issues, and now you have to roll back. With a weary sigh and a slow sip of your lukewarm coffee, you begin the onerous process of rolling back the upgrade. You dream of a better way: a fantasy world where you can patch and upgrade systems without the late hours, unforeseen glitches, and time-consuming rollbacks. That’s no fantasy, friend; there’s a better way.
Enter immutable infrastructure. Immutable means that something cannot be changed in any appreciable way. When you deploy an immutable resource, its core components are configured in a specific way, and you do not upgrade, patch, or otherwise modify the resource afterward. But wait, how do you perform upgrades? How do you patch a system against new vulnerabilities? The short answer is that you don’t. At least, you don’t patch or upgrade the deployed resource. Instead, you have a base template or image that the resource is deployed from, and it is that base image that gets patched and upgraded with the latest version of the software. That might sound a little esoteric, so allow me to provide some context.
Immutable infrastructure is not a new concept, but it was galvanized by the rise of containers, a technology that has been embraced by several cloud providers. If you’re unfamiliar with container technology, that’s okay. The main thing to understand is that containers are deployed from images, and those images live in a repository with versioning and source control. Although some customization happens when a container is deployed from an image, the application and its supporting libraries are already installed and validated. When it is time to roll out a new version of the image, the existing containers are destroyed and new containers are deployed to replace them. The service the containers provide sits behind a load balancer or some other type of shared endpoint, so the service itself stays up while the new containers are rolled out and the old ones retired. Best of all, if something goes wrong with the update, an older, known-good version of the image can be used to redeploy the containers. In very short order, things are back the way they were pre-upgrade.
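To make that rollout-and-rollback pattern concrete, here is a toy Python sketch of the process. The image registry, tags, and health check below are hypothetical stand-ins for a real container runtime and load balancer, not any particular tool’s API; the point is only that an upgrade and a rollback are the same operation, a fresh deploy from an image.

```python
# Toy model of an immutable rollout with rollback. Nothing here is a real
# container runtime; the registry, image tags, and health check are
# hypothetical stand-ins used to illustrate the pattern.

registry = {
    "myapp:v1": {"healthy": True},   # the known-good image
    "myapp:v2": {"healthy": False},  # a new image with an unforeseen bug
}

def deploy(tag, count=3):
    """Create fresh instances from an image; instances are never patched in place."""
    return [{"image": tag, "id": i} for i in range(count)]

def health_check(instances):
    """Pretend probe: an instance is healthy if its source image is."""
    return all(registry[inst["image"]]["healthy"] for inst in instances)

def rolling_update(current, new_tag):
    """Replace the fleet with instances built from new_tag; if the new fleet
    fails its health check, redeploy from the previous image instead."""
    old_tag = current[0]["image"]
    candidate = deploy(new_tag, count=len(current))
    if health_check(candidate):
        return candidate, "upgraded"
    # Rollback is just another deploy from the known-good image.
    return deploy(old_tag, count=len(current)), "rolled back"

fleet = deploy("myapp:v1")
fleet, result = rolling_update(fleet, "myapp:v2")
print(result)                          # the buggy v2 image triggers a rollback
print({inst["image"] for inst in fleet})
```

Notice that there is no "undo the patch" logic anywhere: recovering from the bad v2 image takes the same code path as any other deploy, which is what makes the 3 AM rollback painless.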
There’s no reason for containers to have all the fun! The exact same process can be used with virtual machines; it’s just not as baked into their DNA. Containers are meant to be ephemeral and stateless – that’s not always true in practice, but true enough for our purposes. Containers also spin up in seconds and are extremely lightweight: we’re talking megabytes, rather than the gigabytes consumed by a traditional operating system and its applications. Nevertheless, you can still follow the same process when deploying virtual machines from a source image. The image needs to have as much of the application and its prerequisites baked in as possible. When a virtual machine is deployed from the image, you will need tooling to automate the configuration of the application and integrate it with the service it is part of. That means embracing automation: no more hand-crafted, bespoke servers running multiple applications in perpetuity, and no more silent prayers when rebooting a server with three years of uptime.
I’m sure you still have questions and may be skeptical about the benefits of immutable infrastructure. In the rest of this series, I will discuss the concrete benefits of immutable infrastructure, look at the tools that can be used to achieve it, and walk through an example of deploying a traditional application in an immutable way. Buckle up, dear reader; it’s going to be a wild ride!

Ned Bellavance
Director, Cloud Solutions and Microsoft MVP: Cloud (Azure/Azure Stack) & DC Mgmt
Ned Bellavance is an IT professional with over 15 years of experience in the industry. Starting as a humble helpdesk operator, Ned has worked his way up through the ranks of systems administration and infrastructure architecture, developing along the way an expansive understanding of IT infrastructure and the applications it supports. Currently, Ned works as the Director of Cloud Solutions for Anexinet in the Philadelphia metropolitan area, specializing in enterprise architecture both on-premises and in the cloud. Ned holds a number of industry certifications from Microsoft, VMware, Citrix, and Cisco. He also has a B.S. in Computer Science and an MBA with an Information Technology concentration.
© 2000 - 2021 Anexinet Corp., All rights reserved