In September, thousands of technical professionals descended on Atlanta for the second annual Microsoft Ignite conference. Ignite combines several Microsoft conferences that operated independently in past years, including the Microsoft Exchange Conference (MEC), the SharePoint Conference, TechEd, and a few more. I was lucky enough to spend five days walking the halls of the Georgia World Congress Center, and trust me, there was a lot of walking! But I also managed to attend several sessions each day, as well as hit the expo floor and talk to some of our partners, like CommVault, CloudHealth, and HPE. All in all, it was a great conference full of new information, best practices, and lots of technical deep dives into products that Anexinet delivers on a regular basis. I won't try to rehash the entire event, but I would like to break out a few key takeaways.
In case you missed it – not sure how you could have – the cloud is Microsoft's number one focus. Every session I attended mentioned Microsoft's cloud services and how to use them with existing products. At last year's Ignite, Satya Nadella made it a central theme that Microsoft would deliver everything as a service and tie every service back to the cloud. In the last 18 months he has more than made good on that promise: Office 365 has millions of active users; Azure has become the number two public cloud service, with approximately 11% of the IaaS market; Dynamics CRM has matured as a SaaS product; and Microsoft has struck cloud partnerships with Citrix and Adobe. Our clients are certainly aware of the cloud-first mentality, and Microsoft is right there to help them embrace that approach.
Windows Server 2016 was declared generally available during the keynote on Monday, along with the increasingly fragmented System Center 2016. Unsurprisingly, there were a ton of sessions highlighting all the new features of Server 2016. I don't have a crystal ball, but I would bet that this release of Windows Server will be more stable on day one than any previous entry. Why do I say that? Microsoft has never been this transparent about its development process. With Server 2016, they took a new approach, releasing technical previews every few months, all the way up to Technical Preview 5. Customers were invited to bang away on the operating system and submit constant feedback and telemetry. One hosting provider I spoke to has been running 2016 in production since TP4! The days of waiting for SP1 or R2 are over; the real-world testing for Server 2016 has already been done.
Server 2016 is meant to run at cloud scale and with a cloud mentality. What does that mean? Jeffrey Snover summed it up nicely:
In all the previous iterations of Windows Server, you would deploy and manage Windows Server on a one-by-one basis. Automation was available, but not necessarily built in or robust. If you wanted to execute a task, you would probably RDP into the server or walk up to a console. That does not work at cloud scale. You need to automate with PowerShell, define configurations with Desired State Configuration (or Puppet or Chef or Ansible), and manage and monitor at scale with OMS and Configuration Manager. Heck, with Nano Server you can’t even log in locally. There’s no GUI and no console!
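To make that concrete, here is a minimal Desired State Configuration sketch. The node name and the features chosen are purely illustrative, not taken from any session: you declare the state you want, compile it to a MOF, and push (or pull) it.

```powershell
# Minimal DSC sketch (node name and features are illustrative):
# declare the desired state, compile it, and push it to the node.
Configuration WebServerBaseline
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'WEB01'
    {
        # Ensure the IIS role is installed...
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        # ...and that its service is running.
        Service W3SVC
        {
            Name      = 'W3SVC'
            State     = 'Running'
            DependsOn = '[WindowsFeature]IIS'
        }
    }
}

# Compile the configuration to a MOF and apply it.
WebServerBaseline -OutputPath 'C:\DSC'
Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose
```

The same declarative document works whether a node's Local Configuration Manager pulls it or a tool like Azure Automation pushes it, which is what makes the approach practical at cloud scale.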
Speaking of Nano Server, if you haven't already heard about it, now is the time. Nano Server represents a fundamental shift in the approach to Windows Server. Microsoft has always worked hard to make life easy for the point-and-click admin. Windows Server came with all the bits for every role available, and tons of processes already running and listening for requests. Until Server 2008, there wasn't even a real firewall, and most Windows admins just turned it off (for shame!). The attack surface on Windows Server was big, the install footprint was bigger, and the number of patches required for all those features you weren't using was enormous. Nano Server follows the principle of "just enough operating system": you tell Microsoft which features to include and which services to run, and everything else is left out. That leads to a footprint of less than 500MB, a slimmed-down attack surface, and a fraction of the necessary security patches. Microsoft has been running Nano Server as the base for its Azure compute clusters for a couple of years now, and it lets them restrict patching to only once or twice a year. Nano Server even has its own comic book character!
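As a rough sketch of how an image gets built: the Server 2016 installation media ships with a NanoServerImageGenerator PowerShell module. The drive letters, paths, and computer name below are placeholders, and the role switch is just one example of opting in to a feature.

```powershell
# Build a Nano Server VHDX from the Server 2016 installation media.
# Paths, drive letters, and names below are placeholders.
Import-Module 'D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psd1'

New-NanoServerImage -Edition Standard `
    -DeploymentType Guest `
    -MediaPath 'D:\' `
    -BasePath 'C:\NanoBase' `
    -TargetPath 'C:\Nano\NANO01.vhdx' `
    -ComputerName 'NANO01' `
    -Compute   # opt in to the Hyper-V role; anything you don't ask for is left out
```

Everything after that happens remotely over WinRM or an agent, since there is no local GUI or console to fall back on.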
Another big focus of the week was Azure Stack, a hyperconverged solution offered by Microsoft in cooperation with selected OEM vendors like HPE, Lenovo, and Dell Technologies. Imagine if you had Azure, with all its bells and whistles, available as an on-premises offering for your private cloud. That is Azure Stack. One of the biggest shortcomings of hyperconverged solutions today is their orchestration and management layers, and Azure Stack is out to solve that. The management plane of Azure is Azure Resource Manager, which acts as an intermediary between the user interfaces (the REST API, PowerShell cmdlets, and the portal) and the resource providers for compute, storage, networking, and so on. Azure Stack runs the exact same management layer, meaning users get the same experience on-premises as they do in Azure, no matter which front-end they choose. The Azure Stack marketplace will let you import offerings from Azure and offer them locally to your clients. Microsoft is working with the OEMs to provide seamless support and orchestrated updates, so the administrative load of running Azure Stack is minimized. Azure Stack is now in Technical Preview 2, and Microsoft expects it to be generally available in mid-2017.
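One way to see the "same management plane" claim from the operator's chair: the AzureRM PowerShell cmdlets you use against public Azure should also work against an Azure Stack deployment once you register its endpoint. This is a sketch based on the Technical Preview; the environment name, endpoint URL, and resource names are placeholders for a lab.

```powershell
# Point the AzureRM cmdlets at an Azure Stack instance (the endpoint
# URL and all names here are placeholders for a lab deployment).
Add-AzureRmEnvironment -Name 'AzureStackLab' `
    -ArmEndpoint 'https://management.local.azurestack.external'

Login-AzureRmAccount -EnvironmentName 'AzureStackLab'

# From here, the Resource Manager workflow is identical to public Azure.
New-AzureRmResourceGroup -Name 'rg-demo' -Location 'local'
```

Scripts and ARM templates written for one target should carry over to the other, because both front doors lead to the same Resource Manager.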
OMS for the Rest
The last big focus I'll bring up is Microsoft's desire to simplify the management, monitoring, and configuration of your servers, wherever they are and whatever OS they are running. Whether it's a Windows VM running in Azure or an on-premises RHEL container host, Microsoft wants you to be able to manage that server from a single console. That's where Operations Management Suite (OMS) comes into play. OMS has agents for both Windows and Linux. It is integrated with Azure Automation, which can leverage Desired State Configuration (DSC) to describe how parts of a server should be configured. Once a server is associated with a configuration, DSC can verify that the server is compliant and auto-correct it if its configuration drifts. Since OMS uses an agent, it can also monitor a system and send alerts, or fire off a task if a particular condition is met. It can also check on the status of updates and patches, and be leveraged to patch a system. Finally, OMS works with another offering called Server Management Tools (SMT), which uses a gateway host to remotely access systems in other clouds, public and private. Using SMT, you can run PowerShell cmdlets, gain console access, and run commands against a group of servers simultaneously.
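Running one command against a group of servers is the same kind of fan-out you can already do with plain PowerShell remoting, which is the mental model for what SMT exposes through the browser. The server names below are made up, and the targets would need WinRM enabled.

```powershell
# Fan a query out to several servers at once with PowerShell remoting.
# Server names are illustrative; targets must have WinRM enabled.
$servers = 'WEB01', 'WEB02', 'SQL01'

Invoke-Command -ComputerName $servers -ScriptBlock {
    # Return one summary object per server.
    [pscustomobject]@{
        Computer          = $env:COMPUTERNAME
        UptimeDays        = ((Get-Date) - (Get-CimInstance Win32_OperatingSystem).LastBootUpTime).Days
        InstalledHotfixes = (Get-HotFix | Measure-Object).Count
    }
}
```

The commands run on all the targets in parallel, and the results stream back tagged with the originating computer name.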
You may realize that most of these solutions already existed in some form or another: Azure Pack instead of Azure Stack, SCOM instead of OMS, SCCM instead of Azure Automation runbooks, Server Core instead of Nano. In every case, Microsoft has refined and streamlined the process of getting the functionality you want without a complicated setup. In most cases this is done by leveraging the cloud to deliver the capability as SaaS, reducing complexity and administrative overhead for the IT professional. If you are interested in learning more about everything introduced at Ignite, the sessions are now available on demand. And of course, there are labs in Microsoft Virtual Academy to get your hands dirty.