The WinOps Conference is an annual conference and meetup group focusing on "Windows in a DevOps World". 2016 marked the second time it has been held; here are the highlights from selected speakers.
Jeffrey Snover, inventor of PowerShell and Microsoft Technical Fellow, kicked off the conference with the ambiguous question, "What is DevOps?"
There seems to be a consensus that DevOps is about culture and processes, not about tools and technology
He then proceeded to surprise the conference attendees by saying that this assumption is wrong, and that tools and technology do play a critical role. This paved the way for the introduction of Windows Server 2016, which has been designed to make DevOps "easy".
Before presenting the new Windows Server 2016 features, we were shown how Windows Server has evolved: from Microsoft's very first server for the masses (Windows NT) right through to the data centre servers (Windows Server 2012) that let you run scalable applications. This evolution has produced a new generation of servers, dubbed 'cloud servers', optimised for DevOps. But how have Microsoft achieved this? Here are some of the upcoming DevOps technologies and features.
Alongside the traditional deployment models of 'Server with Desktop' and Server Core, Windows Server 2016 has introduced a new environment called Nano Server, a 'bare essentials' version in which you install only the components you need. This means, for example, that you don't have to apply unnecessary security patches for components you are not using.
Nano Server has been fully optimised for the cloud and, to keep its footprint small, it ships with refactored versions of .NET and PowerShell, namely .NET Core and PowerShell Core respectively.
Nano Servers are the future of Windows Server
Snover demonstrated that, of the critical patches released in 2014, Nano Server would have required only a tenth of them. There are also significant improvements in the number of reboots required, resource utilisation and deployment times.
Nano Server can also be run in containers. Windows Server 2016 offers two options: Windows Server containers and Hyper-V containers. With Hyper-V containers, you get more security due to stronger isolation.
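As a rough sketch of the difference (the image name and commands reflect the Windows Server 2016 era Docker tooling; exact names may vary), the same Nano Server based image can be started as a standard Windows Server container or, with one extra flag, as a more isolated Hyper-V container:

```powershell
# Run a Nano Server-based image as a Windows Server container
# (shares the host kernel, lowest overhead).
docker run -it microsoft/nanoserver powershell

# Run the same image as a Hyper-V container: the --isolation=hyperv flag
# wraps the container in a lightweight utility VM, so it no longer shares
# the host kernel, giving stronger isolation at some cost in overhead.
docker run -it --isolation=hyperv microsoft/nanoserver powershell
```

The point is that the image itself is unchanged; the isolation level is a deployment-time decision.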
Introduced in WMF 5, Just Enough Administration (JEA) allows users to perform 'just enough' admin tasks without being admins themselves. Using the Edward Snowden case as an example, Snover demonstrated how JEA lets administrators define a bounded set of admin actions that users can safely carry out.
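To make this concrete, here is a minimal JEA sketch using the WMF 5 cmdlets (the file paths, role name, group name and endpoint name are all hypothetical). A role capability file whitelists the commands a non-admin may run, and a session configuration binds that role to a restricted endpoint:

```powershell
# Role capability file: whitelist only the commands these users may run.
# (Save as DnsOperator.psrc inside a module's RoleCapabilities folder.)
New-PSRoleCapabilityFile -Path .\DnsOperator.psrc `
    -VisibleCmdlets 'Get-Service', 'Restart-Service'

# Session configuration: a restricted endpoint whose commands run under a
# temporary virtual account, so the connecting user is never an admin.
New-PSSessionConfigurationFile -Path .\DnsOperator.pssc `
    -SessionType RestrictedRemoteServer `
    -RunAsVirtualAccount `
    -RoleDefinitions @{ 'CONTOSO\DnsOperators' = @{ RoleCapabilities = 'DnsOperator' } }

# Register the endpoint; members of the group then connect with:
#   Enter-PSSession -ComputerName Server01 -ConfigurationName DnsOperator
Register-PSSessionConfiguration -Name DnsOperator -Path .\DnsOperator.pssc
```

Inside the session, the user sees only the whitelisted cmdlets; everything else is simply not there to be abused.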
Iris Classon, revered speaker and Microsoft MVP, enthralled and captivated us as she literally sketched out how a start-up went about moving their on-premises system to the cloud.
Iris painted a picture of the current system, having many dependencies and a bottlenecked pipeline. She then proceeded to sketch out all the questions/concerns they were faced with.
Here is a summary of these challenges:
In the end, they decided to go with Azure, since they were already using .NET and were familiar with Azure services.
Pets, Cattle and Chickens…
The above, believe it or not, are terms used in the industry for various approaches to server configuration management. Gael Colas, a Cloud Automation Architect consultant, explained why chickens are the future and why it might be a good idea to give up your pets.
'Pets' are servers that have been given a memorable name and nurtured with the latest patches. But this approach doesn't fit the DevOps mindset and seems out of date.
The downtime of a server should not mean the downtime of a service
A better approach is to treat servers like cattle rather than pets: if there is something wrong with a server, you dispose of it and spin up a new one. This gives a low mean time to recovery and keeps service disruptions to a minimum.
But in some cases it might be cheaper to fix the server than to dispose of it, at the cost of increased downtime.
The chicken analogy is relatively new; it corresponds to containers and Nano Server. Compared to 'cattle' servers, chickens have a smaller footprint and are easier and quicker to replace and test.
Giving yourself a Denial of Service attack in production sounds scary enough, but doing it during peak time every day sounds downright crazy. Or does it?
Just Eat are an online takeaway ordering service with over 7 million active users. Their peak times are predictable, occurring during evenings and weekends, so it's easy to anticipate when demand will be high. It is during these busy periods that they happily DoS themselves with test orders at roughly the same volume as real ones (they handle around 1,000 orders per minute). Pete Mounce, Senior Engineer at Just Eat, showed us why this isn't a foolish idea.
Before becoming game-changers, Just Eat took a more traditional approach to performance testing.
A few years ago, they ran their tests on a small replica of the live environment. Except this test environment wasn't much like production at all: the infrastructure differed, and a lack of ownership meant it was out of date. Their functional tests didn't help much either, since the test data wasn't the same or the machines weren't clean enough. In production, they had monitoring and alerting systems, but this alone was not enough.
Customers are really good load agents…they come to your site and break things
They use customers as load agents, which means they exercise far more complex scenarios than standard load agents running relatively simple scripts ever would.
Accomplishing this feat requires good development processes and tooling.
We have tight feedback loops
They have cross-functional teams following an agile methodology, where they are "empowered to own the code that they write and services that they are shipping". Each team member plays a crucial role, and responsibilities are shared: all team members write tests, for example.
A little decent tooling goes a long way…
Just Eat are big on monitoring and alerting, using Graphite and Seyren respectively. They have centralised logging in place using the ELK stack, so they can find faults quickly, and they use JMeter for their load testing.
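As an illustration of the load-testing side (the test-plan file name is hypothetical; the flags are standard JMeter CLI options), a saved JMeter plan can be replayed continuously from the command line in non-GUI mode:

```powershell
# Run JMeter headless (-n) against a saved test plan (-t), logging each
# sampled request to a results file (-l) for later analysis.
jmeter -n -t .\fake-orders.jmx -l .\results.jtl

# JMeter can also publish live metrics to Graphite via its Backend
# Listener, so synthetic-load results can sit alongside the production
# dashboards that monitoring is already built on.
```

Running headless like this is what makes it practical to generate fake load on a schedule rather than by hand.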
Use test scenarios that are crucial to the business!
DoS-ing the production environment during peak times every day prepares them for future calamities. They get to know the system well: how it responds under stress, and the best way to react. If something does go very wrong, they can simply switch off the fake load and hold a post-mortem. Every "small" incident they encounter leaves them well prepared for future busy periods such as major holidays.
In all, I found WinOps 2016 to be very informative. The mixture of content presented appealed to both newcomers and regulars on the DevOps scene. I highly recommend joining the WinOps meetup group and, of course, attending WinOps 2017!