Reflections and predictions on how DevOps will evolve in 2020, by Godel’s DevOps division.
1. What have been the drivers for adoption and evolution in DevOps over the last decade?
The speed of technological change over the last decade laid the foundation for DevOps; the concept itself was officially born just ten years ago. Back in 2009, most businesses were operating with a fixed set of resources running on-site, with hosting providers or in private data centres, and their systems were entirely contained within that footprint. However, demand for the evolution of business services rose quickly, and it became clear that this type of rigid setup simply wouldn't allow for the new need for flexibility.
Fixed solutions require advance planning. As businesses began to shift towards a more agile mindset, the risk of implementing a big-budget, complex infrastructure setup placed a brick wall in front of their ability to innovate at pace. Not only that: knowledge of how to maintain these systems often remained with a few key employees, and the moment they left, that knowledge walked out with them.
So when cloud providers entered the game, they met that demand perfectly by enabling businesses to spin up and scale environments based on their needs at any given time. The time- and cost-saving potential was, and is, enormous compared to on-premises solutions. As we moved through the decade, the rise of APIs for cloud services had a large impact too. APIs enable engineers to configure services in a universal way through Infrastructure as Code, without the need for complex documentation and manual configuration: the code itself is now the documentation.
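To make the Infrastructure as Code point concrete, here is a minimal sketch in Python using the AWS boto3 SDK; the AMI ID, instance type, region and tag values are placeholders assumed for illustration, not anything prescribed above.

```python
import boto3

# Minimal Infrastructure as Code sketch: provision a small server through
# the cloud provider's API. The AMI ID, instance type and tags are
# placeholder values for illustration only.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "web-server-demo"}],
    }],
)

print("Launched", response["Instances"][0]["InstanceId"])
```

Because a script like this can be re-run against any account or region, the resulting environment is reproducible from the code itself rather than from hand-maintained documentation.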
All of this created the ability for teams to set up automated delivery processes that can be replicated and reproduced consistently across systems and over long periods of time. It also allowed systems to be scalable, fault-tolerant and reasonably priced, in a way that simply didn't exist before the 2010s.
2. What’s going to happen in DevOps during the 2020s?
Like teams in most areas of software engineering, we in DevOps are still tapping into the world of value that data can offer. I think Splunk offers a great example of what can be achieved by collecting and analysing huge amounts of data from a multitude of sources in order to improve efficiencies across a business.
The next level of this, call it "DataOps", will see machine learning shift into the spotlight. Right now, machine learning for operations is still in its infancy, with teams from Splunk and Dynatrace implementing proofs of concept and companies dipping into the application of models for small areas of their businesses. Still, the opportunities machine learning offers for real-time operations are undeniable. Today, engineers spin up or scale environments with code in response to monitoring issues, and update security policies and configuration based on incidents (either in their own systems or as the result of vulnerability announcements). In the near future, machine learning models will be able to take the reins on these tasks.
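As a rough illustration of the kind of task a model could eventually take over, here is a hedged sketch of today's code-driven response: a Python function, called from a monitoring alert, that raises the desired capacity of an AWS auto scaling group via boto3. The group name and the size of the increase are assumptions for the example.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

def scale_up_on_alert(group_name: str, extra_instances: int = 2) -> None:
    """Raise the desired capacity of an auto scaling group when an alert fires."""
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name]
    )["AutoScalingGroups"]
    if not groups:
        raise ValueError(f"Unknown auto scaling group: {group_name}")

    current = groups[0]["DesiredCapacity"]
    maximum = groups[0]["MaxSize"]
    target = min(current + extra_instances, maximum)

    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=target,
        HonorCooldown=True,
    )

# Hypothetical usage, e.g. triggered by a monitoring webhook:
# scale_up_on_alert("web-tier-asg")
```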
The good thing about machine learning is that it's relatively simple to plug into existing systems. Start slowly by feeding a model data and assessing metrics to judge whether its outcomes are good or not, with the eventual goal of implementing models across wider systems and shifting to an AIOps approach.
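As a minimal sketch of that "start slowly" step, the snippet below runs a window of response-time samples through a simple z-score check and flags outliers; the values and threshold are invented, and in practice the data would come from a monitoring platform and the check would eventually be replaced by a trained model.

```python
from statistics import mean, stdev

def flag_anomalies(samples: list[float], threshold: float = 2.0) -> list[int]:
    """Return the indices of samples whose z-score exceeds the threshold."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

# Invented response times in milliseconds; the spike at the end is the kind
# of outcome a human would still review before trusting the model's judgement.
response_times_ms = [120, 118, 125, 130, 122, 119, 121, 640]
print(flag_anomalies(response_times_ms))  # -> [7]
```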
Looking at the big cloud providers, all are shifting their priorities in different ways – but all to reach the same end of driving efficiencies for businesses.
For the last few years, Amazon has been making waves in the hardware space. The AWS Nitro System was introduced after Amazon felt the impact of hardware constraining capacity and access to CPUs, GPUs and networking hardware. The Nitro platform is leading us down a road of more granular access to hardware capacity, helping businesses tap into better and faster virtualisation and stricter control and isolation of shared resources in the cloud.
Microsoft's adoption of an open-source mindset also sets an example of where the future undoubtedly lies; in fact, open source is our current setting and simply cannot be disregarded. Open-source code is helping fuel the rise of machine learning by providing models with vast amounts of software engineering data. And for companies like Microsoft, which must innovate to deliver against such a rapidly evolving market and such intense competitors, access to tens of thousands of ideas beyond their own company is invaluable.
DevOps adoption is at varying levels across businesses, and those in the early stages are increasingly looking to a hybrid approach as a way to reap the benefits of cloud adoption without completely shifting their data to the cloud. Proper DevOps implementation is about two things: provisioning the dependencies an application needs to run reliably and quickly, and then deploying that application. A hybrid cloud approach, using tools such as Kubernetes to containerise applications and then deploy functionality to the right places, lets teams loosen their lock-in to managed services and reduce costs where they choose, without going "all in" on the cloud at once.
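As a hedged sketch of that containerise-and-deploy step, the snippet below uses the official Kubernetes Python client to create a Deployment for a containerised service; the image name, labels, replica count and kubeconfig context are assumptions for illustration, and the same object could target an on-premises cluster or a managed cloud one.

```python
from kubernetes import client, config

# Load credentials for whichever cluster is being targeted; the context
# name "on-prem-cluster" is a hypothetical example.
config.load_kube_config(context="on-prem-cluster")

container = client.V1Container(
    name="orders-api",
    image="registry.example.com/orders-api:1.4.2",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-api"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Because the deployment is described in code rather than in a provider-specific console, it can point at a private data centre today and a managed cloud cluster later with minimal change.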
Google has recently gone a step further with the release of Anthos, which brings together a single view of hybrid-deployed applications across multiple cloud providers and private data centres alike. This kind of solution points to the future for DevOps, and for engineering as a whole: unified, fully integrated and open source.