Software, in the words of Marc Andreessen, is in the process of "eating the world." It touches every industry, powers more processes than ever before, and impacts nearly every aspect of business.
The improvements we have made in how software is developed and deployed are a big reason why it has become so powerful. At a steady pace, new technologies and approaches for building more robust and scalable systems emerge, gain adoption, and are discarded as better strategies take their place. As Gordon Moore predicted in 1965, the number of transistors in integrated circuits has continued to double roughly every two years; and since the Second World War, when Alan Turing's bombe was used to break the codes of Germany's Enigma machine, the makers of software have sought better ways to build systems.
We adopt new computing paradigms, such as virtualization and cloud. We learn important lessons about what works, what doesn't, and what doesn't work as well as expected. One step at a time, one project at a time, the state of the art advances. Looking at the miracles of the modern world, it's clear that technology advances; and along with it, we make advances in the techniques we use to build technology.
DevOps is a set of practices and tools for building better software. It combines development and operations teams into a whole (dev + ops = DevOps) and encourages them to work together. The unified team is then expected to own its application: from planning and coding, to testing, deployment, operations, and emergency response. When something goes wrong, the developer who wrote the feature will often work alongside the QA engineer and systems administrator to troubleshoot and bring the service back online. Amazon, one of the early pioneers of DevOps practices and procedures, answers the question "What is DevOps?" succinctly:
DevOps is the combination of tools, practices, and philosophies that increases an organization's ability to deliver applications and services at high velocity ... This allows for products to evolve and improve at a faster pace than organizations using traditional software development and management processes. Improved speed enables better customer service and enhances the ability to compete more effectively in the market.
It is, in effect, the convergence of everything: philosophy, practice, and architecture; and its proponents claim that it is revolutionary, capable of changing everything. But is it really? What are the benefits of DevOps? Are they worth the investment they require and the disruption they cause? What are the parts and pieces? How do they work together, and how can you start to take advantage of DevOps practices? In this article, we look at a number of these questions and attempt to look past the hype to some of the practical benefits of DevOps.
How do you put DevOps into practice? What procedures and tools can help you implement DevOps processes?
In many ways, the guiding principle of DevOps might be described as "go smaller." One fundamental example of this principle in practice is to perform frequent but small updates. Deploying incremental updates to an application is much easier than deploying less frequent, massive upgrades. Getting code into the hands of customers quickly also helps teams find and patch bugs while the code is still fresh in the minds of the developers who wrote it.
A second example is to break large applications into smaller pieces called microservices. Splitting an application into many small, independently operated pieces, each with a tightly scoped purpose, reduces the coordination overhead of a large, tightly coupled codebase.
Nothing comes for free, though. Decomposing applications and increasing the pace of release increases operational complexity. Actually releasing code takes time: someone has to commit, review, test, build, and deploy. Likewise, splitting services into smaller pieces introduces network and environment complexity that does not arise with more traditional monolithic applications.
The good news is that the additional overhead and challenges that DevOps introduces can be addressed with tooling. For this reason, another core principle of DevOps is "automate everything." Many jobs within traditional operations require a great deal of manual labor and consume significant amounts of time. Examples include:
- testing of the software: unit, functional, and integration
- staging new changes and building new artifacts
- promoting staged changes to live production use and performing the actions required to upgrade components (such as running database migrations)
All of these tasks can be automated through practices such as Continuous Integration / Continuous Deployment (CI/CD). After an initial investment to build CI/CD pipelines, the entire process of testing and upgrade can proceed automatically. This means that every time a developer commits a new source code change, a verified and tested update can be pushed to production with zero downtime.
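As a sketch of what such automation looks like, the following pipeline definition uses GitLab CI syntax. The stage names, registry address, and deploy script are hypothetical, chosen only for illustration:

```yaml
# .gitlab-ci.yml -- hypothetical pipeline; registry.example.com and
# deploy.sh are placeholders, not real endpoints or scripts.
stages:
  - test
  - build
  - deploy

unit-tests:
  stage: test
  script:
    - make test        # run the unit, functional, and integration suites

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHORT_SHA

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh $CI_COMMIT_SHORT_SHA   # e.g. a rolling update with zero downtime
  only:
    - main
```

With a definition like this in place, every commit to the main branch triggers testing, image builds, and deployment without any manual steps.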
Other practices, like Infrastructure as Code and configuration management, can help scale computing resources as demand spikes. Further, monitoring and logging, when combined with automated actions, can help systems react to outages in one part of the system without taking everything offline.
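To make Infrastructure as Code concrete, the snippet below is a minimal Ansible playbook that describes a desired state rather than a sequence of manual commands. The host group name and the choice of nginx are assumptions for illustration:

```yaml
# site.yml -- hypothetical playbook; the "web" host group and the
# nginx package are illustrative assumptions.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory site.yml` applies the same state to one server or one hundred, which is what makes this approach useful when demand spikes.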
Taken as a whole, DevOps practices help teams deliver better, more reliable code to their customers more often. In the remainder of this section, we will provide an overview of key practices and some of the tools used to implement them.
Many organizations (businesses, governments, and non-profits) have applied DevOps approaches with success. The case studies below highlight some of the specific challenges that companies like Amazon, Netflix, and Etsy have been able to overcome using DevOps.
Implementing DevOps with Open Source
The Open Source ecosystem provides a comprehensive suite of tools that can be used to implement all aspects of DevOps. Important tools include:
- Git: a distributed version control system that has become the most popular software tool for tracking and managing source code. Additional platforms, sometimes called code forges, can be used to facilitate collaboration between developers working on software. The most popular hosted forges include GitHub and GitLab.
- Ansible: an Infrastructure as Code (IaC) tool that automates the provisioning and deployment of infrastructure. Automation is configured as a readable description of state, written in YAML files called playbooks. Provisioning tools read the description of state and then generate the set of instructions needed to create an environment that matches it.
- Docker: an end-to-end platform for building, sharing, and running container-based applications. Used everywhere from a developer's desktop to the cloud, Docker allows the same build artifact to be leveraged at every stage of an application's lifecycle.
- Kubernetes: an open-source orchestration system for automating the deployment, scaling, and management of containerized applications. Increasingly, Kubernetes (and the systems built on top of it) has become the core component of DevOps infrastructure. Kubernetes works closely with container technologies such as Docker: while Docker handles packaging and execution, Kubernetes automates the deployment of Docker-based software across large clusters of systems.
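The declarative style these tools share can be seen in a minimal Kubernetes Deployment manifest. The image name and replica count below are assumptions for illustration; Kubernetes then works continuously to keep the running cluster matching this description:

```yaml
# deployment.yaml -- hypothetical manifest; the image path and port
# are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 3                  # keep three copies running at all times
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
        - name: myservice
          image: registry.example.com/myservice:1.0.0
          ports:
            - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` hands responsibility to the cluster: if a container crashes or a node fails, Kubernetes replaces the missing copies automatically.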
DevOps at Oak-Tree
Oak-Tree uses many DevOps tools and techniques including Docker containers, Kubernetes, and Ansible. The links below can help you learn more about how to implement DevOps on top of Open Source.