In the second blog post of a two-part series, Tintri Field CTO Matt Geddes and Forrester Lead DevOps Analyst Charles Betz explore DevOps principles and infrastructure-as-code.
In my last post, Charles Betz, Forrester’s lead DevOps analyst, and I discussed the origins of DevOps and how infrastructure-as-code is the foundation of speed, agility, and stability for the modern business. In this blog, we’ll look at specific use cases involving MySQL, Docker and Kubernetes, and Disaster Recovery (DR).
To learn more about how DevOps principles apply to your business, how to bridge the gap between IT developers and IT admins, and how storage-as-code can enable DevOps workflows, watch our on-demand webinar: DevOps and Storage-as-Code—Hype, Hope or Horror Show?
In the past, the sysadmin was the superhero of IT, provisioning the resources you needed to develop new capabilities and solve your production problems. But you simply cannot run a modern digital organization this way. The systems are too big and complex, and the social and economic dependencies on these systems are far too critical. You need automation.
You can’t afford to wait on someone else to get you a new development server. When you have a brilliant new idea to meet customer needs, you need to explore that idea right away.
As I wrote in the first post in this series, infrastructure-as-code is an essential aspect of storage architecture for DevOps. Overall, a DevOps reference architecture has two parallel tracks: one delivery pipeline for application code and one for the infrastructure definitions that code runs on.
The more you can handle infrastructure the same way you treat code, the easier the DevOps process becomes.
Tintri’s focus is building storage that supports infrastructure-as-code in DevOps-style workloads. Applying DevOps principles means you no longer need to dedicate capacity because you can stand up a dev/test, QA, or build environment very quickly.
For storage in DevOps workflows, you need four things:
Composable infrastructure: programmatic access to every component in the infrastructure, especially storage, covering not just automated provisioning of storage but smart copy data management as well.
Right level of abstraction: APIs and development tools that give you better control over autonomy and delegation, and equip individual users to become more self-sufficient.
QoS and performance isolation: assurance that one workload sharing infrastructure with another workload cannot negatively affect it, so you can meet SLAs for both sets of applications.
Intelligent copy data management: the ability to move data around efficiently, without making expensive physical copies, and to present data in multiple places at once, so that long-running jobs finish in far less time.
In the following example, you can see an infrastructure definition that installs MySQL on a resource. It is just a text file that, fed into an automation tool, produces a MySQL node that can then act as the database in a bigger system. This is an example of infrastructure-as-code. Compare this to past practice, when an admin would manually download MySQL and work through a series of command-line steps: installing the software, setting up user accounts, creating tables in the database, all by hand, perhaps with reference to a Word document or other requirements doc.
Infrastructure-as-code creates a different paradigm: it starts with identifying the needed capacity of the resource and ends with a fully provisioned MySQL server containing all the tables you need.
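The definition referenced above is not reproduced here, but a minimal sketch of what such a text file might look like, written as a hypothetical Ansible playbook, is shown below. The host group, database name, and credential variable are all illustrative, not taken from the original example.

```yaml
# Hypothetical Ansible playbook: provision a MySQL node as code.
# Host group, database name, and credentials are illustrative only.
- name: Provision MySQL database server
  hosts: db_servers
  become: true
  tasks:
    - name: Install the MySQL server package
      ansible.builtin.apt:
        name: mysql-server
        state: present
        update_cache: true

    - name: Ensure the application database exists
      community.mysql.mysql_db:
        name: app_db
        state: present

    - name: Create an application user with access to the database
      community.mysql.mysql_user:
        name: app_user
        password: "{{ vault_app_db_password }}"
        priv: "app_db.*:ALL"
        state: present
```

Checking a file like this into version control gives infrastructure the same review, diff, and rollback workflow that application code already enjoys.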
An even faster and more elegant way of provisioning resources involves Docker and Kubernetes. You could stand up a database in Docker just as you did in the previous example with a virtual or physical server. But what about the storage? Containers are ephemeral by design and their images immutable, so storage (including volume management) is treated as a separate concern.
In the following example, a set of commands creates a Docker container attached to a specific amount of storage that can be mapped to a storage infrastructure provider. All of this is automated, repeatable, and predictable.
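The commands themselves are not shown here, but the same pairing of container and storage can be sketched declaratively in a hypothetical Docker Compose file. The volume driver name and size option are placeholders for whatever a real storage provider's volume plugin accepts.

```yaml
# Hypothetical docker-compose.yml: a database container wired to a
# declaratively defined volume. The driver and its options are
# placeholders for a storage vendor's actual volume plugin.
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative only; use secrets in practice
    volumes:
      - dbdata:/var/lib/mysql        # the container sees ordinary storage

volumes:
  dbdata:
    driver: local                    # swap in a storage vendor's driver here
    driver_opts:
      size: "20G"                    # assumed option; valid options depend on the driver
```

Because the volume is declared alongside the service, tearing down and recreating the whole stack is a single repeatable operation.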
This approach is extremely useful for disaster recovery (DR). With infrastructure-as-code, you can be confident that workloads will come up in a new location, because everything is precisely defined, down to the storage driver and the capacity you need, with full visibility into the lowest levels of the storage infrastructure.
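In Kubernetes terms, that precision can be captured in a StorageClass and a PersistentVolumeClaim: a sketch follows, with the provisioner name and its parameter standing in for a real storage vendor's CSI driver.

```yaml
# Hypothetical StorageClass + claim: the storage driver and capacity are
# pinned in code, so the same definition can be replayed at a DR site.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-replicated
provisioner: csi.example-storage.com   # placeholder for a vendor CSI driver
parameters:
  replication: "enabled"               # assumed driver parameter
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: fast-replicated
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
```

Applying these manifests at the recovery site requests the same class of storage, at the same capacity, from whatever array backs that cluster.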
DevOps-style workflows and infrastructure-as-code apply not just to software development, but also to VDI and application virtualization, as Tintri customers have found:
Software Development. Tintri customer Mentor Graphics follows DevOps practices. They use a combination of automation and intelligent copy data management to spin up and tear down 10,000 VMs a day as part of their continuous integration testing. As a result, test and development cycles take far less time.
VDI. The United States Marine Corps (USMC) used to take days to deploy new virtual desktops. When you’re deploying hundreds or thousands of desktops, that is simply unmanageable. Using automation and copy data management, they reduced deployment time from days to just eight minutes.
SQL Databases. Sterkinekor cut SAP reporting time by 70%, from 30 minutes to under 10, for their traditional service-style workloads.
Unique control with VM-level actions for infrastructure functions, including snapshots, replication, and QoS, makes protection and performance dependable in production and accelerates test and development cycles.