Virtualization has a problem. It’s raced ahead of the underlying datacenter infrastructure, and now the industry is trying to adapt. Many companies we speak with have aggressive plans to virtualize nearly all of their servers within the next year or two, but legacy storage makes this an increasingly expensive and complex proposition. Companies want to go all-in on virtualization, but at what cost?
During my time as VP of Engineering at VMware, I saw this problem looming on the horizon and ultimately founded Tintri because it was clear that existing traditional storage wasn’t in a position to meet the demands of a virtualized world. As Paul Maritz and Steve Herrod noted in this year’s VMworld keynotes, over 60% of applications are virtualized today compared to just 25% in 2008, but a lot of work still remains to be done.
We need a new model of storage that is designed for virtualization. We call it VM-aware storage – the idea that all storage monitoring and management should run natively at the VM-level. This is the essence of software-defined storage – storage whose very design is built around the constructs of the virtualized infrastructure.
Legacy Storage: Built for a Physical, Not Virtual, World
Let’s take a closer look at traditional shared storage and the assumptions it was designed around.
But as workloads change, traditional datacenter infrastructure must adapt to the new requirements. Today’s general-purpose storage systems are based on decades-old architectures, even as physical servers and applications are being virtualized and mechanical disk drives are being replaced by SSDs. Virtualization changes I/O patterns significantly and also obscures them: the hypervisor blends the I/O streams of many VMs into a single, seemingly random stream. General-purpose storage systems are organized around LUNs or volumes, which map poorly to VMs, and that mismatch creates fundamental, interrelated problems.
Legacy storage was built for a physical world, where each application ran on a physical server and could be easily mapped to LUNs and volumes. With the advent of virtualization, it is now the virtual infrastructure, rather than the physical, that matters: LUNs and volumes are giving way to VMs and virtual disks. We need a new storage architecture that understands and manages the virtual infrastructure.
A Different Infrastructure Calls for a Different Approach
Virtualization demands a different kind of storage: storage that understands the I/O patterns of a virtual environment and automatically manages quality of service (QoS) for each VM, rather than for each LUN or volume. Operating at the VM level also lets data management operations reach all the way down to a specific application. This is what defines VM-aware storage.
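To make the per-VM QoS idea concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not any vendor’s actual implementation: every class, method and policy name below is an assumption. The point it demonstrates is the shift in granularity, where each VM carries its own I/O policy (here, a token bucket limiting IOPS) instead of inheriting a shared limit from the LUN it happens to live on.

```python
import time


class TokenBucket:
    """Simple token bucket: refills at `rate` tokens per second, up to `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost=1):
        # Refill based on elapsed time, then admit the I/O if tokens remain.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


class VMAwareScheduler:
    """Hypothetical sketch: a QoS policy is attached per VM, not per LUN.

    Each incoming I/O is attributed to its owning VM and admitted or
    throttled by that VM's own bucket, so a noisy neighbor on the same
    datastore cannot consume another VM's budget.
    """

    def __init__(self):
        self.buckets = {}

    def set_policy(self, vm_name, iops_limit):
        self.buckets[vm_name] = TokenBucket(rate=iops_limit, burst=iops_limit)

    def admit(self, vm_name):
        bucket = self.buckets.get(vm_name)
        # VMs without an explicit policy are unthrottled in this sketch.
        return True if bucket is None else bucket.allow()
```

In use, `set_policy("web-01", 500)` would cap that one VM at roughly 500 IOPS while leaving its neighbors untouched; the same operation against LUN-based storage would throttle every VM sharing the LUN.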
There are different ways to approach the problem, with varying degrees of “VM-awareness.” VMware understands the problems with legacy storage and is working to make existing general-purpose storage more VM-aware. Ultimately, though, this will be a slow, incremental process that layers additional abstraction, complexity and inefficiency onto decades-old storage architectures designed before the advent of virtual machines and flash storage. We believe the best way to support virtualization is from the ground up – let the software define the storage.
VM-level control over infrastructure functions such as snapshots, replication and QoS makes protection and performance predictable in production, and accelerates test and development cycles.