
Defining VM-aware Storage

Virtualization has a problem. It’s raced ahead of the underlying datacenter infrastructure, and now the industry is trying to adapt. Many companies we speak with have aggressive plans to virtualize nearly all of their servers within the next year or two, but legacy storage makes this an increasingly expensive and complex proposition. Companies want to go all-in on virtualization, but at what cost?

During my time as VP of Engineering at VMware, I saw this problem looming on the horizon, and I ultimately founded Tintri because it was clear that traditional storage could not meet the demands of a virtualized world. As Paul Maritz and Steve Herrod noted in this year’s VMworld keynotes, over 60% of applications are virtualized today, compared to just 25% in 2008, but a lot of work remains to be done.

We need a new model of storage that is designed for virtualization. We call it VM-aware storage: the idea that all storage monitoring and management should operate natively at the VM level. This is the essence of software-defined storage: storage whose very design is built around the constructs of the virtualized infrastructure.

Legacy Storage: Built for a Physical, Not Virtual, World

Let’s take a closer look at traditional shared storage. It was designed around a few fundamental assumptions:

  • General-purpose storage.
  • Physical servers and applications.
  • Mechanical disk drives with rotating platters.

But as workloads change, traditional datacenter infrastructure must adapt to the new requirements. Today’s general-purpose storage systems are based on decades-old architectures, physical servers and applications are being virtualized, and mechanical disk drives are being replaced by SSDs. Virtualization also changes I/O patterns significantly and obscures them: the largely sequential streams of individual applications blend into random I/O by the time they reach the array, and the array cannot tell which VM issued which request. General-purpose storage systems, moreover, are built around LUNs or volumes, which map poorly to VMs. This creates three fundamental, interrelated problems:

  • Overprovisioning. General-purpose, disk-based storage is poorly suited to the random I/O streams of virtual environments, so companies tend to overprovision storage to meet demand. This in turn increases costs: more spindles and a larger footprint.
  • Complexity. Traditional storage uses different abstractions than virtualization. Administrators need to manage an unwieldy accumulation of legacy storage constructs such as LUNs, volumes and tiers. Just ask any IT architect who has designed a storage architecture for a 1,000-seat VDI deployment. Trying to design, tune and manage around legacy storage architectures for today’s demanding virtual workloads is a complex and frustrating undertaking.
  • Performance management. Storage can become an opaque, unpredictable performance bottleneck. A legacy array may suddenly exhibit high latency when it is hit by a burst of random I/O from virtual machines, and troubleshooting quickly becomes problematic: there is no easy way to correlate problems when the storage is managed in terms of LUNs and volumes while the applications are managed in terms of VMs and virtual disks. (The sketch after this list makes the mismatch concrete.)
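
To make that monitoring mismatch concrete, here is a minimal sketch in Python; the VM names, LUN name and latency figures are all invented for illustration.

    # Hypothetical illustration: LUN-level metrics hide which VM causes latency.
    from collections import defaultdict

    # Per-I/O samples as (vm_name, lun, latency_ms); in a real array these
    # would come from instrumentation.
    io_samples = [
        ("vdi-desktop-17", "LUN-3", 42.0),
        ("sql-prod-01",    "LUN-3",  1.2),
        ("web-frontend-2", "LUN-3",  1.5),
        ("vdi-desktop-17", "LUN-3", 38.5),
    ]

    # What a legacy array reports: one aggregate number per LUN.
    by_lun = defaultdict(list)
    for vm, lun, ms in io_samples:
        by_lun[lun].append(ms)
    for lun, vals in by_lun.items():
        print(f"{lun}: avg {sum(vals) / len(vals):.1f} ms")   # LUN-3: avg 20.8 ms

    # What VM-aware storage can report: latency attributed to each VM,
    # which immediately identifies the noisy neighbor.
    by_vm = defaultdict(list)
    for vm, _, ms in io_samples:
        by_vm[vm].append(ms)
    for vm, vals in sorted(by_vm.items()):
        print(f"{vm}: avg {sum(vals) / len(vals):.1f} ms")

The LUN-level average (about 20.8 ms here) only says that something on LUN-3 is slow; the per-VM view points straight at the misbehaving desktop.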

Legacy storage was built for a physical world, where each application ran on a physical server and could be easily mapped to LUNs and volumes. With the advent of virtualization, it is now the virtual infrastructure, rather than the physical, that matters. LUNs and volumes are being replaced by VMs and virtual disks. We need a new storage architecture that can understand and manage the virtual infrastructure.

A Different Infrastructure Calls for a Different Approach

Virtualization demands a different kind of storage: storage that understands the I/O patterns of a virtual environment and automatically manages quality of service (QoS) for each VM, not for each LUN or volume. Operating at the VM level also allows data management operations to be applied all the way down to a specific application. This is what defines VM-aware storage.
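
As a rough sketch of what per-VM QoS could mean in practice, here is a simplified token-bucket admission check in Python. The class name, VM names and IOPS limits are hypothetical, not any vendor’s actual API; a real array would queue rather than reject I/O and would schedule across many more dimensions.

    import time

    class VmTokenBucket:
        """Admit a VM's I/Os only while it has tokens; refill at its assigned IOPS."""
        def __init__(self, iops_limit: float, burst: float):
            self.rate = iops_limit          # tokens (I/Os) added per second
            self.capacity = burst           # maximum burst size
            self.tokens = burst
            self.last = time.monotonic()

        def admit(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False                    # over its limit: defer this I/O

    # QoS is keyed by VM, not by the LUN it happens to share with other VMs,
    # so one bursty VM cannot starve its neighbors.
    qos = {
        "vdi-desktop-17": VmTokenBucket(iops_limit=500, burst=100),
        "sql-prod-01":    VmTokenBucket(iops_limit=5000, burst=1000),
    }

    def handle_io(vm_name: str) -> str:
        return "dispatch" if qos[vm_name].admit() else "queue"

The point of the sketch is the keying: because the limit is attached to the VM rather than to shared LUN plumbing, each application gets its own performance contract.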

There are different ways to approach the problem, with varying degrees of “VM-awareness.” VMware understands the problems with legacy storage and is working to make existing general-purpose storage more VM-aware, but ultimately this will be a slow, incremental process that adds further layers of abstraction, complexity and inefficiency to decades-old storage architectures designed before the advent of virtual machines and flash storage. We believe the best way to support virtualization is from the ground up: let the software define the storage.

Kieran Harty / Nov 20, 2012

Kieran is the Chief Technology Officer and co-founder of Tintri. Prior to becoming CTO, Kieran served as CEO and Chairman of Tintri.
