By now you are hearing a lot at VMworld 2014 about Virtual Volumes (VVOLs) and why it is so important to manage the end-to-end infrastructure, from compute to storage, at the virtual machine (VM) and virtual disk (vDisk) level. Virtualization made VMs and vDisks the unit of management at the compute layer. VMware® Virtual Volumes is meant to bridge that gap by extending the same paradigm to storage, specifically for VMware vSphere® deployments.
The buzz around VVOLs extending VM-level management to the storage layer is great validation of what we at Tintri have long believed: the storage layer must also understand and provide data management services and visibility at the VM level. Tintri has been delivering on this promise and shipping products for almost 4 years. Hundreds of customers are enjoying VM-level management across hundreds of thousands of VMs on thousands of Tintri VMstore systems. It is great to see the excitement around VVOLs and that VMware and others share our vision.
What is VVOLs?
VVOLs is an out-of-band communication protocol between vSphere and storage. It allows VMware to associate VMs and vDisks with storage entities, and allows vSphere to offload some storage management functions, such as snapshots and clones, to the storage system. This offloading lets virtualization administrators get, through familiar VMware tools, the performance and scalability they expect from their storage.
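To make the offload concrete, here is a toy sketch; every class and method name below is hypothetical and illustrative only, not part of the actual VVOLs API. The point is that a snapshot becomes a single out-of-band control call serviced inside the array, rather than a data copy performed by the host:

```python
# Hypothetical toy model -- names are illustrative, not the real VVOLs API.
class VvolArray:
    """An array that addresses each vDisk as its own virtual volume."""

    def __init__(self):
        self.volumes = {}    # virtual volume id -> vDisk contents
        self.snapshots = {}  # virtual volume id -> point-in-time copies

    def bind(self, vvol_id, data):
        """vSphere associates a VM's vDisk with a storage-side object."""
        self.volumes[vvol_id] = data

    def snapshot(self, vvol_id):
        """Offloaded snapshot: one control call, no data crosses the host."""
        self.snapshots.setdefault(vvol_id, []).append(self.volumes[vvol_id])
        return len(self.snapshots[vvol_id])


array = VvolArray()
array.bind("vm42-disk0", b"vdisk contents")
snap_count = array.snapshot("vm42-disk0")  # array-side operation, not a host copy
print(snap_count)  # 1
```

Because the array addresses the vDisk directly, the snapshot is scoped to that one VM's disk instead of to a whole LUN shared by many VMs.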
Not all storage is created equal
The VVOLs API defines the interface between vSphere and storage, but does not change the underlying storage architecture. The scale and performance of a storage system are still determined by the storage implementation itself. For example, to implement VVOLs for a 1,000-VM deployment, an array must support tens of thousands to hundreds of thousands of individually addressable virtual volumes (directories in NFS; LUNs or sub-LUNs in SAN).
Not all storage systems are equally capable of bringing the Virtual Volumes vision to reality. Retrofitting an existing traditional storage system to support an API like VVOLs, with data services at the VM and vDisk level, is going to be rather complex. For example, storage arrays built around a limited number of LUNs and volumes will find it challenging to support the necessary number of virtual volumes for real workloads.
Given that VVOLs is an API, not an implementation, be sure to ask any storage vendor about the following aspects of their VVOLs implementation.
1) Simplicity and visibility
One of the major benefits we have seen of VM-level management is visibility. A VVOLs implementation can’t maintain visibility when presenting tens of thousands of VVOLs with low-level names; the storage system needs to understand VMs and vDisks. Tintri's instant VM bottleneck visualization provides the ability to identify performance hot spots at the host, network, and storage levels with comprehensive end-to-end performance visualization.
This level of simplicity and visibility is not possible with traditional storage systems and architectures, regardless of the protocol used.
2) Scalability and VM density
A scalable implementation requires a storage system to support a large number of virtual volumes. For example, a storage system supporting 2,000 VMs, each with 4 virtual disks and 32 snapshots per disk, needs to support 256,000 virtual volumes. Traditional LUN-based architectures provide data services, such as snapshots and replication, at the LUN level. A simple one-to-one mapping of LUNs to virtual volumes will greatly limit the number of VMs a system can support: assuming a traditional LUN-based system can support 10,000 LUNs, it could serve only about 80 VMs in the example above.
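The arithmetic above is easy to sanity-check; this sketch assumes, as in the example, that every snapshot of every virtual disk is an individually addressable virtual volume:

```python
# Volume-count math for the 2,000-VM example above.
vms = 2000
vdisks_per_vm = 4
snaps_per_vdisk = 32

volumes_per_vm = vdisks_per_vm * snaps_per_vdisk  # 128 addressable volumes per VM
total_volumes = vms * volumes_per_vm
print(total_volumes)  # 256000

# A traditional array capped at 10,000 LUNs, mapped one LUN per virtual
# volume, tops out at roughly 80 VMs:
lun_limit = 10_000
print(lun_limit // volumes_per_vm)  # 78, i.e. ~80 VMs
```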
The Tintri VMstore T650 supports 2,000 VMs in a 4U chassis, providing unparalleled VM density. Each VM can have up to 128 snapshots, combined with native per-VM replication for granular data protection and disaster recovery. Retrofitted existing storage systems will not be able to provide this level of scalability and VM density.
At this scale, presenting tens of thousands of VVOLs with low-level names is, at best, very difficult to manage; the storage system needs to understand VMs and vDisks natively.
3) Granular resource management and quality of service (QoS)
Tintri VMstore systems are purpose-built for virtualized environments and private cloud deployments, and deliver resource management and quality of service at the granularity of individual VMs and vDisks.
Traditional storage systems and architectures that were not purpose-built for virtualized workloads, and that don’t understand VMs and vDisks, cannot provide this level of granular resource management and QoS.
4) VM granular data management
Tintri VMstore provides space-efficient snapshots and clones at the VM level. Data protection schedules can be customized on a per-VM basis to fit the business requirements of different applications. Hundreds of clone VMs can be created in a matter of minutes for rapid provisioning. Tintri ReplicateVM™ allows administrators to replicate only the VMs of interest, greatly reducing the storage required for data protection in large virtual environments. What is more, you can leverage WAN-efficient per-VM replication today.
Traditional storage systems cannot deliver granular VM-level data management (including customized data protection schedules at VM-level) and per-VM replication as they don’t fundamentally understand VMs and vDisks.
5) VM-level Automation
What if you could automate your virtualization and cloud storage at the same level as the rest of your scripts: at the VM level? The Tintri Automation Toolkit brings the power of application-aware storage to your virtualization and cloud automation environment, and the same VM-level automation scripts can be leveraged across multiple hypervisors to automate workflows. For example, a one-line script can get a VM from the vCenter server, create five clone VMs from a snapshot, and add them to the vCenter inventory.
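As an illustration only: the Tintri Automation Toolkit is PowerShell-based, and a pipeline of that shape might look like the sketch below. The cmdlet and parameter names here are assumptions for illustration, not verified signatures from the toolkit.

```powershell
# Hypothetical sketch -- cmdlet and parameter names are assumed, not verified.
Get-TintriVM -Name "GoldVM" | Get-TintriVMSnapshot -Latest | New-TintriVMClone -Count 5 -BaseName "Dev-Clone" -AddToInventory
```

The design point is that the pipeline operates on a VM object end to end; no LUN or volume identifier ever appears in the script.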
The same workflow can be applied to Hyper-V environments as well by connecting to an SCVMM instance. Traditional storage systems cannot deliver such simple, hypervisor-agnostic automation as they don’t fundamentally understand VMs and vDisks.
6) Multi-hypervisor support
In addition to VMware vSphere, Tintri VMstore systems provide VM-level data management services for Red Hat Enterprise Virtualization (RHEV) and Microsoft Hyper-V. The Tintri architecture makes the same VM- and vDisk-level data management available across all of these hypervisors, providing a unified storage system that serves multiple workloads running on different hypervisor platforms while managing VMs and vDisks natively on the storage.
Traditional storage systems cannot deliver such VM-level data services across multiple hypervisors as they don’t fundamentally understand VMs and vDisks.
Tintri’s fundamental value is application awareness that is hypervisor-agnostic, so VMs with varying resource needs (IOPS, latency, application characteristics, etc.) from multiple hypervisors can run concurrently on a single VMstore while leveraging VM-level data services and visibility across all VMs. By removing the language barriers between the virtualization and storage components of the infrastructure, Tintri VMstore systems provide a unified experience for managing the end-to-end infrastructure in the language of VMs and applications, across all hypervisors and protocols.
Learn more about Tintri VVOLs Implementation
Tintri is a design partner in the VMware VVOLs program. We’ve been working with VMware to support VVOLs on VMstore and to give customers the choice of leveraging both the VM-level functionality exposed by the VVOLs API and Tintri's native VM- and vDisk-level functionality, such as instant bottleneck visualization and per-VM replication. We demonstrated VVOLs support at VMworld 2013 and are doing so at VMworld 2014 as well. Come by Tintri booth #921 to see a simple and scalable implementation of VVOLs, with 20,000 virtual volumes on a single host in action.
Unique control with VM-level actions for infrastructure functions, including snapshots, replication, and QoS, makes protection and performance certain in production and accelerates test and development cycles.