Fourth in a four-part series on the emergence of the software-defined data center and major trends in storage, networking, and servers.
In the previous post, I discussed the need, and the challenge, of developing virtualization-aware storage, where finding the right level of abstraction between storage and VMs is key. For all the reasons we discussed previously, we believe the best way to support virtualization is to build for it from the ground up. Starting from a clean sheet, we developed a VM-aware storage system that natively understands and integrates with the virtual infrastructure. We use virtual machine abstractions, VMs and virtual disks, in place of conventional storage abstractions such as volumes, LUNs, and other legacy storage objects. All data management operations, including snapshots, clones, and replication, happen at the VM level.
The VM-level capabilities also extend to performance isolation and quality of service. Tintri VMstore directly monitors and controls I/O performance for each virtual disk. Each I/O request, whether a read, a write, or a metadata operation, maps directly to the particular virtual disk on which it occurs. By operating at the virtual machine and virtual disk level, Tintri VMstore finally gives administrators the same level of insight, control, and automation they already have for compute, memory, and networking resources.
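To make the idea of per-virtual-disk accounting concrete, here is a minimal sketch of what attributing every I/O request to the virtual disk it targets might look like. All names and structures here are illustrative assumptions, not Tintri's actual implementation:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical per-virtual-disk I/O accounting: every request is charged
# to the specific virtual disk it targets, rather than to an opaque LUN.

@dataclass
class VDiskStats:
    read_ops: int = 0
    write_ops: int = 0
    bytes_moved: int = 0

class PerVDiskAccounting:
    def __init__(self):
        # One stats record per virtual disk, keyed by a disk identifier.
        self.stats = defaultdict(VDiskStats)

    def record_io(self, vdisk_id: str, op: str, nbytes: int) -> None:
        # Attribute this request to the virtual disk it occurred on.
        s = self.stats[vdisk_id]
        if op == "read":
            s.read_ops += 1
        elif op == "write":
            s.write_ops += 1
        s.bytes_moved += nbytes

    def top_consumers(self, n: int = 3):
        # Rank virtual disks by bytes moved: the kind of per-disk view
        # an administrator gets instead of a single per-volume total.
        return sorted(self.stats.items(),
                      key=lambda kv: kv[1].bytes_moved,
                      reverse=True)[:n]
```

With statistics kept at this granularity, questions like "which disk of which VM is driving the load?" become a simple lookup rather than a forensic exercise across LUN-level counters.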
Tintri VMstore is natively integrated with the VMware vCenter™ Server API, which it uses to learn which virtual machines are active and which reside on the datastore. The VMstore collects and reports per-VM and per-virtual-disk statistics such as size, I/O throughput, and resource utilization. An administrator can immediately see which virtual machines and virtual disks are consuming storage resources, and what performance those virtual machines are receiving.
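As a rough illustration of what a storage system can learn from vCenter's inventory, the sketch below maps a datastore back to the VMs placed on it. With the pyVmomi client library, a `vim.Datastore` managed object exposes a `vm` attribute listing those virtual machines; the stub classes here merely mimic that shape so the sketch runs without a live vCenter, and are assumptions for illustration only:

```python
def vms_on_datastore(datastore):
    """Return the sorted names of VMs residing on `datastore`.

    Works on any object exposing the `vm` attribute that a pyVmomi
    vim.Datastore provides (a list of VM objects with a `name`).
    """
    return sorted(vm.name for vm in datastore.vm)

# Stub objects standing in for vCenter managed objects, so the
# sketch is self-contained.
class StubVM:
    def __init__(self, name):
        self.name = name

class StubDatastore:
    def __init__(self, vms):
        self.vm = vms

# Example: two VMs placed on one datastore.
ds = StubDatastore([StubVM("web01"), StubVM("db01")])
print(vms_on_datastore(ds))  # ['db01', 'web01']
```

Once the storage system knows which VMs live on which datastore, per-VM statistics like the ones described above can be keyed by VM identity rather than by volume or LUN.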
With Tintri, there is now a VM-aware storage platform that fits into the software-defined data center, providing storage resources automatically as application workloads need them.