
Three Current Challenges with Storage for VMs

"All problems in computer science can be solved by another level of indirection — except for the problem of too many layers of indirection." - David Wheeler

This principle is particularly apt in the context of virtualization and storage. Virtualization introduces a new layer of abstraction, the VM, which provides unprecedented benefits, including consolidation, flexibility, and high availability. But other critical parts of the infrastructure still rely on their own layers of indirection. Existing networked storage is designed for every imaginable type of workload. It presents abstractions such as LUNs or volumes, RAID groups, and queues to create and present logical disks to applications. In the previous all-physical world, these abstractions were effective and mapped directly to the workloads.

These legacy storage systems were designed before virtualization was even a consideration, and they were adapted to virtualization via the typical computer industry approach: introduce another layer of abstraction to mask the complexity of meshing the two layers together. However, as virtualization adoption accelerates, the cost and complexity of general-purpose storage accelerate with it. As Kieran noted in his blog post, storage now accounts for up to 60 percent of the cost of a virtualization deployment. Tintri customers consistently confront the following three problems with traditional storage:

1. Visibility into per-VM storage performance

Traditional networked storage provides a performance view at the level of the abstractions it understands: the LUN, the volume, or the file system. Legacy storage can't isolate an individual VM's I/O or report performance at the VM level. Without ready access to the relevant performance metrics, it's impossible to know whether a new VM workload will create problems. As a result, many customers knowingly over-invest in storage to reduce risk.
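To make the gap concrete, the short sketch below contrasts the two views. The VM names and I/O figures are hypothetical, and the per-VM breakdown stands in for what VM-aware storage would report; a traditional array exposes only the single LUN-level figure.

```python
# Minimal sketch with hypothetical data: why a LUN-level metric hides per-VM behavior.

# (vm_name, iops, avg_latency_ms) for VMs sharing one LUN -- illustrative numbers only
vm_samples = [
    ("web-01",    800,  1.2),
    ("web-02",    750,  1.3),
    ("db-01",    4500, 18.0),  # noisy neighbor driving most of the load
    ("build-01",  300,  1.1),
]

total_iops = sum(iops for _, iops, _ in vm_samples)

# LUN-level view: one IOPS-weighted average latency for the whole LUN
lun_latency = sum(iops * lat for _, iops, lat in vm_samples) / total_iops
print(f"LUN view: {total_iops} IOPS @ {lun_latency:.1f} ms average latency")

# VM-level view: the same traffic broken out per VM makes the culprit obvious
for name, iops, lat in vm_samples:
    print(f"  {name:9} {iops:5} IOPS @ {lat:4.1f} ms")
```

In this made-up example the LUN-level average lands around 13 ms, which looks uniformly bad even though three of the four VMs are seeing roughly 1 ms latency and one VM is responsible for the load.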

2. Performing operations at the VM level

Storage operations ultimately occur at the storage abstraction layer. For example, a NAS device runs most tasks at the volume level, while a SAN device runs them at the LUN level. Block and file operations likewise occur only in the context of these higher-level abstractions. VMs don't map cleanly to any of these objects, so a storage operation on a NAS system (a snapshot or a clone, for example) actually applies to the entire volume. The result is complex VM management at the storage layer. As one customer notes, provisioning storage for new VMs previously required a few days of planning to get right.
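As a rough illustration of that granularity gap, here is a minimal sketch. The classes and names (Volume, VM, VirtualDisk, nfs-datastore-01, and so on) are hypothetical stand-ins rather than any vendor's API; the point is simply that a volume-level snapshot sweeps in every VM on the volume, while a VM-level snapshot touches only one VM's disks.

```python
# Minimal sketch with hypothetical classes: volume-level vs. VM-level operations.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualDisk:
    path: str

@dataclass
class VM:
    name: str
    disks: List[VirtualDisk]

@dataclass
class Volume:
    name: str
    vms: List[VM] = field(default_factory=list)

    def snapshot(self) -> List[str]:
        # Legacy NAS granularity: the snapshot captures every VM on the volume,
        # whether or not it is the one you cared about.
        return [d.path for vm in self.vms for d in vm.disks]

def snapshot_vm(vm: VM) -> List[str]:
    # VM-aware granularity: operate on the disks of a single VM only.
    return [d.path for d in vm.disks]

vol = Volume("nfs-datastore-01", [
    VM("web-01", [VirtualDisk("web-01/disk0.vmdk")]),
    VM("db-01",  [VirtualDisk("db-01/disk0.vmdk"), VirtualDisk("db-01/disk1.vmdk")]),
])

print("Volume snapshot captures:", vol.snapshot())        # all VMs on the volume
print("VM snapshot captures:", snapshot_vm(vol.vms[1]))   # just db-01's disks
```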

3. Efficiency and automated decision-making

This disconnect between the abstractions that storage and virtualization use creates an administrative boundary. The virtual and storage infrastructures essentially speak different languages, which forces a layer of translation between them. As a result, general-purpose storage cannot fully automate decision-making: there are too many variables, and changing the configuration of one device can have unforeseen consequences that cascade across the infrastructure. Retrofitting new features onto a legacy architecture only adds layers of complexity that compound the problem. Features such as tiering require substantial ongoing manual decision-making to work effectively.

At Tintri, we believe there is a better way: what we call VM-aware storage. The Tintri file system is designed from the ground up for VMs and uses virtual machine objects (VMs and virtual disks) in place of conventional storage abstractions. Learn more about Tintri VMstore.
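As a closing illustration of the kind of automation per-VM objects make possible, here is a minimal sketch. The statistics, threshold, and policy are hypothetical and are not Tintri's actual logic; the point is that once storage sees individual VMs, a simple per-VM check can replace manual reasoning across LUNs, RAID groups, and tiers.

```python
# Minimal sketch with hypothetical numbers: a per-VM policy check that is only
# straightforward when the storage system tracks individual VMs.

per_vm_latency_ms = {"web-01": 1.4, "db-01": 22.5, "build-01": 0.9}  # illustrative stats
LATENCY_TARGET_MS = 10.0  # hypothetical per-VM service-level target

for vm, latency in per_vm_latency_ms.items():
    if latency > LATENCY_TARGET_MS:
        # A VM-aware system can act on exactly this VM (adjust its QoS or flash
        # allocation, for instance) without touching anything else on the array.
        print(f"{vm}: {latency:.1f} ms exceeds the {LATENCY_TARGET_MS:.0f} ms target; prioritize this VM")
    else:
        print(f"{vm}: {latency:.1f} ms within target")
```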
