Private cloud environments, supporting dynamic and unpredictable workloads, pose many challenges for the underlying storage infrastructure. We’ve highlighted the top five things you need from storage to guarantee consistently good performance for your users.
Private cloud environments are dynamic and run a diverse set of virtualized applications. Self-service users running applications including VDI, test/dev, databases, and analytics constantly deploy and destroy new workloads. On top of this, scheduled data protection features such as backup, snapshots, and replication add further load to the storage.
Some workloads access offsets sequentially, others randomly. Some use small request sizes (<16 KB), others large ones (256 KB or more). Some are throughput-oriented, others latency-sensitive. On top of this mixed bag of I/O behavior, the timing of the I/Os themselves is bursty.
The key takeaway: you don’t want to manually tune performance for this unpredictable and dynamic environment. The storage needs to be designed to automatically do the right thing: provide consistent low latencies, high IOPS and high throughput.
This is where traditional storage falls over. Traditional storage management is based on right-sizing and configuring LUNs for statically pre-defined workloads. Once configured, the storage layout is difficult to change. Provisioning assumptions turn out to be wrong and plans change, so in practice many administrators significantly overprovision LUNs for performance, wasting capacity.
You’ll need these:
Even storage that delivers consistent low latencies, high IOPS, and high throughput won't eliminate performance issues entirely. Bottlenecks can arise in any part of the private cloud environment: host CPU and memory, the network, or the storage itself.
That’s why visibility across the infrastructure is critical. You need to see end-to-end I/O latencies and their breakdown at a glance, so you can quickly identify latency issues and root-cause the culprit. Tintri VMstore provides real-time and historical latency breakdowns across host, network, and storage for each application, vDisk, and VM.
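To make the idea of a latency breakdown concrete, here is a minimal sketch (not Tintri's implementation) of how end-to-end latency might be decomposed, assuming hypothetical per-request timestamps captured at each hop:

```python
from dataclasses import dataclass

@dataclass
class RequestTrace:
    """Hypothetical per-I/O timestamps (in seconds), captured at each hop."""
    issued: float      # guest issues the I/O
    sent: float        # host/hypervisor puts it on the wire
    received: float    # storage receives the request
    completed: float   # storage finishes servicing it
    acked: float       # completion reaches the guest

def latency_breakdown(trace: RequestTrace) -> dict:
    """Split end-to-end latency into host, network, and storage components."""
    host = trace.sent - trace.issued
    storage = trace.completed - trace.received
    # Round-trip network time is whatever is left over.
    network = (trace.acked - trace.issued) - host - storage
    return {
        "host_ms": host * 1e3,
        "network_ms": network * 1e3,
        "storage_ms": storage * 1e3,
        "total_ms": (trace.acked - trace.issued) * 1e3,
    }

trace = RequestTrace(issued=0.0, sent=0.002, received=0.003,
                     completed=0.008, acked=0.009)
print(latency_breakdown(trace))
```

With a breakdown like this per VM, a spike in `host_ms` points at hypervisor queuing rather than the array, which is exactly the kind of root-cause question per-VM visibility answers.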
Storage needs to be application-aware in order to provide the performance isolation and QoS required for fair scheduling of I/Os and low latencies for every workload.
Especially in private cloud environments, where self-service users can run any workload within their VMs, you want your storage to prevent rogue VMs that flood the storage with heavy I/O from starving other VMs of performance resources.
Tintri storage implements performance isolation at per-vDisk granularity, meaning applications across different vDisks and VMs are protected from each other. Tintri’s per-VM QoS allows administrators and cloud service providers to configure minimum and maximum IOPS for VMs, providing further control over storage performance resources. This can be used to implement service tiers.
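A max-IOPS ceiling of the kind described above is commonly enforced with a token bucket. The sketch below is a toy illustration under that assumption, not Tintri's actual mechanism; a real scheduler would also redistribute spare capacity to honor the minimum-IOPS guarantees:

```python
import time
from typing import Optional

class VmQos:
    """Toy token-bucket limiter enforcing a per-VM max-IOPS ceiling.

    Only the ceiling is sketched here; a min-IOPS guarantee would come
    from the scheduler prioritizing VMs below their floor.
    """
    def __init__(self, max_iops: float, burst: Optional[float] = None):
        self.rate = max_iops                    # tokens replenished per second
        self.capacity = burst if burst is not None else max_iops
        self.tokens = self.capacity             # start with a full bucket
        self.last = time.monotonic()

    def try_issue(self, now: Optional[float] = None) -> bool:
        """Return True if one I/O may be issued now, else False (queue it)."""
        now = time.monotonic() if now is None else now
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Service tiers then fall out naturally: a "gold" VM might get `VmQos(max_iops=10000)` while a "bronze" VM gets `VmQos(max_iops=1000)`.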
This video shows how simple it is to configure per-VM QoS on Tintri storage.
SSDs behave very differently from hard disks. The flash translation layer (FTL) within an SSD performs internal tasks such as wear leveling, page remapping, and garbage collection, which cause unpredictable latency spikes, sometimes of hundreds of milliseconds. That’s slower than hard disks!
Storage software needs to be designed for SSDs. It has to understand the intricacies of their behavior (which varies by vendor) and implement algorithms that guarantee consistently low I/O latencies to the client. These techniques range from writing to SSDs in particular patterns to carefully breaking down requests and scheduling the I/Os. Simply bolting flash onto legacy hard-disk-based storage cannot provide the same performance and management simplicity as storage built for SSDs.
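One of the techniques mentioned above, breaking down large requests, is easy to illustrate. The sketch below (an illustration, not Tintri's code) splits a large I/O into smaller chunks, so the scheduler can slip a latency-sensitive request in between chunks instead of waiting for one big transfer to finish:

```python
def split_request(offset: int, length: int, chunk: int = 64 * 1024):
    """Break one large I/O into (offset, length) chunks.

    Between chunks, a scheduler can service latency-sensitive requests,
    bounding how long any one request monopolizes a device.
    The 64 KB default chunk size is an arbitrary assumption.
    """
    pieces = []
    end = offset + length
    while offset < end:
        n = min(chunk, end - offset)
        pieces.append((offset, n))
        offset += n
    return pieces

# A 256 KB write becomes four 64 KB chunks that can be interleaved
# with small, latency-sensitive reads.
print(split_request(0, 256 * 1024))
```

The trade-off is classic: smaller chunks mean tighter latency bounds for competing I/O but more per-request overhead, which is why the chunk size has to be tuned to the device's behavior.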
Tintri has designed a flash-first file system, built from the ground up for flash and virtualized applications. Read more about the challenges of managing SSDs in enterprise storage in our earlier blog post.
Even in all-flash storage, metadata needs to be cached in memory to guarantee low latency. You don’t want your storage to read from flash for both metadata and data.
Tintri’s filesystem was built to understand virtual disks and VMs, so we have deep knowledge on the access patterns and working-set of each application. We use this knowledge to provide fast in-memory access to metadata. For our hybrid platforms, we also use the working-set information to manage the placement of application data between SSDs and HDDs to make them perform like all-flash platforms.
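Working-set-driven placement can be sketched with a simple access-frequency tracker. This toy version (an assumption-laden illustration, not Tintri's algorithm) keeps the hottest blocks on flash and marks the rest as candidates for HDD placement:

```python
from collections import Counter

class HeatMap:
    """Toy access-frequency tracker for hybrid SSD/HDD placement.

    Real systems also weight recency and detect sequential scans; this
    sketch uses raw access counts only.
    """
    def __init__(self, flash_blocks: int):
        self.flash_blocks = flash_blocks   # how many blocks fit on flash
        self.heat = Counter()              # block -> access count

    def record_access(self, block: int):
        self.heat[block] += 1

    def placement(self):
        """Return (blocks to keep on SSD, blocks eligible for HDD)."""
        hot = {b for b, _ in self.heat.most_common(self.flash_blocks)}
        cold = set(self.heat) - hot
        return hot, cold
```

The point of tracking the working set per vDisk rather than per LUN is that one VM's cold backup data never evicts another VM's hot database pages.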
How and where dedupe and compression are implemented matters. They should not hinder consistent low latencies or high MBps/IOPS. Depending on the implementation, dedupe and compression can add overhead when they kick in, affecting the performance of incoming I/O.
For example, post-process dedupe and compression implemented on the host side add more I/O load to the storage (not to mention compute load on the host), affecting production I/O even more.
Tintri does inline dedupe and compression on the storage, so the latency you see is what you get. There’s no more work to be done later. Plus, we’ve optimized our dedupe and compression for low latencies, high IOPS and throughput.
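The inline approach is straightforward to sketch: fingerprint and compress each block before it is written, so duplicates are never stored and no post-process pass runs later. This is a toy model of the general technique, not Tintri's implementation (chunk size, hash choice, and codec are all assumptions):

```python
import hashlib
import zlib

class InlineDedupStore:
    """Toy inline dedupe + compression.

    Each incoming block is fingerprinted and compressed on the write
    path, so the latency you measure at write time is the whole cost.
    """
    def __init__(self):
        self.chunks = {}     # fingerprint -> compressed bytes
        self.refcount = {}   # fingerprint -> number of references

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.chunks:
            # New content: compress once and store.
            self.chunks[fp] = zlib.compress(data)
        # Duplicate content costs only a reference-count bump.
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp

    def read(self, fp: str) -> bytes:
        return zlib.decompress(self.chunks[fp])
```

Contrast this with post-process dedupe, which would write every block in full and then re-read, hash, and rewrite the data later, adding exactly the background I/O load the previous paragraph warns about.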
Unique control, with VM-level actions for infrastructure functions including snapshots, replication, and QoS, makes protection and performance predictable in production and accelerates test and development cycles.