
VM-Level Storage Performance Guarantee

The Not-So-Fine Print Behind the Industry’s ONLY VM-Level Guarantee

Tintri recently issued the storage industry’s first (and only) VM-level performance guarantee. That’s a big promise, so why are we so confident that we can deliver? Because Tintri is built to work at the VM level. That level of visibility and control shifts storage from a passive container to an asset that actively improves your infrastructure.

Here are the underpinnings of Tintri’s VM-level storage performance guarantee (a brief illustrative sketch of each follows the list):

  1. Deliver 99% of IO from Flash. Tintri VMstore has one big difference from other hybrid storage systems. Tintri writes every bit of data to flash first. All ‘hot’ (regularly accessed) data stays in flash—it never touches disk. Only data that is determined to be ‘cold’ is moved to disk.

    Conventional storage does the opposite—it writes to disk, and then bubbles the hottest data up to flash. That simple design switch allows Tintri to deliver 99% of IO from flash, while other hybrids struggle to serve 40% of IO from flash. Tintri puts flash first.

  2. Give every VM its own lane. You’ve heard about noisy neighbors. On conventional storage, noisy neighbors are a real problem, because IO requests are handled sequentially. So the mission-critical test your development team needs to run right now is stuck behind a massive (and relatively unimportant) database update. That’s also why boot storms and anti-virus scans can cripple your VDI user experience.

    Tintri has a simple solution: give every single VM its own lane. No more IO traffic jams or noisy neighbors. Rather than stack IO requests up in a single sequential queue, Tintri storage handles every VM’s requests in parallel, ending the performance hiccups that are so pervasive with conventional storage.

  3. Control IOPS at the VM Level. There is plenty of talk about Quality of Service, but rarely is it clearly defined. Other vendors let you set minimum and maximum IOPS… at the LUN level. If the VMs inside a LUN need different levels of performance, you either have to over-provision performance as a buffer, or re-configure your LUN.

    Tintri has a far more elegant solution, enabled by VM-level visibility. For an individual VM (not a LUN or volume), you can adjust minimum and maximum IOPS as desired. Most vendors have you punch in numbers; Tintri makes it as easy as dragging thresholds up and down.

  4. Visualize Contention. So you just changed IOPS policies for a LUN or volume… how will that affect your overall performance? Unless your conventional storage comes with a crystal ball, you have to predict the impact, and then wait to see if users start complaining. That’s a very slow feedback loop. 

    Tintri keeps it simple and visual. You can drag down the IOPS ceiling on a rogue VM and then watch, on the same screen, as the throttle-induced latency jumps up. It’s immediate visual feedback. And one graphic shows you the root cause of any latency across your infrastructure, spanning compute, network, storage, contention, and throttle. Tintri removes guesswork from QoS.

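To make point 1 concrete, here is a minimal sketch (in Python) of a flash-first placement policy: every write lands on flash, and only blocks that have gone cold are demoted to disk. The class, the access-time heuristic, and the threshold are illustrative assumptions, not Tintri’s actual implementation.

    import time

    class FlashFirstTier:
        """Illustrative flash-first placement: every write lands on flash;
        only blocks that go cold are demoted to disk (assumed policy)."""

        def __init__(self, cold_after_seconds=3600):
            self.flash = {}          # block_id -> data (hot tier)
            self.disk = {}           # block_id -> data (cold tier)
            self.last_access = {}    # block_id -> timestamp of last touch
            self.cold_after = cold_after_seconds

        def write(self, block_id, data):
            # Every incoming write goes to flash first, never to disk.
            self.flash[block_id] = data
            self.last_access[block_id] = time.time()

        def read(self, block_id):
            # Sketch assumes the block was previously written.
            self.last_access[block_id] = time.time()
            if block_id in self.flash:           # hot path: served from flash
                return self.flash[block_id]
            data = self.disk.pop(block_id)       # cold path: fetch from disk...
            self.flash[block_id] = data          # ...and promote back to flash
            return data

        def demote_cold(self):
            # Only blocks untouched for cold_after seconds move down to disk.
            now = time.time()
            for block_id in list(self.flash):
                if now - self.last_access[block_id] > self.cold_after:
                    self.disk[block_id] = self.flash.pop(block_id)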
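
Point 2, giving every VM its own lane, boils down to keeping a separate request queue per VM and servicing those queues side by side instead of pushing everything through one global line. This sketch shows the idea with a simple round-robin pass over per-VM queues; the class and the policy are assumptions made for illustration, not Tintri’s scheduler.

    from collections import deque

    class PerVMScheduler:
        """Illustrative per-VM queuing: each VM gets its own lane (queue),
        and the scheduler visits every lane on each pass, so one noisy
        neighbor's backlog cannot delay everyone else's IO (assumed policy)."""

        def __init__(self):
            self.lanes = {}  # vm_id -> deque of pending IO requests

        def submit(self, vm_id, request):
            # A request queues behind that VM's own traffic only.
            self.lanes.setdefault(vm_id, deque()).append(request)

        def next_batch(self):
            # One request per VM per pass: a massive database update can't
            # push a small, urgent test VM to the back of a single line.
            batch = []
            for vm_id, lane in self.lanes.items():
                if lane:
                    batch.append((vm_id, lane.popleft()))
            return batch

    # Usage: the noisy neighbor's backlog doesn't block the dev VM's request.
    sched = PerVMScheduler()
    for i in range(3):
        sched.submit("db-vm", f"write-{i}")
    sched.submit("dev-vm", "urgent-read")
    print(sched.next_batch())  # [('db-vm', 'write-0'), ('dev-vm', 'urgent-read')]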
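
Point 3 describes per-VM Quality of Service: minimum and maximum IOPS attached to each individual VM rather than to a LUN or volume. The sketch below models the ceiling with a basic token bucket; the class, parameters, and VM names are hypothetical and are not Tintri’s API.

    import time

    class VMQosPolicy:
        """Illustrative per-VM IOPS ceiling using a token bucket.
        min_iops / max_iops belong to a VM, not a LUN (assumed model)."""

        def __init__(self, vm_name, min_iops, max_iops):
            self.vm_name = vm_name
            self.min_iops = min_iops    # reserved floor (not enforced in this sketch)
            self.max_iops = max_iops    # hard ceiling enforced below
            self.tokens = float(max_iops)
            self.last_refill = time.monotonic()

        def allow_io(self):
            # Refill tokens at max_iops per second, capped at one second's worth.
            now = time.monotonic()
            self.tokens = min(self.max_iops,
                              self.tokens + (now - self.last_refill) * self.max_iops)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True    # IO proceeds
            return False       # IO is throttled until tokens refill

    # Per-VM policies, no LUN in sight: dial one VM up or down independently.
    policies = {
        "prod-sql":  VMQosPolicy("prod-sql",  min_iops=5000, max_iops=20000),
        "dev-batch": VMQosPolicy("dev-batch", min_iops=0,    max_iops=1000),
    }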
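
Point 4 rests on attributing each VM’s end-to-end latency to its components (compute, network, storage, contention, and throttle) so the dominant cause stands out immediately. The breakdown below is an assumed sketch with made-up numbers, purely to show the idea behind the graphic.

    # Illustrative end-to-end latency breakdown for one VM, in milliseconds.
    # Component names follow the text above; the numbers are invented.
    latency_ms = {
        "compute": 0.4,
        "network": 0.3,
        "storage": 1.1,
        "contention": 0.2,
        "throttle": 6.5,   # jumps when you drag the IOPS ceiling down
    }

    total = sum(latency_ms.values())
    root_cause = max(latency_ms, key=latency_ms.get)

    print(f"total latency: {total:.1f} ms, dominated by '{root_cause}'")
    for component, value in sorted(latency_ms.items(), key=lambda kv: -kv[1]):
        bar = "#" * int(round(value / total * 40))   # crude text "visualization"
        print(f"{component:>10}: {value:5.1f} ms  {bar}")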

In short, Tintri delivers 99% of IO from flash, gives every VM its own lane, and lets you throttle IOPS for individual VMs and then visualize the resulting contention. Oh, and you can do all of this for up to 112,000 VMs from a single pane of Tintri glass. That is visibility and control well beyond the capability (and imagination) of conventional storage.

It’s not fine print or secret sauce; it’s just smart storage. That’s how Tintri guarantees the performance of every single VM.
