
Tech Tuesday: Chargeback / Showback with Normalized IOPS

Every other week, join us for Tintri's Tech Tuesdays, where guest bloggers feature new Tintri VMstore capabilities and spotlight interesting new use cases.

A few weeks back, I wrote a series of blog posts (Part 1, Part 2, Part 3) on how Tintri simplifies chargeback/showback for service providers (SPs). With the release of manual quality of service (QoS) per virtual machine (VM) and the introduction of normalized IOPS, Tintri has made that value proposition for SPs even better.

Tintri Storage QoS

As we all know, Tintri is the only storage platform with an always-on, dynamic QoS service that enforces QoS at the vDisk level. With this release, Tintri customers can also manually configure QoS at the VM level.

QoS on Tintri systems is implemented on normalized IOPS (more on this below), and customers can configure minimum and/or maximum settings for individual VMs. The minimum setting guarantees performance when the system is under contention (when the performance reserves bar is above 100%), and the maximum setting places an upper limit on a VM's performance. The latency visualization has also been enhanced with contention and throttle latency breakdowns, so you can see exactly when a VM is being slowed by contention or by its own QoS limit and QoS never becomes a liability.
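As a rough mental model of how the two settings interact (this is my own illustration in Python, not Tintri's actual scheduler), the max always acts as a ceiling, while the min only kicks in as a guaranteed floor when the system is under contention:

```python
def effective_normalized_iops(demand: int, qos_min: int, qos_max: int,
                              fair_share: int, under_contention: bool) -> int:
    """Rough model of per-VM min/max QoS on normalized IOPS.

    - qos_max always caps what the VM can consume (throttle latency
      shows up when demand exceeds it).
    - qos_min is a floor that matters only under contention: the VM
      gets at least its min (up to its actual demand), even when its
      fair share of the system would otherwise be lower.
    """
    ceiling = min(demand, qos_max)
    if not under_contention:
        return ceiling
    floor = min(demand, qos_min)
    return max(floor, min(ceiling, fair_share))

# A VM demanding 5000 normalized IOPS with min=2000, max=4000:
print(effective_normalized_iops(5000, 2000, 4000, 1500, under_contention=True))   # 2000
print(effective_normalized_iops(5000, 2000, 4000, 1500, under_contention=False))  # 4000
```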


If you want to read more about QoS, head over to the blog post. There is also a great video on QoS available to view.

Normalized IOPS

Normalized IOPS are reported at a granularity of 8K: a reporting mechanism translates standard IOPS at whatever block size an application uses into 8K-equivalent IOPS. This creates a single scale for measuring the performance of different VMs and applications. So, in addition to reporting the standard IOPS per VM/vDisk, the VMstore also reports normalized IOPS for each VM. Here are some examples:

1000 IOPS @ 8K = 1000 Normalized IOPS

1000 IOPS @ 12K = 2000 Normalized IOPS

1000 IOPS @ 16K = 2000 Normalized IOPS
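The examples above are consistent with a simple ceiling-based conversion: each I/O counts as one normalized operation per 8K it transfers, rounded up. Here is a minimal Python sketch of that translation (the exact rounding the VMstore uses internally is my assumption):

```python
import math

NORMALIZED_BLOCK = 8 * 1024  # 8K reporting granularity

def normalized_iops(iops: int, io_size_bytes: int) -> int:
    """Translate raw IOPS at a given I/O size into 8K-normalized IOPS.

    Each I/O counts as ceil(io_size / 8K) normalized operations, so
    larger blocks contribute proportionally more load.
    """
    return iops * math.ceil(io_size_bytes / NORMALIZED_BLOCK)

print(normalized_iops(1000, 8 * 1024))   # 1000
print(normalized_iops(1000, 12 * 1024))  # 2000
print(normalized_iops(1000, 16 * 1024))  # 2000
```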

Why Use Normalized IOPS?

  • Different applications use different block sizes. Normalized IOPS capture the real workload generated by each application and enable an apples-to-apples comparison between applications.
  • It also makes QoS predictable. When we set up QoS using normalized IOPS, we know exactly what the result will be, instead of getting a result skewed by the application's block size.
  • It gives SPs one single parameter for implementing performance-based chargeback/showback. Instead of juggling IOPS, block size, and throughput, and then trying to do some sort of manual reporting and inconsistent chargeback/showback, SPs get the measurement out of the box (see the sketch after this list).
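To make that chargeback point concrete, here is a small Python sketch of a hypothetical rate card that bills purely on average normalized IOPS over the billing period (the rate and the per-VM figures are made up for illustration):

```python
RATE_PER_1K_NIOPS = 0.50  # hypothetical $/month per 1,000 normalized IOPS

def performance_charge(avg_normalized_iops: float) -> float:
    """Bill a tenant on a single parameter: average normalized IOPS."""
    return (avg_normalized_iops / 1000.0) * RATE_PER_1K_NIOPS

# Illustrative monthly averages for two VMs (not real numbers):
vms = {"SatSha_tingle": 2200, "SatSha_tingle-02": 3300}
for name, niops in vms.items():
    print(f"{name}: ${performance_charge(niops):.2f}/month")
```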

Let's use an example to see how SPs can take advantage of the new functionality.

Dashboard view of Tintri's normalized IOPS.

In the screenshot above, we have three VMs and can see both the IOPS and the normalized IOPS for each of them. Looking at IOPS alone, we would be inclined to think that the VM SatSha_tingle puts the highest load on the system, at roughly 2.7x the IOPS of SatSha_tingle-02. But the normalized IOPS tell the real story: SatSha_tingle-02 is actually generating almost 1.5x the load of SatSha_tingle. This is also reflected in the reserves the system allocates to each VM under Reserve%.

In an SP environment without normalized IOPS, the SP would either end up undercharging for SatSha_tingle-02 or would have to look at block sizes and do manual calculations to understand the real cost of running the VM. With normalized IOPS, the SP can standardize on one parameter for performance-based charging and make its chargeback/showback more accurate and more predictable.

Since normalized IOPS are also used for setting up QoS, SPs can now guarantee predictable performance to their customers by implementing min/max QoS based on normalized IOPS. With normalized IOPS, SPs now have four different parameters for chargeback/showback: provisioned space, used space, reserves, and min/max normalized IOPS.
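Putting those four together, a per-VM bill might look something like the sketch below (the field names, rates, and figures are placeholders of my own, not anything the VMstore produces):

```python
from dataclasses import dataclass

@dataclass
class VmBillingInput:
    provisioned_gib: float          # space provisioned to the VM
    used_gib: float                 # space actually consumed
    reserve_pct: float              # performance reserves allocated by the system
    qos_max_normalized_iops: int    # performance ceiling sold to the tenant

# Placeholder monthly rates, for illustration only.
RATES = {
    "provisioned_gib": 0.02,
    "used_gib": 0.10,
    "reserve_pct": 2.00,
    "qos_max_normalized_iops": 0.0005,
}

def monthly_bill(vm: VmBillingInput) -> float:
    """Combine the four chargeback/showback parameters into one line item."""
    return (vm.provisioned_gib * RATES["provisioned_gib"]
            + vm.used_gib * RATES["used_gib"]
            + vm.reserve_pct * RATES["reserve_pct"]
            + vm.qos_max_normalized_iops * RATES["qos_max_normalized_iops"])

print(f"${monthly_bill(VmBillingInput(500, 320, 4.2, 5000)):.2f}")  # $52.90
```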



Satinder Sharma is a storage and virtualization expert certified on Tintri, Sun, NetApp, Microsoft and VMware technologies, helping customers design their virtualization and cloud solutions using Tintri technology. He is based out of Toronto, Canada.

Adapted from Satinder Sharma's original blog post at Virtual Data Blocks, with permission.

Satinder Sharma / May 05, 2015
