
Quality of Service (QoS)

QoS refers to the level of performance a storage system or network offers, or guarantees, when handling workloads and applications of different volumes, while maintaining agility, flexibility, visibility and speed across virtual environments and the architectures that make them up. Storage QoS ensures that specific operations always get the resources they need to perform at their highest level by assigning a number of input/output operations per second (IOPS) to each application.
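The idea of reserving IOPS per application can be sketched in a few lines. This is a hypothetical illustration, not a real storage API: the `QosPolicy` class, its method names and the IOPS figures are all invented for the example.

```python
class QosPolicy:
    """Tracks guaranteed (minimum) IOPS reservations against an
    array's total IOPS capacity. Hypothetical sketch only."""

    def __init__(self, total_iops):
        self.total_iops = total_iops
        self.reservations = {}  # application name -> guaranteed IOPS

    def reserve(self, app, min_iops):
        """Reserve a minimum IOPS budget for an application,
        refusing the reservation if capacity would be exceeded."""
        reserved = sum(self.reservations.values())
        if reserved + min_iops > self.total_iops:
            raise ValueError(f"cannot guarantee {min_iops} IOPS for {app}")
        self.reservations[app] = min_iops

    def available(self):
        """IOPS still unreserved and available for new guarantees."""
        return self.total_iops - sum(self.reservations.values())


policy = QosPolicy(total_iops=100_000)
policy.reserve("oltp-db", 40_000)
policy.reserve("vdi-pool", 25_000)
print(policy.available())  # 35000
```

Real arrays enforce these guarantees in the I/O scheduler rather than as a simple bookkeeping table, but the admission-control logic (refuse a guarantee the hardware cannot back) is the same principle.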

Data storage QoS and shared workload management

Within virtual or enterprise cloud environments, multiple tenants often work within the same application, or across multiple applications, simultaneously. Data storage QoS has become crucial to preventing one tenant's workload from degrading the applications or workloads of others. For virtual environments and the storage systems that run them to meet QoS performance standards, there must be enough storage performance for all tenants to operate as many workloads as they need to maintain productivity. This means data centers running on all-flash arrays (AFAs) need to offer infrastructures with shared storage: flexibility at the virtual machine (VM) level that allows numerous operators to log on and get to work.

Data storage is architected very differently once it is virtualized, which changes how tenants work within an infrastructure. Because of this difference, there has to be a way for all tenants to access the apps, data, proprietary information and software they need to do their jobs. Shared storage makes this possible: it pools the resources operators and admins need, while centralizing management of everything within your storage.

How QoS differs across cloud types: from automation to provisioning

When businesses opt to put the agility of public cloud within their data centers by using Tintri Enterprise Cloud, functions like provisioning and service level agreements (SLAs) are all handled through automation at the AFA level. For your business, that means as many users as you choose can run as many applications concurrently as you desire; it's simply a matter of choosing the Tintri All-Flash Storage Series best suited to your business. When you choose third-party public cloud environments, not only do you (usually) have to pay additional fees for automation, but provisioning for storage sharing and other functions may not be up to you. Essentially, without a private enterprise cloud architecture for your venture, storage QoS is almost always out of your hands.

QoS in application-level AFAs means never having to throttle apps for being resource hogs again

When it comes to legacy storage, such as HDD arrays managed through logical unit numbers (LUNs) and volumes, it is very often the case that an application must be throttled. Throttling regulates an application's processing rate, especially when it is consuming resources that are critical for keeping other apps, software or files running. Throttling is needed far less often on solid-state drives (SSDs) built on all-flash memory, just another way AFAs provide a higher standard of QoS for the enterprise-level user and their tenants.
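Throttling as described above is commonly implemented with a token-bucket rate limiter: each operation spends a token, and tokens refill at the permitted rate. The sketch below is a generic illustration of that technique, not code from any storage product.

```python
import time


class TokenBucket:
    """Generic token-bucket throttle: permits `rate` operations per
    second, with short bursts up to `capacity` operations."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if an operation may proceed now, False if it
        should be delayed (throttled)."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate=1, capacity=2)  # 1 op/sec, burst of 2
```

With this limiter, two back-to-back calls to `allow()` succeed from the initial burst allowance, and a third immediate call is refused until enough time passes for a token to refill.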

And when you choose the Tintri All-Flash Array, you can say goodbye to throttling altogether, because VM-level visibility through the Tintri Global Center (TGC) makes it possible to see where latencies are occurring at the most granular level: the single VM. Troubleshooting in TGC is a breeze: a click of the mouse or a tap of the screen allows you to resolve latency and other issues that would require throttling on traditional storage systems. That means higher performance, more agility and flexibility, and far better QoS for your entire storage architecture and network infrastructure.

Per-VM QoS with Tintri VM-Aware Storage (VAS)

When you choose Tintri VM-Aware Storage (VAS), you're also choosing the best-in-class QoS available for storage today. VAS offers per-VM QoS, which delivers performance SLAs to every single VM across your entire flash storage array. It also means application isolation: each app gets its own lane within the virtualized data center, ending the noisy-neighbor dilemma. In other words, a single application can't monopolize bandwidth, because it is housed within a single VM with its own OS.

When you configure QoS at the per-VM level, you can define and manage a variety of service tiers based on performance requirements and the needs of your IT staff, other employees and your customers. Applying per-VM QoS policies allows you to manage mixed workloads with different service-level requirements, and you can host multiple hypervisors, of varying numbers and types, on the same Tintri VMstore within one virtualized data center. Best of all, it's all manageable through TGC, a single pane of glass, regardless of how many VMstores you're working with.
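The service tiers described above can be pictured as a simple mapping from a tier name to per-VM performance bounds. The tier names and IOPS figures below are illustrative assumptions, not Tintri defaults, and `qos_for_vm` is a hypothetical helper invented for this sketch.

```python
# Illustrative service tiers: each tier pairs a guaranteed floor
# (min_iops) with a ceiling (max_iops) applied per VM.
TIERS = {
    "gold":   {"min_iops": 10_000, "max_iops": 50_000},
    "silver": {"min_iops": 2_000,  "max_iops": 10_000},
    "bronze": {"min_iops": 0,      "max_iops": 2_000},
}


def qos_for_vm(vm_name, tier):
    """Return the QoS settings a given VM would receive under a tier.
    Hypothetical helper for illustration only."""
    settings = TIERS[tier]
    return {"vm": vm_name, **settings}


print(qos_for_vm("sql-prod-01", "gold"))
# {'vm': 'sql-prod-01', 'min_iops': 10000, 'max_iops': 50000}
```

Tying each VM to a tier like this is what lets mixed workloads with different service-level requirements share one array: a bronze-tier test VM is capped well below the floor guaranteed to a gold-tier production database.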
