
Designing Storage Around Virtual Desktop Complexity

Designing around virtual desktops introduces some rather interesting complexity for physical hardware. Some resources, such as compute and memory, are relatively simple to account for and can leverage many of the advanced features we already know and enjoy in vSphere: transparent page sharing (TPS), host caching, and advanced compute scheduling.

The harder variable in the equation is the storage array, or, more specifically, how it will be sliced and consumed. Many conflicting articles have floated around on how best to serve IOPS, especially as they relate to linked-clone desktops, which refer back to a master replica image that provides a common set of data for each desktop. Some suggest using high-performance SSD for the replica, otherwise known as the Tier 0 layer, to provide the needed reads to hundreds (if not thousands) of desktops. Others challenge this practice and instead give those IOPS to the delta, or change, files that grow as each desktop is used.
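
To make the replica-versus-delta debate concrete, here is a minimal back-of-envelope sketch in Python. Everything in it is hypothetical: the tier_iops helper and every per-desktop figure are illustrative assumptions, not measurements or vendor guidance. The point is only to show how the read/write mix decides which tier absorbs the IOPS.

```python
# Back-of-envelope split of linked-clone IO between storage tiers.
# All figures below are hypothetical; measure your own pool before sizing.

def tier_iops(desktops, reads_per_desktop, writes_per_desktop,
              replica_read_fraction):
    """Split aggregate desktop IOPS between the shared replica image
    and the per-desktop delta disks.

    replica_read_fraction is the portion of reads still satisfied by
    the master replica; the rest hit blocks already rewritten in the
    delta. Writes always land on the delta.
    """
    total_reads = desktops * reads_per_desktop
    total_writes = desktops * writes_per_desktop
    replica = total_reads * replica_read_fraction
    delta = total_reads * (1 - replica_read_fraction) + total_writes
    return replica, delta

# A task-worker pool: light writes, most reads served by the replica.
print(tier_iops(500, reads_per_desktop=8, writes_per_desktop=4,
                replica_read_fraction=0.9))   # (3600.0, 2400.0)

# A heavily personalized pool: deltas have diverged and writes dominate.
print(tier_iops(500, reads_per_desktop=8, writes_per_desktop=12,
                replica_read_fraction=0.4))   # (1600.0, 8400.0)
```

Plug in figures measured from your own pool and the same split shows why neither camp is universally right.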

Dominoes

The problem is that both approaches are potentially right; it boils down to that annoying consultant answer of, “it depends.” If a lot of changes are occurring on the desktop, and the delta (change) disk is going to grow significantly, that may in fact be where you want to put your expensive, fast disk. For more read-intensive workloads, such as a very task worker-oriented desktop, the delta file may not get much use at all, and the replica deserves the higher-performance disk. Keep in mind that host-side caching features such as the View Storage Accelerator (also known as CBRC) and vFlash, which allow you to use host memory or SSD as a local cache, can dramatically alter the IO seen by the storage array: they can shift how much IO goes to the replica versus the delta files, and change the ratio of reads to writes.
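
As a rough illustration of that shift, the sketch below (again with purely hypothetical hit rates and IOPS figures) shows how a host-side read cache can invert which tier looks hot to the array:

```python
# Continuing the hypothetical model: a host-side read cache (in the
# spirit of CBRC/vFlash) absorbs some replica reads before they ever
# reach the array. The hit rate here is an illustrative assumption.

def array_iops(replica_iops, delta_iops, cache_hit_rate):
    """IOPS the array actually sees once a host read cache is active.
    Only replica reads are assumed cacheable; delta IO passes through."""
    return replica_iops * (1 - cache_hit_rate), delta_iops

# With a 75% hit rate on the replica's common blocks, the "hot" replica
# tier cools off and the deltas dominate what the array must serve.
print(array_iops(replica_iops=3600, delta_iops=2400, cache_hit_rate=0.75))
# (900.0, 2400)
```

With a healthy hit rate on the replica’s common blocks, the delta disks quietly become the dominant consumer of array IOPS, which is exactly the scenario the second camp designs for.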

I often think of virtual desktops as a row of dominoes. When one desktop begins to crush the storage with a spike in IOPS or raw throughput, latency climbs and the pain can rapidly spread to the others, resulting in what is known as a “storm.” Typically, these are boot storms or antivirus storms.

Boot storms occur when a significant number of desktops are all booted (or powered off) at the same time. Startup operations create a vast number of reads, while shutdowns typically generate a lot of writes. Antivirus storms can lead to both, depending on what is being scanned. As such, desktops benefit from access to flash-quality performance at all times; it’s the same reason everybody loves having an SSD in their laptop. You don’t want to tune disk performance manually, so I’d advise landing desktops on a storage array that can serve the vast majority of your IO from flash. This ensures that whatever the workload, the host caching features you use, or the mix of reads versus writes, you will have fast, responsive desktops.
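
To see why a storm is so punishing, here is a deliberately rough sizing sketch; every per-desktop number is made up for illustration, not vendor guidance:

```python
# Rough boot-storm arithmetic. Every number here is illustrative: a
# desktop that idles around 10 IOPS can demand an order of magnitude
# more while the OS boots, and a storm stacks those peaks together.

DESKTOPS = 1000
STEADY_IOPS = 10       # per desktop, logged in and idle
BOOT_IOPS = 150        # per desktop during startup, mostly reads
CONCURRENT_BOOT = 0.3  # fraction booting at once (say, Monday 8 a.m.)

steady_load = DESKTOPS * STEADY_IOPS
storm_load = (DESKTOPS * CONCURRENT_BOOT * BOOT_IOPS
              + DESKTOPS * (1 - CONCURRENT_BOOT) * STEADY_IOPS)

print(f"steady state: {steady_load:,} IOPS")      # 10,000 IOPS
print(f"boot storm:   {storm_load:,.0f} IOPS")    # 52,000 IOPS
```

Even with these conservative assumptions, the storm multiplies the steady-state load roughly fivefold, which is exactly the kind of spike an array serving its IO from flash absorbs without manual tuning.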

Desktops are ever changing and volatile in their ways. Not only is designing for the current virtual desktop state difficult, but even a successful design requires a continuous pair of eyeballs on performance, with potential architectural changes down the road. A single application can drastically change the read, write, and delta behavior of a desktop pool, and it is very often introduced without the operations and engineering teams knowing about it until the support calls come in.

Chris Wahl / Sep 21, 2012

Chris Wahl is a datacenter engineer at Ahead and a virtualization-aholic living in the Chicago area. He has over 13 years of IT experience in enterprise infrastructure design and implementation.
