Starting in 2010, virtual desktop infrastructure (VDI) began to make progress in the minds of corporate IT engineers and architects. The complexity of the infrastructure required to operate virtual desktops was coming to light, and for higher-budget IT departments or for specific use cases, VDI was attainable. VDI addresses three obvious use cases:
1) Infrastructure refresh: With the typical corporate IT refresh occurring every three to five years, VDI's coming of age meant existing hardware could be turned into a VDI client and new purchases could be thin-client only. Existing endpoints were stretched further and new endpoints were cheaper (in theory).
2) Branch simplicity: VDI brings the end-user compute environment back to the datacenter. As a result, there is less of a need (if any) to have critical IT resources at branch locations. Instead, a solid WAN connection is required to connect clients to their workstations. The cost to operate and maintain a branch decreases significantly.
3) Easier desktop maintenance: The knowledge requirement to maintain end-user desktops does not go away, but it does shift to a new set of technologies. Desktop engineers can look forward to updating a single image and having all newly created desktops update immediately.
Two significant roadblocks hindered pushing VDI from wish to fruition: cost and performance. The cost of the infrastructure to support VDI is significant; so significant that the ROI on a VDI investment is typically not seen for three to four years, about the time the next IT refresh would be amortized. Storage represents the largest cost: VDI is I/O-intensive, and the required storage infrastructure (the number of spinning disks) is fairly high (recall that SSD was not an option for most in 2010). VDI environments are sensitive both to available IOPS and to the latency of serving data from the arrays.
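The spinning-disk math behind that cost can be sketched quickly. The following is a minimal, illustrative sizing sketch; every number in it (per-desktop IOPS, read/write mix, RAID write penalty, per-spindle IOPS) is an assumption that varies widely by workload and hardware, not vendor guidance:

```python
import math

def spindles_needed(desktops, iops_per_desktop=15, write_ratio=0.8,
                    raid_write_penalty=2, iops_per_disk=175):
    """Estimate spinning disks needed to absorb steady-state VDI load.

    Defaults are illustrative: ~15 IOPS per desktop at a write-heavy
    steady state, RAID-10-style write penalty, 15K-RPM class spindles.
    """
    read_iops = desktops * iops_per_desktop * (1 - write_ratio)
    write_iops = desktops * iops_per_desktop * write_ratio
    # RAID mirroring/parity turns each host write into multiple disk writes.
    backend_iops = read_iops + write_iops * raid_write_penalty
    return math.ceil(backend_iops / iops_per_disk)

# 500 desktops at a write-heavy steady state:
print(spindles_needed(500))  # 78 spindles
```

Dozens of spindles for a few hundred desktops, before accounting for boot storms or capacity, is why the storage line item dominated early VDI budgets.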
The VDI wave was high and strong. However, no one was able to ride that wave.
Where is VDI now?
The years 2010 to 2012 brought some maturity to VDI products. Better user-profile management, more sophisticated image-cloning functionality, enhanced transport protocols, and better-performing storage infrastructure were among the benefits. A significant number of SSD-based storage arrays are now coming to market, making the ROI proposition much more palatable. One would think all the pieces are there for VDI to take off.
However, a new nemesis is in town and is causing interesting future-planning conversations: The everything-as-a-service model (XaaS) and cloud computing.
XaaS and cloud computing represent a move toward service providers and non-local computing resources. Applications are becoming more Web-based and require less local infrastructure; Google, Microsoft, salesforce.com, Dropbox, and a host of others provide solutions with little to no local infrastructure that are compelling for many companies. What good is a virtual desktop when email is hosted at Google or the CRM application is at salesforce.com?
Plus, the companies that did not consider VDI in 2010 and 2011 may have committed to a hardware refresh and cannot afford to implement VDI for a number of years. Who knows what the ecosystem will look like then (most likely more Web-based than ever)?
VDI is struggling to maintain relevance in an XaaS and cloud computing world, so what kind of future (use cases) does it have? Despite running into the resistance of the XaaS and cloud computing juggernaut, VDI retains use cases that give the solution value going forward. Unique control through VM-level actions for infrastructure functions, including snapshots, replication, and QoS, makes protection and performance predictable in production and accelerates test and development cycles.
VDI is not going away anytime soon. XaaS and the cloud offer viable alternatives to much of what VDI would have addressed a couple of years ago, but use cases like these will continue to drive VDI forward.