Flash-based Server-side Caching and What it Means for Storage

Bill Hill

Guest Blogger, vExpert

Solutions like Fusion-io have been around for some time. These products are typically built around NAND-based local storage, and moving application components onto NAND storage provides significant IO benefits. Server vendors have profited by selling rebranded Fusion-io products, such as:

  • IBM PCIe card as a performance disk tier
  • HP mezzanine card for blade solutions

Until the end of 2011, these solutions were presented as local storage. With the astronomical rise of virtualization in enterprise environments, the idea of local storage became blasphemy: many of the advantages of virtualization are realized through shared storage. Without going deep into the details, Fusion-io has developed a technique to turn server-side NAND-based storage devices into caching devices. Conceptually, the cache is a read-cache with write-passthrough. Using a filter driver in the guest OS, VMs can benefit from shared storage while taking advantage of locally available cache. Now storage vendors, like EMC with its Project Lightning/VFCache, are trying to play catch-up.
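To make the concept concrete, here is a minimal sketch of a read cache with write-passthrough. It is written in Python with hypothetical names purely for illustration; the real products implement this logic in a guest-OS filter driver, and this is not the Fusion-io or VFCache API. Reads are served from local flash when possible, while every write passes through to the shared array, with any cached copy updated so subsequent reads stay consistent.

```python
class ReadCacheWithWritePassthrough:
    """Illustrative sketch only: 'array' stands in for the shared storage
    backend and the dict 'flash' stands in for the local NAND device."""

    def __init__(self, array, capacity):
        self.array = array          # shared storage holds the authoritative data
        self.flash = {}             # local NAND cache: block -> data
        self.capacity = capacity    # number of blocks the cache can hold

    def read(self, block):
        if block in self.flash:             # cache hit: served from local flash
            return self.flash[block]
        data = self.array.read(block)       # cache miss: fetch from the array
        self._insert(block, data)           # warm the cache for the next read
        return data

    def write(self, block, data):
        self.array.write(block, data)       # write-passthrough: the array is
                                            # always updated
        if block in self.flash:
            self.flash[block] = data        # keep any cached copy consistent

    def _insert(self, block, data):
        if len(self.flash) >= self.capacity:
            # cache full: evict the oldest entry (simple FIFO policy)
            self.flash.pop(next(iter(self.flash)))
        self.flash[block] = data
```

Note that writes gain nothing from the cache in this model; they land on the array either way, which is exactly the write-heavy caveat discussed below.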

Moving data from the storage device to the local server is packed full of benefits: microsecond access times, tens of thousands of IOPS, and so on. But the bigger question becomes: how do storage vendors approach this new storage technique?
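A back-of-the-envelope calculation shows how sensitive that benefit is to the cache hit rate. The latency figures below are illustrative assumptions, not measurements or vendor specifications:

```python
# Effective read latency as a weighted average of flash and array latency.
# All numbers are illustrative assumptions, not vendor specifications.
flash_latency_us = 50       # local NAND read, microsecond-class
array_latency_us = 5000     # spinning-disk array read, roughly 5 ms

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    effective = hit_rate * flash_latency_us + (1 - hit_rate) * array_latency_us
    print(f"hit rate {hit_rate:4.0%}: effective read latency {effective:6.1f} us")
```

Under these assumptions, a 50 percent hit rate only halves effective latency; the dramatic gains arrive near a 99 percent hit rate, and anything that drives the hit rate down, as the caveats below do, erodes the benefit quickly.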

Storage design continues to be an extremely important component of enterprise architecture, but server-side caching may mean rethinking configuration. At first glance, it seems there should be fewer high-performing disks in the storage infrastructure, since the performance now lives locally in the server. Suddenly, populating storage arrays with high-capacity SATA disks sounds like a great cost savings: the storage array becomes truly purpose-built for capacity, and performance becomes an afterthought.

However, before the storage admin pulls the trigger on replacing disks, they will hopefully realize some important truths about server-side cache technology:

  • Cache warm-up: Cache devices start out empty. Until the working set has been read at least once, every request misses and is served from the array (see the sketch after this list).
  • Cache over-commit: Cache devices can be over-committed. If the active working set exceeds cache capacity, the cache constantly evicts and reloads data, and application performance becomes unpredictable.
  • Write-heavy applications: These cache devices benefit environments with significant read profiles, or aspects of applications that read often. Because the cache accelerates only read operations, with every write passing through to the array, heavily write-based applications will not see much of a performance benefit.
  • VM migrations: This is perhaps one of the worst offenders. In a perfect world, virtual environments are balanced and stable; in practice, VMs sometimes must migrate to another host. The move introduces a cold cache, and a user base accustomed to cache-accelerated performance sees it degrade significantly until the cache re-warms (the sketch after this list treats a migration as a restart with an empty cache). What was once an unnoticeable feature of virtualization becomes a carefully coordinated and calculated event.
  • Selected candidate VMs: Administrators may configure a VM to use the cache, or not to, and depending on the guest OS, a VM may not be a candidate at all. These noncandidate or nonselected VMs rely entirely on the storage array.
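Continuing the hypothetical sketch from earlier (this reuses the ReadCacheWithWritePassthrough class defined above; all names and numbers are illustrative), a few lines of simulation show the warm-up and over-commit effects, and why a migration hurts: the first pass over a working set misses everywhere, a repeat pass fits entirely in a large-enough cache, and a working set larger than the cache misses on every pass.

```python
class CountingArray:
    """Hypothetical array backend that counts reads, for illustration only."""
    def __init__(self):
        self.reads = 0
    def read(self, block):
        self.reads += 1
        return f"data-{block}"
    def write(self, block, data):
        pass                        # contents do not matter for this demo

def array_reads(cache_capacity, working_set_blocks, passes=2):
    array = CountingArray()
    cache = ReadCacheWithWritePassthrough(array, cache_capacity)
    for _ in range(passes):
        for block in range(working_set_blocks):
            cache.read(block)
    return array.reads

# Warm-up: the first pass misses 100 times; the second pass hits entirely.
print(array_reads(cache_capacity=200, working_set_blocks=100))   # -> 100

# Over-commit: a 300-block working set in a 200-block cache thrashes,
# so both passes miss on every read.
print(array_reads(cache_capacity=200, working_set_blocks=300))   # -> 600

# Migration: a new host means a new, empty cache, so the warm-up
# penalty repeats in full.
```

In each of these situations the misses land on the array, which is exactly why the underlying storage still has to be designed for performance.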

The fact of the matter is that with server-side caching solutions, a properly designed storage environment will be critical to support the instances where the cache cannot be used. The utopian dream of an array full of inexpensive high-capacity SATA disks is far from reality. Storage vendors should be able to embrace the server-side cache as a complement to, not a replacement for, a well-balanced storage solution.

At the end of the day, storage vendors do not appear to have much to worry about from server-side cache technology. A cache still requires an underlying storage environment to provide the actual data and functionality, and that environment needs to perform at peak levels to support noncache situations. Some design decisions may change slightly, but modern storage solutions are still necessary.