Flash-based solid-state drives (SSD) have no moving parts; compared to traditional hard drives, they are more reliable, offer faster read times, and deliver consistent IO performance (latency) during high utilization. Most of us have seen the most immediate impact of SSD in the laptops and tablets we use every day. In those systems, SSD either lets the device fit the form factor we desire (like an iPad) or gives our laptop greater performance and reliability. When we can choose SSD, the choice means trading higher cost for greater performance, versus the lower cost and abundant capacity of traditional hard drives.
In the enterprise storage landscape, you may think SSD is just a high-end cache on the largest enterprise arrays. However, you would be wrong.
Yes, SSD made its first inroads on the end-user (or consumer) side, but it is quickly taking over enterprise storage. In a very small form factor, SSD provides solid IO performance. For example, in the past, to reach a given IOPS target, storage vendors would build arrays with large amounts of cache memory. That RAM cache was expensive and required redundant storage controllers and built-in backup power supplies (because the cache contents would be lost when power was removed). And even then, the RAM cache was small in comparison to solid-state disks.
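A quick back-of-the-envelope calculation shows why SSD changes the math here. The per-device IOPS figures below are rough, illustrative assumptions (a typical 15K RPM hard drive versus an enterprise SSD), not benchmarks or vendor quotes:

```python
import math

# Illustrative assumptions -- not measured figures.
TARGET_IOPS = 50_000   # hypothetical array-level requirement
HDD_IOPS = 180         # assumed per 15K RPM hard drive
SSD_IOPS = 30_000      # assumed per enterprise SSD

# Number of devices needed to hit the target on raw spindle/flash
# performance alone, ignoring RAID and controller overhead.
hdd_needed = math.ceil(TARGET_IOPS / HDD_IOPS)
ssd_needed = math.ceil(TARGET_IOPS / SSD_IOPS)

print(f"HDDs needed: {hdd_needed}")  # hundreds of spindles
print(f"SSDs needed: {ssd_needed}")  # a handful of drives
```

Under these assumptions, the IOPS target that once forced vendors into large (and volatile) RAM caches can be met with a few SSDs.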
In summary, SSD delivers high IO performance in a small form factor, without the expensive cache memory, redundant controllers, and backup power supplies that traditional arrays required.
As we know from buying laptops today, the downside is that, compared to traditional hard drives, SSD costs much more and offers much less capacity. If you replaced all the hard drives in your storage array with SSD, you would spend far more and receive significantly less capacity. However, that doesn't make SSD impractical.
The best use of SSD in enterprise storage arrays today is as a first-class tier in front of a larger pool of traditional hard drives, with software in the array intelligently balancing the most active files onto SSD and the less active files onto HD. This works particularly well for snapshot data and older versions of files that are infrequently accessed. Combine that intelligence with features like compression and deduplication, and you have today's ideal storage cocktail for the datacenter.
This approach stands in stark contrast to traditional rigid tiering levels, or to using SSD only as a cache.
What should storage and virtualization admins do? In this quickly changing market, here are three recommendations:
SSD is the single greatest factor changing enterprise storage today. It’s important to stay current on this rapidly changing storage topic.