Every so often, a seemingly game-changing technology hits the datacenter. Virtualization, for example, took the datacenter by storm; computing would never be the same. The same can be said for one of the latest arrivals: the solid-state drive (SSD).
One of the largest constraints in the storage environment has been the rotational speed of the spinning disk, which limits how quickly data can be read from or written to it. RAID techniques were developed both to protect data AND to increase the performance of arrays of disks, but the disks remained hamstrung by how fast they could spin. Even the fastest spinning disk (a 15k RPM SAS drive) provides approximately 200 IOPS. For the longest time, this was the pinnacle of disk performance. Need more performance? Just buy another tray of disks.
The introduction of SSD media in the datacenter was a major hit. Suddenly, a single SSD could, in theory, outperform an entire array of 15k RPM disks. A single SSD may support 5,000 IOPS; achieving the same IO performance would take 25 x 15k RPM disks (assuming 200 IOPS per disk). I'll be the first to admit that many factors impact IO performance, so thank you for allowing me some freedom in the example above. The moral of the story is that the introduction of SSD media in the datacenter blows away traditional storage performance models.
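The back-of-the-envelope math above can be sketched as a quick calculation. The 200 and 5,000 IOPS figures are the rough estimates from the example, not benchmarks:

```python
# Rough IOPS equivalence: how many 15k RPM disks match one SSD?
# Both figures are ballpark estimates from the example above.
HDD_15K_IOPS = 200    # approximate random IOPS for a 15k SAS disk
SSD_IOPS = 5_000      # approximate random IOPS for a single SSD

disks_needed = SSD_IOPS // HDD_15K_IOPS
print(f"One SSD ~= {disks_needed} x 15k RPM disks")
# prints "One SSD ~= 25 x 15k RPM disks"
```

As the text notes, real-world numbers vary widely with block size, read/write mix, and queue depth, so treat this as illustrative only.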
SSD media is often touted as the savior of the storage world: just throw some SSDs in and all the problems disappear. However, there are some things to keep in mind:
- SSDs are just another tool in the toolbox. Rotating disks still have a place in enterprise storage. While 10k and 15k RPM disks are the main victims of the SSD's arrival, SATA disks (7,200 RPM) still serve a major function: bulk storage. Their storage density (especially for the price) cannot be beat. Some use cases require bulk storage; others require high performance.
- SSDs need to be included as a COMPONENT in a holistic solution, not the entire solution in and of themselves. Using an SSD as a caching device may provide significantly more benefit than using it as a storage device; similarly, using SSDs as a tier of storage may work better. It really depends on the need.
- Be aware of the surrounding infrastructure. Various components in the datacenter are ripe to become the next bottleneck: CPU contention, RAM contention, network utilization, storage performance, and so on. Introducing SSDs can shift the bottleneck from one area to another. For example, a higher-performing storage layer may negatively impact the network, since data can now be accessed faster via iSCSI, SMB, NFS, or any other network-based data access protocol. That is not to say you should not add SSD media to storage environments, but keep it in the back of your mind.
- One side effect of rotational disks was that meeting performance targets meant buying many spindles, which created an abundance of storage space. Since SSD media performs significantly better per device, the aggregate amount of available storage space is significantly smaller. Higher-performing SSD media also commands a much higher price, and the cost for space rises with it: a 128GB, 5,000 IOPS SSD is much cheaper than a 128GB, 20,000 IOPS device. Suddenly, storage footprint becomes very important.
- SSD media becomes an easy way to mask problems. Early application developers had to work hard to create usable applications within restrictive resource limits; working at the level of bytes and kilobytes, they created some amazing applications. As more resources became available, developers got a little more lax, to the point that today there is an abundance of CPU and memory. If there is a problem with an application, rather than spend time fixing it, the easiest solution is sometimes to mask it by adding another CPU or more memory. With SSD media, the same is true for storage performance: rather than figure out what is wrong with a DB query, for example, just throw it on an SSD tier. No more problem, right?!
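The footprint point above can be made concrete with a price-per-IOPS versus price-per-GB comparison. The prices below are made-up illustrative numbers, not real quotes; only the relationship between them matters:

```python
# Hypothetical price/performance comparison for two same-capacity SSDs.
# Prices are invented for illustration, not actual market figures.
devices = {
    "128GB @ 5,000 IOPS":  {"capacity_gb": 128, "iops": 5_000,  "price_usd": 200},
    "128GB @ 20,000 IOPS": {"capacity_gb": 128, "iops": 20_000, "price_usd": 900},
}

for name, d in devices.items():
    per_iops = d["price_usd"] / d["iops"]
    per_gb = d["price_usd"] / d["capacity_gb"]
    print(f"{name}: ${per_iops:.3f}/IOPS, ${per_gb:.2f}/GB")
```

With these assumed numbers, the faster device is roughly competitive per IOPS but several times more expensive per GB, which is exactly why capacity footprint starts to matter once you pay for performance.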
With all of that said, SSDs provide a massive advantage in a couple of areas:
- Virtualization and the IO blender. Virtualization introduces some of the most random workloads in the datacenter: a single physical server can request data from all over the place. This type of workload can bring a rotational disk array to its knees and negatively impact the application environment. SSD media can absorb these requests and respond to the random nature of the virtualization environment far more efficiently. By introducing SSD media as a cache at the array, as a cache on the host, or as a tier of storage, the virtualization environment benefits from the higher storage performance.
- Storage tiering and capacity. Before SSD media arrived in the datacenter, one of the most compelling storage technologies was migrating data between tiers of disks, each with its own performance profile. An array could house tiers of SATA, 10k, and 15k RPM disks; the most frequently accessed data would automatically move to the highest-performing disks, and the rarely accessed data would move to the SATA tier. SSD media either adds a fourth tier or, using Tintri as an example, eliminates tiers altogether by seamlessly integrating SSD and SATA into a high-performance, cost-effective storage appliance. Who says you can't have your cake and eat it too!
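The tiering idea described above can be sketched in a few lines: track how often each block is accessed over an interval, then move hot blocks to flash and cold blocks to bulk SATA. This is a minimal toy model with invented names and thresholds, not any vendor's actual algorithm:

```python
# Minimal sketch of frequency-based storage tiering. All names and the
# threshold are illustrative assumptions, not a real product's logic.
from collections import Counter

HOT_THRESHOLD = 3  # accesses per interval before a block counts as "hot"

access_counts = Counter()
placement = {}  # block_id -> "ssd" or "sata"

def record_access(block_id):
    """Count one access to a block during the current interval."""
    access_counts[block_id] += 1

def rebalance():
    """Place each block on flash or bulk disk based on access frequency."""
    for block_id, count in access_counts.items():
        placement[block_id] = "ssd" if count >= HOT_THRESHOLD else "sata"
    access_counts.clear()  # start a fresh measurement interval

# Simulate one interval: an index is hit repeatedly, a backup file once.
for _ in range(5):
    record_access("db-index")
record_access("backup")
rebalance()
print(placement)  # prints {'db-index': 'ssd', 'backup': 'sata'}
```

Real arrays operate on extents or sub-LUN chunks, weigh recency as well as frequency, and rate-limit migrations, but the hot/cold decision at the core looks much like this.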
SSD media has been a great addition to the datacenter. When used as part of a complete solution, the benefits can be felt throughout the organization.