In this guest post on the Tintri blog, Leon Erlanger explores how maturing enterprise SSD solutions are poised to shake up data-center storage.
Recent developments in virtualization and enterprise solid-state storage (SSD) promise to shake up data-center storage, replacing scores of disk arrays with individual tiers of enterprise SSD, mainstream disk storage, and customized solutions for virtualization and other specialized functions. Here’s why.
High-performance disk storage doesn’t cut it
Disk storage has ruled data-center storage for years, with fast, expensive Fibre Channel disks dedicated to high-performance applications and slower, lower-cost SATA arrays or NAS devices used for file serving and other less performance-hungry apps.
Unfortunately, disk I/O performance comes up short for mission-critical database and online transaction processing applications, particularly when the number of transactions per minute relates directly to revenue.
IT has come up with all sorts of tricks to address this issue, including confining data to the outer tracks of multiple drives to minimize mechanical head movement (called short stroking) and splitting databases across several servers (sharding). However, both of these techniques mean a lot of extra expense, not only in server or disk storage hardware, but in data-center space, cooling, power, and management.
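The sharding idea above can be sketched in a few lines: each record key is hashed to one of several database servers, spreading the I/O load across them. The server names and the hash-based routing scheme here are illustrative assumptions, not details from the article.

```python
# Minimal sketch of hash-based sharding: rows are routed to one of
# several database servers by hashing the record key. Server names
# and the routing scheme are illustrative only.
import hashlib

SHARDS = ["db-server-0", "db-server-1", "db-server-2", "db-server-3"]

def shard_for(key: str) -> str:
    """Map a record key to the server that stores it."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every lookup for the same key lands on the same server.
assert shard_for("customer:1042") == shard_for("customer:1042")
```

Because routing is deterministic, reads and writes for a given key always hit the same server; the cost is exactly the extra hardware and management overhead the article describes.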
SSD solutions are maturing and coming down in price
SSD solutions have fallen rapidly in price during the past few years, and enterprise-ready solutions from vendors like Violin Memory, Texas Memory Systems, Fusion-io and Kaminario have started to take off. With no moving parts and IOPS in the tens of thousands, versus only about 200 for a hard disk, a single SSD can replace as many as 25 or 30 short-stroked high-performance hard disks, and can even reduce the number of database servers formerly needed for sharding.
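The consolidation arithmetic behind that claim is straightforward. Using the article's rough figures (about 200 IOPS per hard disk, tens of thousands per SSD), a quick sketch shows how many disks one SSD matches on raw IOPS; the specific SSD figure used below is an assumption for illustration.

```python
# Back-of-the-envelope consolidation math using the article's rough
# figures: ~200 IOPS per hard disk vs. tens of thousands per SSD.
import math

def disks_replaced(ssd_iops: int, disk_iops: int = 200) -> int:
    """How many hard disks one SSD matches on raw IOPS."""
    return math.ceil(ssd_iops / disk_iops)

# Even a modest 5,000-IOPS SSD matches 25 disks at 200 IOPS each;
# short-stroked drives deliver somewhat more than 200 IOPS apiece,
# which is why a practical estimate lands around 25-30 disks rather
# than higher.
print(disks_replaced(5000))
```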
Concerns about the lifespan limitations of flash have also dissipated as solution vendors employ tricks such as wear-leveling to extend the write endurance of these solutions. SSD solution vendors are also adding enterprise-level reliability and disaster-recovery capabilities to their products.
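Wear-leveling rests on a simple idea: since each flash block survives only a limited number of program/erase cycles, the controller steers writes toward the least-worn blocks so none wears out early. The toy allocator below illustrates the principle only; real SSD controllers are far more sophisticated.

```python
# Toy illustration of wear-leveling: each write is steered to the
# block with the fewest erase cycles, so wear spreads evenly across
# the flash. Real controllers are far more sophisticated.
class WearLeveler:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks

    def pick_block(self) -> int:
        """Choose the least-worn block for the next write."""
        block = self.erase_counts.index(min(self.erase_counts))
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
for _ in range(8):
    wl.pick_block()
# Wear stays even: every block absorbed exactly two writes.
assert wl.erase_counts == [2, 2, 2, 2]
```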
Finally, the maturity of MLC flash, which is considerably less costly than higher-end SLC, and the introduction of compression and data deduplication in SSD arrays have done much to close the price gap with expensive high-performance disk, particularly when you take all those other less tangible disk costs into account.
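Deduplication stretches usable flash capacity by storing each unique data chunk only once, which is part of how SSD arrays narrow the effective price-per-gigabyte gap with disk. A minimal content-addressed sketch, with illustrative names of my own choosing:

```python
# Minimal sketch of content-addressed deduplication: identical data
# chunks are stored once and referenced by their hash, stretching
# the array's usable capacity.
import hashlib

store: dict[str, bytes] = {}

def write_chunk(data: bytes) -> str:
    """Store a chunk once; repeated chunks cost no extra space."""
    key = hashlib.sha256(data).hexdigest()
    store.setdefault(key, data)
    return key

write_chunk(b"same OS image block")
write_chunk(b"same OS image block")  # duplicate, not stored again
assert len(store) == 1
```

Virtualized environments are a best case for this technique, since many VMs share near-identical OS images.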
Meanwhile EMC, Fusion-io and a host of other companies have created PCIe server-based SSD caching solutions that can sit in front of the data-center disk infrastructure, speeding up disk reads of data housed on even the slowest performing disk storage.
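The role of such a server-side flash cache can be sketched as a fast lookaside layer in front of slow disk: recently read blocks are answered from flash, sparing a disk seek. The LRU eviction policy and class names below are illustrative assumptions, not a description of any vendor's product.

```python
# Sketch of a server-side flash read cache in front of slow disk
# storage: recently read blocks are served from the cache, avoiding
# a disk seek. The LRU policy here is illustrative.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backend: dict, capacity: int = 1024):
        self.backend = backend      # stands in for slow disk storage
        self.cache = OrderedDict()  # stands in for PCIe flash
        self.capacity = capacity
        self.hits = 0

    def read(self, block_id: int) -> bytes:
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # refresh LRU position
            self.hits += 1
            return self.cache[block_id]
        data = self.backend[block_id]         # slow disk read
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least-recent
        return data

disk = {i: b"block" for i in range(10)}
cache = ReadCache(disk, capacity=4)
cache.read(1)   # miss: fetched from disk, now cached
cache.read(1)   # hit: served from the cache
assert cache.hits == 1
```

Note that a read cache like this accelerates only reads; writes still land on the disk tier behind it.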
Server and desktop virtualization are other applications starved for disk I/O, and SSD solutions such as Tintri's have proven to be a godsend as server virtualization has moved into mission-critical applications in the enterprise. As Tintri is happy to point out, when it comes to storage, virtualization works best when the storage architecture is designed for the virtual rather than for the physical world.
All of these trends point to a near-future data center in which high-performance disk storage for mission-critical database applications, analytics, and OLTP will be replaced by arrays of SSD; mainstream applications will be served by less costly high-capacity SATA disks in combination with high-performance SSDs providing submillisecond latency, either in the arrays or servers; and specialty applications such as server and desktop virtualization will be served by storage solutions architected for their particular storage requirements. It’s already started. A recent Objective Analysis report says that the enterprise SSD market is likely to grow at 55 percent per year through 2015.
Unique VM-level control over infrastructure functions including snapshots, replication and QoS makes protection and performance predictable in production, and accelerates test and development cycles.