
Comparing Storage Options for Virtualized and Cloud Environments

TechTarget

Virtualization has become a fact of life for IT organizations. More and more servers and their workloads are being virtualized every year, with the tipping point occurring in 2012 when, for the first time ever, the majority of x86 servers were virtualized.1 This tidal wave of server virtualization has undoubtedly provided many benefits to organizations, from Capex savings and improved scalability to simpler server management.

But the growing trend toward virtualized infrastructure has had an unintended consequence for IT departments, particularly storage administrators: Storage requirements are going through the roof, accelerating financial and storage management pressures on administrators. “At first, virtualization uncovered inefficiencies in traditional fixed architectures of SANs [storage-area networks],” says Jon Toigo, a leading consultant on data storage issues and founder of the Data Management Institute. “For all of its benefits, virtualization has dramatically expanded storage requirements,” he added, pointing out that research firms IDC and Gartner have each documented growth in data storage requirements of as much as 300% to 600%.

For business stakeholders, this expansion in data storage requirements has had a significant impact on operations. “It’s dramatically raised storage costs, and it has become the single biggest reason why new projects fail,” said Toigo. “The growth in storage costs severely impacts project budgets, and organizations are looking for new ways to deal with [storage growth in virtualized environments].”

Although the actual management of data storage for virtual machines (VMs) is improving, the lingering challenges of scaling capacity and managing virtualized infrastructure need to be addressed. Storage magazine polled its readers in its annual 2014 Purchasing Intentions Survey and discovered that 58% of respondents were using more storage with VMs than before, and 48% intended to buy new tools to help them get a better handle on VM storage management.2

Making smart decisions about the massive growth of data storage requirements in virtualized environments is more important—and more challenging—than ever. Many IT organizations are struggling with the shortcomings of legacy storage architecture that, while continuing to work well in physical infrastructure environments, has demonstrated critical limitations in increasingly virtualized environments. While traditional data storage vendors have tweaked, and in some cases substantially overhauled, their solutions to adapt to this new reality, many of those offerings are still in transition, forcing IT organizations to evaluate products that are, in many cases, still works in progress.

Fortunately, a new class of storage solution—one purpose-built for VM-based environments—has emerged as a viable and attractive alternative for IT departments and administrators looking for storage options that work well today and, especially, provide a seamless bridge to the future.

This document provides an overview of how to evaluate the strengths and weaknesses of legacy storage systems as they seek to address VM-based infrastructure requirements. It also describes new storage options that have been designed from the ground up for virtual environments.

Storage Challenges in a Virtualized IT Environment

The IT community is in the midst of a major industry shift. For decades, the focus for IT organizations was to find ways to take advantage of impressive advances in infrastructure such as storage, servers and networking in order to work faster, more reliably and more securely. But all that focus on infrastructure advances has given way to the reality that applications—not faster, cheaper or even better hardware—are the key to a more efficient, agile and transformative IT framework.

This is having a significant impact on data storage specialists within IT departments, because traditional storage infrastructure—even with its undeniable improvements in capacity, performance and reliability—isn’t built or optimized for virtualization and cloud computing. As data centers became more virtualized, and as infrastructure and applications increasingly became cloud-based services, it was soon clear that storage administrators were often speaking a different language than their co-workers in the IT department who were responsible for helping business users accomplish their most important work. Storage administrators grew up talking about logical unit numbers (LUNs) and volumes, while the focus increasingly moved to VMs and applications.

Key storage functions that are part of virtualization—such as storage allocation, management and performance troubleshooting—are extremely difficult and expensive to accomplish with legacy storage. A major reason is the random I/O demand of virtualized applications, which puts significant pressure on storage systems to keep up. This has often required IT organizations to overprovision storage—an expensive and inefficient workaround—in order to supply sufficient capacity and performance for VM-based applications.
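To make the random I/O point concrete, here is a minimal, illustrative sketch in Python (all numbers are made up) of the so-called “I/O blender” effect: each VM issues sequential I/O within its own virtual disk, but the hypervisor multiplexes those streams, so the shared array sees a request pattern that is effectively random.

import random

# Toy illustration of the "I/O blender" effect (numbers are made up):
# each VM issues sequential I/O within its own virtual disk, but the
# shared array sees the interleaved stream, which is effectively random.

NUM_VMS = 4
IOS_PER_VM = 5
REGION_SIZE = 1000  # logical blocks reserved per VM on the shared LUN

# Per-VM sequential streams: VM i touches blocks i*REGION_SIZE, +1, +2, ...
streams = {vm: iter(range(vm * REGION_SIZE, vm * REGION_SIZE + IOS_PER_VM))
           for vm in range(NUM_VMS)}

# The hypervisor multiplexes requests from all VMs in arrival order.
arrival_order = [vm for vm in range(NUM_VMS) for _ in range(IOS_PER_VM)]
random.shuffle(arrival_order)

prev_block = None
jumps = 0
for vm in arrival_order:
    block = next(streams[vm])
    if prev_block is not None and block != prev_block + 1:
        jumps += 1  # a non-sequential transition (a seek, on spinning disk)
    prev_block = block

print(f"{jumps} of {NUM_VMS * IOS_PER_VM - 1} transitions were non-sequential")

Running the sketch shows that nearly every transition the array sees is non-sequential, even though every individual VM behaved sequentially; overprovisioning spindles has traditionally been the workaround for exactly this pattern.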

And, as if a hypervisor-based architecture weren’t difficult enough, consider that more and more enterprises are built on heterogeneous, multi-hypervisor architectures. This has increased storage management complexity exponentially.

The main problem, however, is the inability of legacy storage infrastructure to give IT and business users critical insight into applications at the VM level. This lack of a one-to-one map between VMs and the LUNs of traditional storage systems has surfaced numerous inefficiencies that techniques such as application tuning or buying additional network bandwidth can’t solve efficiently or cost-effectively.
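A toy sketch, using hypothetical VM and LUN names, shows why the missing one-to-one map matters: when several VMs’ virtual disks share one LUN, any LUN-granular operation necessarily touches all of them.

# Hypothetical names, for illustration only: four VMs' virtual disks are
# carved out of a single LUN, so any LUN-level operation hits all of them.
lun_to_vms = {
    "LUN-07": ["web-01", "web-02", "db-01", "build-agent-3"],
}

def replicate_lun(lun: str) -> list[str]:
    """LUN-granular replication: the smallest unit is the whole LUN."""
    return lun_to_vms[lun]  # every co-resident VM gets replicated

# Protecting only "db-01" still drags along three unrelated VMs, which is
# exactly the overprovisioning the IDC quote later in this paper describes.
print(replicate_lun("LUN-07"))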

Ideally, organizations should look for storage solutions that are purpose-built for providing that application visibility in virtualized and cloud environments, rather than trying to make legacy storage infrastructure accomplish tasks that it never was intended to do.

The bottom line is that new storage solutions are available for IT departments that need:

  • Improved availability for their virtualized and cloud-based applications.

  • Improved resilience for their overall infrastructure.

  • Easier provisioning of storage (and elimination of the need to overprovision storage).

  • Support for heterogeneous hypervisor environments.

  • Support for multi-tier applications.

Application-Aware Storage: What It Is, What It Means

At the heart of this new storage requirement is the concept of application-aware storage, designed specifically for VM- and cloud-based environments.

With this transformed focus on applications and the growing adoption of virtualization and cloud computing, IT organizations are adapting their storage infrastructure in order to provide visibility into applications at the VM level. Specifically, they are seeking out solutions following a model of “see, learn and adapt” in order to simplify storage management and to efficiently manage the application environment.

In order to achieve VM-level visibility into application behavior and performance, new storage solutions should be able to see across hosts, networks and storage systems. This allows them to identify performance problems in real time, rather than reacting in panic mode when storage systems encounter problems with slowing I/O, stretched-out latency periods or even unavailable applications. These systems should be able to identify performance bottlenecks at the hypervisor, network and storage levels in an integrated manner, and to extract deep-level insights into specific VMs and virtual disks.
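As an illustration only (the metric names below are assumptions, not any product's actual API), per-VM latency can be decomposed into host, network and storage contributions so that the bottleneck layer is immediately apparent:

# Hedged sketch: the per-layer metrics below are assumptions, not a real
# product API. Decompose each VM's end-to-end latency into host, network
# and storage contributions to locate the bottleneck layer.

samples = {
    # vm_name: (host_ms, network_ms, storage_ms), averaged over an interval
    "exchange-01": (0.4, 0.3, 1.1),
    "vdi-pool-17": (0.5, 0.2, 9.8),   # storage-bound
    "ci-runner-2": (6.2, 0.4, 1.0),   # host-bound, e.g., CPU contention
}

for vm, (host, net, storage) in samples.items():
    total = host + net + storage
    layer, worst = max(
        (("host", host), ("network", net), ("storage", storage)),
        key=lambda kv: kv[1])
    print(f"{vm}: {total:.1f} ms end-to-end, dominated by {layer} ({worst} ms)")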

Since virtualized and cloud environments have added significant complexity to storage management, these solutions also must learn how to better deploy and manage application-aware storage, regardless of hypervisor. The VM-level focus helps IT organizations eliminate the unnecessary levels of mapping and complexity typically required by traditional storage infrastructure. As a result, such a system innately understands and enables functionality such as VM-level replication, quality-of-service management, capacity monitoring, snapshots and performance monitoring. This frees up IT personnel normally required to manually analyze and respond to storage performance and application activity to focus on other, more essential tasks.
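When the VM, rather than the LUN or volume, is the unit of management, each of those data services can attach directly to a VM. The following sketch is purely hypothetical (the VMPolicy class and its fields are illustrative, not a vendor API), but it shows the shape of per-VM policy that this model enables:

from dataclasses import dataclass
from typing import Optional

# Hypothetical policy object, for illustration only (not a vendor API):
# once the VM is the unit of management, data services attach per VM
# rather than per LUN or volume.

@dataclass
class VMPolicy:
    vm_name: str
    snapshot_schedule: str        # e.g., a cron-style expression
    snapshot_retention: int       # number of snapshots to keep
    replicate_to: Optional[str]   # remote system name, or None
    max_iops: Optional[int]       # per-VM QoS ceiling, or None for unlimited

policies = [
    VMPolicy("db-01", "0 * * * *", 24, "dr-site-a", 20_000),
    VMPolicy("vdi-pool-17", "0 2 * * *", 7, None, 2_000),
]

for p in policies:
    print(f"{p.vm_name}: snapshots {p.snapshot_schedule!r}, "
          f"keep {p.snapshot_retention}, replicate to {p.replicate_to}")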

Finally, storage built for VM and cloud environments should be able to adapt to the always-changing nature of applications and data in a services-based architecture. Many virtual workloads should be able to run simultaneously without performance tuning, including mixed workloads such as performance-centric databases and virtual desktop VMs that demand extremely low latencies. Additionally, such systems must be able to scale to extremely large numbers of VMs without special tuning, as it becomes increasingly easy and affordable for organizations to add new VMs to their infrastructure.

Application-aware storage has quickly evolved beyond the “interest” stage among IT professionals to the point where it has garnered considerable attention and increasing adoption. In a recent report, research firm IDC noted that “legacy storage architectures are not able to cost-effectively meet the demands for performant, scalable, efficient and agile storage.”3 Noting that traditional storage principles such as LUNs and volumes are hallmarks of physical storage infrastructure rather than today’s services-based IT framework, IDC said organizations are looking for smarter, easier-to-manage and more adaptable storage focused on applications.

“The need for application-aware storage is emerging ... This will become a central tenet in the next-generation storage platform that is designed specifically to deal with the needs of virtual infrastructures,” according to the report. IDC further noted that respondents to its 2013 Storage Purchasing Trends QuickPoll Survey identified application workload requirements as the most important factor in selecting storage products.4

The Journey to Private Cloud

Many companies are looking to implement some form of private cloud internally to meet these needs within the enterprise. While a complete discussion of private cloud is beyond the scope of this white paper, most private clouds are based on virtualization and add automation of infrastructure or application deployment and lifecycles, self-service capabilities and chargeback or showback for internal accounting.

Private cloud is even more focused on the application, and applications in a cloud model are being re-architected to be more scalable and more quickly updated. These re-architected applications consist of multiple, smaller, stateless VMs, where capacity can be added by simply adding new VMs to the application cluster, as the sketch below illustrates. In these cloud environments, both the raw number of VMs running at any one time and the rate of change in VMs can increase by an order of magnitude over the typical enterprise virtualization environment. The benefits of this new architecture are more resilient, scalable and modular applications that are easier to update to meet market demands for innovation and constant updates. Issues with storage scalability and responsiveness are even more of a concern in private cloud environments than in traditional virtualization, and they endanger these agility and scalability gains.
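A toy model of that scale-out pattern (all names and throughput numbers are illustrative assumptions): capacity planning becomes purely horizontal, so VM counts grow directly with demand.

# Toy model of the scale-out pattern described above (all names and
# throughput numbers are illustrative): stateless worker VMs sit behind
# a load balancer, so capacity grows by adding instances rather than by
# resizing any single VM.

REQUESTS_PER_VM = 500  # assumed per-VM throughput ceiling, in requests/s

def vms_needed(target_rps: int) -> int:
    """Horizontal capacity planning: add VMs until demand is met."""
    return -(-target_rps // REQUESTS_PER_VM)  # ceiling division

for demand in (800, 4_000, 40_000):
    print(f"{demand} req/s -> {vms_needed(demand)} stateless VMs")

An order-of-magnitude jump in demand translates directly into an order-of-magnitude jump in VM count, which is exactly the scaling pressure this section describes storage having to absorb.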

Evaluating Two Options: NetApp and Tintri

The storage industry has always been marked by ferocious competition between well-established industry giants and fast-moving, nimble suppliers with innovative ways to help IT organizations deal with their storage challenges. IBM’s monolithic market position in storage eventually was eroded by aggressive product development and improved value propositions from EMC, which in turn came under intense pressure from a raft of companies led by Network Appliance (now known as NetApp). Now, NetApp’s leadership position is being challenged by a relative newcomer focusing on a new paradigm based on the concept of smart storage: Tintri.

About NetApp

Founded in 1992 as Network Appliance, the company went public in 1995 and eventually changed its official name to NetApp. The company has had a tight focus on file-based storage systems, and that concentration helped the company become a major player in the enterprise storage marketplace. The company posted 2013 revenue of $6.3 billion, and employs more than 12,000 people operating in 150 different countries.

Like most enterprise storage companies, NetApp offers a broad range of storage systems. The company’s low-end FAS2000 series is designed for budget-conscious organizations or those not necessarily requiring a full set of storage features and functionality, such as in departmental or remote-office applications. The top-of-the-line FAS6000 series offers an extensive set of features and functionality, comes with a much higher price tag and is designed for the highest-performance, I/O-intensive applications, like online transaction processing, seismic/geophysical exploration and big data analytics. NetApp’s midrange solution is the FAS3000 series and is the company’s bread-and-butter solution for the broadest range of customer requirements.

NetApp’s FlexArray virtualization software is designed for use with the company’s high-end systems in order to support comprehensive storage virtualization of both NetApp and competitive storage hardware. It is designed to unify and simplify IT operations by expanding support to both NAS and SAN workloads, offering a common management structure and reducing capacity requirements while also cutting planned downtime through improved storage management tools.

The FAS family is primarily configured as network-attached storage (NAS), but it also supports block storage over both Fibre Channel and iSCSI. NetApp has historically positioned its product line using the concept of unified storage, supporting the integration of NFS, iSCSI and Fibre Channel. More recently, the company has added other technologies, such as the Engenio storage platform acquired from LSI Corp. in 2011, and has continued to extend its long-standing ONTAP storage management software platform. NetApp has recently added new functionality to ONTAP in hopes of making it a more appropriate solution for virtualization and cloud environments. ONTAP is designed to work across the full range of NetApp hardware solutions.

Additionally, NetApp has looked to increase its support for flash storage for I/O-intensive applications by adding Flash Cache intelligent caching to boost storage performance. NetApp’s FAS line offers well-regarded data deduplication functionality that provides storage savings, and it supports flash pools primarily for read caching.

About Tintri

Founded in 2008 by a group of experienced executives, including the former executive vice-president of research and development at VMware, Tintri is a well-funded organization with the backing of such leading venture capital firms as Insight Venture Partners, Lightspeed Venture Partners, Menlo Ventures and New Enterprise Associates. The company was established to build a new class of storage products purpose-built for application-aware IT architectures such as virtualization and cloud computing.

Tintri’s main storage platform is Tintri VMstore™, which is designed for what the company has termed “application-aware storage architecture.” Tintri VMstore is aimed at the growing gap between traditional storage infrastructure designed for physical environments and the storage requirements in virtualized and cloud environments. The company’s avowed goal is to provide storage that can predictably and efficiently handle all virtualized applications and desktops, allowing the IT team the time and latitude to focus on more transformative tasks instead of storage management and manually tuning storage performance to meet the needs of VM-based storage.

The Tintri VMstore reportedly is designed for easy setup, taking minutes instead of hours or even days to deploy greenfield storage. Tintri says VMstore avoids confusing and complex configuration or tuning because administrators work only with auto-aligned VMs and vDisks, rather than with legacy storage abstractions such as LUNs and volumes. The company’s FlashFirst™ storage architecture reportedly delivers 99% of the I/O performance typically associated with all-flash solutions.

Another important feature of the Tintri VMstore is its ability to serve thousands of different types of VMs from a single solution with VM-level visibility, QoS and performance isolation. The result is the ability to offer in-depth insight into application performance and behavior at the VM level, providing a comprehensive, global view of all VMs in order to identify performance and capacity trends without dealing with the physical hardware.

This is said to allow for real-time identification and remediation of performance hotspots at the hypervisor, network and storage levels with end-to-end performance visualization capabilities. The Tintri VMstore is also designed to meet the potentially complex needs of heterogeneous, multi-hypervisor environments.

Individual VMs can be protected using unique, customizable policies for VM-level snapshots, requiring no additional space and providing WAN-efficient replication at a fraction of the bandwidth normally required.
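The bandwidth claim rests on a generic technique worth sketching: replicating only the blocks that changed between snapshots. The code below is a minimal illustration of that general idea, not a description of Tintri's actual implementation.

# Generic changed-block sketch (not Tintri's actual implementation):
# shipping only the blocks that differ between two snapshots is what
# lets per-VM replication use a fraction of full-copy bandwidth.

def delta_blocks(prev_snap: list[bytes], curr_snap: list[bytes]) -> dict[int, bytes]:
    """Return only the blocks that changed between two snapshots."""
    return {i: new for i, (old, new) in enumerate(zip(prev_snap, curr_snap))
            if old != new}

prev = [b"A" * 4096] * 1000   # a 1,000-block virtual disk (about 4 MB)
curr = list(prev)
curr[42] = b"B" * 4096        # the VM dirtied two blocks since last snapshot
curr[900] = b"C" * 4096

changed = delta_blocks(prev, curr)
print(f"ship {len(changed)} of {len(curr)} blocks "
      f"({len(changed) / len(curr):.1%} of a full copy)")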

Examining the Differences Between Tintri and NetApp

Tintri and NetApp represent two fundamentally different approaches to storage for today’s increasingly application-centric architectures. NetApp built its business and achieved considerable success in delivering a superior alternative to previous enterprise storage solutions by offering improved functionality at better price points. While NetApp has made moves recently in an attempt to better position its solutions for virtualized and cloud environments, its fundamental storage architecture remains built upon its legacy storage designs optimized for physical data centers and physical servers.

Tintri, by comparison, created its storage solutions for the specific purpose of delivering application-aware storage that enabled VM-level visibility and insight for services-oriented architectures. This has been increasingly important as data center managers, storage professionals and even business stakeholders have begun speaking in the same way about applications rather than in the terminology of legacy storage abstractions. IDC highlighted this important requirement in its report, noting, “Because LUN or volume-level storage operations have no visibility of a VM, they have no ability to operate at the VM level.
... If an administrator wanted to replicate a single VM on a LUN, he/she would have to replicate all VMs on that LUN,” thus requiring overprovisioning of storage and additional network bandwidth.5

Tintri’s solution also integrates tightly with VMware’s vCenter: VMstore appliances plug into the platform natively, allowing IT organizations to manage multiple Tintri data stores from it.

NetApp certainly has done a good job meeting the needs of many organizations rooted in the traditional physical infrastructure. The NFS support of its FAS line of solutions is said to be solid; the company offers a common storage management platform across its hardware, and its deduplication performs adequately in VM environments.

However, NetApp faces certain significant challenges in VM- and cloud-based IT environments:

  • Its post-process deduplication and compression are not particularly efficient in managing all-important snapshots. In fact, compression is said to reduce system performance by about 50%.
  • NetApp’s FAS solutions line isn’t specifically designed for VMs, in that it doesn’t offer VM-level management or monitoring.
  • While it does provide support for flash storage, the FAS flash implementation is a read cache used to reduce the number of disk spindles. As a result, its flash hit rate is considerably lower than that of a robust flash implementation.
  • NetApp’s data deduplication works differently in cluster and legacy modes, creating an added level of management complexity and inefficiency.

These limitations often result in high operating expenses (Opex) for organizations looking either to create new storage frameworks or to adapt their existing NetApp environment to cope with VMs and cloud. For instance, the NetApp approach often requires a lengthier and more complex installation process than the Tintri VMstore for virtualized and cloud environments. Also, NetApp’s design focus on LUNs, volumes and aggregates adds Opex compared with Tintri’s “single data store” design. The Tintri solution also offers VM and vDisk metrics that NetApp’s FAS line doesn’t.

Capital expenses also can be considerably higher with NetApp than with Tintri, because NetApp requires more licenses or charges extra to add features to the base solution. NetApp performance suffers as LUNs or volumes begin to fill, so overprovisioning to maintain performance is commonplace and also contributes to higher costs. Tintri also selectively and strategically optimizes its use of flash storage rather than trying to deploy an all-flash solution that adds considerably more cost and complexity.

Tintri’s approach has been acknowledged and recognized by several analyst firms. Enterprise Strategy Group acknowledged Tintri’s strong match with VMware in a recent report: “If you’re planning a VDI deployment or you’ve run into an I/O performance challenge that can’t be met cost effectively with your existing solution, ESG Lab recommends that you take a serious look at VMware Horizon View with Tintri VMstore.”6

In another recent report, research firm 451 Group noted the benefits of Tintri’s application-aware approach when compared with legacy storage infrastructure: “Tintri can present strong arguments about the technical superiority of its product compared with conventional storage, especially in regard to the efficient blending of flash with disk, and VM-level management.”7

Conclusions and Recommendations

Although rapid technical advances have always marked the data storage industry, the past two decades have seen relatively little fundamental change in storage architectures, which were designed primarily for physical infrastructure. Now, as IT environments become increasingly virtualized and as cloud computing becomes a critical architectural requirement, a yawning gap has opened between how storage systems have been designed and the demands of virtual environments.

IT organizations increasingly are turning to new storage solutions that are purpose-built for virtualization and cloud, with a focus on application-aware designs that provide VM-level visibility and control across the entire infrastructure.

Faced with the choice of staying rooted in the world of storage systems designed for physical infrastructure or adopting ones designed from the ground up for VM- and cloud-based requirements, IT organizations increasingly are selecting application-aware storage options. Tintri is an excellent choice for innovative IT leaders who want to align their storage models with their computing architecture models, especially when they want to shift their focus away from traditional storage constructs such as LUNs and volumes and toward applications.

As the march toward cloud computing quickens, multiplying the number of VMs and accelerating the pace of infrastructure change, Tintri is well positioned to alleviate storage and server administrators’ growing problems with traditional storage infrastructure.

As the leading supplier of smart storage for virtualization and cloud computing environments, Tintri is helping companies explore, understand and leverage the opportunities for a more efficient storage paradigm. 

 


1 “Survey: 51% of x86 servers now virtualized,” ServerWatch.com, Jan. 17, 2013
2 “Spring 2014 Purchasing Intentions Survey,” Storage magazine/SearchStorage.com, March 2014
3 “Application-Aware Storage for Virtual Environments,” IDC, May 2014
4 IDC, Ibid
5 IDC, Ibid
6 “Lab Validation Report: VMware Horizon View with Tintri,” Enterprise Strategy Group, February 2013
7 “Flash start-up Tintri steps to the plate with $75m funding and new management,” 451 Research, March 2014 
