
What is Hyper-converged?

Hyperconvergence (HC) is an infrastructure model that adds speed and saves time across operations such as computation, input/output (I/O), internal structure, storage, networking and frameworks in private or open computing architectures. Hyperconvergence borrows the "hyper" from hypervisor, the virtual machine monitor (VMM) that runs virtual machines (VMs) in data centers and across all cloud platform types. But at what cost? Is HC really the best option for enterprise cloud users?

Hyperconvergence places multiple hypervisors, rather than one, at the base of the infrastructure, making both direct-attached storage (DAS) and complete storage area networks (SANs) run more smoothly. A hyperconverged infrastructure (HCI) is typically built out for each enterprise to ensure compatibility with its existing hardware, software and applications for development and IT. However, while HC can be effective for small-scale deployments, larger enterprise infrastructures will likely fare better with at-scale virtual storage on separate servers.

Virtualization software for virtualization stacks and hypervisors

To run multiple hypervisors within an HCI, you need virtualization software that lets all VM-aware components and structures interface and communicate with one another, which adds to the ever-mounting cost of HC. Virtualization stacks include the components that run computation processes and manage the user interface (UI), while also executing countless operations within each enterprise's overall infrastructure and framework. Because these components and UIs vary in type and run across a variety of operating systems (OSes), each hypervisor and virtualization stack must be designed distinctly for each enterprise. The cost of such unique build-outs can be prohibitive, especially for young startups or ventures that use enterprise cloud at relatively small scale.


For large enterprise cloud deployments, OpenStack is the ideal option, and it doesn't require HC. Its VM-aware architecture offers up-and-down scalability, drops latency to practically zero within multiple-hypervisor architectures and provides performance isolation with guaranteed quality of service (QoS). You can run OpenStack alongside vSphere (VMware's server virtualization platform), Hyper-V (Microsoft's hypervisor) and RHV (Red Hat Virtualization), which lets you reduce resource use and cost without hyperconvergence or HCI.
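One common way to run OpenStack beside vSphere is through Nova's VMware vCenter driver: one nova-compute service proxies an entire vSphere cluster while other compute nodes keep the default KVM/libvirt driver. The fragment below is an illustrative sketch based on the OpenStack Nova configuration options; the host, credentials and cluster name are placeholders, and a real deployment needs further settings.

```ini
# Illustrative nova-compute configuration for a vSphere-backed compute node.
# Option names follow the OpenStack Nova configuration reference;
# all values here are placeholders.
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = vcenter.example.com
host_username = admin
host_password = CHANGE_ME
cluster_name = ProdCluster
```

Other compute nodes in the same OpenStack deployment can continue running KVM, so both hypervisors are scheduled from a single control plane.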

Red Hat

Red Hat Virtualization (RHV, previously Red Hat Enterprise Virtualization [RHEV]) lets IT and development teams design and configure virtual storage within any converged or hyperconverged enterprise infrastructure, but it works ideally with other architectures, including Tintri CONNECT, to deliver public cloud-like agility to data centers. Using the Virtual Desktop and Server Manager (VDSM), Red Hat builds on its enterprise Linux model to centralize infrastructure and framework management, whether you're using HC, a data center or virtualization. Its most recent release (April 2017) supports live migration of VMs and integrates beautifully with OpenStack, making it a strong fit for data centers and the modern business model.


Citrix

This virtualization software for both hybrid and all-flash array storage creates a safer, more controlled and more adaptive cloud environment without any need for HC. Citrix achieves this by allowing integration between all components of your infrastructure, including legacy and virtual technologies, frameworks, and any cloud environment you use for your enterprise. It can be deployed seamlessly, without disrupting data, storage or any componentry in your current infrastructure.

The difference between hyperconverged and converged infrastructures

While both converged infrastructure (CI) and HCI let you store, network and compute virtually, an HCI is more customizable for small-scale computing and small- to medium-sized enterprise architectures. HCI can also be easier for end users to grasp, with a UI that nearly anyone with a modicum of IT experience can run when it's deployed with supporting systems like Tintri Global Center (TGC).

HCI automates the migration of data, jobs, resource status and utility computing between environments for tight integration, and it is typically the superior choice for platforms like software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS). However, when you have a growing infrastructure, a virtualized, federated storage pool offering real-time and predictive analytics will likely serve you best for scalability.

The hidden and mounting cost associated with hyperconvergence

For all the good it can do in the right environments, HC comes at a cost—many hidden price tags begin to pop up after you transition to HCI.

Added node and licensing costs for HCI—the "HCI tax"

In hyperconvergence, nodes in clusters handle both computing and storage, and they must be rebalanced regularly to stave off data bottlenecks and storage hotspots. To avoid these issues, you have to scale up: buy more HCI nodes so the infrastructure can keep computing at the speed you'd expect from enterprise cloud. All of this adds up to purchasing more and more nodes, especially in compute-heavy and storage-heavy infrastructures.

Licensing cost of HCI

With HC, you pay for node storage controlled by VMs, which means every time you scale up you pay more in virtualization licensing to keep those nodes and clusters operational and able to compute and store. What's more, when you add more VMs to handle more clusters of nodes, you'll also be handing over more cash to third-party vendors like Microsoft (for SQL Server) and Oracle for the additional dedicated resources they provide to keep your HCI from collapsing in on itself.
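The compounding effect described above can be sketched with a toy cost model. All node specs, prices and license fees below are invented for illustration; the point is only that, because compute and storage scale together in HCI nodes, a storage-heavy workload forces you to buy, and license, nodes whose compute capacity you may never use.

```python
# Hypothetical back-of-the-envelope model of the "HCI tax": nodes bundle
# compute and storage, so growth in either dimension means buying whole
# nodes, and per-node licensing grows right along with the hardware.
import math

NODE_TB = 20              # usable storage per HCI node (hypothetical)
NODE_VMS = 40             # VMs a node can comfortably host (hypothetical)
NODE_PRICE = 30_000       # hardware cost per node (hypothetical)
LICENSE_PER_NODE = 7_000  # annual virtualization licensing per node (hypothetical)

def hci_cost(storage_tb: float, vm_count: int) -> dict:
    """Nodes are sized by whichever resource runs out first."""
    nodes = max(math.ceil(storage_tb / NODE_TB),
                math.ceil(vm_count / NODE_VMS))
    return {
        "nodes": nodes,
        "hardware": nodes * NODE_PRICE,
        "annual_licensing": nodes * LICENSE_PER_NODE,
    }

# A storage-heavy workload: 80 VMs would fit on 2 nodes, but 200 TB of
# data forces 10 nodes, and 10 nodes' worth of annual licensing.
print(hci_cost(storage_tb=200, vm_count=80))
```

Swapping in your own figures changes the totals, but not the shape of the curve: each scale-up step buys both dimensions whether you need them or not.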

Redundancy within HCI

It’s smart to safeguard data and software—but the issue with most HCI providers is that they store two, three or more copies of the same data. This overly redundant storage paradigm is unnecessary, and it takes up a tremendous amount of storage space that causes imbalance and leads you right back to having to purchase more storage. Additionally, all these extra data copies cause latency because there is not enough free space for computing or deployment—and that’s a crisis.
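The storage overhead of keeping multiple full copies is simple arithmetic: with N replicas, raw capacity is N times the usable data. A short sketch, with illustrative capacities:

```python
# With replica-based redundancy, every terabyte of usable data consumes
# `replicas` terabytes of raw storage. Capacities here are illustrative.
def raw_capacity_tb(usable_tb: float, replicas: int) -> float:
    """Total raw storage required to hold `replicas` full copies."""
    return usable_tb * replicas

for replicas in (2, 3):
    raw = raw_capacity_tb(100, replicas)
    print(f"{replicas} copies of 100 TB -> {raw:.0f} TB raw, "
          f"{raw - 100:.0f} TB overhead")
```

At three copies, two-thirds of the raw capacity you paid for holds duplicates rather than unique data, which is exactly the pressure that sends you back to buy more nodes.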

All-flash storage architecture might be a little pricier, but it keeps your infrastructure from caving in on you, which, among other advantages, makes it worth every penny.