A virtual machine (VM) is a software emulation of dedicated hardware, with its own operating system or application environment. Because many VMs can run on a single piece of hardware, end users can drastically reduce hardware, maintenance, power, and cooling costs; thousands of VMs can run at once on a single all-flash array. Running workloads this way is called virtualization.
Software known as a hypervisor isolates VMs from the hardware they run on. Because of the hypervisor, VMs are siloed from one another yet still share the underlying hardware's physical compute resources: processor cycles, memory, bandwidth, and more. If one VM is compromised by something like malware, the other VMs, being siloed, remain unaffected.
Hypervisors are fundamental to virtualization. Without them, each VM would be tied to specific hardware, with no true migration capability; to "move" a VM, an administrator would have to run tedious reinstallation workflows. Hypervisors allow VMs to be migrated at will across the infrastructure, a process called VM live migration.
Because virtualization demands more bandwidth, storage, and processing capacity than traditional dedicated hardware, particularly as VM counts grow, end users must optimize VM placement across their data centers. If one VM begins to consume more resources than its neighbors (the so-called "noisy neighbor" problem), end users must apply quality-of-service (QoS) caps to that VM to keep resources balanced. And if an all-flash array hosting multiple VMs fails, end users need a recovery strategy already planned out.
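A per-VM QoS cap is often implemented as a token bucket: each VM gets a fixed I/O budget per interval, and a noisy neighbor simply runs out of tokens instead of starving everyone else. The sketch below is illustrative only; the class name, interval model, and numbers are assumptions, not a Tintri API.

```python
class IopsCap:
    """Minimal token-bucket sketch of a per-VM IOPS cap (hypothetical, not a real API)."""

    def __init__(self, max_iops):
        self.max_iops = max_iops   # budget per one-second interval
        self.tokens = max_iops

    def start_second(self):
        # Refill the budget once per simulated second.
        self.tokens = self.max_iops

    def admit(self, requested):
        # Grant I/Os up to the remaining budget; the excess is deferred.
        granted = min(requested, self.tokens)
        self.tokens -= granted
        return granted

# A "noisy neighbor" requesting 5000 IOPS against a 1000-IOPS cap:
cap = IopsCap(1000)
cap.start_second()
print(cap.admit(5000))  # 1000 granted this second
print(cap.admit(200))   # 0: budget exhausted until the next refill
cap.start_second()
print(cap.admit(200))   # 200
```

The point of the cap is that a quiet VM on the same array never sees its neighbor's backlog; the noisy VM's extra I/O is deferred rather than served at everyone else's expense.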
On traditional storage this can be an issue, because even the newest all-flash arrays are architected around logical unit numbers (LUNs) or volumes. At that level of abstraction, VMs are difficult to work with on a 1:1 basis. With VM- and container-level storage, however, end users can manage each VM directly, replicating, recovering, and monitoring it individually.
Many enterprises have tried to retrofit legacy LUN- and volume-based storage to work with cloud computing, but the results make one thing clear: retrofitted legacy storage doesn't deliver the speed, accuracy, or security of VM-level storage, nor the ability to quickly spin up and tear down applications. If you're trying to find the latency of a particular VM, LUNs and volumes can actually obscure your findings: because a LUN adds a layer of abstraction over your VMs, its analytics report averages rather than per-VM specifics. It can take minutes to hours just to gather the basic metrics of your environment.
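The averaging effect is easy to see with a toy calculation (the VM names and latency figures below are hypothetical, chosen only to illustrate the point):

```python
# Per-VM read latencies in milliseconds (hypothetical values).
vm_latency_ms = {"vm-web": 1.0, "vm-db": 10.0, "vm-batch": 1.0, "vm-cache": 1.0}

# LUN-level monitoring sees only the blended figure across every VM on the LUN...
lun_average = sum(vm_latency_ms.values()) / len(vm_latency_ms)
print(f"LUN average: {lun_average} ms")   # 3.25 ms: looks acceptable

# ...while per-VM visibility pinpoints the outlier immediately.
worst_vm = max(vm_latency_ms, key=vm_latency_ms.get)
print(f"Worst VM: {worst_vm} at {vm_latency_ms[worst_vm]} ms")  # vm-db at 10.0 ms
```

The LUN-level average looks healthy even though one VM is suffering 10 ms latency; only per-VM statistics expose the outlier.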
VMs and containers speak the language of enterprise, private, and hybrid cloud environments. With VM- and container-level storage, built to manage VMs and containers rather than LUNs, measuring per-VM latency becomes a cinch. And when you couple VMs and containers with VM-aware all-flash arrays in an enterprise cloud that has public cloud-like agility and security, operations run far faster than on LUNs or volumes, especially if you run mostly virtualized applications.
According to a Tintri study of the size and use of 400,000 VMs, the most common provisioned VM sizes, in order of popularity, are 40–80 GiB (gibibytes), 80–160 GiB, and 20–40 GiB; the single most common provisioned size is 40 GiB. Together, these ranges account for about one-third of all provisioned VMs.
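The ranges quoted above can be expressed as a simple bucketing function. This is a sketch: the boundaries come from the figures quoted here, not from the underlying study, and the half-open interval convention is an assumption.

```python
def size_bucket(provisioned_gib):
    """Map a provisioned VM size in GiB to the ranges quoted from the Tintri study."""
    # Half-open intervals [low, high) are assumed for the range boundaries.
    for low, high in [(20, 40), (40, 80), (80, 160)]:
        if low <= provisioned_gib < high:
            return f"{low}-{high} GiB"
    return "other"

print(size_bucket(40))   # 40-80 GiB (the most common provisioned size)
print(size_bucket(25))   # 20-40 GiB
print(size_bucket(500))  # other
```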
When it comes to used size, the numbers are a little different and trend toward smaller VMs. Used size differs from provisioned size in that it measures the amount of data the VM has actually written, before compression or deduplication, rather than how much the VM is provisioned to store. The most popular used size, 20–40 GiB, is significantly smaller than the most popular provisioned sizes.
While it’s true that VMs almost never write enough data to fill their provisioned capacity, generous provisioning offers benefits. A VM provisioned at 20 GiB or larger leaves headroom for growth, and it can also be expanded to accommodate more storage as a venture grows and needs more data space.
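The gap between provisioned and used capacity is what makes it safe to overcommit an array. A back-of-the-envelope sketch makes this concrete (all numbers below are hypothetical, not from the Tintri study):

```python
# Hypothetical fleet: each tuple is (provisioned GiB, used GiB) for one VM.
vms = [(40, 25), (80, 30), (160, 60), (40, 20)]

provisioned = sum(p for p, _ in vms)   # capacity promised to the guests
used = sum(u for _, u in vms)          # data actually written
print(f"Provisioned: {provisioned} GiB, used: {used} GiB")       # 320 vs 135
print(f"Overcommit headroom: {provisioned - used} GiB "
      f"({used / provisioned:.0%} of provisioned space is in use)")
```

Here the array only has to back 135 GiB of the 320 GiB promised, which is why thin provisioning lets far more VMs fit on a single array than their provisioned sizes would suggest.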
As long as your data center operates in VMs or containers, and as long as you're using application-level storage from Tintri, you'll always get real-time per-VM analytics, which let you see exactly how much space you're using, right down to each individual machine. And whatever size workloads you may be gearing up for in the future, Tintri lets you see what kind of impact any deployment will have on your data center.
If you’re interested in Virtual Desktop Infrastructure (VDI), you can use Tintri to isolate each of your VMs in their own lanes, avoiding noisy neighbors and overprovisioning. And with drag-and-drop quality of service at the VM or container level, you can make sure your applications always run right at the rate you want them to.
Moreover, integrating Tintri VDI with the Tintri EC6000 Series All-Flash Array, VMware Horizon, and Citrix XenDesktop will turn your infrastructure into a high-performance operation, with automation powered by Amazon Machine Learning algorithms that offer you suggestions on how to manage your VMs in your data center. Tintri makes VM and all-flash array management easy enough for any IT staffer to understand. Turn your storage into your greatest asset rather than your biggest nightmare.
Unique control through VM-level actions for infrastructure functions, including snapshots, replication, and QoS, makes protection and performance predictable in production and accelerates test and development cycles.