Virtualization means moving storage and compute from physical CPUs and servers to cloud (virtual) environments. Digital infrastructures replace legacy hard-disk drives (HDDs) with all-flash storage, and workloads once tied to a single operating system (OS) on dedicated hardware run instead as virtualized applications, allowing faster input/output (I/O). Virtualization requires migrating to a public, private or hybrid cloud, where virtual machines (VMs) and containers on all-flash arrays allow greater speed, significantly reduced latency and visibility on a deeply granular level.
In most instances, virtual space, generally referred to as a cloud environment, houses enterprise data, software, applications, and other parts of the infrastructure on all-flash arrays. All-flash storage arrays are mounted on racks in data centers, just as some HDDs still are. But the superiority of all-flash performance within virtual environments over HDDs living on traditional storage like physical servers is undeniable.
The core of server virtualization is running guest OSs and abstracting data (migrating it to VMs and containers) under the management of a hypervisor (or hypervisor stack). Virtualizing servers allows infrastructures, networks and data to move to cloud environments. Some IT pundits argue that VMs and hypervisors can't replicate the speed or flexibility of physical hardware or open (public) clouds. However, other IT experts and data scientists are already aware of virtualized infrastructures that use web services to run private enterprise cloud environments with public cloud-like agility.
Legacy storage such as physical HDD servers is built on the same architecture that was used more than two decades ago. This forces development and operations (DevOps) teams to work within logical unit numbers (LUNs) and volumes, which, unlike VMs and containers, aren't the currency of cloud environments. IT staff and developers required to work on physical storage lack the ultra-granular visibility available when working at the VM level. That means hours wasted on manual manipulation of LUNs and volumes, because they are not isolated, automated or autonomous the way VMs and containers are.
Additionally, legacy storage built on LUNs and volumes cannot offer real-time analytics or predictive analysis. Storage built on VMs and containers offers both, and it also makes executing operations and accessing data or software for troubleshooting possible at the individual VM level. Conventional storage that's been retrofitted for cloud environments requires high-level IT expertise, while VMs can be set up and ready to go within minutes. Often, super-simplified VM and container management tools for admins who need immediate access to data and software are available from a single-screen storage management user interface (UI).
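The difference in granularity can be illustrated with a small sketch. This is a conceptual Python example with made-up VM names and latency numbers, not any vendor's actual analytics API: grouping metrics per VM preserves the signal that a single LUN-level rollup averages away.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-VM latency samples in milliseconds.
samples = [
    ("vm-web-01", 1.2), ("vm-web-01", 1.4),
    ("vm-db-01", 8.9), ("vm-db-01", 9.3),
    ("vm-ci-01", 0.8),
]

def per_vm_latency(samples):
    """Group latency samples by VM and average each VM separately."""
    by_vm = defaultdict(list)
    for vm, latency in samples:
        by_vm[vm].append(latency)
    return {vm: round(mean(vals), 2) for vm, vals in by_vm.items()}

# VM-level view: the slow database VM stands out immediately.
print(per_vm_latency(samples))
# {'vm-web-01': 1.3, 'vm-db-01': 9.1, 'vm-ci-01': 0.8}

# A LUN-level rollup collapses everything into one average,
# hiding which workload is actually suffering.
print(round(mean(l for _, l in samples), 2))  # 4.32
```

The per-VM dictionary immediately flags `vm-db-01` as the outlier, while the pooled average looks unremarkable, which is exactly why troubleshooting at the LUN level takes so much longer.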
In some instances, virtualization software is used to manage physical or legacy storage and hardware rather than cloud storage. In other cases, virtualized software is used to manage software as a service (SaaS), infrastructure as a service (IaaS) or platform as a service (PaaS). In these instances, an enterprise may be using no cloud of its own, but rather the cloud capabilities of a third-party provider. Likewise, other enterprise-level operations may be using SaaS, IaaS or PaaS as part of their hybrid cloud environment. Usually, this means they're using physical servers to store proprietary data while using cloud-based services to spin up and tear down various applications native to the cloud side of their environment.
A hypervisor is a virtual machine manager (VMM) that gives enterprise cloud environments the ability to run computations from more than one OS on a single host. Prior to the advent of hypervisors, DevOps teams and other organizations were forced to use several hosts if they wanted to run more than one OS (or guest OS). Hypervisors can efficiently manage many OSs without any one OS stepping on the toes of another.
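The "no OS steps on another's toes" idea can be sketched as a toy model. This is not a real hypervisor, just a minimal Python illustration with hypothetical guest names: each guest gets its own private state on one shared host, and a workload run in one guest never touches another's memory.

```python
# Conceptual sketch of a VMM: one host, several isolated guest OSs.
class Hypervisor:
    def __init__(self, host):
        self.host = host
        self.guests = {}  # guest name -> that guest's isolated state

    def boot_guest(self, name, os_type):
        # Every guest receives its own private memory; guests cannot
        # see or modify each other's state.
        self.guests[name] = {"os": os_type, "memory": {}}

    def run(self, name, key, value):
        # Execute a "workload" inside exactly one guest.
        self.guests[name]["memory"][key] = value

hv = Hypervisor("host-01")
hv.boot_guest("guest-linux", "Linux")
hv.boot_guest("guest-windows", "Windows")
hv.run("guest-linux", "app", "nginx")

# guest-windows is untouched: isolation in miniature.
print(hv.guests["guest-windows"]["memory"])  # {}
```

Real hypervisors (KVM, ESXi, Hyper-V) enforce this isolation in hardware and at the memory-management level, but the scheduling-and-isolation contract is the same shape.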
A hypervisor or virtualization stack is the combination of a hypervisor with other resources to make an infrastructure run more smoothly. The resources combined with the hypervisor will typically include any virtual servers, the VMs within all-flash arrays, the data stored within these VMs, software used by the enterprise and cloud native applications.
Most DevOps teams understand virtualization to mean the configuration or reconfiguration of a new cloud environment to replace an existing legacy storage system. They may also think about virtualized aspects of componentry within hybrid cloud environments or within SaaS, IaaS or PaaS solutions. But there are other ways to virtualize a variety of constituents within an infrastructure.
Tintri All-Flash Storage for Virtualization and Cloud can help you manage your virtualized databases and servers. We understand your virtual databases need more reliable and evolved support than legacy storage offers. By partnering with Oracle, Tintri created solutions for assigning each VM its own lane and the ability to troubleshoot across every part of your infrastructure to get to the root of an issue quickly. Tintri All-Flash Storage can also give you on-demand, real-time analytics and predictive analytics so you can forecast what you'll need to provision ahead of time.
Tintri's AWS-like agility and scale for private enterprise cloud environments also offers automation for processes that can only be executed manually when you're dealing with the LUNs and volumes of physical storage. The Tintri Enterprise Cloud platform uses web services architecture: VMs and containers. You'll always have the speed, scalability, and flexibility you need to spin up and tear down using representational state transfer application program interfaces (RESTful APIs).
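To make the RESTful workflow concrete, here is a minimal Python sketch of what spin-up and tear-down look like as plain HTTP verbs. The base URL, endpoint paths, and payload fields are hypothetical placeholders, not the actual Tintri API; the requests are built but deliberately never sent.

```python
import json
import urllib.request

# Hypothetical endpoint -- illustrative only, not a real storage API.
BASE = "https://storage.example.com/api/v1"

def build_spinup_request(vm_name, source_snapshot):
    """Build (but don't send) a POST that would clone a VM from a snapshot."""
    body = json.dumps({"name": vm_name, "source": source_snapshot}).encode()
    return urllib.request.Request(
        f"{BASE}/vms",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def build_teardown_request(vm_name):
    """Build the DELETE that would tear the same VM back down."""
    return urllib.request.Request(f"{BASE}/vms/{vm_name}", method="DELETE")

up = build_spinup_request("test-vm-42", "nightly-snap")
down = build_teardown_request("test-vm-42")
print(up.get_method(), up.full_url)
# POST https://storage.example.com/api/v1/vms
print(down.get_method(), down.full_url)
# DELETE https://storage.example.com/api/v1/vms/test-vm-42
```

The point of the sketch is the contrast: a VM's whole lifecycle reduces to two HTTP calls that a script or CI pipeline can issue, where LUN and volume work would require manual provisioning steps.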
Unique control with VM-level actions for infrastructure functions including snapshots, replication and quality of service (QoS) makes protection and performance certain in production and accelerates test and development cycles.
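Per-VM QoS is the easiest of these to sketch. In this hedged Python illustration (VM names and IOPS figures are invented for the example), each VM carries its own minimum and maximum IOPS band, a policy that LUN-level storage cannot express per workload:

```python
# Hypothetical per-VM QoS policies: every VM gets its own IOPS band.
qos_policies = {
    "vm-db-01":  {"min_iops": 5000, "max_iops": 20000},
    "vm-web-01": {"min_iops": 500,  "max_iops": 2000},
}

def throttle(vm, requested_iops):
    """Clamp a VM's requested IOPS to that VM's own QoS band."""
    policy = qos_policies[vm]
    return max(policy["min_iops"], min(requested_iops, policy["max_iops"]))

print(throttle("vm-db-01", 50000))  # 20000 -- capped at the VM's max
print(throttle("vm-web-01", 100))   # 500   -- floored at the VM's min
print(throttle("vm-web-01", 1500))  # 1500  -- inside the band, untouched
```

Because the band is attached to the VM rather than to a shared LUN, a noisy neighbor hits its own ceiling instead of starving every workload on the array.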