Last month, I attended a meeting of the Wisconsin VMware User Group (WIVMUG), sponsored by Tintri and Mitel. I enjoyed the opportunity to meet virtualization users and talk about Tintri. But my interest was unexpectedly piqued by the presentation from the other sponsor.
Mitel has introduced an entirely virtualized PBX and other communication services. Phone switching, voice mail and other services, which used to live on dedicated hardware, can now be run on the same virtual infrastructure which supports traditional server applications. While many data centers aim at 100 percent server consolidation — eliminating all dedicated servers in favor of virtual machines — products like Mitel’s point toward eliminating other forms of dedicated hardware. We should expect to see similar physical-to-virtual conversion in other areas, such as surveillance, environmental controls, and even industrial processes. Virtual data centers can achieve more than 100 percent virtualization by expanding from servers and desktops to other components of business infrastructure, beyond traditional IT services.
What enables and drives this change? Obviously, there are compelling advantages to running as part of a virtualized data center rather than on a standalone box: lower power and cooling costs, reduced floor space requirements, improved availability, and disaster recovery. Systems that can make use of shared infrastructure, whether a private cloud or an external provider, are cheaper and more powerful than those that must operate in a self-contained fashion, without access to networked resources.
But the trend toward greater virtualization is also driven by hardware improvements in embedded devices, such as more powerful processors and convergence on Ethernet and Internet protocols. Devices that were once nothing more than simple sensors communicating via analog signals are increasingly available as smart devices with their own software and standardized network connections. In Mitel’s case it is the IP phone that makes a virtual PBX possible, but this situation is not unique. Industrial programmable logic controllers (PLCs) have offered IP connectivity for many years, and manufacturers in areas as diverse as HVAC and farm equipment are starting to offer IP-based technologies. For example, a small company called Feedlogic offers feed measurement tools that wirelessly communicate rate information to a cloud-based service, so that a livestock farmer can access sophisticated history and alerts without on-site computers and storage.
The greater capabilities of data center infrastructure and the increasing scope of network technologies have synergistic effects. My friend Neal Tovsen, founder of TelemetryWeb, puts it this way: “I find that the most interesting thing about what we call a smart sensor or device is that it is actually becoming more dumb. We usually call something smart when it becomes network-aware. But as the availability, capacity, and speed of a given network improves, functionality will always continue to be aggregated, centralized, and virtualized. This reduces the need for domain-specific logic at the end points, reduces cost and complexity, and therefore lowers barriers to new innovations.”
Virtualizing past 100 percent will pose plenty of new challenges. Although virtual infrastructure can theoretically provide a better experience than the physical infrastructure it replaces, this advantage will be negated if the growing complexity of the VM environment exceeds the ability of administrators to manage it. Systems that can’t effectively be virtualized, or that can’t give 110 percent to the virtual infrastructure, will be replaced by ones that can.