Sprawl is a big issue across nearly all areas of IT. Server virtualization aims to combat server sprawl and shrink the physical footprint of servers in server rooms and data centers. It's simple math: consolidating more systems onto fewer, higher-capacity hosts means less server hardware. This works well for most systems, and many organizations have reduced their server hardware footprint. Server virtualization has also made it easy to provision infrastructure and deploy systems in minutes rather than hours, days, weeks or months.
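That consolidation math can be sketched with illustrative numbers; the server count and consolidation ratio below are assumptions for the sake of the example, not figures from any particular environment:

```python
import math

# Hypothetical environment: one workload per physical box before virtualization.
legacy_servers = 120
vms_per_host = 15  # assumed consolidation ratio on higher-capacity hosts

# Fewer, denser hosts replace the legacy fleet.
hosts_needed = math.ceil(legacy_servers / vms_per_host)
print(hosts_needed)  # 8 hosts replace 120 physical servers
```

The same arithmetic cuts the other way for VM sprawl: because each new VM costs so little, counts grow far faster than physical servers ever did.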
However, with this ease comes VM sprawl, which in many ways can be just as bad as server sprawl, if not worse. VM sprawl puts more strain on the management, security and support layers of IT. Fortunately, there are ways to combat its effects with good processes and third-party products. At the core of virtualization is storage, typically the most heavily used resource in server virtualization. All of the VM metadata files, VMDKs, swap files, templates and so on live in storage, which in my opinion makes it the most important layer of the server virtualization stack.
At a high level, the three main areas of concern with storage deployments are availability, capacity and performance. These three areas shape the storage layer for virtualization deployments large and small, and they have a direct link to cost. Even if you don't acknowledge having VM sprawl in your environment, you can still end up with datastore sprawl. And while I maintain that the storage layer is the most important, it used to be, and in many ways still is, the least optimized layer of the hypervisor. This is why systems with high storage I/O demands are typically left running on physical hardware.
To avoid storage I/O performance issues, you would typically use Fibre Channel storage, keep datastores somewhat small to limit the number of VMDKs per datastore, avoid mixing dissimilar I/O workloads on the same datastore, and so on. Then, at a lower level, you have the vendor-recommended spindle counts, RAID types, virtual pools, etc. Datastore sprawl is also a problem when each datastore is provisioned with dedicated spindles of high-RPM, lower-capacity drives, as is common in large enterprise storage arrays.
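A rough sketch shows how those small-datastore sizing rules drive datastore sprawl; the VMDK count and per-datastore cap here are hypothetical rules of thumb, not vendor guidance:

```python
import math

total_vmdks = 400             # assumed number of virtual disks to place
max_vmdks_per_datastore = 15  # kept deliberately low to bound per-LUN I/O contention

# Every datastore created this way is another object to provision,
# zone, back up and monitor.
datastores_needed = math.ceil(total_vmdks / max_vmdks_per_datastore)
print(datastores_needed)  # 27 datastores
```

Tighten the cap to protect performance and the datastore count, and the management burden, climbs even faster.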
In the past, you were forced into this way of storage provisioning either by the vendor or by the storage admin. I had one storage administrator who was adamant about never having a LUN larger than 500 GB usable, due to spindle counts and rebuild times. For some, it didn't matter whether the storage used virtual pools or not. Things have changed a lot, both at the physical array and in the hypervisor. New efforts are aimed at improving storage efficiency, utilization and performance for all workloads, but especially for virtualization. Flash SSD, virtual storage pools, high-capacity low-cost storage, NFS, vSphere 5 with the new VMFS version 5, and VAAI will all help reduce the number of datastores required in a given vSphere deployment. That, in turn, will help reduce datastore sprawl and the management headaches that come with it.
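VMFS version 5's larger volumes illustrate the point: a single VMFS-3 extent topped out at roughly 2 TB, while VMFS-5 supports single-extent volumes up to 64 TB. A quick sketch with an assumed total capacity shows the effect on datastore count (the 100 TB figure is illustrative):

```python
import math

total_capacity_tb = 100  # assumed total capacity to provision
vmfs3_extent_tb = 2      # approximate VMFS-3 single-extent limit
vmfs5_volume_tb = 64     # VMFS-5 single-extent volume limit

vmfs3_datastores = math.ceil(total_capacity_tb / vmfs3_extent_tb)
vmfs5_datastores = math.ceil(total_capacity_tb / vmfs5_volume_tb)
print(vmfs3_datastores)  # 50 datastores under the old limit
print(vmfs5_datastores)  # 2 datastores under VMFS-5
```

Fewer, larger datastores are only safe because features like VAAI offload locking and copy operations to the array, easing the contention that small datastores once guarded against.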
Per-VM control over infrastructure functions, including snapshots, replication and quality of service (QoS), helps ensure protection and performance in production and accelerates test and development cycles.