Is the IO Blender effect going away in the future?
There is no question that virtualization has taken hold in enterprise environments. Its benefits have been widely realized, and adoption is high.
Running mixed workloads, as virtualization does, triggers the IO Blender effect. Compound this with upcoming trends (VDI, Big Data, etc.), and the effect will only worsen unless you properly plan ahead for it!
How do you plan/design for mitigating/minimizing the IO Blender effect?
Knowing that the IO Blender effect is here to stay and that trends in virtualization are only going to compound the issue, a number of steps can be taken to minimize the issue for the future:
1) Organize VMs by like workloads - This may involve different SAN configurations to group like workloads together rather than placing everything on a single array of disks. Like workloads are more predictable and easier to plan for.
Make no mistake, this is not an easy task. Organizing workloads takes a significant amount of analysis of how applications in your environment actually behave. In many instances it will also leave you with a larger number of storage pools to manage.
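The grouping step above can be sketched in a few lines. This is a hypothetical example: the VM names and IO-profile fields are made up for illustration, and in practice the profiles would come from your monitoring data rather than a hand-written dictionary.

```python
# Sketch: bucket VMs into candidate storage pools by IO profile.
# VM names and profile fields below are illustrative, not from a real tool.
from collections import defaultdict

vms = {
    "web-01": {"pattern": "random",     "rw": "read-heavy"},
    "web-02": {"pattern": "random",     "rw": "read-heavy"},
    "db-01":  {"pattern": "random",     "rw": "write-heavy"},
    "backup": {"pattern": "sequential", "rw": "write-heavy"},
}

pools = defaultdict(list)
for name, profile in vms.items():
    # VMs sharing an access pattern and read/write mix land in the same pool
    pools[(profile["pattern"], profile["rw"])].append(name)

for key, members in sorted(pools.items()):
    print(key, members)
```

Each resulting pool holds workloads with similar, and therefore more predictable, IO behavior.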
2) Utilization of storage offloading functions - Hypervisor vendors have been working hard with storage vendors to offload many storage functions from the hypervisor to the storage controllers. This allows the storage system to more efficiently handle the operation, reduces pressure on the hypervisor, and helps reduce congestion on any storage connections (local, fibre, or network).
3) LUN Alignment – This is mostly an issue with older Operating System versions. Newer versions auto-align on installation and should not be an issue. That said, spending the time to verify LUN alignment will reduce IO overhead and relieve pressure on the storage controller and subsystems that perform the operations. LUN alignment can have a significant, positive impact on systemic health!
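The alignment check itself is simple arithmetic: a partition is aligned when its starting byte offset falls on a stripe (or block) boundary. A minimal sketch, assuming 512-byte sectors and a 64 KiB stripe size (both are assumptions; use your array's actual values):

```python
def is_aligned(start_sector: int, sector_size: int = 512,
               stripe_size: int = 64 * 1024) -> bool:
    """True if the partition's byte offset lands on a stripe boundary."""
    return (start_sector * sector_size) % stripe_size == 0

# Legacy OS default: partition starts at sector 63 (31.5 KB) -> misaligned,
# so many guest IOs straddle two stripes and cost two backend operations.
print(is_aligned(63))    # False
# Modern default: sector 2048 (1 MiB offset) -> aligned
print(is_aligned(2048))  # True
```

A misaligned partition can turn one guest IO into two backend IOs, which is exactly the kind of hidden amplification that feeds the IO Blender.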
4) Understand the workload that is being placed on the storage system AND what will be placed on the system in the future. Understanding the workload helps system admins and storage admins ensure proper grouping of like-workload types. Proactively planning where workloads will be placed ensures optimal performance is maintained and capacity constraints are identified BEFORE they occur.
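A rough way to spot capacity constraints ahead of time is a simple linear projection against the array's rated ceiling. The function and figures below are a hypothetical sketch, not a sizing methodology; real planning should use your own measured growth.

```python
def months_until_ceiling(current_iops: float, growth_per_month: float,
                         ceiling_iops: float) -> float:
    """Months until projected load hits the array's IOPS ceiling,
    assuming linear growth (an assumption for illustration)."""
    if growth_per_month <= 0:
        return float("inf")  # no growth -> never hits the ceiling
    return (ceiling_iops - current_iops) / growth_per_month

# Example: 20k IOPS today, growing ~1.5k IOPS/month, array rated for 50k
print(months_until_ceiling(20_000, 1_500, 50_000))  # 20.0
```

Even a crude projection like this turns "we ran out of headroom" into a planned expansion months in advance.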
5) Utilize expandable and extensible storage systems – Storage systems are exceptionally difficult to replace; their high cost and central role in the compute environment are hard to overcome. Storage migration functions in the hypervisors help ease the transition, though. With that being said, being able to expand a storage system in some fashion ensures the decisions you make now can be adapted, adjusted, and enhanced to meet the needs of today and tomorrow. Adjustments and enhancements may include: addition of trays of disks, support for multiple disk types (SAS/FC, SATA, SSD), SSD-based caching, upgradable firmware to add support for future features, additional controllers, swappable HBAs/NICs, etc.
6) Utilize SAN storage that is designed/optimized for virtualization. A small number of vendors are bringing virtualization-focused storage solutions to the market. Large storage vendors are re-tooling their solutions to address issues introduced by virtualization too.
Make a conscious effort to vet and evaluate storage solutions outside of the box. Storage systems built with virtualization in mind have immediate value beyond the ability to store VM data. Additional functions, like performance metrics, provisioning, management, functional offloading, migration of VM workloads between tiers of storage, etc., all have value beyond storage connectivity and RAID configurations.
7) Utilize up-to-date infrastructure – The IO Blender effect has a way of highlighting the bottlenecks in your virtualization environment. By staying up to date with infrastructure technologies, the bottlenecks can be reduced or eliminated!
- 10Gb networking: for IP-based storage solutions, 10Gb networking can ensure storage requests are not being held up at the NIC level. 10Gb can provide more than ample bandwidth between the compute and storage tiers.
- Fibre Channel HBAs: Similar to 10Gb networking, utilizing faster/more efficient HBAs ensures that data requests are not clogged up in the fabric.
- Servers: CPUs, memory, PCI expansion cards, etc. all have a role in ensuring peak performance. CPU performance is increasing in some fashion all the time. Memory is getting denser and faster (which reduces the need to go to storage in many cases). PCI buses are speeding up, and the cards that use them are faster and more efficient.
- SSDs, Tiering, and offload functions: SSDs provide a significant increase in performance, if utilized properly. SSDs, in conjunction with automated tiering functions, provide amazing value to all workloads. Offloading hypervisor functions to the storage systems allows the storage system to handle the request more efficiently without sustained impact to the compute and network/fabric.
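To put the 10Gb networking point in concrete terms, a quick back-of-the-envelope calculation shows the theoretical IOPS ceiling of a 10Gb link. The 8 KiB request size is an assumption for illustration; real throughput will be lower once protocol overhead (TCP/IP, iSCSI/NFS framing) is accounted for.

```python
# Back-of-the-envelope ceiling for IP-based storage on a 10Gb link.
link_gbps = 10
io_size_bytes = 8 * 1024                         # assume 8 KiB per request
raw_bytes_per_sec = link_gbps * 1_000_000_000 / 8  # 1.25 GB/s line rate
iops_ceiling = int(raw_bytes_per_sec // io_size_bytes)
print(iops_ceiling)  # 152587 -- theoretical, before protocol overhead
```

Even after generous overhead deductions, that ceiling is far above what a 1Gb link offers, which is why the NIC stops being the choke point at the compute tier.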
With virtualization comes the IO Blender effect, and it is not going away any time soon. By understanding its cause, its impacts, and the ways to minimize it, though, you can help ensure a healthy, efficient, and optimal virtualization environment!
For more information on the IO Blender effect, feel free to check out the following:
- Stephen Foskett – The IO Blender
- Tintri Blog – Virtualization Can Be Kryptonite For Storage Admins