
Performance-driven SQL Databases and Virtualization

Server virtualization has changed the way IT deploys most of its infrastructure over the last few years. Many companies are adopting a “virtualization first” policy because of the benefits that come with it. While most applications can be virtualized without any penalty to the application or user experience, there is still a worry about virtualizing everything. The top two areas of concern I see are security and performance for high-I/O applications.

The one service I tend not to virtualize, for performance reasons, is Microsoft SQL Server. That does not mean I never virtualize SQL Server, but the decision deserves a great deal of thought, and the answer is never black-and-white. Sizing and configuring compute and memory resources is usually easy with the hypervisor. VMware does a great job providing features to allocate, reserve, prioritize, and modify vCPU and vMEM resources on a VM. Other virtualization solutions offer similar features.
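To make that concrete, here is a minimal sketch of what reserving CPU and memory for a SQL Server VM can look like through the vSphere API, using the pyVmomi Python bindings. The vCenter address, credentials, VM name, and reservation numbers are placeholders, not recommendations.

# Sketch: reserving CPU and memory for a SQL Server VM via the vSphere API (pyVmomi).
# The vCenter host, credentials, VM name, and values below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    # Walk the inventory and return the first VM whose name matches.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()    # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    vm = find_vm(si.RetrieveContent(), "sql01")

    spec = vim.vm.ConfigSpec()
    # Reserve memory so the hypervisor never balloons or swaps the SQL buffer pool.
    spec.memoryAllocation = vim.ResourceAllocationInfo(
        reservation=32 * 1024,            # MB
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high))
    # Reserve CPU (in MHz) and raise scheduling priority relative to neighbors.
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        reservation=8000,
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high))
    vm.ReconfigVM_Task(spec)              # apply the new allocation settings
finally:
    Disconnect(si)

The same settings can of course be made interactively in the vSphere client; the point is only that reservations and shares are first-class controls for protecting a SQL workload from noisy neighbors.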

The most important and complex part of a SQL deployment to get right is the storage infrastructure. Granted, this is true in any virtualization deployment, and the same can be said for any high-I/O Microsoft SQL Server deployment; but it gets more complicated when you put the two together. The usual recommendation is to dedicate storage and spindles to logs, data, and indexes; dedicating resources, however, is exactly what you tend to avoid when virtualizing servers, because it goes against the concept of shared resources that drives higher consolidation ratios. This is where new flash solid-state disk (SSD) storage systems can greatly simplify deployments and improve performance by eliminating contention for spindles. Pure SSD storage systems, however, can quickly become very expensive in terms of $/GB, especially when virtualizing large databases. SSD storage systems that offer deduplication, compression, and automated movement of data between SSD and HDD can be more cost effective.
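As a rough illustration of that separation, the sketch below creates a database whose data, index, and log files land on different drive letters, each of which would sit on its own LUN or datastore. It is ordinary T-SQL submitted through pyodbc; the server name, paths, and sizes are made up for the example.

# Sketch: placing data, index, and log files on separate drives, each backed by its
# own LUN/datastore. Server name, drive letters, and sizes are placeholders.
import pyodbc

DDL = r"""
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data, FILENAME = 'D:\SQLData\Sales_data.mdf', SIZE = 50GB),
FILEGROUP IndexFG
    (NAME = Sales_idx,  FILENAME = 'E:\SQLIndex\Sales_idx.ndf', SIZE = 20GB)
LOG ON
    (NAME = Sales_log,  FILENAME = 'L:\SQLLog\Sales_log.ldf',   SIZE = 10GB);
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01;Trusted_Connection=yes",
    autocommit=True)   # CREATE DATABASE cannot run inside a transaction
conn.execute(DDL)
conn.close()

Whether D:, E:, and L: map to dedicated spindles, a shared array, or an SSD tier is exactly the design decision discussed above; the file layout itself stays the same.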

Some other storage issues to think about are whether you’re going to use Fibre Channel, iSCSI, or NFS. It used to be that if you wanted low latency and high bandwidth, Fibre Channel was the best solution to pursue. Today, with the introduction of 10 Gigabit Ethernet, network-based storage protocols like iSCSI and NFS can be used without sacrificing bandwidth or performance. NFS has the further advantage of being easier to configure and manage.
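As a small example of that simplicity, the sketch below mounts an NFS export as a datastore on an ESXi host through pyVmomi; there is no zoning or LUN masking involved. The vCenter, host, filer, and export path are all placeholders.

# Sketch: attaching an NFS export as a datastore on an ESXi host (pyVmomi).
# All names and paths below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.local")
    view.Destroy()

    spec = vim.host.NasVolume.Specification(
        remoteHost="nfs.example.local",    # the NFS filer
        remotePath="/vol/sql_datastore",   # exported path on the filer
        localPath="sql_datastore",         # datastore name as seen by vSphere
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
finally:
    Disconnect(si)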

Many times my decision about whether to go virtual or physical hinges on whether I want an out-of-the-box experience or whether I will have to tweak and modify heavily. If the benefits outweigh the time spent tweaking the configuration, I’m OK with virtualizing. On the other hand, there is nothing wrong with deploying on physical hardware when it absolutely makes sense; you should never have to force it. Don’t forget to do all of your testing and certify the environment before handing it off to the customer. Customer satisfaction should be the goal, and remember that the customer is not always the end user. Make sure that the operations team, application owners, and developers are satisfied with supporting the solution as well.

Antone Heyward / Jul 21, 2011

Antone Heyward is an IT professional with years of experience working with Windows Server, virtualization (VMware, Hyper-V and Citrix XenServer), shared storage environments and datacenter infrastructure.
