
NFS on vSphere is Enterprise Class

In this guest post on the Tintri blog, Chris Wahl explores the evolution of NFS for virtual environments.

I recall sitting in a VMware training course years back where I was taught “NFS is good for hosting datastores that contain ISOs, but doesn’t have the performance to run virtual machines.” And, since I didn’t know that much about the protocol (hey, I’m a Windows guy), I took the information at face value. Fortunately for me, I had the opportunity to work on a large VMware environment that ran hundreds of mixed-server workloads entirely over NFS. Sadly, I’m sure many folks who have been through those same training courses are still of the mindset that NFS is great for ISOs but shouldn’t host any real workloads.

A Decaying Best Practice

As with any best practice, it’s always a good idea to take a look at what has changed since everyone jumped on the bandwagon. We’re now living in a world of much more powerful, multilayer switching, with 10 gigabit being both an affordable and common port speed, carried over fiber optic cabling. That pretty much eliminates any concerns about throughput and latency, since Fibre Channel currently runs at either four or eight gigabit speeds (depending on your SAN fabric) over the same kind of fiber optic cabling.

However, I’ve just described changes that any Ethernet-based storage traffic (such as iSCSI or FCoE) can take advantage of, not NFS specifically. The major difference is that NFS is a protocol that delivers data to an array-defined file system rather than to VMFS. NFS is commonly referred to as a file system like VMFS, which is simply not true (a common misconception, since NFS stands for Network File System). NFS gives the array ultimate flexibility to choose how to organize things under the hood, because the vSphere host does not own the storage the way it does with VMFS. This has always allowed for some huge advantages, such as larger datastores, the ability to shrink a volume, and file-level locking (although VMFS5 and VAAI are helping to bring the feature lists to parity).
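To make that distinction concrete, here’s a minimal sketch of what attaching an NFS export as a datastore looks like through the vSphere API, using the pyVmomi Python bindings. The vCenter address, credentials, array IP, export path, and datastore name are all placeholders for illustration; the point is that the host simply mounts a remote export; the array keeps ownership of the file system, and there is no VMFS formatting step.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection and storage details -- replace for your environment.
VCENTER, USER, PASSWORD = "vcenter.lab.local", "administrator@vsphere.local", "secret"
NFS_TARGET, NFS_EXPORT, DATASTORE = "192.168.10.50", "/tintri/ds01", "nfs-ds01"

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
content = si.RetrieveContent()

# Grab the first ESXi host in the inventory (purely for illustration).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# The host simply attaches to the array's export; the array keeps ownership
# of the underlying file system, so there is no VMFS formatting step.
spec = vim.host.NasVolume.Specification()
spec.remoteHost = NFS_TARGET   # array data IP (or VIP)
spec.remotePath = NFS_EXPORT   # export path defined on the array
spec.localPath = DATASTORE     # datastore name as vSphere will display it
spec.accessMode = "readWrite"
host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```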

The Big Hurdle

One of the challenges with NFS is that it’s harder to design for in a typical VMware environment. Here are a few common stumbling blocks that I see:

  1. Being limited to NFS version 3 means no parallel NFS (and thus no MPIO).
  2. There’s a lot of confusion about how to do load balancing with NFS, and there’s no way to bind uplinks the way you can with iSCSI.
  3. Traditionally, the storage team handled this sort of connectivity design. Now it falls to the network team.

So, here’s my take on these items. I don’t see item no. 1 changing any time soon. As for no. 2, I wrote a series of articles, the “NFS on vSphere” technical deep dive, to help clear up the confusion. As for no. 3, well, we’re living in a converged infrastructure world, so it’s time to polish up some new skills.
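To put item no. 2 in slightly more concrete terms: since NFS v3 uses a single session per datastore, the usual workaround is to spread datastores across multiple array target IPs on different subnets, so that each datastore’s traffic leaves through a different VMkernel port. Below is a rough sketch of that idea, reusing the same pyVmomi call as in the earlier example; the subnet layout, addresses, and export paths are invented for illustration.

```python
from pyVmomi import vim

# Hypothetical layout: two array data IPs on separate subnets, each reached
# through a different VMkernel port, so the host's uplinks share the load
# at the datastore level (NFS v3 itself offers no multipathing).
EXPORTS = [
    ("nfs-ds01", "192.168.10.50", "/vol/ds01"),  # reached via vmk1 on the .10 subnet
    ("nfs-ds02", "192.168.20.50", "/vol/ds02"),  # reached via vmk2 on the .20 subnet
]


def mount_exports(host):
    """Mount each export against its own target IP on the given ESXi host."""
    for name, target_ip, export in EXPORTS:
        spec = vim.host.NasVolume.Specification()
        spec.remoteHost = target_ip
        spec.remotePath = export
        spec.localPath = name          # datastore name as vSphere will display it
        spec.accessMode = "readWrite"
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
```

The balancing here is coarse (per datastore, not per I/O), which is exactly why the design work deserves more thought than its block-storage equivalent.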

Once the design is complete, however, implementation and ongoing maintenance are a snap. There is no zoning, masking, working with disparate fabrics, or any of that nonsense.

NFS on Tintri

Interestingly enough, Tintri has solved these stumbling blocks simply through the way the array’s connectivity is designed. Rather than having a complex set of ports active simultaneously, only one port on the array is active at any given time (there are four ports in total, two per side, for both NIC and switch redundancy). That port uses a virtual IP (VIP) that floats to whichever physical port is active. This means you only need one path to the storage, eliminating any concern over juggling multiple subnets or EtherChannels. I’m a fan of this architecture.

Chris Wahl / Jun 05, 2012

Chris Wahl is a datacenter engineer at Ahead and a virtualization-aholic living in the Chicago area. He has over 13 years of IT experience in enterprise infrastructure design, implementation, and a...
