
Introduction to vSphere Storage

In my last two articles, I covered NFS configuration and troubleshooting (see Connecting vSphere to NFS the Easy Way and vSphere NFS Troubleshooting Basics). Before I delve back into more advanced topics, I’d like to take a step back and make sure all Tintri blog readers are on the same level when it comes to vSphere storage basics. Many admins are thrown into storage and virtualization without time to build a strong foundation of basic knowledge, and that foundation matters.

vSphere Storage Terms You Must Know

I don’t want to get too rudimentary here, but stick with me while I cover some basic terms that we all need to know:

  • Storage virtualization: When using any kind of server or desktop virtualization, virtual machine (VM) hardware is virtualized, and that includes storage. VM storage is virtualized into VM disk files (VMDKs), abstracting the storage from physical storage hardware. However, when people talk about storage virtualization, in many cases, they expect the ability to dynamically move storage that is in use from one SAN to another.
  • VMDK: VMware’s virtual disk file, the container for a VM’s storage. Each VM has at least one, and each can be provisioned in one of three ways (figure 1, below):
    • Thick, lazy zeroed: Space is allocated at creation, but blocks are zeroed only when the VM first writes to them; the most popular type of virtual disk for production VMs.
    • Thick, eager zeroed: Space is allocated and fully zeroed at creation (which can take a long time); used for VMs that will have fault tolerance (FT) enabled.
    • Thin provisioned: Space is not allocated at creation; great for saving space, but it can also get you into trouble. Best used in lab environments.



Figure 1: Provisioning a VMDK in vSphere.
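To make the trade-offs concrete, here is a toy Python model of the semantics above (an illustration only, not VMware code): it tracks when each provisioning mode reserves space on the datastore and when blocks get zeroed.

```python
# Toy model of the three VMDK provisioning modes (illustrative, not VMware code).
# Tracks when datastore space is reserved and when blocks are zeroed.
from dataclasses import dataclass, field


@dataclass
class Vmdk:
    size_gb: int
    mode: str                                  # "thin", "lazy", or "eager"
    reserved_gb: int = 0                       # space claimed on the datastore
    zeroed: set = field(default_factory=set)   # indices of zeroed blocks

    def __post_init__(self):
        if self.mode in ("lazy", "eager"):
            self.reserved_gb = self.size_gb            # thick: space taken up front
        if self.mode == "eager":
            self.zeroed = set(range(self.size_gb))     # eager: zeroed at creation (slow)

    def write(self, block: int):
        if block not in self.zeroed:
            if self.mode == "thin":
                self.reserved_gb += 1                  # thin: space allocated on demand
            self.zeroed.add(block)                     # thin/lazy: zero on first write


thin = Vmdk(40, "thin")
lazy = Vmdk(40, "lazy")
eager = Vmdk(40, "eager")
print(thin.reserved_gb, lazy.reserved_gb, eager.reserved_gb)  # 0 40 40
```

The model shows why eager-zeroed creation is slow (all blocks zeroed immediately), why lazy-zeroed writes pay a small first-write cost, and why thin disks can overcommit a datastore.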


  • Datastore: A disk volume (local disk, SAN LUN, or NFS export) that has been formatted with VMFS or mounted over NFS by an ESXi host (figure 2).



Figure 2: Datastores in vSphere 5.

  • VMFS: Virtual Machine File System — VMware’s clustered file system, used to format block-based datastores.
  • Storage adapters: Virtual or physical adapters used to talk to local or remote storage devices. In vSphere/ESXi, storage adapters are labeled vmhbaXX. For example, on my server, vmhba32 is the iSCSI software adapter that talks to the iSCSI SAN (see figure 3, below). There is a longer name for a storage path, called the “runtime name,” that looks like this: vmhba1:1:0. It uses the format vmhbaC:T:L, where:
    • C = Channel (the path and FC HBA used)
    • T = Target (which storage processor is used on the target)
    • L = LUN ID

Thus, vmhba1:1:0 would be HBA1, storage processor 1, and LUN 0.


Figure 3: Storage adapter configuration in vSphere 5.
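As a quick illustration, a runtime name can be split into its fields programmatically. This is a hypothetical Python helper written for this article, not part of any VMware tooling:

```python
# Split a runtime name of the form vmhbaC:T:L (e.g. "vmhba1:1:0")
# into its adapter/channel, target (storage processor), and LUN fields.
def parse_runtime_name(name: str) -> dict:
    adapter, target, lun = name.split(":")
    return {
        "adapter": adapter,      # e.g. "vmhba1" -- the HBA/channel used
        "target": int(target),   # which storage processor on the array
        "lun": int(lun),         # LUN ID
    }


print(parse_runtime_name("vmhba1:1:0"))
# {'adapter': 'vmhba1', 'target': 1, 'lun': 0}
```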

  • Storage paths: In many cases, ESXi hosts have multiple paths to a disk LUN. VMware offers three ways to help ESXi determine which path to take when there are multiple choices (see figure 4, below). These three path selection policies are:
    • Fixed: Uses the first working path discovered at boot time. If that path becomes unavailable, a new path is selected, but when the original path comes back, ESXi returns to it; the default policy for LUNs from an active/active array.
    • Most recently used (MRU): Uses the first working path discovered at boot time. If that path becomes unavailable, a new path is selected and used until it, in turn, becomes unavailable; the default policy for LUNs from an active/passive array.
    • Round robin: Automatically distributes load across all available paths.

See VMware KB article 1011340, Multipathing Policies, for more information.


Figure 4: Setting up storage paths in vSphere 5.
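The three policies above can be sketched in a few lines of illustrative Python. This is a simplified model of the selection behavior, not actual ESXi logic:

```python
# Minimal sketch (illustrative, not ESXi source) of the three path
# selection policies choosing a path, given a set of currently failed paths.
from itertools import cycle


def fixed(paths, failed, preferred):
    # Fixed: use the preferred path whenever it is up; otherwise fail over,
    # and fall back to the preferred path as soon as it recovers.
    if preferred not in failed:
        return preferred
    return next(p for p in paths if p not in failed)


class MostRecentlyUsed:
    # MRU: stick with the current path until it fails; never fail back.
    def __init__(self, paths):
        self.paths = paths
        self.current = paths[0]

    def select(self, failed):
        if self.current in failed:
            self.current = next(p for p in self.paths if p not in failed)
        return self.current


class RoundRobin:
    # Round robin: rotate I/O across all available paths.
    def __init__(self, paths):
        self.rotation = cycle(paths)

    def select(self, failed):
        while True:
            p = next(self.rotation)
            if p not in failed:
                return p


paths = ["vmhba1:0:0", "vmhba2:0:0"]
mru = MostRecentlyUsed(paths)
print(mru.select(failed={"vmhba1:0:0"}))  # fails over to vmhba2:0:0
print(mru.select(failed=set()))           # stays on vmhba2:0:0 (no failback)
print(fixed(paths, failed=set(), preferred="vmhba1:0:0"))  # back to preferred
```

The contrast between `fixed` and `MostRecentlyUsed` captures the key operational difference: Fixed fails back to the original path when it recovers, while MRU stays put.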

vSphere Storage Frequently Asked Questions

Beyond the terms you must know, there are a few common questions people ask when implementing vSphere.

Why do I need shared storage?

Your VMs will be stored in a single place — the shared storage — where all ESXi hosts have access and visibility to them. Since all hosts can access them, these VMs can be vMotion’ed from one host to another with Distributed Resource Scheduler (DRS) or restarted when a host fails with vSphere High Availability (HA). Shared storage is a required component of a vSphere virtual infrastructure where you want to take advantage of advanced features.

Should I use block-based or file-based storage?

Examples of SAN block-based storage are Fibre Channel (FC) and iSCSI; the common example of NAS file-based storage is NFS. All of these options are supported by VMware and work well with the advanced features you’ll want in your virtual infrastructure. Honestly, there is no right or wrong answer to the block vs. file question. NAS tends to be a bit easier to manage, but if designed well and configured correctly, either option provides excellent performance.

How does the popularity and affordability of SSD change my storage decision?

Solid-state disks (SSDs) provide reliability and consistent performance as I/O demand increases. Because of their unique benefits over traditional spinning disk, SSDs are the best high-performance storage option, and they have become affordable. vSphere can now even be configured to use internal server SSD as secondary (additional) memory for the host, allowing it to run more VMs.