Local or Shared Storage – that is the question


Previously I’ve talked about how using local storage can help reduce the costs of desktop virtualization.  Paul Wilson tested this type of environment to determine if it is possible or to see if I was talking crazy. The result: it is possible and I’m a little crazy.  So we have a new design decision, which way will you go?


Before making the decision, you have to determine if local storage is good enough for your environment or if you need shared storage.

Local Storage

Local storage is the storage located within the server running a hypervisor (XenServer, Hyper-V, vSphere).  When using local drives, the following are important considerations to remember:

  1. Live Migration: the ability to move a running VM from server to server is not available.  Because other servers cannot see or access local storage, live migration is not an option.  However, many organizations have decided not to use this functionality for virtual desktops anyway.
  2. Server Balancing:  Server balancing allows the pool of hypervisor servers to be rebalanced so their loads remain similar throughout the day.  Active server balancing requires live migration functionality.  However, balancing server loads at virtual machine startup is still an option.
  3. Server Costs:   In order to use local storage, the server must have additional, fast hard drives and fast array controllers. This increases the cost for each server, although these costs are usually lower than an enterprise shared storage solution.

Shared Storage

Shared storage is located on a centralized storage system (SAN or NAS type devices). When using shared storage, the following are important considerations to remember:

  1. SAN Costs: Shared storage solutions are expensive infrastructure components.
  2. Server Costs: Connecting a server to a shared storage infrastructure requires some type of hardware connection: network cards, fibre-channel cards, or some other connection method. This adds cost not only to the server, but also to the underlying infrastructure needed to support the increased traffic (network switches).
  3. Expertise: Utilizing an enterprise storage solution requires expertise.  If a storage team is not already established within the organization, one must be built in order to support this important infrastructure component and your larger desktop virtualization goals.

So which option is right for you? You tell me.  If you don’t need live migration, and don’t already have an enterprise storage solution, you might be better off going with the local storage option.  However, if you have the capacity, technical expertise, and experience with enterprise storage, then use what you’ve got and go down the shared storage path.

Daniel – Lead Architect


8 thoughts on “Local or Shared Storage – that is the question”

  1. My biggest question that no one ever answers is where I should be assigning the storage with the most power. Is it the Provisioning Server, or the LUNs attached to the XenServer pool that hold the hard drives used for caching for the VMs?


  2. It is the drives hosting the write cache. On the PVS side, you are only doing reads of the vDisks. If you use the Windows cache appropriately, you can significantly reduce the impact on the disk because the blocks are read from cache (RAM). See this post for more info: https://virtualfeller.com/2010/07/19/not-spending-your-cache-wisely/

    The write cache side is a different matter. We are doing a lot of writes. We know that the harder users work on their virtual desktops, the greater the impact on the storage.


    1. We currently have three 4-disk RAID 10 arrays of 300GB 15k rpm drives assigned to our XenServer pool, providing 1.5GB of write cache for the VMs. I believe this is our HUGE bottleneck that is causing tons of retries. It’s hosting about 650 machines.

      Would it be best to keep separate FC LUNs like we have, or would it be better to create 1 or 2 large LUNs spread across multiple DAEs?

      Thanks, your site has been extremely helpful in helping me try to determine where the bottleneck is coming in.


      1. Based on that info, it appears you are correct that the 4-disk RAID 10 volume for the write cache is your bottleneck. You have enough space and enough throughput to the storage; you just don’t have the IOPS. If you have 4 disks in RAID 10, you have 600 total raw IOPS. Because desktop virtualization is write intensive, you get an effective IOPS of about 360. If you are only using 4 spindles (disks) for 650 active virtual desktops, you can only give each VM about 1/2 of an IOPS before you hit a bottleneck. Most users will require about 8-12 IOPS, which means you need a lot more spindles.
        This blog might help you out with storage as a lot of things come into play: https://virtualfeller.com/2010/08/02/improper-storage-design-for-virtual-desktops-is-a-killer/
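        The arithmetic above can be sketched in a few lines. This is a rough sizing model, not the author’s exact formula: it assumes a 15k rpm disk delivers about 150 IOPS and that the write-heavy workload is roughly two-thirds writes, with RAID 10’s write penalty of 2 backend operations per write.

        ```python
        # Rough write-cache IOPS sizing sketch. Assumptions (not from the post):
        # ~150 IOPS per 15k rpm spindle, ~2/3 writes, RAID 10 write penalty of 2.

        def effective_iops(disks, iops_per_disk=150, write_ratio=2/3, write_penalty=2):
            raw = disks * iops_per_disk
            # Each write costs `write_penalty` backend operations on RAID 10,
            # so the effective front-end IOPS is raw divided by the blended cost.
            return raw / ((1 - write_ratio) + write_ratio * write_penalty)

        eff = effective_iops(4)        # ~360 effective IOPS for a 4-disk array
        per_vm = eff / 650             # ~0.55 IOPS available per desktop
        print(round(eff), round(per_vm, 2))
        ```

        With most users needing 8-12 IOPS each, working the model backwards shows why many more spindles are required for 650 desktops.
        
        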


  3. Just wanted to make sure we are both clear: currently 360 IOPS is hosting 190 or so vDisks per LUN. There are 3 separate 4-disk RAID 10s assigned to the vDisks, so currently I have 1080 IOPS for the 650 active desktops.

