Does Cache Trump IOPS

With desktop virtualization, we hear more and more about how important IOPS are to supporting the virtual desktop. I’ve written a few blog posts about it and plan to write a few more. What I wanted to talk about was an interesting discussion I recently had with 3 Senior Architects within Citrix Consulting (Doug Demskis, Dan Allen and Nick Rintalan). These are 3 smart guys who I talk to fairly regularly, and the discussions get quite interesting.

This particular discussion was no different. We were talking about the importance of IOPS, RAID configurations, and spindle speeds with regard to an enterprise’s SAN infrastructure. (Deciding whether you are going to use a SAN for your virtual desktops is a completely different discussion that I’ve had before and Brian Madden had more recently.) But for the sake of this article, let’s say you’ve decided, “Yes, I will use my SAN.” If your organization already has an enterprise SAN solution, chances are that the solution has controllers with plenty of cache. Does this make the IOPS discussion a moot point? If we simply use an IOPS calculator (at least the ones I’ve seen) and do not take into account the caching capabilities of the SAN controllers, won’t we over-provision our virtual desktop environment and end up wasting money and resources?

Many of us who are familiar with XenDesktop know that changes made to the golden disk image, when delivered via Provisioning Services, are stored in a PVS write cache.  From numerous tests and implementations, we know that 80-90% of the IO activity from a virtual desktop will be writes.  If we configure the SAN controllers to be 75% write (assuming we have battery-backed write cache controllers), we allow the controllers to allocate more cache for write operations, which helps offload the write IO from the disks and raises the number of effective IOPS the storage infrastructure can support. Think of the controller’s caching capabilities as a large buffer for our disks.  If our disks can only support so many write operations, the controller cache stores the writes until the disks are able to commit them to the platters. This cache allows the infrastructure to keep accepting new operations even though the previous operations have not been written to disk yet; they are all buffered. Just remember, we aren’t reducing the total number of IO operations, we are just buffering them with the controller cache.
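To make the buffering idea concrete, here is a back-of-the-envelope sketch (all numbers are illustrative, and this is not any vendor’s actual de-staging algorithm): the write cache absorbs the difference between the incoming burst rate and the sustained rate the spindles can de-stage.

```python
def cache_fill_mb(burst_iops, io_size_kb, disk_iops, seconds, cache_mb):
    """Return how full the write cache gets (MB) after a burst.

    burst_iops -- incoming write IOPS during the storm
    disk_iops  -- sustained write IOPS the spindles can de-stage
    The fill rate is simply (inflow - outflow), capped at cache size.
    """
    fill = 0.0
    per_sec_mb = (burst_iops - disk_iops) * io_size_kb / 1024.0
    for _ in range(seconds):
        fill = min(cache_mb, max(0.0, fill + per_sec_mb))
    return fill

# Example: a 2,000-IOPS write burst against spindles that sustain only
# 500 write IOPS, 4 KB I/Os, 4 GB of write cache, 60-second storm.
print(cache_fill_mb(2000, 4, 500, 60, 4096))  # ~351.6 MB -- easily absorbed
```

With these assumed numbers the cache never gets anywhere near full, which is exactly the “large buffer” effect described above; the caveat is that if the burst outlasts the cache, performance falls back to raw disk speed.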

Think about it another way. If we encounter a storm where each user requires 10MB of write operations and the storage controller has a 4GB cache, that one controller can absorb 400+ simultaneous users for this particular storm, and we haven’t even talked about the disk IOPS yet!  In this scenario, wouldn’t a single disk spindle be able to support this particular storm, because the controller is buffering everything? And what’s also interesting is that those write operations are being flushed to disk continuously, so the number of users the controller can support would be much, much higher.
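The arithmetic behind that estimate, including the extra headroom from continuous de-staging, can be sketched as follows (the drain rate and storm length are assumptions for illustration):

```python
def storm_capacity(cache_gb, per_user_mb, drain_mb_s=0.0, storm_s=0):
    """Users a controller can absorb during a storm: cache capacity,
    plus whatever the disks de-stage while the storm is in progress."""
    cache_mb = cache_gb * 1024
    return (cache_mb + drain_mb_s * storm_s) / per_user_mb

# Cache alone: 4 GB cache, 10 MB of writes per user during the storm.
print(storm_capacity(4, 10))          # 409.6 -- the "400+ users" above

# With disks flushing an assumed 20 MB/s over a 60-second storm:
print(storm_capacity(4, 10, 20, 60))  # 529.6 -- flushing raises capacity
```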

So if we have cache on our controllers, which most SAN controllers I’ve seen lately have, are we over-designing the storage infrastructure by focusing only on IOPS?  (This assumes you are using a SAN and not local disks on your hypervisor, which I talk about a lot as well.)  Just remember that those write operations must eventually get written to disk. So if we know what our controller cache is capable of, and we know the amount of storage required for a particular storm (logon, boot, logoff, etc.), can’t we support more users (and I mean a lot more users) on the SAN?

What do you think?

Daniel – Lead Architect

10 thoughts on “Does Cache Trump IOPS”

  1. Daniel,

    I’m wondering what storage controllers on the market today cache write IOPS and how they do it?





    1. Yes, couldn’t agree more.

      As far as I understand, response time for the end machine depends entirely on the ACK sent from the cache, i.e. data is first written to cache, so it is very important to size the cache properly. Sizing the disks is also important, since the disks have to be sized for the total IOPS that will be generated: writes in the cache have to be written back to the disks, or else the cache will fill up, which in turn will affect response time.


  2. As a storage designer for NetApp (or at least someone who was one until a month or two back), I agree that VDI sizing for storage is primarily influenced by IOPS, though the ability to absorb random write traffic depends not only on the amount of cache, but also on the way in which data is committed from cache to disk.

    I recently covered this in fairly great detail, including how NetApp controllers cache writes and commit them to disk with an efficiency that is difficult to match, as a response to one of your other blog posts, where I wrote (amongst other things):

    “The read and write cache in traditional modular arrays are too small to make any significant difference to the read and write efficiencies of the underlying RAID configuration in VDI deployments”.


    “I think Ruben did outstanding work, and something which I’ve learned a lot from, but when it comes to sizing NetApp storage by I/O I think he was working with some inaccurate or outdated data that led him to some erroneous conclusions which I hope I’ve been able to clarify in this blog.”

    I’d appreciate any critical feedback you might have.

    To save some time you might want to jump straight to

    John Martin


    1. John,

      That is a great series of posts. I do have some questions regarding the caching. Let’s assume that we need roughly 10 IOPS per user during normal working time, and about 26 IOPS during a boot storm. Wouldn’t the cache help us level out the massive boot storm spike? I’m not talking about a controller that only has 1GB of cache; I’m looking at larger ones, like 4, 8, or 16GB cache controllers.

      I know that with Provisioning Services as part of XenDesktop we need to stream 100MB of data across the wire to boot up a WinXP desktop. A portion of that 100MB will be stored in a change file (something we call the write cache); let’s say it is 40MB. If I have a SAN controller with an 8GB cache set for writes, would that controller be able to cache a large portion of the boot storm SAN traffic? (40MB per user across an 8GB cache means it would help support 200 users.) Those 200 users’ data would constantly be flushed to disk, so the cache could support additional users.
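      A quick check of that arithmetic (the 40MB-per-desktop change-file figure is an estimate, not a measurement):

```python
# 40 MB of PVS write-cache traffic per desktop, 8 GB of controller
# write cache: how many desktops fit before the cache is full?
cache_mb = 8 * 1024
per_user_mb = 40
print(cache_mb // per_user_mb)  # 204 -- roughly the 200 users above
```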

      The way I see it, a SAN cache could help substantially during a storm (which is a short-term spike in disk activity), but you would still need to design your storage based on your average IOPS parameters.


      1. The answer to your question depends on so many factors, and I’ve been working for the last week or two on how large cache models can be effective for peak workloads.

        We..need ..26 IOPS for .. boot storms. Wouldn’t ..cache help .. [with] 4, 8, or 16GB Cache

        It will help some, but it depends on whether you’re using some kind of single instancing technology, and how hot the datasets are. If you have an old-school virtualised environment where you just P2V a couple of thousand desktops and then try to boot them (I’ve seen this), or worse still run a virus check over them, then the ratio of read cache to working set may well be so small that it won’t make any appreciable difference. If you use some form of single instancing, then in some cases 80%+ of the reads will come out of cache, reducing your 26 IOPS to about 6 (from a disk perspective).
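        The hit-rate arithmetic works out as follows (treating the boot-storm IO as all reads is an assumption; boot traffic is read-heavy but not purely reads):

```python
# Per-desktop boot IOPS that actually reach the spindles once the
# read cache absorbs most of the single-instanced reads.
boot_iops = 26
read_fraction = 1.0   # assumption: boot storm treated as all reads
cache_hit = 0.80      # "80%+ of the reads will come out of cache"
disk_iops = boot_iops * (1 - read_fraction * cache_hit)
print(round(disk_iops, 1))  # 5.2 -- i.e. "about 6" per desktop at the disks
```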

        As far as the XenDesktop Provisioning Server case goes (I’m a little vague on XenDesktop so I could be wrong here), the answer depends on how much of that data is accessed before it’s flushed to disk by other incoming write requests, and how many workstations you’re trying to boot at the same time. 100MB isn’t much, but if you have to write that for each of 1,000 workstations it amounts to about 97GB, which is a lot more write cache than you’re likely to find in anything short of a top-of-the-range frame-based array (which would make the entire exercise unworkable), or possibly some of the SSD-based megacache architectures which we’re starting to see in the marketplace. I should have this covered in the next week or two on my blog.
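        The aggregate figure above is straightforward to verify:

```python
# 100 MB streamed per workstation, 1,000 workstations booting at once.
total_gb = 100 * 1000 / 1024
print(round(total_gb, 1))  # 97.7 -- the "about 97GB" aggregate above
```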


  3. My question then is, if we are talking about a corporate SAN that is already in place, how would such changes in the caching policy affect other servers reading from and writing to the same SAN?
    It seems a large cache with a high write percentage allocated certainly does the trick for VDI and helps reduce the raw IOPS required, but again, this also assumes a new SAN, or an exclusive SAN for VDI, is being pushed, as such changes could heavily impact other servers that require a much higher read than write percentage allocated. Wouldn’t this be the case here?



    1. Great point. Most SANs would already be in place to support databases and other backend systems, which I would imagine to be read-intensive, or at least 50/50. So this all goes back to the question of whether you should use a SAN for VDI at all. And if you do go down the SAN route, will you have a dedicated SAN for the VDI and another SAN for everything else?

      This is another reason why I’m a big fan of using local disks 🙂 No politics involved.


      1. The desire to avoid politics and sharing resources is understandable, but goes to the question of how do you manage these issues in any kind of shared infrastructure solution, including VDI.

        This is why most IT shops are typically made up of islands of infrastructure, with various teams vigorously defending their turf. IMHO, this is not only counterproductive, causing more politics in turn, but also undermines the important movement within datacenters to eliminate waste and move from project-based provisioning of IT resources to a common set of shared services.

        These islands of IT make it difficult to achieve improvements in operational expenditure, or to reallocate inefficiently utilised resources from one project to another.

        There are ways of dealing with your concerns, but like any infrastructure, if you can estimate your future workloads with reasonable accuracy, then working out how much you need to spend on expanding or upgrading your SAN isn’t too hard to do; in fact, this has to be done every time you add a new workload to shared infrastructure, be it compute, network or storage.


  4. This is why we recommend and use DataCore SANmelody with many of our customers. This product places the SAN logic on an industry-standard Windows Server OS and hardware, and utilizes the RAM in the server as SAN cache. With many machines today shipping with 24, 32, or 64GB of RAM, the amount of cache available is substantial. As a result, with PVS you can keep entire vDisk images cached in RAM!

