Provisioning Services Read Cache


As you can see, I’ve spoken numerous times about the Provisioning Services RAM Cache with Disk Overflow capability:

  1. Windows 10 IOPS
  2. Video Proof
  3. Reducing IOPS to 1
  4. Read/Write Ratios
  5. XenDesktop 7.5 IOPS
  6. Digging deeper into IOPS
  7. ESG Spotlight on IOPS

So yes, I like talking about this topic.  But now, I’m going to talk about something very slightly different… Cache 🙂

While I was working on capturing some images for my Citrix Synergy 2016 Tech Update session, I saw something interesting.

I started my lab, brought up my Provisioning Services server and launched a Windows 10 virtual desktop.  According to the Provisioning Services agent on the virtual desktop, it took almost 60 seconds to boot (just so you know, I’m working with 7200RPM spinning disks in my meager home lab, so 60 seconds is expected).

First time boot

I then started a second Windows 10 VM using the same Provisioning Services image.  Now look at the Provisioning Services agent.

Cached boot

Instead of an almost 60-second boot time on the first VM, the second VM booted in 14 seconds! WHAT?

Look even closer at the two images.  Look at the disk throughput: 4,400 KB/sec vs. 18,000 KB/sec.
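A quick back-of-the-envelope check with those rounded numbers shows both boots read roughly the same amount of data; the second one just got it about four times faster:

```python
# Rough math using the rounded numbers from the two screenshots above.
first_boot_secs, first_rate_kb_s = 60, 4_400      # first VM: served from spinning disk
second_boot_secs, second_rate_kb_s = 14, 18_000   # second VM: served far faster

first_total_mb = first_boot_secs * first_rate_kb_s / 1024     # ~258 MB
second_total_mb = second_boot_secs * second_rate_kb_s / 1024  # ~246 MB

print(f"First boot read  ~{first_total_mb:.0f} MB")
print(f"Second boot read ~{second_total_mb:.0f} MB")
print(f"Throughput ratio ~{second_rate_kb_s / first_rate_kb_s:.1f}x")  # ~4.1x
```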

Sorry, but my cheap disks are not that fast. So what gives?

When you boot a Provisioning Services-based VM, the VM requests the disk image from the Provisioning Services server.  The Provisioning Services server reads portions of the disk image and streams them across the network.  As the Provisioning Services server reads those portions of the disk image, Windows automatically stores that data in RAM (the system cache), if enough RAM is available.

So when we boot subsequent target devices that use the same disk image, we get a massive boost in performance as Provisioning Services uses the information in RAM instead of reaching out to slower storage.
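You can see the same Windows system-cache behavior outside of PVS with a minimal sketch like the one below. It assumes a large test file at a hypothetical path: the first read comes mostly from disk, while the repeat read is served largely from RAM.

```python
import time

# Hypothetical path to a large file (e.g., a multi-GB ISO) -- adjust for your system.
TEST_FILE = r"C:\lab\big_test_file.bin"
CHUNK = 4 * 1024 * 1024  # read in 4 MB chunks

def timed_read(path):
    """Read the whole file and return (seconds, MB read)."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return time.perf_counter() - start, total / (1024 * 1024)

cold_secs, mb = timed_read(TEST_FILE)   # first pass: mostly from disk
warm_secs, _ = timed_read(TEST_FILE)    # second pass: largely from the Windows system cache

print(f"Cold read: {mb:.0f} MB in {cold_secs:.1f}s ({mb / cold_secs:.0f} MB/s)")
print(f"Warm read: {mb:.0f} MB in {warm_secs:.1f}s ({mb / warm_secs:.0f} MB/s)")
```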

As I said before, Cache is Good!

Daniel (Follow on Twitter @djfeller)
XenApp Best Practices
XenApp Videos


4 thoughts on “Provisioning Services Read Cache”

  1. For sure! Need to get the RAM cache feature implemented in MCS, as well. 🙂 Using SDS with lots of RAM for read cache seems to achieve similar improvements; we get consistent 28 +/- 2 sec login times for Windows 8.1 VMs.


    1. We should leave a note for the uninitiated reader that this is the effect of PVS using the Standby RAM on the Windows server to read the vDisk directly from RAM instead of the HDD, not the cache on the local VM. Though that is awesome in general, you’d get the same effect by running something like PrimoCache on your local VMs via MCS. I’m seeing a lot of industry confusion out there 🙂
      So unfortunately, there is no like-for-like technology for MCS, since PVS is streaming the vDisk to the target devices, and we’re taking advantage of the speed of RAM to cache reads at the PVS server, then caching disk deltas at the local VM. Win-Win… Win!
      But to the point: how much RAM is needed (popular question) to actually make PVS cache? There are formulas; generally plan on about 4 GB per active vDisk and you’re at a good starting point (a quick sizing sketch follows the comments below)…
      This is why most of us consultants always say don’t make assumptions about how much RAM to give PVS, though.
      TEST, TEST, then TEST again!
      You can use RAMMap from Sysinternals to validate how much of the vDisk is being read from cache and how much from the HDD, and adjust RAM to the point where you aren’t seeing any reads from disk after the first read.
      Longer term, remember to use Performance Monitor to check that your PVS server’s cache hit rate stays above 80% for most reads; if not… time to increase RAM!

      And, now you’re a PVS expert!
      Okay, maybe just a little closer 🙂


  2. Just wondering, why did the first boot need 1.17 GB of reads while the boot on the second machine needed only 750 MB? The screenshot was taken on the target both times, right? Why would the second OS come up with 35% fewer reads? Any explanation?


    1. Just focus on the top section (Boot Statistics), which only covers boot time. The bottom portion covers the entire time the VM has been running. I was doing things on the first VM, which is why the read total for the entire session is higher.

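Following up on the sizing guidance in the comment thread above, here is a minimal, hypothetical sketch of that “about 4 GB per active vDisk” starting point. The base OS RAM and buffer values are illustrative assumptions, not official guidance; validate with RAMMap and Performance Monitor as described above.

```python
def pvs_ram_starting_point(active_vdisks, per_vdisk_gb=4, os_base_gb=2, buffer_pct=0.15):
    """Rough starting-point estimate for PVS server RAM (GB).

    Uses the 'about 4 GB per active vDisk' rule of thumb from the comments;
    os_base_gb and buffer_pct are illustrative assumptions -- always validate
    with RAMMap / Performance Monitor under real load.
    """
    cache_gb = active_vdisks * per_vdisk_gb
    return (os_base_gb + cache_gb) * (1 + buffer_pct)

# Example: a PVS server streaming 3 active vDisks.
print(f"Suggested starting RAM: ~{pvs_ram_starting_point(3):.0f} GB")  # ~16 GB
```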
