PVS vs. MCS – Part 3: Storage Optimization

This is part of a series comparing Provisioning Services and Machine Creation Services.


For years, storage optimization has been one of the major strengths of Provisioning Services. With PVS, we can do the following:

  1. Optimized temporary storage allocation: PVS allows us to store the read-only master image on local or shared storage. We can also decide where to place the temporary write cache disk: on the PVS server’s local storage, the hypervisor’s local storage, the hypervisor’s shared storage, within the virtual machine’s RAM, or a combination of RAM and hypervisor local storage.
  2. Read IOPS Optimization: By automatically utilizing the Windows Server 2012 R2 system cache, we can drastically reduce read IOPS against the master image. This has been shown to significantly decrease VM boot times.
  3. Write IOPS Optimization: By utilizing a combination of RAM and local storage for the write cache, PVS can significantly reduce write IOPS going to the hypervisor’s storage. This helps reduce costs as well as improve the user experience.

The write IOPS optimization is powerful for any deployment because of the impact it has on the user experience while also helping to reduce the cost of VDI storage.

But where does this leave Machine Creation Services?

If you believe Machine Creation Services is severely lacking in these capabilities, the latest release might surprise you.

Storage Location

Historically, Machine Creation Services utilized a differencing disk to store the writes. One limitation with the differencing disk approach was that the disk must reside on the same storage as the master image.

If you used shared storage to host your master images, you were also required to place all of your writes on the shared storage. This can drive up your costs.

With the 7.9 release of XenApp and XenDesktop, the differencing disk is transformed into a write-back cache disk. This transformation allows the writes to be separated from the master image storage location.

When shared storage is used for the master image, the temporary storage (writes) can then be stored on the hypervisor local storage. This is configured as part of the host connection configuration within Citrix Studio.

Read IOPS Optimization

Even though the writes are stored locally on the hypervisor, read IOPS still go to shared storage, as every virtual machine on every hypervisor must read from the same set of master images.

Remember that PVS utilizes a RAM-based read cache to reduce Read IOPS from storage; when using XenServer, Machine Creation Services implements similar functionality.

A portion of XenServer’s RAM is used to locally cache portions of the master OS disk.
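To make the mechanism concrete, here is a minimal sketch of how a RAM-based read cache in front of a shared master image behaves. This is an illustrative model only, not Citrix's or XenServer's actual implementation; the block size, capacity, and LRU eviction policy are assumptions for the example.

```python
from collections import OrderedDict

BLOCK_SIZE = 2 * 1024 * 1024  # illustrative block granularity

class RamReadCache:
    """Toy model: cache master-image blocks in RAM, evicting LRU."""

    def __init__(self, backing_read, capacity_blocks):
        self.backing_read = backing_read   # fetches a block from shared storage
        self.capacity = capacity_blocks
        self.cache = OrderedDict()         # block_no -> data, LRU order
        self.hits = 0                      # reads served from RAM
        self.misses = 0                    # reads that hit shared storage

    def read_block(self, block_no):
        if block_no in self.cache:
            self.cache.move_to_end(block_no)   # mark as recently used
            self.hits += 1
            return self.cache[block_no]
        self.misses += 1
        data = self.backing_read(block_no)     # a real read IOPS on shared storage
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used block
        return data
```

Every hit is a read IOPS that never leaves the hypervisor, which is why boot storms (many VMs reading the same OS blocks) benefit so dramatically.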

Write IOPS Optimization

I believe the write IOPS optimization is the biggest enhancement to Machine Creation Services, given the impact the similar technology had on Provisioning Services with respect to storage cost and user experience.

With Machine Creation Services, each virtual machine utilizes a portion of its non-paged pool memory for the Machine Creation Services RAM cache.

As the virtual machine begins writing data to disk, those operations are stored within the RAM cache. Eventually, the RAM cache fills, and the oldest cached data is written to the write-back cache disk in 2 MB blocks.

This process is similar to how Provisioning Services handles the RAM-based write cache with overflow to disk, which significantly reduces write IOPS.
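The overflow behavior described above can be sketched as follows. Again, this is a hedged model, not the actual MCS code: the 2 MB granularity comes from the text, but the oldest-first eviction structure and counter names are illustrative assumptions.

```python
from collections import OrderedDict

BLOCK = 2 * 1024 * 1024  # overflow granularity: 2 MB blocks, per the text

class WriteCache:
    """Toy model: absorb writes in RAM, overflow oldest blocks to disk."""

    def __init__(self, ram_limit_blocks):
        self.ram = OrderedDict()   # block_no -> data, oldest entry first
        self.limit = ram_limit_blocks
        self.disk = {}             # stands in for the write-back cache disk
        self.disk_writes = 0       # write IOPS that actually reach storage

    def write_block(self, block_no, data):
        if block_no in self.ram:
            self.ram.move_to_end(block_no)   # rewrite: refresh its age
        self.ram[block_no] = data            # absorbed by RAM, zero disk IOPS
        while len(self.ram) > self.limit:
            old_no, old_data = self.ram.popitem(last=False)  # oldest block
            self.disk[old_no] = old_data     # overflow to write-back disk
            self.disk_writes += 1            # one 2 MB sequential-ish write
```

Note the key effect: blocks that are overwritten while still in RAM never generate a disk write at all, and what does overflow goes out in large blocks rather than small random writes, which is why the IOPS reduction is so dramatic.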

XenApp and XenDesktop 7.9 enhances Machine Creation Services with:

  1. Optimized temporary storage allocation
  2. RAM-based Read IOPS optimization
  3. RAM-Based Write IOPS optimization

So in the comparison of PVS and MCS, where does that leave us now?

Again, things are fairly even.

Daniel (Follow on Twitter @djfeller)
XenApp Advanced Concept Guide
XenApp Best Practices
XenApp Videos

7 thoughts on “PVS vs. MCS – Part 3: Storage Optimization”

  1. Hi Daniel,

    Is the read based caching in RAM only supported on XenServer currently? If so, is this support going to be extended to ESXi and Hyper-V platforms?



    1. Correct, it is only for XenServer as it is at the hypervisor level while the write cache, being inside the VM, is for any hypervisor. For the RAM-based read cache, it would be up to the hypervisor vendor to include similar functionality that XenApp/XenDesktop could leverage.


    1. You are correct. Hyper-V does include an alternative in CSV Cache. I would expect it to perform similarly to XenServer’s RAM-based read cache, but I have not seen the test data yet.


  2. Dan, I set up some test pooled machines for MCS I/O. It seems that it is very CPU intensive. Running W7-32 with 4 vCPU achieved 60,000 IOPS with Iometer, while 2 vCPU achieved only 1,600 IOPS or so (PS – this is an SSD array). Does this seem correct to you? Also, using the same perfmon counters as in the video, I could not get them down to 0 like you all did. Can you advise on the specs of the VMs in the video?


      1. 256 MB and 10 GB, just the defaults when I create the new catalog. I did the same test with a fresh Win 10 LTSB ISO with just the VDA installed, and got the same results. Support ticket?

