This is part of a series comparing Provisioning Services and Machine Creation Services
- Part 1: Resource Delivery Options
- Part 2: Scalability
- Part 3: Storage Optimization
- Part 4: Deployment
- Part 5: On-going Maintenance
- Part 6: Architecture
- Part 7: Summary
For years, storage optimization has been one of the major strengths of Provisioning Services. With PVS, we can do the following:
- Optimized temporary storage allocation: PVS lets us store the read-only master image on local or shared storage. We can also decide where to place the temporary write cache disk: on the PVS server’s local storage, the hypervisor’s local storage, the hypervisor’s shared storage, within the virtual machine’s RAM, or a combination of RAM and hypervisor local storage.
- Read IOPS Optimization: By automatically utilizing the Windows Server 2012 R2 system cache, we can significantly reduce read IOPS from the master image. This has been shown to drastically decrease VM boot time.
- Write IOPS Optimization: By utilizing a combination of RAM and local storage for the write cache, PVS can significantly reduce write IOPS going to the hypervisor’s storage. This helps reduce costs as well as improve the user experience.
The write IOPS optimizations are powerful for any deployment because of the impact they have on the user experience while helping to reduce the cost of VDI storage.
But where does this leave Machine Creation Services?
If you believe Machine Creation Services is severely lacking in these capabilities, the latest release might surprise you.
Storage Location
Historically, Machine Creation Services utilized a differencing disk to store the writes. One limitation with the differencing disk approach was that the disk must reside on the same storage as the master image.
If you used shared storage to host your master images, you were also required to place all of your writes on the shared storage. This can drive up your costs.
With the 7.9 release of XenApp and XenDesktop, the differencing disk is transformed into a write-back cache disk. This transformation allows the writes to be separated from the master image storage location.
When shared storage is used for the master image, the temporary storage (writes) can then be stored on the hypervisor local storage. This is configured as part of the host connection configuration within Citrix Studio.
Even though the writes are stored locally on the hypervisor, shared storage is still used for Read IOPS as each virtual machine on each hypervisor must read from the same set of images on shared storage.
Remember that PVS utilizes a RAM-based read cache to reduce Read IOPS from storage; when using XenServer, Machine Creation Services implements similar functionality.
A portion of XenServer RAM is used to locally cache portions of the master OS disk.
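The behavior described above can be sketched as a toy model (my own illustration, not Citrix code; the class and names are hypothetical): the first read of any master-image block is fetched from shared storage, and every repeat read of that block is served from the hypervisor's RAM cache.

```python
class ReadCache:
    """Toy model of a RAM-based read cache over a shared master image.

    Illustrative simplification of the mechanism described in the
    article; not actual XenServer or MCS code.
    """
    def __init__(self, backing):
        self.backing = backing      # master image blocks on shared storage
        self.ram = {}               # block number -> cached data
        self.storage_reads = 0      # read IOPS that reach shared storage

    def read(self, block):
        if block not in self.ram:   # cold: fetch once from shared storage
            self.ram[block] = self.backing[block]
            self.storage_reads += 1
        return self.ram[block]      # warm: served from hypervisor RAM


# A "boot storm": 50 VMs all reading the same 100 OS blocks.
master = {b: f"block-{b}".encode() for b in range(100)}
cache = ReadCache(master)
for vm in range(50):
    for b in range(100):
        cache.read(b)

print(cache.storage_reads)  # 100: shared storage is hit once per block
```

Even in this crude model, 5,000 guest reads turn into 100 storage reads, which is why a shared read cache helps most with boot storms, where many VMs read identical OS blocks.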
Write IOPS Optimization
I believe the Write IOPS optimization is the biggest enhancement for Machine Creation Services because of the impact the similar write IOPS optimization technology had on Provisioning Services with respect to storage cost and user experience.
With Machine Creation Services, each virtual machine utilizes a portion of its non-paged pool memory for the Machine Creation Services RAM cache.
As the virtual machine begins writing data to disk, those operations are stored within the RAM cache. Eventually, the RAM cache fills up, and the oldest cached data is written to the write-back cache disk in 2 MB blocks.
This process is similar to how Provisioning Services handles the RAM-based write cache with disk overflow, which significantly reduced write IOPS.
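The overflow process above can be sketched as a toy model (my own illustration under stated assumptions, not Citrix code): small guest writes accumulate in a fixed-size RAM cache, and only when the cache is full does the oldest data spill to the cache disk in 2 MB chunks, collapsing many small random writes into a few large sequential ones.

```python
from collections import OrderedDict

BLOCK = 2 * 1024 * 1024  # overflow granularity: 2 MB, per the article


class WriteBackCache:
    """Toy model of a RAM write cache with overflow to a cache disk.

    Illustrative only; the structure and names are my own, not the
    actual MCS implementation.
    """
    def __init__(self, ram_bytes):
        self.ram_bytes = ram_bytes
        self.used = 0
        self.ram = OrderedDict()   # offset -> data, oldest entry first
        self.disk = {}             # stand-in for the write-back cache disk
        self.disk_writes = 0       # 2 MB writes that actually hit storage

    def write(self, offset, data):
        if offset in self.ram:                    # rewrite coalesces in RAM
            self.used -= len(self.ram.pop(offset))
        self.ram[offset] = data
        self.used += len(data)
        while self.used > self.ram_bytes:         # cache full: spill oldest
            self._overflow_block()

    def _overflow_block(self):
        flushed = 0
        while self.ram and flushed < BLOCK:       # evict oldest-first
            off, data = self.ram.popitem(last=False)
            self.disk[off] = data
            self.used -= len(data)
            flushed += len(data)
        self.disk_writes += 1                     # one 2 MB write to disk


# A 64 MB RAM cache absorbing 30,000 random 4 KB guest writes:
cache = WriteBackCache(ram_bytes=64 * 1024 * 1024)
for i in range(30000):
    cache.write(i * 4096, b"x" * 4096)

print(cache.disk_writes)  # a few dozen 2 MB writes, not 30,000 small ones
```

In this model, roughly 117 MB of 4 KB guest writes reach storage as a few dozen 2 MB overflow writes, which mirrors the write IOPS reduction the article attributes to the RAM cache with disk overflow.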
XenApp and XenDesktop 7.9 enhances Machine Creation Services with:
- Optimized temporary storage allocation
- RAM-based Read IOPS optimization
- RAM-based Write IOPS optimization
So in the comparison of PVS and MCS, where does that leave us now?
Again, things are fairly even.
Daniel (Follow on Twitter @djfeller)
XenApp Advanced Concept Guide
XenApp Best Practices
XenApp Videos
Hi Daniel,
Is the read based caching in RAM only supported on XenServer currently? If so, is this support going to be extended to ESXi and Hyper-V platforms?
Thanks
Shaun
Correct, it is only for XenServer as it is at the hypervisor level while the write cache, being inside the VM, is for any hypervisor. For the RAM-based read cache, it would be up to the hypervisor vendor to include similar functionality that XenApp/XenDesktop could leverage.
Hi,
Correct me if I’m wrong, but there is already a native read cache feature when using Windows Server 2012 R2 Hyper-V in a failover cluster with CSV:
https://blogs.msdn.microsoft.com/clustering/2013/07/19/how-to-enable-csv-cache/
quote : “CSV Cache will deliver the most value in scenarios where VMs are heavy read requests, and are less write intensive. Scenarios such as Pooled VDI VMs or also for reducing VM boot storms.”
You are correct. Hyper-V does include an alternative in CSV Cache. I would expect it to perform similarly to XenServer’s RAM-based read cache, but I have not seen the test data yet.
Dan, I set up some test pooled machines for MCS I/O. It seems that it is very CPU intensive: running Windows 7 32-bit with 4 vCPUs, I achieved 60,000 IOPS with Iometer, while 2 vCPUs achieved only 1,600 IOPS or so (PS: this is an SSD array). Does this seem correct to you? Also, using the same perfmon counters as in the video, I could not get them down to 0 like you all did. Can you advise on the specs of the VMs in the video?
That doesn’t sound right. The VM specs were pretty minor (2 vCPU and 4 GB RAM). How much RAM are you giving to the RAM Cache?
256 MB and 10 GB, just the defaults when I create the new catalog. I did the same test with a fresh Windows 10 LTSB ISO with just the VDA installed and got the same results. Support ticket?