PVS vs MCS – Part 2: Scalability


This is part of a series comparing Provisioning Services and Machine Creation Services

In Part 1 of the PVS vs MCS debate, we saw

  • Provisioning Services bridges the gap between the physical and virtual world
  • Machine Creation Services bridges the gap between the on-premises and cloud world

Let’s continue digging into the PVS vs MCS debate and focus on scalability. Scalability plays a big role in choosing a solution: if an option only scales to 500 users, it would be a bad idea to select it for a deployment of 5,000 users.

It seems like more people question the ability of Machine Creation Services to scale than they do Provisioning Services. When Machine Creation Services was initially released, many believed it was only suited for small deployments of around 500 users.

So, let’s look at how these two technologies work, which will give us some insight into their scalability potential.

Provisioning Services

Provisioning Services utilizes network streaming. A master image is contained within a single file. Each Provisioning Services server accesses the read-only file and streams portions of it to the target devices (FYI: booting Windows 10 to the logon screen only requires 250MB of streamed data!).

[Diagram: Provisioning Services architecture]

What will bottleneck? Your Provisioning Services server’s NIC.

If you need more scalability, you can increase the NIC speed, add more NICs, add more servers, or do some combination of the three.
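
To put rough numbers on that bottleneck, here is a minimal back-of-the-envelope sketch in Python. Only the 250MB boot figure comes from above; the NIC speed, usable-throughput factor, and boot-storm window are assumptions you would replace with your own measurements.

```python
# Back-of-the-envelope estimate of how many target devices a single PVS
# server NIC can boot during a boot storm. Only the 250MB boot figure comes
# from the post; every other input is an assumption to replace with your own.

NIC_GBPS = 10           # assumed NIC speed (gigabits per second)
USABLE_FRACTION = 0.6   # assume ~60% of line rate is usable for streaming
BOOT_MB = 250           # data streamed to get Windows 10 to the logon screen
BOOT_WINDOW_MIN = 15    # assumed length of the boot storm (minutes)

usable_mb_per_s = NIC_GBPS * 1000 / 8 * USABLE_FRACTION   # Gbps -> MB/s
targets = usable_mb_per_s * BOOT_WINDOW_MIN * 60 / BOOT_MB

print(f"Usable streaming throughput: {usable_mb_per_s:.0f} MB/s")
print(f"Targets bootable in {BOOT_WINDOW_MIN} minutes: {targets:.0f}")
```

Double the NIC speed, add a second NIC, or add a second server and that ceiling roughly doubles, which is exactly the scaling lever described above.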

Machine Creation Services

Machine Creation Services, on the other hand, is not based on a stand-alone server install like Provisioning Services.  The Machine Creation Services functionality, contained within the XenApp and XenDesktop controller, works by interacting with the hypervisor’s storage (local and shared). As virtual machines are created and updated, the Machine Creation Services commands are sent to the virtualization hosts.

[Diagram: Machine Creation Services architecture]

So, what will bottleneck first? Your storage.

Each storage cluster can only support so many virtual machines before it cannot handle the incoming requests.  If you need more scalability, you need to create additional storage clusters.
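
The same kind of napkin math works for the storage side, assuming you know (or have measured) the IOPS ceiling of a storage cluster and the steady-state IOPS per VM. Both figures below are placeholders, not recommendations.

```python
import math

# Rough estimate of how many MCS-provisioned VMs one storage cluster can carry
# and how many clusters a deployment needs. Both inputs are assumed values,
# not recommendations; measure your own steady-state IOPS.

CLUSTER_IOPS_CEILING = 50_000    # assumed sustainable IOPS for one cluster
IOPS_PER_VM = 12                 # assumed steady-state IOPS per desktop VM
TOTAL_VMS = 5_000                # target deployment size

vms_per_cluster = CLUSTER_IOPS_CEILING // IOPS_PER_VM
clusters_needed = math.ceil(TOTAL_VMS / vms_per_cluster)

print(f"VMs per storage cluster: {vms_per_cluster}")
print(f"Storage clusters needed for {TOTAL_VMS} VMs: {clusters_needed}")
```

Once a cluster hits its ceiling, the only lever is another cluster, which is why MCS scaling conversations are really storage conversations.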

Comparing the Two

Both solutions can scale, as long as you add more servers (Provisioning Services) or more storage clusters (Machine Creation Services). But one thing you should keep in mind is that the user experience, or how well the target device performs, is based on different factors:

  • Provisioning Services links user experience to the stability and performance of your network
  • Machine Creation Services links user experience to the stability and performance of your storage

But in terms of scalability, where does that leave us in the PVS vs MCS debate?

                       Provisioning Services            Machine Creation Services
  Bottleneck           Server NIC / network             Hypervisor storage
  Scale by adding      NIC speed, NICs, servers         Storage clusters
  User experience      Tied to network performance      Tied to storage performance

Daniel (Follow on Twitter @djfeller)
XenApp Advanced Concept Guide
XenApp Best Practices
XenApp Videos

PVS vs MCS – Part 1: Resource Delivery Options


This is part of a series comparing Provisioning Services and Machine Creation Services

Five years ago, Citrix released Machine Creation Services.  As a way to help admins decide between Provisioning Services and Machine Creation Services, I created a decision tree, breaking the decision across multiple requirements.

A lot has changed.

Provisioning Services changed.

Machine Creation Services changed.

You know what didn’t change?  Homer, Marge, Bart, Lisa and Maggie Simpson.

You know what else hasn’t changed? The decision tree.  It is old. It is outdated. It is no longer useful.

So my recommendation to you: STOP USING IT!

I want to look at the PVS vs MCS debate again based on the current technology.

I first want to look at the type of resources each platform can deliver.

Both imaging platforms are able to deliver virtual RDS and VDI workloads to XenServer, Hyper-V and vSphere.  The difference between the two lies in the ability to support physical and cloud-hosted workloads.

Provisioning Services, because it relies on network streaming of the master image, is able to deliver the image to virtual and physical endpoints. Imagine if you were in a school computer lab where every 45 minutes the class changed and the endpoint had to run an entirely new suite of software.  With Provisioning Services, we can quickly re-provision physical endpoints with the speed of a reboot.

Machine Creation Services, on the other hand, requires virtualization.  It communicates with the underlying hypervisor and deploys new virtual machines based off of a master image.  Not only does this approach allow one to run on XenServer, Hyper-V and vSphere, but it also allows Machine Creation Services to deploy virtual machines to the Microsoft Azure and Amazon AWS clouds.

If I put this into a simple to understand table, we would get the following:

                                           PVS      MCS
  Virtual (XenServer, Hyper-V, vSphere)    Yes      Yes
  Physical endpoints                       Yes      No
  Cloud (Azure, AWS)                       No       Yes

Provisioning Services bridges the gap between the physical and virtual world.

Machine Creation Services bridges the gap between the on-premises and cloud world.

But of course, the similarities/differences are far greater than what type of resources each method delivers.  And we will get into more in future blogs.

Daniel (Follow on Twitter @djfeller)
XenApp Advanced Concept Guide
XenApp Best Practices
XenApp Videos

Sizing XenApp Windows 2012R2 Virtual Machines


I guess I’m not done yet.

Last week, I posted the latest recommendations on sizing Windows 10 and Windows 7 virtual machines for a XenDesktop environment.  I received a few replies from people asking for any updates regarding Windows 2012R2.

Unfortunately, when we discuss Windows 2012R2 and XenApp, the recommendations are not as straightforward as Windows 10 and Windows 7.

  1. Because Windows 2012R2 does session virtualization (where many users share the same VM but each gets a separate session), sizing CPU and RAM is more difficult.
  2. Because we can publish multiple resources from the same VM, we can have a mix of light, medium and heavy users on the same VM at the same time.
  3. Because each VM will host multiple users, our VMs will be sized larger when compared to Windows 10 and Windows 7. To size correctly, we need to align our recommendations with the nuances of the hardware.

Let’s take a look at the latest recommendations before we go into more detail.

[Table: Windows 2012R2 XenApp VM sizing recommendations]

For vCPU, you will notice the recommendation is based on NUMA.  What is NUMA?  I recommend you read these two blogs by Nick Rintalan.

  1. An intro to NUMA
  2. A Discussion about Cluster on Die

To summarize, you get the best physical server density when the vCPU count of your XenApp VMs matches either the number of cores within a NUMA node or 1/2 of a NUMA node.  If you go with 1/2 of a NUMA node, you will simply have twice as many VMs.
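
If it helps, here is the arithmetic behind that statement as a small sketch. The socket and core counts are example hardware, not a recommendation.

```python
# The NUMA sizing math: a hypothetical 2-socket host with 12 cores per socket
# and one NUMA node per socket (example hardware only, not a recommendation).

SOCKETS = 2
CORES_PER_NUMA_NODE = 12
NUMA_NODES = SOCKETS   # assuming one NUMA node per socket

for divisor, label in [(1, "full NUMA node"), (2, "half NUMA node")]:
    vcpus_per_vm = CORES_PER_NUMA_NODE // divisor
    vms_per_host = NUMA_NODES * divisor
    print(f"{label}: {vcpus_per_vm} vCPUs per VM -> {vms_per_host} VMs per host")
```

Either way the vCPUs line up with NUMA boundaries; the half-node option just trades larger VMs for twice as many of them.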

Cluster on Die is a little more complex, as newer chips don’t have equal-sized NUMA nodes across all cores.  Cluster on Die is a BIOS option that balances the cores equally by creating clusters of cores.

RAM

Sizing RAM is also a little different than it is for Windows 10 and Windows 7. With session virtualization, like XenApp, all users share the same OS instance. Users also share the same application instances. The OS and app instances only consume RAM once. That is a huge reduction in overall RAM usage, which is why the RAM recommendations are significantly lower than for the desktop OS.
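
A quick sketch of why the per-user numbers drop so much when the OS and applications are shared. Every figure below is an assumed example; your applications dictate the real values.

```python
# Why shared OS and app instances shrink per-user RAM on a XenApp VM.
# Every value here is an assumed example, not a sizing recommendation.

OS_AND_APPS_GB = 4        # assumed one-time footprint of the OS plus shared apps
RAM_PER_SESSION_GB = 0.5  # assumed working set of each user session
USERS_PER_VM = 30         # assumed sessions hosted on one XenApp VM

vm_ram_gb = OS_AND_APPS_GB + RAM_PER_SESSION_GB * USERS_PER_VM
per_user_gb = vm_ram_gb / USERS_PER_VM

print(f"VM RAM for {USERS_PER_VM} users: {vm_ram_gb:.0f} GB")
print(f"Effective RAM per user: {per_user_gb:.2f} GB")
```

The fixed OS and application cost is paid once and amortized across every session, which is what pulls the effective per-user number well below a desktop OS allocation.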

Of course, the amount of RAM you allocate is going to be based on the specifics of your applications.

PVS RAM Cache

Just like with Windows 10 and Windows 7 recommendations, the PVS RAM cache is extremely valuable in a Windows 2012R2 XenApp environment.  With PVS RAM Cache, we see huge reductions in IOPS for Windows 2012R2.

Daniel (Follow on Twitter @djfeller)
XenApp Advanced Concept Guide
XenApp Best Practices
XenApp Videos

Sizing Windows 10 and Windows 7 Virtual Machines


After reviewing all of the scalability tests we conducted over the past few months, I thought it was time to revisit the recommendations for sizing Windows 10 virtual machines.  I also reached out to Nick Rintalan to see if this is in line with what is currently being recommended for production environments (if you disagree, blame him 🙂 ).

[Table: Windows 10 and Windows 7 VM sizing recommendations]

A few things you will notice:

  1. Windows 7 and Windows 10 recommendations are similar.  Microsoft’s resource allocation for both operating systems is similar.  The Windows 7 and Windows 10 scalability tests resulted in similar numbers.
  2. Density – Experience: For some of the recommendations, there are 2 numbers. The first applies if you are more concerned with server density; the second if you are more concerned with the user experience.  What I find curious: if you have a heavy workload, are you really as concerned with server density?
  3. PVS RAM Cache: Using the RAM cache will drastically reduce storage IOPS.  This will be critical to providing a good user experience, and the cache will be taken from the total allocated RAM.  The RAM column takes the RAM Cache numbers into account (see the sketch after this list).
  4. Hypervisor: There is no hypervisor identified.  Testing showed minor differences between XenServer, Hyper-V and vSphere.
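
To make item 3 concrete, here is a tiny sketch of that deduction. The allocation and cache sizes are assumed examples, not the values from the table above.

```python
# Item 3 above: the PVS RAM cache is carved out of the VM's allocated RAM.
# Both figures are assumed examples, not the values from the sizing table.

ALLOCATED_RAM_GB = 4    # assumed RAM allocated to a Windows 10 VM
PVS_RAM_CACHE_GB = 1    # assumed PVS RAM cache size

ram_for_os_and_apps = ALLOCATED_RAM_GB - PVS_RAM_CACHE_GB
print(f"RAM left for the OS and apps: {ram_for_os_and_apps} GB "
      f"(of {ALLOCATED_RAM_GB} GB allocated)")
```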

Daniel (Follow on Twitter @djfeller)
XenApp Advanced Concepts Guide
XenApp Best Practices
XenApp Videos