Improving Logon Time with PVS Accelerator


The title is correct.  We can improve user logon time by implementing PVS accelerator in XenServer 7.1.

This actually makes perfect sense.

We already showed that PVS Accelerator drastically improves VM boot times because portions of the master vDisk image are cached locally.  Booting a VM equates to roughly 80% reads and 20% writes, and VMs using the same image read the same blocks of data.  Because of this similarity, the PVS Accelerator cache delivers huge reductions in network utilization, and those reductions translate into faster boot times.
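To see why shared reads matter so much, here is a rough back-of-the-envelope model (my own illustration, not measurements from this post) of how a local read cache cuts PVS network traffic. The 300 MB of boot I/O and the 90% cache hit rate are hypothetical assumptions; the 80/20 read/write split comes from the post.

```python
# Toy model of PVS network traffic during boot.
# Total I/O volume and cache hit rate are illustrative assumptions.

def network_io(total_io_mb, read_ratio, cache_hit_rate):
    """Estimate network traffic (MB) to the PVS server when a
    local cache serves a fraction of the read I/O."""
    reads = total_io_mb * read_ratio
    writes = total_io_mb * (1 - read_ratio)
    # Cache hits are served locally; only read misses and writes
    # cross the network.
    return round(reads * (1 - cache_hit_rate) + writes, 1)

# Boot: ~80% reads, 20% writes (per the post).
no_cache = network_io(300, 0.80, 0.0)    # no local cache
with_cache = network_io(300, 0.80, 0.9)  # 90% of reads served locally

print(no_cache)    # 300.0 MB crosses the network
print(with_cache)  # 84.0 MB -- a ~72% reduction
```

The same arithmetic explains why logon benefits less: with a 50/50 read/write split, only half of the I/O is even eligible for the read cache.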

But what about logon time?

Logon time isn’t nearly as read-IO intensive as boot, but it does yield roughly a 50% read and 50% write ratio.  This leads me to believe that PVS Accelerator should also improve a user’s logon time (at least for the read portion of the IO).

For the write IO occurring during logon time, we can utilize something within Provisioning Services that has been shown to improve write IO performance: RAM Cache with Disk Overflow.
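As a sketch of the idea behind RAM Cache with Disk Overflow (my own toy model, not the actual PVS implementation), writes land in a RAM buffer first, and only once that buffer is full does the remainder spill to a local disk file. The 256 MB budget and write sizes below are arbitrary assumptions:

```python
class RamCacheWithOverflow:
    """Toy model of PVS 'RAM Cache with Disk Overflow':
    writes go to RAM until the RAM budget is exhausted,
    then the excess overflows to local disk.
    Illustrative only -- not the real PVS write-cache code."""

    def __init__(self, ram_limit_mb):
        self.ram_limit_mb = ram_limit_mb
        self.ram_used_mb = 0
        self.disk_used_mb = 0

    def write(self, size_mb):
        room = self.ram_limit_mb - self.ram_used_mb
        in_ram = min(size_mb, room)     # portion absorbed by RAM
        self.ram_used_mb += in_ram
        self.disk_used_mb += size_mb - in_ram  # overflow to disk

cache = RamCacheWithOverflow(ram_limit_mb=256)
cache.write(200)   # fits entirely in RAM
cache.write(100)   # 56 MB fits in RAM, 44 MB overflows to disk
print(cache.ram_used_mb, cache.disk_used_mb)  # 256 44
```

Because RAM absorbs the bulk of logon's write IO at memory speed, the writes never pay a disk or network round-trip until the buffer overflows.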

Let’s look at the results (Note that these results were NOT done on enterprise-grade server hardware):

Our logon times are dropping.

  • Optimize Windows 10 = faster logon times
  • Implement XenServer PVS Accelerator = even faster logon times
  • Enable PVS RAM Cache with Disk Overflow = even faster logon times

Daniel (Follow on Twitter @djfeller)
Citrix XenApp and XenDesktop 7.6 VDI Handbook
XenApp Best Practices
XenApp Videos


6 thoughts on “Improving Logon Time with PVS Accelerator”

    1. PVS Accelerator is READ I/O optimization integrated into XenServer. RAM Cache with Disk Overflow is WRITE I/O optimization integrated into the actual VM. PVS Accelerator is only for XenServer; RAM Cache with Disk Overflow works on any hypervisor.


      1. OK, thanks for the clarification. I had assumed the RAM Cache was handled by the PVS server.
        e.g., on XenServer with PVS Accelerator (xen01):
        PVS-01 sends boot data to xen01 to load a VM, xenvdi01
        xen01 caches the data and boots xenvdi01
        xenvdi02 starts and uses data from the xen01 cache
        Now xenvdi01 makes lots of changes that need to overflow to disk. Does it:
        1) send the overflow data to xen01, then to disk
        2) send the overflow data to xen01, then PVS-01, then to disk
        3) send the overflow data to PVS-01, then to disk

        Which is it, or am I getting the flow wrong?

