Tag Archives: xenserver

Citrix Workspace Visio Stencils


The latest Visio stencils for the Citrix Workspace, including XenApp, XenDesktop, NetScaler, XenMobile, XenServer, ShareFile and Citrix Cloud.

Daniel (Follow on Twitter @djfeller)

Change Log

October 3, 2017

  • Visio:
    • Added logon process diagram
    • Added app launch process diagram
    • Added on-premises port diagram
    • Added Citrix Cloud port diagram
    • Updated conceptual diagram for on-premises deployment
    • Updated conceptual diagram for hybrid-cloud deployment
    • Updated conceptual diagram for Citrix Cloud deployment

September 6, 2017

  • PowerPoint:
    • Added PowerPoint template with icons and Citrix Workspace diagrams

August 21, 2017

  • Visio

June 1, 2017

  • Visio
    • 7.14 Release: Added NetScaler MAS, NetScaler SD-WAN, NetScaler Gateway, and two Citrix Cloud services: XenApp & XenDesktop Service and XenMobile Service

February 28, 2017

  • Visio
    • 7.13 Release: Added Antivirus, PVS Accelerator. Updated XenServer, PVS, VM and Physical Server icons

December 7, 2016

  • Visio
    • 7.12 Release: Added Active Directory, Workspace Environment Management, Federated Authentication Service and Machine Catalog

February 23, 2016

  • Visio
    • 7.9 Release: Added Secure Browser, AppDisk, AppDNA and Provisioning Services

January 4, 2016

  • Visio
    • 7.7 Release: Added Personal and Shared Linux desktop, zone, App-V and Cloud Connector

June 24, 2015

  • Visio
    • 7.6 Release: Added Linux virtual desktop, Receiver, GPU/vGPU and Driver icons

August 29, 2013

  • Visio
    • 7.5 Release: Updated entire stencil set with new style

January 23, 2013


XenServer PVS Accelerator Sizing – Part 2


As you might have read, I recently ran a few XenServer PVS Accelerator tests to determine a starting point for the cache size. This initial investigation looked at Windows 10 and Windows Server 2012 R2 for boot and logon operations.

Looking back, I determined that I wanted to investigate three additional items:

  1. Impact of a larger cache size – Increase from 2GB to 4GB RAM cache
  2. Impact of applications
  3. Impact of Windows 2016

Before I get into the results, let me explain the graphs.

  • The blue, green and orange lines denote boot, logon and steady-state operations. The first time those colors appear depicts the first VM; the second time they appear depicts the second VM. These lines are linked to the right axis, which shows the percent of cache used.
  • The solid red area depicts the amount of network traffic sent from the Provisioning Services server to the host. The area should initially be large and then diminish as the cache is used. It is linked to the left axis, which shows bytes per second.

With that understanding out of the way, let’s look at the results.

Continue reading XenServer PVS Accelerator Sizing – Part 2

XenServer PVS Accelerator Cache Sizing


How large should we make our PVS Accelerator cache? Too large and we waste resources. Too small and we lose the performance benefit.

Let’s take a step back and recall our best practice for sizing the RAM on a Provisioning Services server. We typically say to allocate 2GB of RAM for each vDisk image the server provides. This simple recommendation gives the PVS server enough RAM to cache portions of the image in the Windows system cache, which reduces local read IO. So for a PVS server delivering:

  • 1 image:  we would allocate 2GB of RAM (plus 4GB more for the PVS server itself)
  • 2 images:  we would allocate 4GB of RAM (plus 4GB more for the PVS server itself)
  • 4 images:  we would allocate 8GB of RAM (plus 4GB more for the PVS server itself)

Easy.
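If you want to script that rule of thumb, a minimal Python sketch looks like this (the function name is mine; the 2GB-per-image and 4GB base values come straight from the list above):

    def pvs_server_ram_gb(vdisk_images, per_image_gb=2, base_gb=4):
        """Suggested PVS server RAM: 2GB per vDisk image plus ~4GB for the server itself."""
        return per_image_gb * vdisk_images + base_gb

    for images in (1, 2, 4):
        print(f"{images} image(s): {pvs_server_ram_gb(images)}GB total RAM")
    # 1 image(s): 6GB, 2 image(s): 8GB, 4 image(s): 12GB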

Let’s now focus on the XenServer portion of PVS Accelerator. If we use RAM as our PVS Accelerator cache, how many GB should we allocate?

Continue reading XenServer PVS Accelerator Cache Sizing

Monitoring PVS Accelerator


I love data. I like seeing numbers and graphs. I like to see if something is having an impact.

I like it when new capabilities provide us with the means to monitor them, because this data gives me reassurance that the feature has an impact instead of me simply believing it does.

Let’s look at XenServer 7.1 and Provisioning Services Accelerator. I was able to show that

  1. PVS Accelerator reduced VM boot times
  2. PVS Accelerator reduced user logon times

But I didn’t have any details beyond what I was able to gather from a stopwatch.

That was until I started poking around XenCenter. I was thrilled to see a set of metrics specific to PVS Accelerator.

Continue reading Monitoring PVS Accelerator

Improving Logon Time with PVS Accelerator


The title is correct. We can improve user logon time by implementing PVS Accelerator in XenServer 7.1.

This actually makes perfect sense.

We already showed that PVS Accelerator drastically improves VM boot times because portions of the master vDisk image are cached locally. Booting a VM equates to roughly 80% reads and 20% writes, and VMs using the same image read the same blocks of data. Due to this similarity, we see huge reductions in network utilization when using the PVS Accelerator cache, and these reductions translate into faster boot times.
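To make the scale of that reduction concrete, here is a back-of-the-envelope Python model. Every number in it (per-boot read volume, VM count, hit rate) is an assumption for illustration, not a measurement.

    boot_read_mb = 800   # assumed MB read from the vDisk during a single boot
    vms = 50             # assumed VMs booting the same image on one host
    hit_rate = 0.95      # assumed hit rate once the first VM warms the cache

    without_cache_mb = boot_read_mb * vms
    with_cache_mb = boot_read_mb + boot_read_mb * (1 - hit_rate) * (vms - 1)

    print(f"Without PVS Accelerator: ~{without_cache_mb / 1024:.1f}GB from the PVS server")
    print(f"With PVS Accelerator:    ~{with_cache_mb / 1024:.1f}GB from the PVS server")
    # Roughly 39GB vs 2.7GB of network reads for the same 50 boots.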

But what about logon time?

Continue reading Improving Logon Time with PVS Accelerator

Provisioning Services Accelerator


An interesting new feature was included with the XenServer 7.1 release: Provisioning Services Accelerator.

In a single sentence,

PVS Accelerator overcomes PVS server and network latency by utilizing local XenServer RAM/Disk resources to cache blocks of a PVS vDisk to fulfill local target VM requests.
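Conceptually, the cache behaves like a read-through block cache: the target VM’s reads are served locally on a hit, and only misses travel to the PVS server. The Python sketch below is purely illustrative (fetch_from_pvs_server is a hypothetical stand-in for the network read, not a Citrix API).

    from collections import OrderedDict

    class VDiskBlockCache:
        """Read-through cache of vDisk blocks with least-recently-used eviction."""

        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()   # block number -> data, LRU order

        def read(self, block_num, fetch_from_pvs_server):
            if block_num in self.blocks:              # hit: no network traffic
                self.blocks.move_to_end(block_num)
                return self.blocks[block_num]
            data = fetch_from_pvs_server(block_num)   # miss: read from PVS server
            self.blocks[block_num] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)       # evict least recently used
            return data

    cache = VDiskBlockCache(capacity_blocks=4)
    cache.read(7, lambda b: f"block-{b}")   # miss: fetched over the wire
    cache.read(7, lambda b: f"block-{b}")   # hit: served from local cache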

Take a look at the demo video to see it in action:

Continue reading Provisioning Services Accelerator

Machine Creation Services RAM Cache and XenServer IntelliCache


As I was discussing the storage optimization capabilities in the Machine Creation Services vs. Provisioning Services debate, I mentioned the use of a XenServer RAM-based read cache. This can be mistaken for XenServer IntelliCache (a mistake I’m sad to say I’ve made in the past).

XenServer IntelliCache (released with XenServer 5.6 SP1) and the XenServer RAM Cache (released with XenServer 6.5) are two different capabilities of XenServer, both of which try to reduce the IO impact on shared storage.

Let’s walk through different deployment scenarios with Machine Creation Services in XenApp and XenDesktop 7.9.

Scenario 1: Shared Storage on any Hypervisor

When defining a host connection, the default storage management option is to use shared storage.

[Image: default storage management option]

This configuration results in the following architecture:

[Diagram: Default Storage]

The virtual machines read (denoted by the black lines) from the master OS disk on shared storage. Writes (denoted with the red lines) from the virtual machine are captured in the differencing disk, located on the same shared storage as the master OS disk.

Note that there will also be reads coming from the virtual machine’s differencing disk back to the VM.

Scenario 2: Shared Storage with Optimized Temp Data on any Hypervisor

With XenApp and XenDesktop 7.9, admins can select the “Optimize temporary data on available local storage” option when creating their host connection configuration.

[Image: storage configuration options]

Selecting this option results in the following changes to the architecture:

[Diagram: Optimized Storage]

For non-persistent desktops, instead of the temporary writes going into the shared storage differencing disk, the writes are now captured within the write cache disk on the local hypervisor storage.

[Diagram: persistent desktop with optimized storage]

But for persistent desktops, the optimize temporary data setting is not used, as all data is permanent. The writes are captured on shared storage within the differencing disk.

The value is that we don’t waste shared storage performance with data we don’t care about. We instead shift the storage IO to local, inexpensive disks.

Scenario 3: Shared Storage with Optimized Temp Data and RAM Caching on any Hypervisor

With XenApp and XenDesktop 7.9, a portion of the virtual machine’s RAM can be used to cache disk writes in order to reduce the impact on local/shared storage. During the creation of a new machine catalog, an admin defines the size of the RAM and disk cache.

[Image: cache configuration]

The RAM cache operation adjusts the architecture as follows:

[Diagram: MCS RAM cache]

For non-persistent desktops with a RAM-based write cache, the writes first go into the non-paged pool portion of RAM within the VM. As the RAM cache is consumed, the oldest data is written to the temporary disk-based write cache.

However, this option is not applicable for persistent desktops due to the risk of data loss. If disk writes are cached in volatile RAM and the host fails, those writes will be lost, potentially resulting in lost data and corruption.

For non-persistent desktops, when used in combination with optimizing temporary data, we not only shift our write performance to low-cost local disks, but we also reduce the amount of write IO activity going to those disks. This should further reduce costs by not requiring the use of local SSDs. A simple sketch of this overflow behavior follows.
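For a sense of the mechanics, here is a minimal Python sketch of the “cache writes in RAM, overflow the oldest data to disk” pattern described above. It is illustrative only; the real write cache operates on disk blocks inside the VM.

    from collections import OrderedDict

    class WriteBackCache:
        """Writes land in RAM first; the oldest entries spill to a disk cache."""

        def __init__(self, ram_limit_blocks):
            self.ram_limit = ram_limit_blocks
            self.ram = OrderedDict()   # block number -> data, oldest entry first
            self.disk = {}             # stand-in for the disk-based write cache

        def write(self, block_num, data):
            self.ram[block_num] = data
            self.ram.move_to_end(block_num)        # this write is now the newest
            while len(self.ram) > self.ram_limit:
                old_block, old_data = self.ram.popitem(last=False)  # oldest write
                self.disk[old_block] = old_data    # spill to local disk cache

        def read(self, block_num):
            if block_num in self.ram:
                return self.ram[block_num]
            return self.disk.get(block_num)

    wbc = WriteBackCache(ram_limit_blocks=2)
    for block in range(4):
        wbc.write(block, f"data-{block}")
    # Blocks 0 and 1 have spilled to the disk cache; 2 and 3 remain in RAM.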

Scenario 4: Shared Storage with Optimized Temp Data and RAM Caching on Citrix XenServer

If the environment is deployed on Citrix XenServer, the architecture automatically includes a RAM-based read cache on each host.

[Diagram: non-persistent desktop on XenServer]

[Diagram: persistent desktop on XenServer]

For both non-persistent and persistent desktops, portions of the master OS image are cached within the XenServer’s Dom0 RAM, so subsequent requests are retrieved from local RAM instead of generating read IOPS on shared storage.

This is valuable because we significantly reduce the master image reads from our shared storage. If you have 50 XenServer hosts, each running 100 Windows 10 virtual machines, every virtual machine will read the same data from the same master image, adding a significant amount of read IO activity on shared storage. By caching those reads in local RAM on each XenServer host, we can drastically reduce the impact on shared storage.
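Running the numbers for that example (assuming, purely for illustration, that each VM reads about 5GB from the master image):

    hosts = 50
    vms_per_host = 100
    image_read_gb = 5   # assumed GB each VM reads from the master image

    without_cache = hosts * vms_per_host * image_read_gb  # every read hits shared storage
    with_dom0_cache = hosts * image_read_gb               # ~one copy read per host

    print(f"Master-image reads without Dom0 cache: ~{without_cache:,}GB")
    print(f"Master-image reads with Dom0 cache:    ~{with_dom0_cache:,}GB")
    # ~25,000GB vs ~250GB: a 100x reduction in reads from shared storage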

We also have a RAM-based read cache in Provisioning Services. This capability increased boot performance by 4X. I would expect to see similar results with this XenServer feature.

Scenario 5: Shared Storage with Optimized Temp Data and RAM Caching on Citrix XenServer with XenServer IntelliCache

When the admin defines the host connection properties, Studio includes the IntelliCache option if the host connection is XenServer.

[Image: XenServer IntelliCache option]

For non-persistent and persistent desktops, a local, disk-based cache of the master OS image is captured on each XenServer host, reducing read IOPS from shared storage. As items are accessed, they are placed within XenServer’s RAM-based cache.

The write operations are different based on whether the desktop is non-persistent or persistent.

[Diagram: non-persistent desktop with IntelliCache]

For non-persistent desktops, disk writes are first captured in the VM’s RAM cache. When the RAM cache is consumed, the oldest content is written to the local write cache.

[Diagram: persistent desktop with IntelliCache]

For persistent desktops, disk writes are simultaneously captured in the local IntelliCache disk (a .VHDCache file in /var/run/sr-mount) and in the shared storage differencing disk. When the VM reads data from disk, it first checks the local IntelliCache disk and then the shared storage differencing disk.

The value for this configuration is two-fold:

  1. Host-based IntelliCache Disk: Using IntelliCache with the Read Cache (RAM) provides us with two levels of caching on XenServer. This could help reduce reads from shared storage in situations where our Read Cache (RAM) is not large enough. Imagine multiple images being delivered to each XenServer host: the read cache (RAM) will not be large enough, resulting in increased read IO activity on shared storage. By combining the two, we should be able to keep shared storage read IO activity to a minimum.
  2. VM-Based IntelliCache Disk: For persistent desktops, even though each write is performed twice (local IntelliCache disk and differencing disk on shared storage), the reads come from the local IntelliCache disk, helping to reduce the load on shared storage. How much will this help the user experience and cost? That is still to be determined. A sketch of this dual-write, read-local-first behavior follows the list.
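Here is a small, illustrative Python sketch of that persistent-desktop behavior: every write lands in both locations, and reads prefer the local copy. The dictionaries are stand-ins for the local .VHDCache file and the shared storage differencing disk, not real APIs.

    class IntelliCachePersistentDisk:
        """Models the dual-write, read-local-first pattern of IntelliCache."""

        def __init__(self):
            self.local_intellicache = {}   # stand-in for the local .VHDCache file
            self.shared_diff_disk = {}     # stand-in for the shared differencing disk

        def write(self, block_num, data):
            # Each write is performed twice: locally and on shared storage.
            self.local_intellicache[block_num] = data
            self.shared_diff_disk[block_num] = data

        def read(self, block_num):
            # Reads check the local IntelliCache disk first, then shared storage.
            if block_num in self.local_intellicache:
                return self.local_intellicache[block_num]
            return self.shared_diff_disk.get(block_num)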

Daniel (Follow on Twitter @djfeller)
XenApp Advanced Concept Guide
XenApp Best Practices
XenApp Videos