Design XenDesktop for the SMB


I often receive questions about XenDesktop design, such as:

  • How should I design my XenDesktop environment for 50 or 100 users?
  • What hardware should I use?
  • What if I want to start small and slowly grow the environment?

The small to medium business is an interesting use case. Do you need all of the bells and whistles that a large enterprise needs? Probably not. Do you need to include extensive high-availability options? Probably not. Do you need to dedicate infrastructure components? Probably not.

In fact, designing a XenDesktop solution for the SMB can be done quite easily, which is what we will be discussing during the upcoming Ask the Architect TechTalk – Virtual Desktops for the SMB. This Ask the Architect TechTalk is going to be a little different from past ones. This time, we are going to spend roughly 30 minutes presenting the recommended SMB approach for XenDesktop. After that, we will take your questions and answer them as best we can. Who will be answering your questions on the TechTalk?

  • Tarkan Kocoglu – Director within the Worldwide Consulting Solutions team
  • Thomas Berger – Architect within the Worldwide Consulting Solutions team
  • Daniel Feller (Me) – Lead Architect within the Worldwide Consulting Solutions team

I’m extremely excited about this Ask the Architect TechTalk because I’ve got two very smart people helping me answer your questions. I am looking forward to hearing your questions, so make sure you register and attend.

Daniel – Lead Architect
XenDesktop Design Handbook

The Big Bang Effect with Desktop Virtualization


Do you believe these statements?

  • In order for me to see any value in doing desktop virtualization, I have to move all of my users to a hosted VDI model.
  • I don’t need a VDI type desktop as I can simply use hosted shared with XenApp.

I’ve had many discussions with people who hold these views, along with plenty of other beliefs about desktop virtualization. This leads me to believe that many see desktop virtualization as a big bang effect (great TV show, by the way), that is, an all-or-nothing approach. This couldn’t be further from reality.

Many of the organizations doing desktop virtualization cannot live with a single model; there are simply too many complexities and requirements from the users and the business. Many who struggle with virtualizing desktops try to put all users into a single bucket or virtual desktop type, or don’t fully understand why they are even doing virtual desktops. This is the wrong approach. But I’m not going to simply say you are doing it wrong and leave it at that. The good news is that my fellow consultants and I are here to help guide you to the right approach.

Citrix Consulting has done so many desktop transformation projects that we are going to show everyone who attends Synergy 2011 San Francisco what we do. Here’s the deal: we are going to focus on the complete desktop transformation model. Each session in the 5-part series will be presented by experienced Citrix Consultants, focusing on the following:

    1. SYN329: Understanding the desktop transformation process and getting started
    2. SYN328: Designing an architecture for desktop transformation that can adapt and scale
    3. SYN305: Storage infrastructure design guidelines for successful desktop transformation
    4. SYN348: Design and deliver a delightful virtual desktop user experience the first time and every time
    5. SYN349: Lessons learned from the desktop transformation frontier – the Good, the Bad and the Ugly

Interested yet? Then let’s talk about this analysis topic. Truthfully, analysis isn’t the sexiest thing in the world to talk about, but guess what: if you mess this up, you will struggle. Organizations that fail to properly assess their environment are the ones most likely to believe the opening two statements. By assessment, I’m not simply talking about gathering traditional desktop information (CPU/RAM usage and application statistics); I’m talking about the overall goals (why are we doing desktop virtualization?), the user segmentation (what do users need and why?), and how to quickly see value in this type of solution (what do I do first, second, third, etc.?).

The analysis information allows us to create a design that takes all of these items into account while letting us move as fast or as slow as we wish. It will help us better align storage with user needs (why use the SAN if I don’t need HA?). It will identify what types of user optimizations are required: local, remote, and secure access are unique user requirements that must be met, and we know how to meet them. And guess what, we’ve got plenty of lessons learned and best practices to discuss.
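
To make the segmentation idea concrete, here is a hypothetical sketch of what that user-to-model mapping can look like in code. The attributes and rules are invented for illustration and are not the Citrix Consulting methodology; the point is simply that different users land on different desktop models rather than one bucket.

```python
# Hypothetical user segmentation sketch -- attributes and rules are
# invented for illustration, not the official assessment methodology.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    installs_apps: bool        # needs admin rights / persistent changes
    standard_app_set: bool     # runs only the common corporate apps
    works_offline: bool        # needs the desktop without connectivity

def segment(user: User) -> str:
    """Map a user to a desktop delivery model."""
    if user.works_offline:
        return "local/offline desktop (revisit later)"
    if user.installs_apps:
        return "dedicated VDI"
    if user.standard_app_set:
        return "hosted shared (XenApp)"
    return "pooled hosted VDI"

for u in [
    User("task worker", installs_apps=False, standard_app_set=True, works_offline=False),
    User("developer", installs_apps=True, standard_app_set=False, works_offline=False),
    User("road warrior", installs_apps=False, standard_app_set=True, works_offline=True),
]:
    print(f"{u.name:12s} -> {segment(u)}")
```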

I hope you are interested and are asking what you need to do…

    1. Attend Synergy
    2. Listen to the Citrix Consulting sessions hosted by: Florian Becker, Daniel Feller, Doug Demskis, Eric Jurasinski, Andy Paul, Michael Schaeffer, Dan Allen, Thomas Berger and Nicholas Rintalan.
    3. Ask us questions

Daniel – Lead Architect
XenDesktop Design Handbook

PVS or MCS – What’s Your Decision Going to Be?


Note: This blog is in reference to Citrix XenDesktop 5.0 ONLY.

Note: The following link contains the latest discussion of PVS versus MCS: https://virtualfeller.com/2016/05/17/pvs-vs-mcs-part-1-resource-delivery-options/

The decision between using Provisioning Services or Machine Creation Services is based on many factors, a few of which have been discussed previously:

    1. Big Picture
    2. Operations
    3. Resource Requirements

Let’s say you’ve gone through these discussions and are still trying to determine what approach you should take. Personally, I like to use decision trees, like the following:

By answering these questions, you will get a better idea of what is most appropriate (a rough code sketch of this logic follows the list):

    1. Hosted VDI Desktops Only: Larger enterprise environments are often more complex in terms of end-user requirements. Those complex requirements cannot be completely met with Hosted VDI desktops alone, which forces the organization to expand into different options. Provisioning Services is recommended for these architectures because it can deliver images to more than just Hosted VDI desktops.
    2. Dedicated VDI Desktops: If there is a user requirement for dedicated desktops, the recommendation shifts toward Machine Creation Services or installed images.
    3. Large Boot/Logon Storms: Boot and logon storms create massive I/O activity on the storage infrastructure, requiring greater levels of IOPS. For larger deployments with a large boot/logon storm, Provisioning Services is recommended due to its IOPS savings.
    4. Blade PCs: Certain users require the performance of a Blade PC while remaining secure within the data center. Because Blade PCs are standalone hardware devices, Machine Creation Services cannot be used.
    5. SAN: Provisioning Services has the flexibility to work with or without a SAN infrastructure. Machine Creation Services, however, becomes more challenging without a shared storage infrastructure like a SAN. If a shared storage solution is not in scope or is too costly for the environment, Provisioning Services is a better option.
    6. Change Control Processes: Maintaining Provisioning Services desktop images requires proper processes, depending on the type of update required (hotfix versus network driver update). Smaller environments most likely will not have these processes in place, and maintaining a Machine Creation Services image is often seen as easier.
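
Here is that rough sketch: the six questions above compressed into a single function. Treat it as a thinking aid, not the official decision tree from the Planning Guide; the ordering of the checks (hard constraints first) is my own.

```python
# Rough sketch of the PVS-vs-MCS decision logic above. The ordering of
# the checks (hard constraints first) is my own, not the official tree.

def image_delivery(hosted_vdi_only: bool,
                   dedicated_desktops: bool,
                   large_boot_logon_storms: bool,
                   blade_pcs: bool,
                   shared_storage: bool,
                   change_control_in_place: bool) -> str:
    if blade_pcs:
        return "PVS"  # MCS cannot target standalone hardware
    if not shared_storage:
        return "PVS"  # MCS is challenging without shared storage
    if large_boot_logon_storms:
        return "PVS"  # PVS RAM caching absorbs read-heavy storms
    if not hosted_vdi_only:
        return "PVS"  # PVS delivers images beyond Hosted VDI desktops
    if dedicated_desktops or not change_control_in_place:
        return "MCS"  # easier image maintenance, dedicated support
    return "either"   # weigh operations and resource requirements

# Example: a small shop, hosted VDI only, some dedicated desktops,
# no formal change control -> "MCS"
print(image_delivery(True, True, False, False, True, False))
```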

What did you come up with? Surprised? Or is it what you expected? If you want the full breakdown on deciding between the two options, please refer to the “Planning Guide: Desktop Image Delivery,” which was recently added to the XenDesktop Design Handbook.

If you are new to the handbook, then I suggest you read Thomas’s blog discussing how to get the handbook for offline use.

Daniel – Lead Architect
XenDesktop Design Handbook

XenDesktop 5 Scalability – Site Capacity


When we last looked at XenDesktop 5 scalability, we focused on the user experience: users should not be required to wait longer than 2.5 seconds for the system to respond to an authentication or launch request. We said that if the controllers got very busy due to logon storms, we could add additional controllers to lower the overall load and get back to that 2.5-second goal. But guess what? The 2.5-second goal might require that we look at other aspects of the XenDesktop 5 architecture beyond the controllers. We already looked at the maximum size of a XenDesktop controller, so the next thing to look at is: how big can a XenDesktop 5 site become?

We can keep adding controllers, but won’t we eventually hit a site limit? Because the controllers “talk” to the SQL database for pretty much every transaction, we want to make sure that the database server is designed appropriately to support the storms. We could easily run into a situation where the controllers are functioning well below thresholds but the user experience is still poor. We need to verify that we allocated enough resources so the SQL database is not bogged down.

The SQL database is the heart of a XenDesktop 5 site. Here’s the good news: databases are built for transactional processes, and that is exactly what XenDesktop 5 uses the SQL database for. We should be able to see some crazy scalability numbers, and with the testing being conducted, we are.

For example, I’ve seen one test that simulated 20,000 logon connections in 13 minutes (roughly 25 new requests per second). A dedicated SQL Server with 8 cores and 16GB of RAM ran at 32% CPU utilization. That is a fairly sizable logon storm. Even if you extrapolate this out, you should be able to support an insanely large single XenDesktop site. I haven’t seen anything in production at this size yet on XenDesktop 5, but things are looking very promising.
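
As a back-of-the-envelope illustration of that extrapolation, here is the arithmetic, assuming CPU load scales roughly linearly with the logon rate (a simplification; real SQL Server behavior should be validated with testing):

```python
# Back-of-the-envelope extrapolation from the test above.
# Assumption: CPU scales roughly linearly with logon rate, which is a
# simplification; validate against real SQL Server behavior.

logons = 20_000
window_seconds = 13 * 60          # 13-minute logon storm
observed_cpu = 0.32               # 32% CPU on the dedicated SQL Server
cpu_target = 0.80                 # leave headroom rather than running at 100%

rate = logons / window_seconds    # ~25.6 logon requests per second
max_rate = rate * (cpu_target / observed_cpu)

print(f"Observed rate: {rate:.1f} logons/sec at {observed_cpu:.0%} CPU")
print(f"Estimated ceiling at {cpu_target:.0%} CPU: {max_rate:.0f} logons/sec")
# ~64 logons/sec, i.e. a storm of roughly 50,000 logons in the same
# 13-minute window -- illustrative only, not a tested limit.
```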

As you would expect, once you are running, you need to monitor the database server (especially if you are sharing it with other databases or services). At a high level, focus on the standard items:

  • CPU
  • Memory
  • Disk

If you need to drill down further due to performance issues, look more closely at the SQL Server metrics:

  • Buffer manager
  • Memory manager
  • Statistics
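
As a minimal sketch of the high-level monitoring, here is a sampling loop built on the third-party psutil package (my choice for illustration; perfmon or any monitoring suite works just as well). The deeper SQL Server counters (buffer manager, memory manager) live under perfmon’s SQL Server objects and are outside what psutil exposes.

```python
# Minimal CPU/memory/disk sampler for the database server, using the
# third-party psutil package (an illustrative choice, not a requirement).
import time
import psutil

def sample(interval_seconds: int = 5) -> None:
    """Print one CPU / memory / disk sample every interval."""
    psutil.cpu_percent()                  # prime the CPU counter
    prev = psutil.disk_io_counters()
    while True:
        time.sleep(interval_seconds)
        cpu = psutil.cpu_percent()        # % since the previous call
        mem = psutil.virtual_memory().percent
        cur = psutil.disk_io_counters()
        iops = ((cur.read_count - prev.read_count) +
                (cur.write_count - prev.write_count)) / interval_seconds
        prev = cur
        print(f"CPU {cpu:5.1f}%  RAM {mem:5.1f}%  Disk {iops:7.1f} IOPS")

if __name__ == "__main__":
    sample()
```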

Daniel – Lead Architect
XenDesktop Design Handbook

PVS or MCS – We’re Talking About IOPS Again


Deciding between PVS and MCS is tough for many organizations. Although MCS is limited in that it can only deliver virtual machines, it does appear to be easier to set up than PVS. In fact, MCS just works, while PVS requires additional servers and configuration of bootstrap processes like TFTP/PXE. So it sounds like we should be using MCS for everything, right? Not so fast. We need to look at the resource requirements, beyond servers, as these might negate the benefit of easier setup and configuration.

In the other two discussions so far, we focused on the Big Picture of your environment and the operational model. This time, we are focusing on resource requirements.

First, we know that using PVS will require at least two additional servers (remember, I’m including fault tolerance). MCS doesn’t require any extra hardware besides the hypervisor. Now let’s look at storage requirements, and with storage I’m talking about IOPS, our favorite topic.

If you look at PVS, all reads come from one location and all writes go to another location. Because of this, we can optimize each system. We know that on the PVS server, we allocate enough RAM so the reads happen in RAM and not on disk, greatly reducing read IOPS.

MCS is different. The reads and writes happen on the same storage. This is a big deal. Look at the graph below.

We know that during desktop startup, we have a huge amount of read IOPS; during logon, reads and writes are evenly split; and during the steady state (working), the ratio moves toward writes. Most people are concerned with the boot and logon storms. Because these are more read intensive, you would think that PVS would be the better option for large boot/logon storms, as we cache the vDisk in PVS RAM. This line of thinking is correct.

Now, before people say, “Hey, my SAN can cache”: you are correct. There are SAN caching solutions, but they cost money. With PVS, the caching is just part of the Windows operating system. Because of this, we can see an MCS implementation generating more IOPS than a PVS implementation. How much more? I’ve seen as much as 1.5x more IOPS. For deployments with 50 desktops, this might not be a big deal, but what if you are talking about 20,000 desktops?
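
To see where the gap comes from, here is an illustrative calculation of back-end disk IOPS under the logic above: PVS serves reads from server RAM, so its disk load is essentially the writes, while MCS sends both reads and writes to the same storage. The per-desktop IOPS and read ratios below are made-up placeholders, not Citrix benchmarks.

```python
# Illustrative PVS vs. MCS disk IOPS comparison. Per-desktop IOPS and
# read ratios are made-up placeholders, not Citrix benchmarks.

def disk_iops(desktops: int, iops_per_desktop: float,
              read_ratio: float, reads_served_from_ram: bool) -> float:
    """Back-end disk IOPS; PVS serves reads from server RAM."""
    total = desktops * iops_per_desktop
    reads = total * read_ratio
    writes = total - reads
    return writes if reads_served_from_ram else reads + writes

desktops = 20_000
phases = [
    ("boot",   26, 0.80),   # boot storms are read-heavy
    ("logon",  14, 0.50),   # logon is roughly an even split
    ("steady",  8, 0.20),   # steady state shifts toward writes
]
for phase, per_desktop, read_ratio in phases:
    pvs = disk_iops(desktops, per_desktop, read_ratio, True)
    mcs = disk_iops(desktops, per_desktop, read_ratio, False)
    print(f"{phase:6s}: PVS ~{pvs:>9,.0f}  MCS ~{mcs:>9,.0f}  ({mcs/pvs:.2f}x)")
```

With these placeholder numbers, the gap ranges from about 1.25x at steady state to 5x during a boot storm; an overall measured difference around 1.5x lands between those extremes because desktops spend most of their day in the write-leaning steady state.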

Some might be thinking about using XenServer IntelliCache to bring these numbers more in line. It has the potential to lower MCS IOPS requirements, but the products aren’t integrated yet, so I’ve got no data points to share.

Regardless, you need to take the resource requirements into consideration before making your PVS/MCS decision.

Daniel – Lead Architect
XenDesktop Design Handbook