
Seven Requirements for Virtual Desktop Success


What do you think are the main ingredients of any successful desktop virtualization project? Is it the application integration methodology? Is it hardware? What about the IT team? Based on my experience, the top requirements really boil down to a few core items, all of which I’ve discussed many times in previous blog postings (applications, standards, and executive buy-in, to name a few).

Before we get into the seven requirements, we must understand the point of desktop virtualization.


Desktop Virtualization… The Right Way (Politics and Puppy Chow)


Politics and dog food… some might say they go hand-in-hand (especially if you watched any coverage about the healthcare debate). But politics and dog food are also relevant in most organizations, especially when undertaking a massive restructuring in the way you deliver desktops to users.

Desktop virtualization is not something you can just turn on one day; it takes planning. Some organizations decide to implement only a small-scale, limited virtual desktop environment, but with such an environment the anticipated improvements are lost. Instead of having a centrally managed desktop environment capable of supporting users in all geographies, the small-scale implementation ends up creating more complexity, as two different types of desktop environments must now be supported: traditional and virtual.

If the desktop virtualization solution is to be a success in the long run, it requires collaboration between multiple IT groups: network engineers, desktop administrators, server specialists, application experts, and the support team. In many organizations, these teams each have their own objectives and responsibilities. Taking on a desktop virtualization project requires time, resources, and commitment; that level of cooperation is unlikely to happen organically.

To be done correctly, a desktop virtualization initiative must have executive-level buy-in. Only when the executives are on board with the new initiative will all of the pieces fall into place, which include:

  • IT Collaboration: Once the executives make desktop virtualization a corporate imperative, the IT groups must forgo the day-to-day politics of their own silos and work together to come up with a solution that meets the objectives of the business. This will be difficult and will involve breaking down the typical barriers between groups, but with a mandate from the highest levels of the organization, the groups will have no alternative but to work together on a common goal.
  • Funding: Desktop virtualization requires the purchase of additional data center hardware, the level of which is based on the scale, configuration, and virtual desktop types delivered. Most departments do not have the financial resources to create a best-of-breed solution on their own, which results in a fragmented solution that does not meet the organization’s expectations. With executive buy-in, however, funding must be made available to upgrade the infrastructure to support the new environment, including new server hardware, storage infrastructure, management tools, and network optimizations.
  • Users and Change: Most users hate change, especially when it comes to their desktop. Change often means downtime, broken applications, or lost files. This resistance can be mitigated by users hearing firsthand from the executives why the initiative is being undertaken, what the expectations are, and how the users can help make the project a success. In fact, executives should be the first to embrace the new environment, configured in the same way that users will be expected to work. By watching the leaders eat the puppy chow, users are more inclined to believe that the new solution is the right thing for them.

With all of the talk about dog food, I think I’ll buy some stock in Purina Puppy Chow.

This is Part 7 in the Desktop Virtualization… The Right Way series.

Daniel Feller
Lead Architect – Worldwide Consulting Solutions
Citrix Systems, Inc.
Blog: Virtualize My Desktop

Desktop Virtualization… The Right Way (Migration)


We’ve followed all of the best practices, done a proper analysis and design, and are ready to start moving users to their brand new virtual desktops. But not so fast. We need to make sure we have the proper plan in place, or else we will end up with incorrect applications, confused users, or lost files. A migration plan must be put into place that provides the following for the users.

  • Personalization Synchronization: A user’s traditional desktop has become polluted with numerous application installs, updates, and patches. Blindly transferring these settings into the new environment will have unforeseen results. A better approach is to start with a clean environment for every user and then migrate only the required settings (for example, the Outlook signature and browser favorites). By identifying the settings beforehand, the migration process can be validated and the settings can be tested to make sure they remain compatible with the new system. By the way, there are solutions out there (like AppSense) that can help simplify this aspect of the migration.
  • Data Synchronization: In addition to the user’s settings, any data resident on the local desktop must be transferred to a location accessible by the virtual desktop, preferably a network share. As users have a tendency to store this data anywhere on their desktop, analysis should quickly narrow it down to two likely spots: My Documents or a folder on the local drive. When a user is ready for migration to the virtual desktop, the data is moved to the network share (a simple sketch of this selective settings-and-data copy follows the list). Once the move is complete, the user should start working from the virtual desktop immediately. Even though the network share is often still accessible from the traditional desktop, continuing to work from there would typically result in poorer performance, slow file access, and potential contention issues. Because of this, it is advisable to stay within the virtual desktop once the user has been migrated.
  • End User Support: A migration is going to have an impact on the users. By acknowledging this fact, a proper support structure can be put into place beforehand. The support team should be prepared for the typical issues encountered during a migration. The common questions and issues must be documented and communicated to all users who are about to undergo migration, and these materials should be kept in an easy-to-find location. But these steps alone are not enough. During the first week of migration, there will be a flood of user issues and questions. If a thorough User Acceptance Test was completed, many of these challenges will already have been identified and a valuable FAQ already created. The support team needs the tools and training to be able to assist the users in a timely manner. The support team is much more effective if they are able to see both the user’s endpoint device and the user’s virtual desktop, which is possible with GoToAssist. This gives support full visibility into the user’s challenges.
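
To make the selective approach concrete, here is a minimal sketch of a migration script, assuming a Windows profile layout. The profile path, network share, and the lists of settings and data folders are hypothetical placeholders; a real migration would pull them from the analysis described above (or use a purpose-built tool like AppSense).

    import shutil
    from pathlib import Path

    # Hypothetical locations; a real migration would read these from a manifest.
    OLD_PROFILE = Path(r"C:\Users\jdoe")
    NETWORK_SHARE = Path(r"\\fileserver\profiles\jdoe")

    # Only the settings identified beforehand are migrated; everything else
    # stays behind so the new virtual desktop starts clean.
    SETTINGS_TO_MIGRATE = [
        r"AppData\Roaming\Microsoft\Signatures",  # Outlook signatures
        r"Favorites",                             # browser favorites
    ]

    # Likely data spots found during analysis: My Documents or a local folder.
    DATA_TO_MIGRATE = [r"Documents", r"LocalData"]

    def migrate(folders):
        for rel in folders:
            src, dst = OLD_PROFILE / rel, NETWORK_SHARE / rel
            if src.exists():
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copytree(src, dst, dirs_exist_ok=True)
                print(f"migrated {src} -> {dst}")
            else:
                print(f"skipped {src} (not found)")

    migrate(SETTINGS_TO_MIGRATE)
    migrate(DATA_TO_MIGRATE)

Because the settings list is explicit, the copy itself doubles as documentation of exactly what was migrated, which makes the validation step described above much easier.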

The point to remember is that the migration plan must not be set in stone. As the first users are migrated, gaps in the process will come to light. The process must be flexible to accommodate unforeseen challenges along the way. Changes to the process must be communicated and followed by the rollout team. And finally, once a user’s data/settings are migrated, they MUST move to the virtual desktop. If not, expect data/personalization discrepancies between the physical and virtual desktop worlds.

Daniel Feller
Lead Architect – Worldwide Consulting Solutions
Citrix Systems, Inc.
Blog: Virtualize My Desktop

Desktop Virtualization… The Right Way (User Experience)


The user discussion doesn’t just end with an understanding of the topology. The desktop virtualization architecture will only get a user so far: access to a virtualized desktop. If the virtualized desktop does not provide the required experience in different scenarios, users will find ways of reverting back to their traditional model, or find a way to make life very difficult for you, the architect of this less-than-stellar solution.

Trying to win these users back is challenging, as the bad perceptions must be changed, and that takes time. Many of the missteps with regard to the user experience are based on improper analysis and planning. In order to have an environment that is aligned with the user community, understanding the following items is critical.

  • Network Impact: Desktop virtualization requires a network connection, either temporary or permanent depending on the virtual desktop model selected. Understanding the network impact is not a trivial task and will never yield exact numbers, because users do different things: typing, printing, browsing, Flash video, WMV video, online Facebook games, etc. However, the Performance Assessment and Bandwidth Analysis white paper should help you understand the impact of each activity and allow an architect to plan appropriately.
  • Peripherals: One of the beauties of a traditional desktop is that it is customizable with peripherals: printers, scanners, webcams, and external drives. These requirements must be understood and supported, but not at the expense of security. For example, should users be able to copy data from the data center to a personal USB storage drive? This might be construed as a security hole. What about allowing a user to copy a file from the USB drive to the data center? This might put the data center at risk of viruses or malware. The justification for certain devices must be determined, but regardless of the outcome, proper security procedures must be put into place.
  • Resources: Users who are not given the proper amount of dedicated resources (CPU and memory) are left with either a desktop experience that is unusable due to constant delays and sluggish responses caused by competing resource requests, or a desktop with ample resources that costs the business significant amounts of money in unused, idle hardware. Although it is easier to allocate one resource configuration for every user, users have different requirements and should be given different configurations. It is usually better to create three or four resource configurations, for example for Light, Normal, and Power users (a simple sketch follows this list). With proper analysis of the requirements, users can be placed into one of these few defined configurations.
  • Mobility: A user’s requirement for offline mobility plays an important part in the overall analysis. This one requirement significantly limits the possibilities for the user with respect to the most appropriate FlexCast model. Many desktop virtualization models require an active network connection, and an active network connection is not guaranteed for the mobile user. Identifying this group of users allows for the design of an offline model of desktop virtualization.
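
As an illustration of the resource-tier idea, here is a minimal sketch. The vCPU/RAM values and the thresholds are assumptions for the example; the real numbers must come from the resource analysis of your own user community.

    # Hypothetical resource tiers; actual values come from your own analysis.
    RESOURCE_TIERS = {
        "light":  {"vcpus": 1, "ram_gb": 1.5},  # task workers, few applications
        "normal": {"vcpus": 2, "ram_gb": 3.0},  # typical office users
        "power":  {"vcpus": 4, "ram_gb": 6.0},  # developers, analysts
    }

    def assign_tier(avg_cpu_percent: float, avg_ram_gb: float) -> str:
        """Place a user into one of the defined tiers based on measured usage."""
        if avg_cpu_percent > 50 or avg_ram_gb > 2.5:
            return "power"
        if avg_cpu_percent > 20 or avg_ram_gb > 1.2:
            return "normal"
        return "light"

    # Example: usage figures gathered during the assessment phase.
    print(assign_tier(avg_cpu_percent=35, avg_ram_gb=1.8))  # -> normal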

These are some of the most important things to understand regarding the users and their experience expectations. If a user believes they are allowed to use a webcam within their virtual desktop and it does not work, that user now has a bad perception. The experience matters to the user, so it must matter to the architect.

Desktop Virtualization… The Right Way (User Topology)


One office with one type of desktop… Easy. Hundreds of offices with any type and age of desktops… Difficult but not impossible.

Most organizations find themselves in the difficult camp. A user’s desktop can be completely different (in terms of hardware, resources, applications, and configuration) from that of the person sitting next to them doing a similar job. As the environment includes users from different departments, in different offices, with different requirements, it becomes clear that understanding an organization’s user topology is critical before one can create a desktop virtualization solution.

In previous blogs, I’ve discussed how understanding the underlying standards, applications and storms plays an important role in creating a successful virtual desktop design. The fourth requirement is to understand the organization’s user topology. More specifically, one must get a grasp of the endpoints and user locations.

First, the endpoints. Most organizations follow a 3-5 year desktop refresh cycle, so at a minimum there will be 5 different hardware configurations, one for each year of the cycle (in actuality, there will likely be many, many more). The desktops that are less than 2-3 years old have hardware configurations that can easily support Windows 7 and the latest applications, which gives these newer desktops more virtual desktop options than an endpoint that is 5+ years old. For example, newer desktops have the processing power to support the Local Streamed Desktop FlexCast model instead of the hosted VM-based VDI desktop model.

With Local Streamed Desktop, the desktop is still virtualized and centrally managed, the desktop still receives the virtualized applications, and the users still have their personalized settings applied. The difference is that instead of using resources on a physical server in the data center, the local desktop’s resources are used. Because local desktop resources are consumed, fewer data center servers are required to support the same number of users.

This is but one example of how understanding the endpoints helps determine the type of virtual desktop a user requires. However, the endpoints are only one aspect of the user topology. The second aspect, the user’s location, also plays an important role in selecting the most appropriate virtual desktop.

Certain desktop types require a high-speed connection to the infrastructure, while other options can tolerate slower networks with higher latency. By assessing the user locations and their connections to the data center, the proper solution can be put into place to support the chosen virtual desktop FlexCast model (a simple decision sketch follows the list below).

  • Hosted shared desktop: Can be used on networks with low speeds and high latency
  • Hosted VM-based VDI desktop: Can be used on networks with low speeds and high latency
  • Hosted blade PCs: Can be used on networks with low speeds and high latency
  • Streamed local desktop: Requires a fast, low-latency network to the physical desktop for optimal performance
  • Virtual Apps to Installed Desktops: Can be used on networks with low speeds and high latency. If application streaming is used (as opposed to hosted applications), slower networks will delay application startup time, but users gain the ability to work disconnected.
  • Local VM-based desktop (not yet available): Can be used on networks with low speeds and high latency, although the slower the network, the longer it will take to sync the image to the endpoint; images can be tens of GBs in size. Once delivered to the endpoint, however, all communication remains local to the desktop.
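
To show how these criteria combine, here is an illustrative decision helper that mirrors the list above. The bandwidth and latency thresholds are assumptions for the sketch, not Citrix-published figures, and a real assessment would weigh more factors (endpoint age, peripherals, security).

    # Illustrative FlexCast selection logic; thresholds are assumptions.
    def suggest_flexcast_model(bandwidth_kbps: int, latency_ms: int,
                               needs_offline: bool, capable_endpoint: bool) -> str:
        if needs_offline:
            # Only a locally executing model works without a network connection.
            return "Local VM-based desktop (or streamed apps for offline use)"
        fast_lan = bandwidth_kbps >= 10_000 and latency_ms <= 10
        if capable_endpoint and fast_lan:
            # A capable endpoint on a fast LAN can run the streamed local desktop.
            return "Streamed local desktop"
        # The hosted models tolerate low speeds and high latency.
        return "Hosted shared / hosted VM-based VDI / hosted blade PC"

    # Example: a branch-office user on a slow WAN link with an aging PC.
    print(suggest_flexcast_model(bandwidth_kbps=512, latency_ms=120,
                                 needs_offline=False, capable_endpoint=False))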

When deciding on the appropriate virtual desktop type, the endpoint and the user’s location matter. Without taking both into account, a user might end up with a fast virtual desktop that takes 5 minutes to start Microsoft Word. Gather all the information before deciding on your virtual desktop type.

Desktop Virtualization… The Right Way (Standards)


Total power often leads to corruption. No, I’m not talking about business or politics; I’m talking about desktops. Have you been in a meeting where people talk about giving users admin rights to workstations? I have two words for you… Be afraid… Be very afraid… OK, that was five words, but the point is clear. Be afraid.

Many of the challenges with the traditional, distributed desktop operating environment stem from the lack of standard definitions and enforcement. Most organizations strive for a secure and locked-down desktop environment, but over time users were granted exceptions. Throughout the months and years, those exceptions became the new de facto standard.

Now users have local admin rights, thousands of unique applications are installed throughout the organization, and every desktop configuration is unique. This is an almost impossible situation for any IT organization to support. This environment did not happen overnight; it took time. Standards slipped because it was simply easier and faster to circumvent them than to troubleshoot the issue. Because of the lack of standards, the environment is so convoluted and complex that it is excruciatingly difficult to make any changes or updates without causing mass confusion.

That being said, can these types of organizations still use desktop virtualization? Yes, and they will see many of the benefits of desktop virtualization that have been discussed over and over again. It will just be more difficult to achieve than for an organization that has desktop standards in place and actively followed.

Many organizations look at desktop virtualization as being the solution to simplify the desktop operating environment. Desktop virtualization is not the solution itself; it is an enabler.

If taken to the fullest extent, desktop virtualization is an enabler of better IT management. It can enable an organization to discard the bad habits of the past and replace them with best practices that help an IT organization survive and succeed within an increasingly complex computing environment. In order to simplify the management of the desktop, reduce desktop operating costs, and achieve desktop virtualization success, the organization must have alignment in terms of:

  • User rights: Users must have enough rights to do their jobs, but this does not mean users should be local administrators. IT must be able to provide users with the correct applications and resources when requested, and if modifications are required, IT must be able to accommodate them in a reasonable amount of time. If IT is unable to meet the agreed-upon time frames, alternatives must be made available so users can continue to be productive; this might require an open, temporary virtual desktop “playground” area where users can utilize these applications until IT integrates them into the mix. I discussed this in a previous blog about a virtual desktop playground.
  • Applications: Allowing users to install their own applications into the corporate desktop image increases complexity and reduces the security of the system. IT has no visibility into such applications and is unable to plan upgrades, updates, or hardware refreshes, and the applications could open holes in the infrastructure that others could exploit. The organization must gain control of the applications if it is going to become more flexible.
  • Operating Procedures: IT must deliver the resources users require in an adequate amount of time. This involves the development of new IT processes and ways of working. If a user requires an application, IT must find a way of either incorporating the application into the environment, or finding the user an acceptable alternative while working within the confines of the corporate standards.

Simply moving to desktop virtualization will help solve some of these challenges, but if you want to make a significant improvement in the way IT is seen within your organization, there must be a new approach. Without a clear definition of the operating standards, moving to a desktop virtualization solution will result in many of the same challenges observed with the traditional, distributed desktop operating model. Chaos. Except this time it will be virtual chaos.

Desktop Virtualization… The Right Way (Storms)


One of the questions you must ask yourself when designing a desktop virtualization solution is whether you understand the user patterns. This has a direct impact on XenDesktop farm design and scalability with respect to boot-up storms and logon storms. Let’s take two different examples so you can get a better idea of what I’m talking about:

Scenario 1: 9-5:
In this scenario, all users log on in the morning and log off in the evening. There might be some sporadic users working after hours, but for the most part users stay within these working hours. This is a fairly easy scenario, which is why I’ve started with it.

To design your environment, you need to make sure that the boot-up storm doesn’t overwhelm it. You will be starting a large number of hosted virtual desktops, and that has a direct impact on your hypervisor of choice, your storage solution, and your network infrastructure. You can easily overcome any challenges with a boot-up storm in this scenario by using the XenDesktop idle desktops configuration to pre-boot desktops X minutes before the main rush begins (X is based on how many desktops you need up and running before users start connecting). By the time users come online, the system should have calmed down from the boot-up storm.

Each hypervisor limits the number of simultaneous boot-ups (for XenServer, 3). Although this limits the number of virtual desktops powering on at once, the power-on step itself only takes a short amount of time, as it does not include the actual OS load. If you have 1,000 desktops (across 10-20 hypervisors) that must be ready by 9AM, and you assume each desktop takes 30 seconds to fully boot, you want to start your boot-up sequence by 8:30 at the latest (a rough calculation is sketched below).
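
A rough calculation makes the 8:30 figure concrete. This is a minimal sketch using the numbers from the paragraph above; it deliberately ignores storage and network contention during the storm, which is why starting earlier than the computed minimum is wise.

    import math

    # Numbers from the example above; all are illustrative.
    desktops = 1_000
    hypervisors = 20
    boots_per_host = 3     # XenServer's simultaneous boot-up limit noted above
    boot_seconds = 30      # time for one desktop to fully boot

    concurrent = hypervisors * boots_per_host   # 60 desktops booting at once
    waves = math.ceil(desktops / concurrent)    # 17 boot "waves"
    lead_minutes = waves * boot_seconds / 60    # ~8.5 minutes minimum

    print(f"minimum pre-boot lead time: {lead_minutes:.1f} minutes")
    # A 30-minute head start (8:30 for a 9:00 rush) leaves a healthy buffer
    # on top of this minimum for storage and network contention.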

The second aspect is the logon storm. There is little we can do to the environment to spread this storm over a greater amount of time, as it is based solely on the user pattern. The logon storm is going to have a direct impact on your farm design. You need to look at the following (a back-of-the-envelope sizing sketch follows the list):
1. The number of user connections per minute
2. The IOPS requirements per minute
3. The logon times you require
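
A back-of-the-envelope sizing of the storm might look like the sketch below. The 30-minute arrival window and the per-logon IOPS figure are assumptions for illustration; both must be replaced with measurements from your own environment.

    # Hypothetical logon storm sizing; replace the inputs with measured values.
    users = 1_000
    logon_window_minutes = 30   # most users arrive within this window
    iops_per_logon = 40         # assumed transient IOPS while a session builds

    logons_per_minute = users / logon_window_minutes        # ~33 per minute
    peak_storm_iops = logons_per_minute * iops_per_logon    # ~1,333 extra IOPS

    print(f"{logons_per_minute:.0f} logons/min, "
          f"~{peak_storm_iops:.0f} additional IOPS at the peak")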

As you add more users to the environment, you need to optimize your architecture and allocate additional resources in order to accommodate the storm.  This might require you to dedicate XenDesktop controllers as XML Brokers and Farm Masters.  By giving the controllers specific roles, you optimize those systems to be able to support greater numbers of simultaneously connecting users.

Scenario 2: 24/7 (3 shifts)
This scenario brings a few more challenges in that users are always online. The organization is running 100% of the time: as some users are connecting, other users are logging off, and the cycle continues over and over again. The architecture here really depends on the environment in question. Even though the organization might run 24/7, those shifts might be located around the world, connecting to different data centers (a follow-the-sun model). But if we have a scenario with one data center and all shifts connecting to that one site, this type of environment changes our design as follows (it is safe to assume that the shifts are different sizes; in fact, many 24/7 models located in one site have one large shift and two significantly smaller ones):

In the 9-5 scenario, a boot storm wouldn’t impact other users, as no users were online before the start of the workday. In the 24/7 scenario, we have active users. If we sized our environment based on the maximum concurrency of a single shift, we have little extra capacity with which to pre-boot desktops.

  • First, we start all available workstations ahead of time to build up our idle pool (without disrupting working users).
  • Second, we disable the reboot-after-logoff option for the shift immediately before the largest shift starts, which allows those desktops to be ready to go even faster. This can be done by creating a workstation group per shift (sketched after this list). It does bring the risk of users not receiving a clean desktop, but this is mitigated by the desktops being rebooted (cleaned) after the other two shifts end.
  • Third, when the logon storm begins, we can also expect a logoff storm, because one shift begins as another ends. Disabling the reboot for one shift change will help overcome the boot storm’s impact. To accommodate the logons and logoffs, we need to optimize our environment just as we did in the 9-5 operational model, dedicating controllers for XML brokering and the farm master role. This type of configuration allows us to support the largest possible number of users within one farm, although at a certain point we will require a new farm.
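
The workstation-group-per-shift idea can be captured in a simple policy table, sketched below. The shift names, sizes, and flag name are hypothetical; in practice this would be expressed through XenDesktop’s own group and reboot settings rather than a script.

    # Hypothetical per-shift policy table illustrating the approach above.
    SHIFT_GROUPS = {
        "shift_a_day":   {"size": 600, "reboot_on_logoff": True},
        # shift_b precedes the largest shift, so its desktops skip the reboot
        # and remain ready for the big logon storm; they are cleaned later.
        "shift_b_swing": {"size": 250, "reboot_on_logoff": False},
        "shift_c_night": {"size": 150, "reboot_on_logoff": True},
    }

    for name, policy in SHIFT_GROUPS.items():
        action = "reboot (clean)" if policy["reboot_on_logoff"] else "keep running"
        print(f"{name}: {policy['size']} desktops, on logoff -> {action}")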

Two different user pattern scenarios to think about during a desktop virtualization design. A few things to keep in mind:

  • Does it require an understanding of the user environment? Yes
  • Will it impact scalability of the underlying infrastructure? Yes
  • Can the environment be designed in such a way to support these usage patterns? Yes