Desktop Virtualization… The Right Way (Migration)


We’ve followed all of the best practices, done a proper analysis and design, and are ready to start moving users to their brand new virtual desktops. But not so fast. We need the proper plan in place or else we will end up with incorrect applications, confused users, or lost files. A migration plan must be put into place that provides the following for the users.

  • Personalization Synchronization: A user’s traditional desktop has become polluted with numerous application installs, updates, and patches. Blindly transferring these settings into the new environment will have unforeseen results. An alternate approach is to start with a clean environment for every user and migrate only the required settings (for example: Outlook signatures, browser favorites, etc.). By identifying the settings beforehand, the migration process can be validated and the settings can be tested to make sure they remain compatible with the new system. By the way, there are solutions out there (like AppSense) that can help simplify this aspect of the migration.
  • Data Synchronization: In addition to the user’s settings, any data resident on the local desktop must be transferred to a location accessible by the virtual desktop, preferably a network share. Although users have a tendency to store data anywhere on their desktop, analysis should quickly identify the two likely spots: My Documents or a folder on the local drive. When a user is ready for migration to the virtual desktop, the data is moved to the network share, and once the move is complete, the user should start working from the virtual desktop immediately. Even though these locations are often still accessible from the traditional desktop, working from there would typically result in poorer performance, slow file access and potential contention issues. Because of this, it is advisable to stay within the virtual desktop once the user has been migrated. A rough sketch of both synchronization steps follows this list.
  • End User Support: A migration is going to have an impact on the users. By identifying this as a fact, a proper support structure can be put into place beforehand. The support team should accommodate the typical issues encountered during a migration. The common questions/issues must be documented and communicated to all users who are about to undergo migration. These materials should be in an easy to find location. But these steps alone are not all that is required. During the first week of migration, there will be a flood of user issues and questions. If a thorough User Acceptance Test was completed, many of these challenges would have already been identified and a valuable FAQ would have already been created. The support team needs the tools and training in place to be able to assist the users in a timely manner. The support team is much more effective if they are able to see the user’s end-point device and the user’s virtual desktop, which is possible with GoToAssist. This gives support full visibility into the user’s challenges.
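
To make the “clean profile plus whitelisted settings” idea concrete, here is a minimal sketch in Python. The paths, share names, and whitelist entries are hypothetical placeholders; the real whitelist comes out of your own analysis (or a tool like AppSense), and a production migration would need logging, error handling, and validation on top of this.

    import shutil
    from pathlib import Path

    # Hypothetical whitelist built during the analysis phase. Only these
    # settings are carried into the clean profile; everything else stays
    # behind with the old, polluted desktop.
    SETTINGS_WHITELIST = [
        r"AppData\Roaming\Microsoft\Signatures",  # Outlook signatures
        r"Favorites",                             # browser favorites
    ]

    def migrate_user(old_profile: Path, new_profile: Path, data_share: Path) -> None:
        """Copy whitelisted settings, then move My Documents to the share."""
        for relative in SETTINGS_WHITELIST:
            source = old_profile / relative
            if not source.exists():
                continue  # the user never customized this setting
            target = new_profile / relative
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copytree(source, target, dirs_exist_ok=True)

        documents = old_profile / "Documents"
        if documents.exists():
            # Move (not copy) so there is a single authoritative copy and no
            # physical/virtual discrepancies after the cutover.
            shutil.move(str(documents), str(data_share / old_profile.name))

    # Example (hypothetical UNC paths):
    # migrate_user(Path(r"\\oldpc\c$\Users\jdoe"),
    #              Path(r"\\profiles\users$\jdoe"),
    #              Path(r"\\fileserver\home$"))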

The point to remember is that the migration plan must not be set in stone. As the first users are migrated, gaps in the process will come to light. The process must be flexible to accommodate unforeseen challenges along the way. Changes to the process must be communicated and followed by the rollout team. And finally, once a user’s data/settings are migrated, they MUST move to the virtual desktop. If not, expect data/personalization discrepancies between the physical and virtual desktop worlds.

Daniel Feller
Lead Architect – Worldwide Consulting Solutions
Citrix Systems, Inc.
Blog: Virtualize My Desktop

Desktop Virtualization… The Right Way (User Experience)


The user discussion doesn’t just end with an understanding of the topology. Desktop virtualization architecture will only get a user so far: access to a virtualized desktop. If the virtualized desktop does not provide the required experience in different scenarios, users will find ways of reverting back to their traditional model, or find a way to make life very difficult for you, the architect of this less-than-stellar solution.

Trying to win these users back is challenging, as the bad perceptions must be changed, and that takes time. Many of the missteps with regards to the user experience are based on improper analysis and planning. In order to have an environment that is aligned with the user community, understanding the following items is critical.

  • Network Impact: Desktop virtualization requires a network connection, either temporary or permanent depending on the virtual desktop model selected. Understanding the network impact is not a trivial task and will never produce exact numbers, because users do different things: typing, printing, browsing, Flash video, WMV video, online Facebook games, etc. However, the Performance Assessment and Bandwidth Analysis white paper should help quantify the impact of each activity and allow an architect to plan appropriately (a rough planning sketch follows this list).
  • Peripherals: One of the beauties of a traditional desktop is it is customizable with peripherals: printers, scanners, webcams, and external drives. These requirements must be understood and supported, but not at the expense of security. For example, should users be able to copy data from the data center to a personal USB storage drive? This might be construed as a security hole. What about allowing a user to copy a file from the USB drive to the data center? This might put the data center at risk for viruses or malware. The justification for certain devices must be determined, but regardless of the outcome, proper security procedures must be put into place.
  • Resources: Users who are not given the proper amount of dedicated resources (CPU and memory) are left with either a desktop experience that is unusable due to constant delays and sluggish responses caused by competing resource requests, or a desktop with ample resources that costs the business significant amounts of money due to unused and idle hardware. Although it is easier to allocate one resource configuration for every user, users have different requirements and should be given different configurations. It is usually a better option to create 3-4 different resource configurations for Light, Normal and Power users. With proper analysis of the requirements, users can be placed into one of a few defined configurations.
  • Mobility: A user’s requirement for offline mobility plays an important part in the overall analysis. This one requirement significantly limits the possibilities for the user with respect to the most appropriate FlexCast model. Many desktop virtualization models require an active network connection, and an active network connection is not guaranteed for the mobile user. Identifying this group of users allows for the design of an offline model of desktop virtualization.
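
As an illustration of that planning exercise, the sketch below aggregates assumed per-activity bandwidth figures for a site. Every number here is a hypothetical placeholder; the real values should come from the Performance Assessment and Bandwidth Analysis white paper and your own testing.

    # Hypothetical per-activity bandwidth figures (kbps per user).
    ACTIVITY_KBPS = {
        "typing": 30,
        "browsing": 150,
        "printing": 200,
        "flash_video": 800,
    }

    def site_bandwidth_kbps(user_activity_mix: dict[str, int]) -> int:
        """Estimate aggregate bandwidth for a site given users per activity."""
        return sum(ACTIVITY_KBPS[activity] * users
                   for activity, users in user_activity_mix.items())

    # Example: 40 typists, 10 browsers, 5 users watching video at one branch.
    peak = site_bandwidth_kbps({"typing": 40, "browsing": 10, "flash_video": 5})
    print(f"Plan for roughly {peak / 1000:.1f} Mbps plus headroom")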

These are some of the most important things to understand regarding the users and their experience expectations. If a user believes they are allowed to use a webcam within their virtual desktop and it does not work, that user now has a bad perception. The experience matters to the user, so it must matter to the architect.

Desktop Virtualization… The Right Way (User Topology)


One office with one type of desktop… Easy. Hundreds of offices with any type and age of desktops… Difficult but not impossible.

Most organizations find themselves in the difficult camp. A user’s desktop can be completely different (in terms of hardware, resources, applications and configuration) from that of the person sitting next to them doing a similar job. As the environment includes users from different departments, in different offices, with different requirements, it becomes clear that understanding the user topology of an organization is critical before one can create a desktop virtualization solution.

In previous blogs, I’ve discussed how understanding the underlying standards, applications and storms plays an important role in creating a successful virtual desktop design. The fourth requirement is to understand the organization’s user topology. More specifically, one must get a grasp of the endpoints and user locations.

First, the endpoints. Most organizations follow a 3-5 year desktop refresh cycle. At a minimum, there will be a different hardware configuration for each year of the cycle (in actuality, there will likely be many, many, many more configurations). The desktops that are less than 2-3 years old have hardware configurations that can easily support Windows 7 and the latest applications. These newer desktops have more virtual desktop options than an endpoint that is 5+ years old; for example, they have the processing power to support the Local Streamed Desktop FlexCast model instead of the hosted VM-based VDI desktop model.

With Local Streamed Desktop, the desktop is still virtualized and centrally managed, the desktop still receives the virtualized applications, and the users still have their personalized settings applied. The difference is that instead of using resources on a physical server in the data center, the local desktop resources are used. Because local desktop resources are consumed, fewer data center servers are required to support the same number of users.
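
A back-of-the-envelope calculation shows the scale of that saving. The desktops-per-server density below is an assumed figure for illustration only; real densities depend on workload and hardware.

    import math

    def hosting_servers_needed(users: int, desktops_per_server: int) -> int:
        """Data center servers needed to host every desktop centrally."""
        return math.ceil(users / desktops_per_server)

    # Assuming (hypothetically) 60 hosted desktops per server, 1,000 users
    # on hosted VDI need 17 servers; the same 1,000 users on Local Streamed
    # Desktop need almost none, because their own PCs supply the CPU and RAM.
    print(hosting_servers_needed(1000, 60))  # -> 17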

This is but one example of how understanding the endpoints helps determine the type of virtual desktop a user requires. However, just knowing the endpoints is only one aspect of the user topology. The second aspect, user’s location, also plays an important role in selecting the most appropriate virtual desktop.

Certain desktop models require a high-speed connection to the infrastructure while others can tolerate slower networks with higher latency. By assessing the user locations and their connections to the data center, the proper solution can be put into place to support the virtual desktop FlexCast model (a simple decision helper follows the list below).

  • Hosted shared desktop: Can be used on networks with low speeds and high latency
  • Hosted VM-based VDI desktop: Can be used on networks with low speeds and high latency
  • Hosted blade PCs: Can be used on networks with low speeds and high latency
  • Streamed local desktop: Requires a fast, low latency network to the physical desktop for optimal performance
  • Virtual Apps to Installed Desktops: Can be used on networks with low speeds and high latency. If application streaming is used (as compared to hosted applications), slower networks will delay application startup time, but users have the ability to work disconnected.
  • Local VM-based desktop (not yet available): Can be used on networks with low speeds and high latency, although the slower the network, the longer it will take to sync the image to the endpoint. Images can be tens of GBs in size. But once delivered to the endpoint, all communication remains local to the desktop.
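
The sketch below condenses those guidelines into a rough decision helper. The endpoint-age and network thresholds are illustrative assumptions, not official Citrix sizing guidance.

    def suggest_flexcast_model(endpoint_age_years: int,
                               network_fast_low_latency: bool,
                               needs_offline: bool) -> str:
        """Rough FlexCast suggestion from endpoint age and network quality."""
        if needs_offline:
            return "Offline model (e.g. local VM-based desktop, when available)"
        if endpoint_age_years <= 3 and network_fast_low_latency:
            # Newer endpoints can run the OS locally, saving data center servers.
            return "Streamed local desktop"
        # Older endpoints or slow, high-latency links favor hosted models.
        return "Hosted shared / hosted VM-based VDI desktop"

    print(suggest_flexcast_model(2, True, False))   # Streamed local desktop
    print(suggest_flexcast_model(6, False, False))  # Hosted model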

When deciding on the appropriate virtual desktop type, the endpoint and the user’s location matter. Without taking both into account, a user might end up with a fast virtual desktop that takes 5 minutes to start Microsoft Word. Gather all the information before deciding on your virtual desktop type.

Desktop Virtualization… The Right Way (Standards)


Total power often leads to corruption. No, I’m not talking about business or politics. I’m talking about desktops. Have you been in a meeting where people talk about giving users admin rights to workstations? I have two words for you… Be afraid… Be very afraid… OK, that was 5 words, but the point is clear. Be afraid.

Many of the challenges with the traditional, distributed desktop operating environment stem from the lack of standard definitions and enforcement. Most organizations strive for a secured and locked-down desktop environment, but over time users were granted exceptions. Throughout the months and years, those exceptions became the new de facto standard.

Now, users have local admin rights. Thousands of unique applications are installed throughout the organization. Every desktop configuration is unique. This is an almost impossible situation for any IT organization to support. This environment did not happen overnight; it took time. Standards slipped because it was simply easier and faster to circumvent the standards than to troubleshoot the issue. Because of the lack of standards, the environment is so convoluted and complex that it is excruciatingly difficult to make any changes or updates without causing mass confusion.

That being said, can these types of organizations still use desktop virtualization? Yes. And they will see many of the benefits of desktop virtualization that have been discussed over and over again. It will just be more difficult to achieve than for an organization that has desktop standards in place and actively followed.

Many organizations look at desktop virtualization as being the solution to simplify the desktop operating environment. It is not the solution; desktop virtualization is an enabler.

If done to the fullest extent, desktop virtualization is an enabler towards better IT management. Desktop virtualization can enable an organization to discard the bad habits of the past and replace them with best practices that can help an IT organization survive and succeed within an ever increasingly complex computing environment. In order to simplify the management of the desktop, reduce desktop operating costs, and achieve desktop virtualization success, the organization must have alignment in terms of:

  • User rights: Users must have enough rights to do their job, but this does not mean users should be local administrators. IT must be able to provide the users with the correct applications and resources when requested. If modifications are required, IT must be able to accommodate them in a reasonable amount of time. If IT is unable to meet the agreed upon time frames, alternatives must be made available so users can continue to be productive, which might require an open, temporary virtual desktop playground area where users can utilize these applications until IT integrates them into the mix. I discussed this in a previous blog about a virtual desktop playground.
  • Applications: Allowing users to install their own applications into the corporate desktop image increases complexity and reduces the security of the system. IT has no visibility into the application and is unable to plan upgrades, updates, or hardware refreshes. The applications could open up holes in the infrastructure that others could exploit. The organization must gain control of the applications if the organization is going to be more flexible.
  • Operating Procedures: IT must deliver the resources users require in an adequate amount of time. This involves the development of new IT processes and ways of working. If a user requires an application, IT must find a way of either incorporating the application into the environment, or finding the user an acceptable alternative while working within the confines of the corporate standards.

Simply moving to desktop virtualization will help solve some of these challenges, but if you want to make a significant improvement in the way IT is seen within your organization, there must be a new approach. Without a clear definition of the operating standards, moving to a desktop virtualization solution will result in many of the same challenges observed with the traditional, distributed desktop operating model. Chaos. Except this time it will be virtual chaos.

Your primary desktop is a


Fill in the blank if you will.  There are many people who are super excited about the upcoming release of the latest tablet PCs (iPad, Slate, etc.).  I recently received a comment from someone on Facebook related to a previous blog, iPad Will Not Replace Your Desktop.  The comment basically said:

“Does the iPad and like devices need to be fully functional to be successful?  How many people have more than one mobile device, like a laptop and a netbook?”

That is an interesting question.  But I’m starting to wonder: do we need a laptop and an iPad?  Do we need a laptop and a netbook?  Depending on what you do, the iPad or the netbook could potentially replace your laptop.  As I see it, most users have a smartphone and a main work computer; for many that is a laptop, because they require a larger form factor device while not in their office.  But what if we did the following:

  • Main computer: Thin client
  • Mobile computer: iPad/Netbook
  • Ultra-mobile computer: Smartphone

If we have Citrix Receiver on all of these devices, we access the same applications/data/environment.

Think about all of the problems we hear about with laptops: stolen, dropped, lost, expensive, etc.  If we went down the virtual desktop route, stolen, broken or lost laptops would not be a problem because your data would be in the data center with your virtual desktop.  So why use a laptop?

Is it possible that tablets and netbooks could mean that those of us with laptops can toss them away?  If the tablet/netbook provides us with a connection to a virtual desktop from anywhere, why would we need the laptop functionality?

Of course this won’t work for everyone. Some people will need a laptop. But what we will see in the coming months/years is a much more diverse endpoint environment. We know this is coming, so it is a good idea to start planning how you will integrate all of these endpoints into your infrastructure while still keeping the environment secure.

Desktop Virtualization… The Right Way (Storms)


One of the questions you must ask yourself when designing a desktop virtualization solution is: what are the user patterns?  This has a direct impact on XenDesktop farm design and scalability with respect to boot up storms and logon storms. Let’s take two different examples so you can get a better idea of what I’m talking about:

Scenario 1: 9-5:
In this scenario, all users logon in the morning and logoff in the evening.  There might be some sporadic users working after hours, but for the most part users stay within these working hours.  This is a fairly easy scenario, which is why I’ve started with it.

To design your environment, you need to make sure that the boot up storm doesn’t overwhelm your environment.  You will be starting a large number of hosted virtual desktops and that has a direct impact on your hypervisor of choice, your storage solution and your network infrastructure.  You can easily overcome any challenges with a boot up storm in this scenario by using the XenDesktop idle desktops configuration to pre-boot desktops X minutes before the main rush begins (X is based on how many desktops you need up and running before users start connecting).  By the time users come online, the system should have calmed down from the boot up storm.

Each hypervisor limits the number of simultaneous bootups (for XenServer, the limit is 3).  Although this helps limit the number of virtual desktops powering on at once, that process only requires a short amount of time as it does not include the actual OS loading.  If you have 1,000 desktops (across 10-20 hypervisors) that must be ready by 9AM, and you assume each desktop takes 30 seconds to fully boot, you want to start your bootup sequence by at least 8:30.
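
A quick sketch of that arithmetic, under the assumptions in the example above (the per-host limit and boot time come from the text; the host count is the midpoint of the 10-20 range, and starting at 8:30 leaves a comfortable buffer over the raw result):

    import math

    def preboot_lead_minutes(desktops: int, hypervisors: int,
                             simultaneous_boots: int, boot_seconds: int) -> float:
        """Minutes needed to boot the idle pool before the morning rush."""
        per_host = math.ceil(desktops / hypervisors)
        batches = math.ceil(per_host / simultaneous_boots)
        return batches * boot_seconds / 60

    # The 1,000-desktop example: 15 hosts, 3 boots at a time, 30 s per boot.
    lead = preboot_lead_minutes(1000, 15, 3, 30)
    print(f"Start booting at least {lead:.0f} minutes early")  # ~12 minutes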

The second aspect is the logon storm.  There is little we can do to the environment to spread the storm over a greater amount of time, as it is based solely on the user pattern.  The logon storm is going to have a direct impact on your farm design.  You need to look at the following (a rough sizing sketch follows this list):

  1. Number of user connections per minute
  2. The IOPS requirements during the storm
  3. The logon times you require
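
To make those metrics concrete, here is a rough sizing sketch. The 30-minute arrival window and the I/O-per-logon figure are assumptions for illustration; both should be measured in your own pilot.

    def logon_storm_profile(users: int, window_minutes: int,
                            io_per_logon: int) -> tuple[float, float]:
        """Rough connections-per-minute and sustained IOPS during the storm."""
        logons_per_minute = users / window_minutes
        # Spread each logon's I/O over its minute to estimate sustained IOPS.
        storm_iops = logons_per_minute * io_per_logon / 60
        return logons_per_minute, storm_iops

    # 1,000 users arriving over 30 minutes, assuming ~900 I/Os per logon.
    rate, iops = logon_storm_profile(1000, 30, 900)
    print(f"{rate:.0f} logons/minute, roughly {iops:.0f} sustained IOPS")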

As you add more users to the environment, you need to optimize your architecture and allocate additional resources in order to accommodate the storm.  This might require you to dedicate XenDesktop controllers as XML Brokers and Farm Masters.  By giving the controllers specific roles, you optimize those systems to be able to support greater numbers of simultaneously connecting users.

Scenario 2: 24/7 (3 shifts)
This scenario brings about a few more challenges in that users are always online.  The organization is running 100% of the time, and as some users are connecting, other users are logging off.  The cycle continues over and over again.  This architecture is really dependent on the environment in question. Even though the organization might be 24/7, those shifts might be located around the world in different locations connecting to different data centers (a follow-the-sun model).  But if we have a unique scenario with one data center and all shifts connecting to that one site, this type of environment would make us change our design as follows (it is safe to assume that the shifts are different sizes; in fact, many 24/7 models located in one site have one large shift while the remaining two shifts are significantly smaller):

In the 9-5 scenario, a boot storm wouldn’t impact other users as no users were online before the start of the workday.  In the 24/7 scenario, we have active users.   If we sized our environment based on max concurrency for a single shift, we have little extra capacity to pre-boot desktops.

  • First, we start all available workstations ahead of time to build up our idle pool (without disrupting working users).
  • Second, we disable the reboot-after-logoff option for the shift immediately before the largest shift starts, which allows those desktops to be ready to go even faster.  This can be done by creating a workstation group per shift. It does bring the risk of users not receiving a clean desktop, but this is mitigated by the desktops being rebooted (cleaned) after the other two shifts end.
  • Third, when the logon storm begins, we can expect a logoff storm to begin as well, because one shift begins as another ends.  Disabling the reboot for one shift change will help overcome the boot storm impact. To accommodate the logons/logoffs, we need to optimize our environment just as we did in the 9-5 operational model, dedicating controllers for XML brokering and the farm master.  This type of configuration allows us to support the largest possible number of users within one farm, although at a certain point we will require a new farm. (A sketch of the pre-boot headroom calculation follows this list.)
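
As an illustration of the capacity squeeze in this scenario, a minimal sketch assuming hypothetical shift sizes:

    def preboot_headroom(provisioned: int, active_sessions: int,
                         incoming_shift: int) -> int:
        """Desktops we can pre-boot for the next shift without touching
        desktops that still have active sessions."""
        spare = provisioned - active_sessions
        return min(spare, incoming_shift)

    # Hypothetical numbers: 1,200 provisioned desktops, 1,000 active users,
    # 400 users arriving on the next shift. Only 200 can be pre-booted; the
    # rest must come from reusing the outgoing shift's desktops (hence the
    # disabled reboot-after-logoff above).
    print(preboot_headroom(1200, 1000, 400))  # -> 200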

Two different user pattern scenarios to think about during a desktop virtualization design. A few things to keep in mind:

  • Does it require an understanding of the user environment? Yes
  • Will it impact scalability of the underlying infrastructure? Yes
  • Can the environment be designed in such a way to support these usage patterns? Yes

Desktop Virtualization… The Right Way (Applications)


Have you experienced this before? You need an application to help you with a project. You ask your manager if you can purchase the software and you get approval.  You go out and buy the software and install it onto your desktop and away you go to do your job.

This is a common situation, one I’ve done myself on many occasions. These applications make up the non-IT delivered application set of every organization, and it is a massive list.  This happens over and over again in every organization and in every department. So when you hear organizations say they have 10,000 or 20,000 applications, they are likely not exaggerating.  Out of that massive list, only 500-1,000 of those applications are IT-managed.

This brings about the main challenge with desktop virtualization: how do you deal with the non-IT delivered applications? With Citrix XenDesktop, if you use the recommended strategy of a single image for many users, you lose the ability to install an application into the virtual desktop and have it persist across reboots.  This is a major issue that must be dealt with, or users will not accept the virtual desktop.

First, you need an application assessment. You have a few options.

  • Entire site assessment: By using a tool or doing a manual assessment, you can get a list of applications deployed throughout the organization.  This will give you the data points, but the amount of data might be overwhelming. Imagine looking at a list of 20,000 applications. How do you even start determining your optimal solution?  This is information overload.
  • Department-by-department assessment: By focusing at the departmental or group level, you get a better grasp of the applications without being overwhelmed from the start, and your application list should be more manageable.
  • Survey: Leave it up to the departments to create a list of what their users NEED to effectively do their job and not what they HAVE.  Many of the applications are outdated and unused.  By identifying what is needed, the number of applications can be better managed.

Regardless of the approach taken, the following is needed for each application (a sketch of such an inventory record follows this list):

  1. User
  2. Application
  3. Dependencies
  4. Mobility requirements
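
A minimal sketch of what each inventory record might look like, using hypothetical field names and sample data:

    from dataclasses import dataclass, field

    @dataclass
    class AppInventoryRecord:
        """One row of the application assessment, whichever method produced it."""
        user: str                      # who needs the application
        application: str               # name and version
        dependencies: list[str] = field(default_factory=list)  # runtimes, middleware
        requires_offline: bool = False # mobility requirement

    records = [
        AppInventoryRecord("jdoe", "Microsoft Office 2007"),
        AppInventoryRecord("jdoe", "AutoCAD 2010", ["DirectX"], requires_offline=True),
    ]
    # Counting distinct applications shows how big the "layoff" candidate list is.
    print(len({r.application for r in records}), "distinct applications")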

Second, it’s time for layoffs, but this time we need to lay off applications.  If you ask your users what applications they have installed, they will miss most of them.  In fact, many of the applications installed on a typical desktop are not needed anymore.  By laying off applications, we can start to get control of our application set and give our IT organizations an opportunity to succeed.

Third, develop an application delivery strategy.  We can either install, host or stream.  Do you need all three? Potentially.  The point to remember is you need to be flexible; certain strategies will work better in certain situations.  Think about it this way (a rough decision sketch follows the list):

  • Certain applications will be used by 100% of your users.  These applications are best served by installing into the virtual desktop image. Why add another process (streaming/hosting) for an application that will be used by everyone, everyday?
  • Certain applications have a massive memory footprint. Executing such an application within every virtual desktop will result in massive amounts of RAM being consumed across the environment.  However, if that application were hosted on XenApp, those DLLs and EXEs could be shared between users, thus reducing the overall memory footprint required.
  • Certain applications are used by a small group of users (1-2% of users).  These applications might best be served via the hosting model on XenApp or via application streaming into the virtual desktop.
  • Certain applications go through constant updates (daily/weekly).  Instead of maintaining hundreds or thousands of installations, a single application image that can be distributed to any device when needed would appear to be the easier model.
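
The sketch below condenses those four observations into a rough decision helper. The usage and update thresholds are illustrative assumptions, not fixed Citrix guidance; real decisions should also weigh testing results and IT's capacity to package applications.

    def delivery_method(pct_of_users: float, heavy_memory: bool,
                        frequent_updates: bool) -> str:
        """Map application traits to install / host / stream."""
        if heavy_memory:
            return "Host on XenApp (share DLLs/EXEs across users)"
        if frequent_updates or pct_of_users < 0.05:
            return "Stream a single package into the virtual desktop"
        if pct_of_users > 0.9:
            return "Install in the common virtual desktop image"
        return "Host on XenApp or stream, based on testing"

    print(delivery_method(1.0, False, False))   # install in the image
    print(delivery_method(0.02, False, True))   # stream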

The point of all of this is that if you are going to be successful, you must have a strategy for delivering the applications into the virtual desktop.  The strategy is also dependent on how well your IT group can service the user requests for all of these applications.  If it is just not possible, your other alternative is to go down the Bring Your Own Computer (BYOC) route.

In the BYOC model, my physical desktop is maintained and managed by myself.  I’m not part of the domain, nor do I call support when I have an issue; I do it myself.  This also means that the non-IT delivered applications are installed on my own personal desktop.  So far, this model has worked for me, but I’m a savvy user and know how to fix a lot of the issues I run into.  This approach might be more difficult for those not used to self-supporting.  But if a user installed their own applications, then technically they are already self-supporting their non-IT delivered applications.

Remember, the desktop is the easy part.  Spend your time looking at your application set and remember the following:

  1. Application Assessment
  2. Application Layoffs
  3. Application Delivery Strategy

What other application characteristics have you seen that would help determine your application delivery strategy?