As we all know, a single image management solution is extremely important in the VDI world. We will have hundreds or thousands of desktops that must be built and maintained. Image management is even more important when we have stateless desktops, because we want each desktop reset to its original state after every user session. If you are doing stateless desktops without some form of image management to keep the desktops identical on initial startup, people will think you are crazy (and I think you would be).
What about RDS-based (session virtualization) implementations? I don’t hear much discussion on the need for image management with session virtualization. Is it because it is simply a common-sense requirement that everyone already meets, or is it because people believe you can get by without it?
Just in case it’s the latter, let’s take a trip in the “Wayback machine” (sorry, just saw Mr. Peabody and Sherman with my kids) to the 1990s and remember the world of WinFrame and MetaFrame, a world without single image management. We would build MetaFrame servers with Ghost, automated scripts and other deployment tools. They worked great. They let us build servers without having to sit in front of a screen all day hitting Next, Next, Next (and did I mention Next?). Deployment was easy. And then, a week or two later, the user complaints would start…
My app worked correctly yesterday, but it doesn’t today
Why is this different than it was yesterday?
Where did my add-on go?
This sucks. I hate it.
It all came down to a single reason: although our servers were built identically, they started to take on unique personas the moment they were turned on. And as users connected and did things (work), the servers drifted further and further apart. Eventually, you began to hear the users, and life was no longer good for the IT admin.
This was a major issue for every organization, which is why we have Provisioning Services. Regardless of whether your servers are physical or virtual, they will be identical, because Provisioning Services delivers a single image to every target. And Provisioning Services makes sure those targets remain consistent, because on startup each target starts from a clean state.
There are also some organizations that will want to extend their session virtualization environment to include VDI desktops. It only makes sense that your enterprise image management solution should be able to handle physical and virtual VDI, physical and virtual RDS and any other combination.
Before single image management solutions like Provisioning Services and Machine Creation Services came around, a user’s computing workspace was like a box of chocolates: you never knew what you were going to get.
Virtual Feller’s virtual thoughts
One of the most common things every design document has is a conceptual diagram showing how the entire solution fits together. XenApp 7.5 and XenDesktop 7.5 are no different. If you looked at the XenApp 7.5 and XenDesktop 7.5 blueprint blog, you would have seen a new conceptual diagram based on the Citrix 5-layer model (Users, Access, Resources, Control and Hardware).
The good news is that I’ve put all of these images into a new Microsoft Visio stencil. But it gets better!!!
I receive a lot of emails from Citrix architects and admins who want the Visio diagram in addition to the stencil. I end up sending these out because it saves many of you a lot of time; you don’t have to recreate the wheel (I mean diagram). So I started to think about how I could make the diagram available to everyone as well.
Simply make the diagram into a stencil!!! So simple, and it works.
So when you download the Visio stencil, the first two items will be the Conceptual and Detailed Architecture diagrams.
Get your hands on the new stencils:
If you look at the XenApp and XenDesktop deployments to date, you will notice that most of the users’ requirements fit into one of the following buckets:
- User requires access to a line of business application
- User requires access to a standardized desktop environment
- User requires access to a personalized desktop environment
- A combination of the above
Another thing you will notice with these 4 buckets is that “Users” “Access” a “Resource,” whether that resource is an app, a desktop or a combination. This is how you should start thinking about your own XenApp and XenDesktop solution: what does your user need access to? This simple question is the basis for the XenApp and XenDesktop 7.5 blueprint.
The blueprint is based on the Citrix 5-layer model.
The top 3 layers are unique for each user group. There might be similarities between user groups, but the important aspect is that you walk through those three layers (users, access and resources) for each user group. Once those are defined, you design a single control layer running on your hardware platform for the entire solution.
The XenApp and XenDesktop 7.5 blueprint follows this model and defines each layer based on the most common design scenarios.
To get started, we begin with a standard conceptual architecture diagram for XenApp 7.5 and XenDesktop 7.5.
We continue the blueprint by:
- Identifying the most common types of user groups and their recommended resources
- Defining authentication and security policies to gain access to the resources
- Detailing the image, application and personalization settings for each resource catalog
- Allocating the right number of control systems to support the overall solution
- Determining how much hardware (CPU, RAM and storage) we will need in order to build the environment.
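As a rough illustration of the last two steps, here is a back-of-the-envelope sizing sketch. Every per-user ratio in it (users per core, RAM and storage per user, host capacities) is a placeholder assumption for illustration, not a value from the blueprint; substitute the figures from your own assessment.

```python
import math

def size_hosts(users, users_per_core=8, ram_gb_per_user=2,
               storage_gb_per_user=10, cores_per_host=16,
               ram_gb_per_host=256):
    """Estimate host count and resource totals for a user population.

    All ratios are illustrative placeholders, not blueprint values.
    """
    cores = math.ceil(users / users_per_core)        # total cores needed
    ram = users * ram_gb_per_user                    # total RAM (GB)
    storage = users * storage_gb_per_user            # total storage (GB)
    # Host count is driven by whichever resource runs out first.
    hosts = max(math.ceil(cores / cores_per_host),
                math.ceil(ram / ram_gb_per_host))
    return hosts, cores, ram, storage

# Example: 500 users with the placeholder ratios above.
hosts, cores, ram, storage = size_hosts(500)
```

The real blueprint tables replace these placeholder ratios with workload-specific numbers, but the shape of the calculation is the same.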
So pour a drink, sit down, get comfortable and grab the following deployment blueprints:
The desktop is a unique beast within the data center. It is something different from what we’ve typically placed within the tightly controlled, highly available environment. So far, the desktop has changed to better align with data center practices: high levels of availability and shared storage. But is this the right approach? Should the desktop align with the data center, or should the data center align with the desktop by developing a new service level?
I’ve seen many people apply high-availability requirements to the desktop, which often requires the use of shared storage. Personally, I think this is going too far. I don’t think it is needed, but as soon as I state something like this, I get a lot of pushback.
Let’s walk through the arguments I normally hear:
There is a belief that local storage will not provide enough performance to accommodate virtual desktops. Yet organizations have successfully used local storage, and I’ve seen lab tests back this up. The important thing is that you have enough performance (which is no different if you use shared storage). On the local storage side of things, your limiting factor will be how many drive bays you have. With a dual-processor server with 8 cores per socket (16 total cores), you will need roughly eight 15,000 RPM spindles. It is also a good idea to use an array controller that caches writes to help absorb the peaks.
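To make the spindle math concrete, here is a minimal sketch of that estimate. The ~180 IOPS per 15,000 RPM spindle and the fraction of write peaks absorbed by the controller cache are planning assumptions of mine, not figures from the post:

```python
import math

def spindles_needed(users, iops_per_user, write_ratio,
                    spindle_iops=180, cache_absorption=0.3):
    """Estimate 15K RPM spindles required for local storage.

    Assumes a write-back array controller soaks up a fraction
    (cache_absorption) of the write load; both defaults are
    illustrative planning assumptions.
    """
    total_iops = users * iops_per_user
    # Writes partially absorbed by the controller's write cache.
    effective = total_iops * (1 - write_ratio * cache_absorption)
    return math.ceil(effective / spindle_iops)

# Example: 200 users at 10 IOPS each, 90% write workload.
spindles = spindles_needed(200, 10, 0.9)
```

Plug in your own measured per-user IOPS and R/W ratio; the point is simply that the spindle count, not raw capacity, is what fills the drive bays first.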
Once you accept that local storage can provide the required performance, the next roadblock is availability. What happens if the server has a catastrophic failure and everything is lost?
Truth be told, a server hosted virtual desktop has better availability than any traditional desktop I’ve ever seen. Think about all of the redundancies we apply to any data center workload:
• RAID: A server will most likely use RAID, allowing me to overcome a failure of a drive. My traditional desktop can’t do this.
• Power: A server will most likely have redundant power supplies. My traditional desktop doesn’t have this.
• Network: A server will most likely have redundant network connections. My traditional desktop has one.
• Monitoring: A server will most likely be constantly monitored by a team of admins. My traditional desktop is ignored.
Already, my server hosted virtual desktop will have better availability than my traditional desktop and I haven’t even made a requirement for shared storage.
Even with all of these redundancies at the local server level, what happens if the server still has a catastrophic failure and I’m using local storage?
• If I have a shared desktop (XenApp), and it fails, I simply start a new session. The impact is pretty minimal.
• If I have a pooled desktop and it fails, I simply start a new session. The impact is pretty minimal.
• If I have a personal desktop (Personal vDisk or dedicated desktop) and it fails, the user gets a fresh desktop with the standard corporate image, including the standard application set. The user completes the process by personalizing it as necessary. This isn’t as bad as it sounds. Think about what goes into personalization:
○ Data: The user’s data should be stored somewhere besides locally (ShareFile or network share is optimal).
○ User Settings: Desktop and application settings are still part of the profile, which should be on a network share and configured as a roaming profile.
○ User Applications: If we are using local storage for the virtual desktop, then our user applications have been lost.
Is losing user applications a reason that would require us to use shared storage? I don’t think so.
Remember, by their very nature, user-installed apps are not supported, managed or maintained by IT; they are user-managed. It is the user’s responsibility to install the apps if they need them, and to fix them if they break. The nature of user apps is that it is all up to the user.
This is the point many people forget to consider when debating about local vs. shared storage.
Your hardware platform will dictate whether local storage is even an option. If you go with blades, local storage is not going to happen due to the limited number of drive bays.
What if you have 2,000 or fewer users? Too often, we focus squarely on enterprise environments (10,000+ users) and ignore the small-to-medium business. So I ask you: does it make sense to go with blade servers for a small-to-medium business? Will you even fill up a single chassis?
You will more than likely run rack-mounted servers, where you will be able to fit at least 8 drive spindles in each server.
Remember, we are dealing with a desktop. In the traditional desktop model where we each have a physical PC sitting at our desk, what is the SLA to get that PC repaired if it fails? How long will it take to get a loaner device? What will be included within the loaner? I can guarantee you it will not include your user apps, and chances are it will be some outdated piece of hardware that no one wants.
Simply moving towards a server-hosted model, even on local storage, will still deliver a better SLA than a traditional desktop. I can simply access a fresh virtual desktop and start working, which is much faster than the traditional model.
I ask you, is local storage such a bad thing for virtual desktops within the data center?
Daniel – Lead Architect
If it wasn’t for the cost, I would…
Cost is one of the major barriers to doing almost anything. With enough money and resources, a person can do anything; but because we don’t have an unlimited supply of money, a lot of things become unfeasible.
When we tried to create a solution to mobilize Windows applications for 500 users, cost was a major concern. How can we create this solution while keeping costs in check?
Let’s use local storage!
Of course, anytime you talk about local storage, you get tons of reasons why it won’t work.
When it comes to storage, the fear is that you won’t have enough performance to keep up with user demands. This is understandable, especially as servers get faster while traditional disk spindles remain the same, spinning at 15,000 RPM. However, XenDesktop App Edition (XenApp) is different: you don’t have a single OS for each user; you have a single OS for many users. And because of this one important point, storage performance is not what you would expect.
But this is mostly theory. Theory is good, but I like to see theory put to the test. As I’ve said before, we wanted to validate that this solution is in fact viable, which is why we had the Citrix Solutions Lab help us with the testing.
First, we need to understand our read/write ratio. For Machine Creation Services, we have typically said that Windows 7 desktops show about a 40/60 ratio between reads and writes; Provisioning Services is closer to 10/90. What about a Windows Server 2012 workload? As we’ve seen in previous versions, Provisioning Services has a similar R/W ratio regardless of the operating system. What about Machine Creation Services? This is the first release where Machine Creation Services supports Windows Server 2012. Will it resemble a Windows 7 desktop R/W ratio?
Not even close
I will be completely honest with you: this result completely shocked me. It surprised me so much that we ran the test 3 different times and got very similar results. I was still skeptical and had them re-run the test a 4th time roughly 3 weeks later. Same results (all using Windows Server 2012 with Hyper-V, by the way).
So the R/W ratios are very different between Windows 7 and Windows Server 2012. What about steady-state IOPS per user? Just so you know, when trying to determine steady-state IOPS, I prefer to look at the 95th percentile instead of the average, so that we don’t under-allocate storage. If we look at the Windows Server 2012 test using Machine Creation Services, you get the following results:
Regardless of which of the 4 tests I looked at, the numbers and graphs were almost identical. This is the highest of the 4 tests resulting in 6 IOPS per user at the 95th percentile (average is roughly 5 IOPS).
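The 95th-percentile calculation itself is straightforward. A minimal sketch, where the sample values are made up to mirror the ~6 IOPS peak / ~5 IOPS average shape described above, not the actual lab data:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value with pct% of samples at or below it."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

# Hypothetical per-user steady-state IOPS samples (not the lab results).
samples = [4, 5, 5, 5, 5, 5, 6, 6, 6, 6]
p95 = percentile(samples, 95)       # sizes storage to the peaks
avg = sum(samples) / len(samples)   # would under-allocate
```

Sizing to the 95th percentile rather than the average keeps the occasional peaks from starving the storage, which is exactly why I prefer it.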
So what does this mean? It means that local storage, as we configured it within the Mobilizing Windows Applications design guide, is a viable, low-cost option.
Daniel – Lead Architect