Virtual Desktops with Local Storage, Good Enough for Me


The desktop is a unique beast within the data center. It is different from what we've typically placed within the tightly controlled, highly available environment. So far, the desktop has changed to better align with data center practices, meaning high levels of availability and shared storage. But is this the right approach? Should the desktop align with the data center, or should the data center align with the desktop by developing a new service level?

I’ve seen many people apply high-availability requirements to the desktop, which often requires the use of shared storage. Personally, I think this is going too far. I don’t think it is needed, but as soon as I state something like this, I get a lot of pushback.

Let’s walk through the arguments I normally hear:

Performance

There is a belief that local storage cannot provide enough performance to accommodate virtual desktops. Yet organizations have successfully used local storage, and lab tests back this up. The important thing is that you have enough performance (which is no different than with shared storage). On the local storage side of things, your limiting factor will be how many drive bays you have. With an 8-core, dual-processor server (16 cores total), you will need roughly eight 15,000 RPM spindles. It is also a good idea to use an array controller that can cache writes to smooth out the peaks.
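As a rough illustration of that sizing logic, here is a back-of-envelope sketch. The per-spindle IOPS figure, RAID level, read/write mix, and per-user demand are my own planning assumptions, not numbers from any lab test:

```python
# Back-of-envelope check: can eight local 15K spindles keep up?
# Every figure below is an illustrative assumption, not a lab result.

SPINDLES = 8
IOPS_PER_15K_SPINDLE = 175   # commonly cited planning figure for 15K disks
RAID10_WRITE_PENALTY = 2     # each logical write costs two physical writes
READ_RATIO, WRITE_RATIO = 0.4, 0.6  # assumed desktop read/write mix
IOPS_PER_USER = 10           # assumed steady-state demand per desktop

raw_iops = SPINDLES * IOPS_PER_15K_SPINDLE  # physical back-end IOPS

# Each front-end IO costs READ_RATIO unpenalized reads plus
# WRITE_RATIO writes multiplied by the RAID write penalty.
cost_per_io = READ_RATIO * 1 + WRITE_RATIO * RAID10_WRITE_PENALTY
effective_iops = raw_iops / cost_per_io

print(f"Raw back-end IOPS:   {raw_iops:.0f}")        # 1400
print(f"Front-end IOPS:      {effective_iops:.0f}")  # ~875
print(f"Desktops supported:  {effective_iops / IOPS_PER_USER:.0f}")
```

A controller with a battery-backed write cache improves on this further by absorbing write bursts, which is exactly why the write-caching recommendation above matters.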

Availability

Once you finally believe that local storage can provide the required performance, the next roadblock is availability. What happens if the server has a catastrophic failure and everything is lost?

Truth be told, a server hosted virtual desktop has better availability than any traditional desktop I’ve ever seen. Think about all of the redundancies we apply to any data center workload:

• RAID: A server will most likely use RAID, allowing me to overcome a failure of a drive. My traditional desktop can’t do this.

• Power: A server will most likely have redundant power supplies. My traditional desktop doesn’t have this.

• Network: A server will most likely have redundant network connections. My traditional desktop has one.

• Monitoring: A server will most likely be constantly monitored by a team of admins. My traditional desktop is ignored.

Already, my server hosted virtual desktop will have better availability than my traditional desktop and I haven’t even made a requirement for shared storage.

Even with all of these redundancies at the local server level, what happens if the server still has a catastrophic failure and I’m using local storage?

• If I have a shared desktop (XenApp), and it fails, I simply start a new session. The impact is pretty minimal.

• If I have a pooled desktop and it fails, I simply start a new session. The impact is pretty minimal.

• If I have a personal desktop (Personal vDisk or dedicated desktop) and it fails, the user gets a fresh desktop with the standard corporate image, including the standard application set. The user completes the process by personalizing it as necessary. This isn't as bad as it sounds. Think about what goes into personalization:

○ Data: The user's data should be stored somewhere other than locally (ShareFile or a network share is optimal).

○ User Settings: Desktop and application settings are still part of the profile, which should be on a network share and configured as a roaming profile.

○ User Applications: If we are using local storage for the virtual desktop, then the user-installed applications are lost.

Is losing user applications a reason that would require us to use shared storage? I don’t think so.

Remember, by their very nature, user-installed apps are not supported, managed, or maintained by IT; they are user-managed. It is the user's responsibility to install the apps if they need them and to fix them if they break. With user apps, it is all up to the user.

This is the point many people forget to consider when debating about local vs. shared storage.

Hardware

Your hardware platform will dictate whether local storage is even an option. If you go with blades, local storage is not going to happen due to the limited number of drive bays.

What if you have 2,000 or fewer users? Too many times, we focus squarely on enterprise environments (10,000+ users) and ignore the small-to-medium business. So I ask you, does it make sense to go with blade servers for an SMB? Will you even fill up a single chassis?

You will more than likely run rack-mounted servers, which can fit at least eight drive spindles each.

SLA

Remember, we are dealing with a desktop. In the traditional desktop model where we each have a physical PC sitting at our desk, what is the SLA to get that PC repaired if it fails? How long will it take to get a loaner device? What will be included within the loaner? I can guarantee you it will not include your user apps, and chances are it will be some outdated piece of hardware that no one wants.

Simply moving to a server hosted model, even on local storage, will still give you a better SLA than the traditional desktop. I can simply access a fresh virtual desktop and start working, which is much faster than the traditional model.

Summary

I ask you, is local storage such a bad thing for virtual desktops within the data center?

Daniel – Lead Architect



Storage and IOPS guidance for delivering Apps using XenDesktop


If it wasn’t for the cost, I would…

Cost is one of the major barriers to doing almost anything. With enough money and resources, a person can do anything; unfortunately, we don't have an unlimited supply of money, which makes a lot of things unfeasible.

When we tried to create a solution to mobilize Windows applications for 500 users, cost was a major concern. How can we create this solution while keeping costs in check?

Let’s use local storage!

Brilliant! :)

Of course, anytime you talk about local storage, you hear tons of reasons why it won't work.

When it comes to storage, the fear is that you won't have enough performance to keep up with user demands. This is understandable, especially as servers get faster while traditional disk spindles remain the same, spinning at 15,000 RPM. However, XenDesktop App Edition (XenApp) is different: instead of a single OS for each user, you have a single OS serving many users. Because of this one important point, storage performance is not what you would expect.
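To see why the shared-OS model changes the math, consider a quick comparison of the two models. Every number here is an illustrative assumption, not a measured value:

```python
# Illustrative only: sharing one OS among many users amortizes
# the OS's own background IO. All figures are assumptions.

USERS = 500
OS_OVERHEAD_IOPS = 4      # assumed background IO per OS instance
APP_IOPS_PER_USER = 4     # assumed IO driven by each user's work

# VDI model: one OS instance per user
vdi_total = USERS * (OS_OVERHEAD_IOPS + APP_IOPS_PER_USER)

# Hosted shared model (XenApp): many users share one OS instance
USERS_PER_SERVER = 50
servers = USERS // USERS_PER_SERVER
shared_total = servers * OS_OVERHEAD_IOPS + USERS * APP_IOPS_PER_USER

print(f"One OS per user:  {vdi_total} IOPS")     # 4000
print(f"Shared OS:        {shared_total} IOPS")  # 2040
```

The OS overhead is paid once per server instead of once per user, which is why the per-user IOPS numbers come in lower than VDI experience would suggest.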

But this is mostly theory. Theory is good, but I like to see theory put to the test. As I’ve said before, we wanted to validate that this solution is in fact viable, which is why we had the Citrix Solutions Lab help us with the testing.

First, we need to understand our Read/Write ratio. For Machine Creation Services, we have typically said that Windows 7 desktops have about a 40/60 ratio between reads and writes; Provisioning Services is 10/90. What about a Windows Server 2012 workload? As we've seen in previous versions, Provisioning Services has a similar R/W ratio regardless of the operating system. What about Machine Creation Services? This is the first release where Machine Creation Services supports Windows Server 2012. Will it resemble a Windows 7 desktop R/W ratio?

Not even close

I will be completely honest with you: this result completely shocked me. It surprised me so much that we ran the test three different times and got very similar results. I was still skeptical and had them re-run the test a fourth time, roughly three weeks later. Same results (all using Windows Server 2012 with Hyper-V, by the way).

So the R/W ratios are very different between Windows 7 and Windows Server 2012. What about steady-state IOPS per user? Just so you know, when trying to determine steady-state IOPS, I prefer to look at the 95th percentile instead of an average; that way we make sure we don't under-allocate storage. If we look at the Windows Server 2012 test using Machine Creation Services, you get the following results:

Regardless of which of the four tests I looked at, the numbers and graphs were almost identical. The highest of the four tests resulted in 6 IOPS per user at the 95th percentile (the average is roughly 5 IOPS).
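To make the percentile method concrete, here is roughly how you would derive those figures from a set of per-user samples. The sample values below are made up for illustration; only the method is the point:

```python
# Sketch of the 95th-percentile sizing method described above.
import statistics

def sizing_iops(per_user_samples, users):
    """Plan on the 95th percentile of per-user IOPS, not the mean,
    so bursts above the average don't leave storage under-allocated."""
    avg = statistics.mean(per_user_samples)
    p95 = statistics.quantiles(per_user_samples, n=20)[-1]  # 95th percentile
    return avg, p95, p95 * users

# Hypothetical steady-state samples (IOPS per user) from a test run
samples = [3.8, 4.2, 4.6, 4.9, 5.0, 5.1, 5.3, 5.6, 5.9, 6.1]
avg, p95, total = sizing_iops(samples, users=500)
print(f"mean: {avg:.1f}  95th percentile: {p95:.1f}")
print(f"plan for {total:.0f} total IOPS at 500 users")
```

Sizing on the 95th percentile rather than the mean costs a little extra headroom, but it covers the bursts that an average quietly hides.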

So what does this mean? It means that local storage, as we configured it within the Mobilizing Windows Applications design guide, is a viable, low-cost option.

Daniel – Lead Architect
