From the words of Ralph Wiggum, I Choo, Choo, Choose You [to be my FlexCast model].
Choosing the correct FlexCast model always leaves people wondering if they made the right decision. Answering that question requires a closer look at the user requirements. For example, the ABC School District Reference Design was recently published, and as the title suggests, it is based on a large school district (70,000 total users, 20,000 concurrent). How did we decide which FlexCast model was most appropriate?
Continue reading I Choo, Choo, Choose You
We’ve heard about it, we’ve seen it and we’ve read about it from Citrix, from VMware, from Microsoft and from just about everyone else. We see one report showing one technology is better than the other, but then we see another report showing the exact opposite. Doesn’t this leave you wondering what you should do next? You might be wondering what in the world I’m talking about.
I’m talking about SCALABILITY!
Continue reading The truth about XenDesktop 4 scalability
We all know the impact a server failure can have on a group of users, but what if that server is a core component of a desktop virtualization solution? That’s a lot of unhappy users. Before desktop virtualization, nobody gave a second thought to desktop availability. If a desktop failed, it only impacted a single user, and chances are you wouldn’t hear much. However, if a certain server fails in a desktop virtualization environment, that one server could impact 50, 100 or 1,000 users. I can guarantee one thing: you will hear from that many users.
Continue reading Danger, Danger My Server Crashed
What do you think are the main ingredients of any successful desktop virtualization project? Is it application integration methodology? Is it hardware? What about the IT team? Based on my experience, the top requirements really boil down to a few core items, all of which I’ve discussed many times in previous blog postings (applications, standards, and executive buy-in, to name a few).
Before we get into the seven requirements, we must understand the point of desktop virtualization. Continue reading Seven Requirements for Virtual Desktop Success
Imagine an environment where:
- The endpoints are over 5 years old
- Users’ personal computers are state of the art
- Applications have not been patched in over one year
- Each office has different configurations, although they should be identical
These are some of the challenges with one particular environment: the ABC School District.
This particular school district consists of 50 school campuses that support 70,000 users. Due to limited funding, the technology infrastructure is aging quickly. Thanks to a voter-approved tax levy, the ABC School District is receiving an infusion of money to upgrade its computing infrastructure. Instead of going down the same path of distributed computing, the ABC School District has decided to implement desktop virtualization based on the following architecture: Continue reading This School House Rocks with Virtual Desktops
Previously I’ve talked about how using local storage can help reduce the costs of desktop virtualization. Paul Wilson tested this type of environment to determine if it is possible or to see if I was talking crazy. The result: it is possible and I’m a little crazy. So we have a new design decision, which way will you go?
Continue reading Local or Shared Storage – that is the question
The latest question in the Ask the Architect mailbag comes from Andy. Andy is creating a Provisioning services design for an environment based on Windows Server 2008, with the write cache stored on a NetApp share. Andy’s question is whether his write cache estimate is correct. Basically, Andy is estimating 650 MB of write cache per virtual desktop. He arrives at this figure by taking the assigned RAM and multiplying it by 25%.
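As a quick sanity check, Andy's rule of thumb is simple enough to sketch in a few lines. This is just his 25%-of-RAM heuristic, not an official Citrix formula; working backwards, the 650 MB figure implies roughly 2,600 MB of assigned RAM per desktop, which is an assumption on my part:

```python
def estimate_write_cache_mb(assigned_ram_mb: float, ratio: float = 0.25) -> float:
    """Rule-of-thumb write cache estimate: a fixed fraction of assigned RAM."""
    return assigned_ram_mb * ratio

# 650 MB at 25% implies about 2,600 MB of assigned RAM per desktop
print(estimate_write_cache_mb(2600))  # 650.0
```

The ratio is a starting point, not a guarantee; as discussed below, the real number depends on how the apps are delivered and what the users actually do.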
First, Windows Server 2008 is a great fit for Provisioning services, as it provides the largest system cache for the vDisk, which speeds up delivery because local disk reads are not required as often.
Second, write cache size is a tricky thing to determine. Your best bet is to set up the environment and let users go at it for a few days to see what you end up with. However, that might not be possible. In that case, you have to remember that the write cache is based on a few things:
- Application delivery approach: Streamed apps will impact write cache more than installed apps, which impact the write cache more than hosted apps. I can tell you my streamed Office applications are consuming 300MB of space on my disk (which would mean 300MB of write cache if the application is not pre-cached).
- Reboot cycle: If the default behavior is to reboot the virtual desktop upon each logoff, this will keep the write cache small as it is deleted on each reboot.
- Pagefile: The pagefile is included within the write cache file. I’m assuming this is the RAM portion of the formula.
- User workflow: What the user does will have an impact on the size. Many apps write to the disk, so the more apps a user utilizes, the greater the impact on the write cache.
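The factors above can also be folded into a slightly richer back-of-envelope estimate. This is a sketch under assumed numbers, not a Citrix sizing formula: the pagefile size, streamed-app footprint, and daily user-write figures below are all hypothetical placeholders you would replace with measurements from your own environment.

```python
def estimate_write_cache_mb(pagefile_mb: int,
                            streamed_apps_mb: int,
                            per_day_user_writes_mb: int,
                            days_between_reboots: int = 1) -> int:
    """Additive sketch of per-desktop write cache size.

    pagefile_mb            -- the pagefile is included within the write cache file
    streamed_apps_mb       -- footprint of streamed (not pre-cached) applications
    per_day_user_writes_mb -- writes driven by the user's workflow
    days_between_reboots   -- reboot-on-logoff resets the cache, keeping this at 1
    """
    return pagefile_mb + streamed_apps_mb + per_day_user_writes_mb * days_between_reboots

# Assumed example: 512 MB pagefile, 300 MB streamed Office (the figure from my
# own desktop above), 100 MB of daily user writes, reboot on every logoff
print(estimate_write_cache_mb(512, 300, 100))  # 912
```

Notice how quickly the estimate grows if desktops are not rebooted on logoff: stretching the same workload over five days triples the result, which is exactly why the reboot cycle matters so much.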
That is just a summary of what is involved. If you want to see the blackboard discussion, check out the Ask the Architect Write Cache Video.
What do you think? Did I miss anything? How are you estimating your write cache size as part of the design process?