Blog Archives

You don’t need to be a rocket scientist to see the value in RAM Cache


A few years ago, we replaced all of the windows in our home (I’m talking about the panes of glass you look through, not the operating system). We, of course, talked with a few different companies, who stopped by, went through their product portfolios, and brought along samples. One demonstration stuck with me. The salesperson placed his sample window flat on the ground and stood on it, demonstrating the strength of the window. I immediately started thinking, “That was totally wicked” and “I wonder if it has ever shattered before.”

As the practical part of my brain kicked in, I began to wonder when I would ever need to walk on my windows. Was Batman going to stop by and climb up my house? Was this unique to this particular window? Not knowing much about windows, I wondered if my old windows were just as strong.

Demos are meant to impress us, but we need to ask ourselves if the demo really demonstrates everyday life.

And this was the goal I set out to achieve when trying to see how much of a benefit the new RAM Cache with Disk Overflow feature would provide to the user experience. I wanted a demonstration that showed a very typical user.

A typical office user, like me, uses a Windows desktop with the following applications:

  1. Outlook
  2. Internet Explorer
  3. Microsoft Word

Even with the apps defined, you can still have quite a difference in the workload depending on the websites you visit or the type of document you create. Instead of visiting a website that goes overboard with multimedia, the demo uses Citrix.com, which resembles a simple, common site.

Instead of creating a large document with multiple pictures, different aspect ratios, and 3D rendering, the demo creates a small document with a single paragraph and a simple chart.

With this simple workload, would we see any noticeable difference in the user experience? And by noticeable, I’m not talking about an application taking half a second longer to load. I’m talking about a “WOW, anyone who sees this will definitely notice the improvement” kind of difference.

In this very simple demonstration, with a minimal workload, I saw two major things:

  1. A drastic drop in disk activity
  2. A very noticeable change in the user experience

Try it for yourself. Flip the switch.


From the virtual mind of Virtual Feller


ESG Lab Spotlight Report: Up to 80% Reduction in Storage Cost for VDI and RDS


You’ve heard the news, you’ve seen the videos, and now the storage savings have been verified! According to an ESG report, the new RAM Cache with Disk Overflow feature, included in the XenApp and XenDesktop 7.6 release, has the potential to reduce storage costs by 80% or more. Now, before you stop reading because this sounds too good to be true, think about the storage cost problem for a moment.

The storage costs associated with RDS/VDI solutions are driven by throughput, not space. We need enough throughput, or IOPS, so the user experience doesn’t suffer. And believe me, it can suffer drastically, as you can easily see in this simple demonstration (pay particular attention from the 3- to 4-minute mark).

To visualize how this works, consider the following diagrams:


I/O is destined for the disk. Disks are slow compared to RAM, so the Cache on RAM with Overflow feature substitutes RAM for disk. And because RAM is not infinite, portions of the RAM cache will overflow to disk as needed. But even this overflow is more efficient: it is sequenced and consolidated into large, sequential blocks of data instead of small, random blocks.
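
To make the idea concrete, here’s a minimal sketch in Python. This is purely illustrative (it is not the actual Provisioning Services driver, and every name in it is made up); it just shows the pattern of absorbing small random writes in RAM and spilling to disk in large, consolidated sequential chunks:

    class RamCacheWithOverflow:
        def __init__(self, ram_budget_kb, overflow_chunk_kb):
            self.ram_budget = ram_budget_kb * 1024
            self.overflow_chunk = overflow_chunk_kb * 1024
            self.ram = {}           # block_id -> data currently held in RAM
            self.ram_bytes = 0
            self.disk_writes = []   # each entry = one large sequential write

        def write(self, block_id, data):
            # Absorb a small random write into RAM (overwrites cost no extra space)
            self.ram_bytes += len(data) - len(self.ram.get(block_id, b""))
            self.ram[block_id] = data
            while self.ram_bytes > self.ram_budget:
                self._overflow()

        def _overflow(self):
            # Evict a batch of blocks, coalesced into ONE big sequential write
            batch, size = [], 0
            while self.ram and size < self.overflow_chunk:
                _, data = self.ram.popitem()
                batch.append(data)
                size += len(data)
            self.ram_bytes -= size
            self.disk_writes.append(b"".join(batch))  # one disk I/O, not hundreds

    cache = RamCacheWithOverflow(ram_budget_kb=1024, overflow_chunk_kb=512)
    for i in range(1000):
        cache.write(i, b"x" * 4096)   # 1,000 random 4 KB writes from the VM
    print(len(cache.disk_writes))     # only a handful of large sequential writes

Run it and the 1,000 small random writes collapse into roughly a half-dozen large sequential disk writes, which is exactly why the disk activity graphs flatten out.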

Many implementations required massive SANs or expensive SSDs. People were spending large amounts of money on storage, not for space, but to achieve the throughput required by RDS/VDI. With the Cache on RAM with Overflow feature, we can drastically reduce the number of disks. We don’t need hundreds of disks to give us our throughput. We don’t need to implement SSDs. We can drastically reduce our disk count and focus instead on storage space, which is, by far, easier and cheaper to implement.

According to the ESG report on Provisioning Services, when you focus on disk throughput:

  • A XenDesktop implementation requiring 26 disks can be reduced to 3
  • A XenApp implementation requiring 74 disks can be reduced to just 4
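
If you want a feel for where numbers like that come from, here’s a hypothetical back-of-the-envelope calculation. These are my own illustrative figures, not the ESG methodology: roughly 15 IOPS per Windows 7 user and roughly 175 IOPS per 15K RPM spindle are common rules of thumb.

    import math

    def spindles_needed(users, iops_per_user, iops_per_spindle=175):
        # Sizing for throughput: total required IOPS divided by what
        # one spindle can deliver (~175 IOPS for a 15K RPM drive)
        return math.ceil(users * iops_per_user / iops_per_spindle)

    # Hypothetical 300-user environment, before and after the feature
    print(spindles_needed(300, iops_per_user=15))    # 26 spindles for throughput
    print(spindles_needed(300, iops_per_user=0.1))   # 1 spindle covers the IOPS

Once the IOPS requirement collapses, the disk count is driven by capacity instead, and capacity is the cheap dimension.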

And because this feature works inside each VM, it provides value across multiple hypervisors.

From the virtual mind of Virtual Feller

PROOF [Video] – New XenDesktop and XenApp Storage Optimizations Do Improve the User Experience


I’ve written and seen numerous blogs and tweets about how great the new storage optimization feature is for XenApp and XenDesktop. I’ve read how this feature can reduce IOPS from an average of 15 IOPS per Windows 7 user down to 0.1 IOPS. I’ve read how this feature functions by creating a small RAM buffer within each VM. I’ve seen tweets showing crazy IOPS numbers on standard, spinning disks.

In fact, I’ve done some of this analysis and was completely blown away by the results.

But who cares? Who cares if my IOPS are reduced by 99%?

Unfortunately, unless you are responsible for storage, you probably don’t care. But what if this drastic reduction in IOPS had a direct impact on the user experience? And as someone who uses VDI remotely 100% of the time, the user experience is what I really care about.

Let’s see what the new RAM Cache with Disk Overflow feature can do for the user experience…

What impresses me the most is that the workload used isn’t some crazy operation that a typical user wouldn’t really do.  You can easily see the improvement to the user experience with something as simple as browsing a few web pages.

And all of this is done:

  • Without complex configurations
  • Without expensive SANs
  • Without SSDs
  • Without additional hardware
  • Without additional licenses
  • Without a learning curve

From the virtual mind of Virtual Feller

Does Cache Trump IOPS?


With desktop virtualization, we hear more and more about how important IOPS are to supporting the virtual desktop. I’ve written a few blogs about it and plan to write a few more. What I want to talk about here is an interesting discussion I recently had with three Senior Architects within Citrix Consulting (Doug Demskis, Dan Allen, and Nick Rintalan). These are three smart guys I talk to fairly regularly, and the discussions get quite interesting.

This particular discussion was no different. We were talking about the importance of IOPS, RAID configurations, and spindle speeds with regard to an enterprise’s SAN infrastructure. (Deciding whether you are going to use a SAN for your virtual desktops is a completely different discussion that I’ve had before and Brian Madden had more recently.) But for the sake of this article, let’s say you’ve decided, “Yes, I will use my SAN.” If your organization already has an enterprise SAN solution, chances are that the solution has controllers with plenty of cache. Does this make the IOPS discussion a moot point?

Dear Architect, is my write cache estimate correct?


The latest question in the Ask the Architect mailbag comes from Andy. Andy is creating a Provisioning Services design for an environment based on Windows Server 2008, with the write cache stored on a NetApp share. Andy’s question is whether his write cache estimate is correct. Basically, Andy is estimating 650 MB of write cache per virtual desktop. He arrives at this by taking the assigned RAM and multiplying it by 25%.
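
Incidentally, here is Andy’s arithmetic spelled out (the 2.6 GB of assigned RAM is implied by his numbers, not stated in his question):

    assigned_ram_mb = 2600                 # implied: 650 / 0.25 = 2600 MB
    write_cache_mb = assigned_ram_mb * 0.25
    print(write_cache_mb)                  # 650.0 MB per virtual desktop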

First, using Windows Server 2008 is great for Provisioning Services, as it provides the largest system cache for the vDisk, which speeds up delivery because local disk reads are not required as often.

Second, write cache is a tricky thing to determine. Your best bet is to set this up and let users go at it for a few days to see what you end up with. However, that might not be possible. In that case, you have to remember that the write cache size is based on a few things:

  1. Application delivery approach: Streamed apps will impact the write cache more than installed apps, which impact the write cache more than hosted apps. I can tell you my streamed Office applications are consuming 300 MB of space on my disk (which would mean 300 MB of write cache if the application is not pre-cached).
  2. Reboot cycle: If the default behavior is to reboot the virtual desktop upon each logoff, this will keep the write cache small, as it is deleted on each reboot.
  3. Pagefile: The pagefile is included within the write cache file. I’m assuming this is the RAM portion of the formula.
  4. User workflow: What the user does will have an impact on the size. Many apps require writes to the disk, and the more apps a user utilizes, the greater the impact on the write cache.

That is just a summary of what is involved.  If you want to see the blackboard discussion, check out the Ask the Architect Write Cache Video.
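
Putting rough numbers to those four factors shows why a flat 25%-of-RAM rule can run low. Only the 300 MB streamed-Office figure below comes from my own measurement above; the rest are hypothetical placeholders you would replace with numbers from a pilot:

    streamed_apps_mb = 300   # streamed Office, from my measurement above
    pagefile_mb = 650        # the pagefile lands in the write cache; roughly
                             # the "25% of RAM" portion of Andy's formula
    user_writes_mb = 200     # hypothetical: temp files, profile writes, app data

    estimate_mb = streamed_apps_mb + pagefile_mb + user_writes_mb
    print(f"write cache per desktop: ~{estimate_mb} MB")   # ~1150 MB

    # A reboot-on-logoff policy resets this to zero, so size for the
    # longest uptime you expect between reboots.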

What do you think? Did I miss anything? How are you estimating your write cache size as part of the design process?
