Unsolved
2 Intern
•
211 Posts
0
4501
June 20th, 2010 10:00
Which RAID protection to use with VDI
Which RAID protection should we use with VMware View?
We intend to build 25 VMware VDI desktops for videoconferencing.
Which is better to use on the storage for 25 VMware VDI desktops: RAID 3 or RAID 5, given that videoconferencing will run on them? Because of the streaming I would say RAID 3.
I know that everything depends on the IOPS profile, but this one is difficult to foresee.
bwhitman1
7 Posts
0
June 24th, 2010 19:00
I'll second Alex. When VDI was first hitting the scene, many folks thought a 90% read profile was the most common. But over time, after analyzing many customers' VDI environments, we usually see a much higher write ratio. The most common I've seen is 60%-70% writes. This type of workload is most efficiently served by RAID 10 (really anything with 40%+ writes is). On Alex's point about FAST Cache, we have already seen some VERY strong benefits, especially in the VDI space. I'm attaching a great preso from EMC World that shows some of the testing we did with FAST Cache and VDI.
I would do some testing up front, though, to try to determine what the IO profile will be. Can you run a physical or virtual desktop with your video streaming and measure the read and write IO? I'd guess this is a very read-heavy environment, and in that case RAID 5. Either way, FAST Cache will help tremendously.
1 Attachment
FASTcache_View4.ppt
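To put rough numbers behind the RAID 10 vs. RAID 5 point above, here is a minimal back-of-the-envelope sketch in Python. The per-desktop IOPS figure and the 65% write ratio are placeholder assumptions for illustration, not measurements from this environment; the write penalties are the usual rule-of-thumb values per RAID level.

```python
# Back-end disk IOPS = front-end reads + front-end writes * RAID write penalty.
# The per-desktop IOPS figure and write ratio are assumed placeholders.

DESKTOPS = 25
IOPS_PER_DESKTOP = 20          # assumed front-end IOPS per VDI desktop
WRITE_RATIO = 0.65             # 60-70% writes, per the post above

WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4}  # back-end I/Os per front-end write

front_end = DESKTOPS * IOPS_PER_DESKTOP
reads = front_end * (1 - WRITE_RATIO)
writes = front_end * WRITE_RATIO

for raid, penalty in WRITE_PENALTY.items():
    back_end = reads + writes * penalty
    print(f"{raid}: {back_end:.0f} back-end IOPS for {front_end} front-end IOPS")
```

With these assumptions the same 500 front-end IOPS costs roughly 825 back-end IOPS on RAID 10 but about 1,475 on RAID 5, which is why write-heavy profiles favor RAID 10.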
teovmy
2 Intern
•
211 Posts
0
June 24th, 2010 23:00
Thanks guys, this helps a lot. You both make a good point and I will definitely consider looking at it.
The EFD 'disks' seem to be great, but still expensive. Here too, I guess, I will definitely consider looking at them.
bwhitman1
7 Posts
0
June 25th, 2010 02:00
You're right that EFDs are much more expensive... from a per-GB standpoint. But they are also MUCH cheaper from a per-IOPS standpoint. The fastest FC drive handles about 180 IOPS; EFDs handle about 2,500 IOPS. So for a workload like VDI, where many people are using some form of writeable snap like VMware View Composer, your capacity requirement is reduced greatly while your IO requirement stays the same. This is where EFDs can make sense. Also, using a small number of EFDs as FAST Cache in an array can give you the added performance benefit of dozens of spinning disks. You'll see in that preso I posted that we found adding just 66 GB of EFD as FAST Cache (two 73 GB drives in RAID 1) handled as much IO as 60 FC drives in some of the use cases. The cost differences there are easy to see: EFD can save money when you are IO bound.
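A quick sketch of what that per-drive difference means for spindle counts. The required-IOPS target below is an assumed example workload; the per-drive figures are the rough numbers quoted above, not vendor specifications.

```python
# How many drives are needed to serve a purely IO-bound workload.
# REQUIRED_IOPS is an assumed example; per-drive IOPS are the rough
# figures from the post above.
import math

REQUIRED_IOPS = 9000
DRIVE_IOPS = {"FC (15k)": 180, "EFD": 2500}

for drive, iops in DRIVE_IOPS.items():
    count = math.ceil(REQUIRED_IOPS / iops)
    print(f"{drive}: {count} drives to serve {REQUIRED_IOPS} IOPS")
```

Under those assumptions the same load needs 50 FC spindles but only 4 EFDs.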
teovmy
2 Intern
•
211 Posts
0
June 25th, 2010 05:00
Hi Brian,
I'm aware of the fact that by using VMware View Composer (which we do) our capacity requirements will be reduced while the IO requirements stay the same. Absolutely true.
I guess, and correct me if I'm wrong, you meant that when we use EFDs we need fewer traditional disks because of the capacity?
Thanks in advance
Roy Mikes
bwhitman1
7 Posts
1
June 29th, 2010 05:00
What I mean is that for every workload you are either capacity bound or IO bound. If you're reducing the capacity through Composer, then you are usually IO bound. In other words, for a given workload you may need only fifteen 400 GB drives to get the capacity you need, but to support the IO you need 100 drives. The old method of "fixing" this gap is "short-stroking"... buying large numbers of smaller drives just to keep up with the IO. Since EFDs come in generally the same capacities as traditional drives but handle roughly 10x the IO each, you can solve the short-stroking problems of the past with EFD. So in this case, instead of buying 100 Fibre Channel drives you may be able to buy 10 EFDs. In the end, a small number of EFDs can cost less than the FC drives needed to serve the same IO... assuming you are not capacity bound.
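The capacity-bound vs. IO-bound reasoning above can be written as a tiny sizing check: you need enough drives to satisfy whichever bound is larger. The capacity and IOPS targets below are assumed example values (e.g. a Composer-reduced footprint), not figures from this environment.

```python
# Drives required = max(capacity-bound count, IO-bound count).
# CAPACITY_GB and REQUIRED_IOPS are assumed example targets.
import math

CAPACITY_GB = 2000      # assumed usable capacity needed after linked clones
REQUIRED_IOPS = 18000   # assumed back-end IOPS the pool must sustain

drives = {
    # name: (usable GB per drive, IOPS per drive)
    "400 GB FC": (400, 180),
    "400 GB EFD": (400, 2500),
}

for name, (gb, iops) in drives.items():
    capacity_bound = math.ceil(CAPACITY_GB / gb)
    io_bound = math.ceil(REQUIRED_IOPS / iops)
    need = max(capacity_bound, io_bound)
    limit = "IO" if io_bound >= capacity_bound else "capacity"
    print(f"{name}: {need} drives ({limit}-bound)")
```

With those assumptions the FC pool is IO-bound at 100 drives, while the EFD pool needs only about 8, which mirrors the 100-vs-10 example above.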
teovmy
2 Intern
•
211 Posts
0
July 1st, 2010 02:00
It's completely clear to me. In fact I was thinking the same way. Thanks so far for all your responses. I will definitely look at the EFDs.
Two in production in RAID 1/0, and in my failover site, instead of a lot of FC disks.
DaveHenry1
121 Posts
0
July 1st, 2010 12:00
Roy,
That's exactly the way to think about EFDs.
If you look at EFDs by the "standard" storage measurement of "cost/GB" they are, far and away, the most expensive drive we sell. If you want "cost/GB" value, you want to be looking at SATA, especially the 2 TB drives (and drive manufacturers have started to announce 3 TB drives...).
But if you look at EFDs to provide performance for your applications rather than simply capacity, and instead measure them by "cost/IOPS", they are, at approximately 2,500 IOPS per drive, far and away the least expensive drive we sell.
Use the right tool for the job; put the right workload on the right tier. For workloads that either change or can't be easily separated, the soon-to-be-released sub-LUN capabilities of EMC's FAST (Fully Automated Storage Tiering), announced at EMC World in May, will take care of that for you automagically.
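To illustrate how the cost/GB and cost/IOPS rankings invert, here is a small sketch with made-up placeholder prices and capacities; only the FC and EFD per-drive IOPS figures echo the numbers discussed in this thread.

```python
# Illustrative only: prices and capacities below are placeholder assumptions,
# not EMC list prices.  The point is that the cheapest drive per GB (SATA)
# is the most expensive per IOPS, and vice versa for EFD.
drives = {
    # name: (capacity GB, IOPS per drive, assumed price in $)
    "2 TB SATA": (2000, 80, 800),
    "450 GB FC": (450, 180, 1200),
    "200 GB EFD": (200, 2500, 6000),
}

for name, (gb, iops, price) in drives.items():
    print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.2f}/IOPS")
```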
If you do end up installing the EFDs, make sure to come back here and let us know what kind of performance you're getting from them. I think a lot of readers here would love to see some actual-customer real-world info on this.
-Dave
teovmy
2 Intern
•
211 Posts
0
July 2nd, 2010 01:00
Dave and txtee, thanks for the additional information. I'm now tending toward EFDs, so we'll see what the future brings. I will be sure to post my experience here.
dynamox
9 Legend
•
20.4K Posts
0
July 5th, 2010 18:00
Alex,
Why such a high write ratio in the VMs? Page file, or just the applications that people run?
Thanks