Unsolved
7 Posts
0
799
May 27th, 2015 05:00
Meta Volumes performance
Hi, when creating LUNs greater than 200GB we generally create them as meta volumes, with members of around 200GB.
If I were to create a 2TB meta volume, would it make a difference to performance if the members were 100GB rather than 200GB? I'm thinking the smaller members would give a greater spread of TDATs across more disks.
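For context, building the 2TB example with 200GB members would look roughly like this with symconfigure (the SID and device IDs below are just placeholders):

# meta.txt: form a striped meta and add nine more members (10 x 200GB = 2TB)
form meta from dev 0ABC, config=striped;
add dev 0ABD:0AC5 to meta 0ABC;

symconfigure -sid 1234 -file meta.txt preview
symconfigure -sid 1234 -file meta.txt commit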
I'm also seeing consistently high latency on a SQL mirror backed by a RAID 6 pool LUN. FAST isn't promoting the LUN, which is 4TB in total; I'm told that only hot spots are moved, not the entire LUN, but nothing moves and performance isn't great. That brings me to my second question: if I migrate the LUN to FC on a RAID 10 pool, would that be my safest bet to preserve the data and improve performance?
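Is there something on the FAST side I should be double-checking, for example confirming the storage group is actually associated to a policy that allows the FC tier? I was thinking along these lines (the SID is a placeholder, and the exact options may differ by Solutions Enabler version):

symfast -sid 1234 list -fp            # list the FAST policies and their tier usage limits
symfast -sid 1234 list -association   # show which storage groups are associated to which policy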
Thanks All
Quincy561
1.3K Posts
0
May 27th, 2015 07:00
Those response times don't look so bad, but I think you are looking at the average of all the FE directors. You said you see latency issues, so you probably want to focus on the FAs that are active for the storage group in question. I also see 4 FAs that are 90% busy (orange) while many others are blue; moving some of that work around, so the blue ones get more active and take load off the orange ones, may help.
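To narrow it down, check which FA ports the SQL storage group is actually masked to and then watch just those directors over an interval, something like this (the SID, masking view name and director number are placeholders):

symaccess -sid 1234 show view SQL_MV                  # the port group shows which FAs serve this storage group
symstat -sid 1234 -dir 7e -type REQUESTS -i 60 -c 10  # per-director request rates, sampled every 60 seconds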
BTW, is this running on ESX server by any chance? If so, what is the path manager?
jcalero1
7 Posts
0
May 27th, 2015 07:00
Hi, and thanks for the reply. I don't see anything too alarming on the FE or WP (see attached), but I'm probably missing something.
Quincy561
1.3K Posts
0
May 27th, 2015 07:00
More meta members will add some performance, but only up to a point. Generally 8 to 16 are plenty. In your case that would be roughly 10 members (200GB each) or 20 members (100GB each) for a 2TB meta.
The number of members has nothing to do with how the data is spread over the backend.
I suspect your latency issues have nothing to do with the backend and are more likely front-end bottlenecks, although without data I'm only guessing. If it is a front-end issue, FAST movements won't help.
If your write latency is high, and your WP counts are low, chances are you have front-end contention.
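A quick way to keep an eye on the write-pending side over time is something like this (the SID is a placeholder, and the exact counters shown vary by Enginuity/Solutions Enabler version):

symstat -sid 1234 -type CACHE -i 60 -c 10   # cache and write-pending statistics, sampled every 60 seconds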
jcalero1
7 Posts
0
May 27th, 2015 09:00
Yeah, it's running on ESX using NMP with round robin. The worst FE utilization I'm seeing is around 75% (see attached). Is there anything else I can look at to see where the bottleneck might be?
Quincy561
1.3K Posts
1
May 27th, 2015 10:00
There may be bursts of IO that are pushing the FAs to 100%; what you are looking at is the average. And 75% is pretty high.
Also, if you haven't changed the round robin setting, see the following article.
VMware KB: EMC VMAX and DMX Symmetrix Storage Array recommendations for optimal performance on VMware ESXi/ESX
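The short version of that KB is to have round robin switch paths after every IO instead of the default 1000. Per device that is roughly the following (the naa ID is a placeholder for the device backing that datastore, and the recommended value should be verified against the article for your ESXi version):

esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx   # confirm the change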