January 25th, 2018 00:00
Multi-member group pool split up or bind volume to member
Hello
I've got two PS4100 arrays on one site. There is a default pool with two thin LUNs (12 TB and 4 TB) on it. Now I want to have a separate pool on each member and move the LUNs into those separate pools, to avoid the situation where both LUNs become unavailable if one array goes down. I also found a topic about pinning volumes to members without affecting the pool... And here's my consideration - which solution will be more suitable? And how will the hosts react if one array goes down when there is a common pool for both volumes but they are pinned to separate members?
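For what it's worth, the two options I'm weighing look roughly like this in the Group Manager CLI. This is only a sketch - the syntax is from memory and the pool/member/volume names are placeholders, so please check the CLI Reference for your firmware before running anything:

    # Option 1: split into two pools (the default pool keeps member-A, a new pool gets member-B)
    pool create pool-B
    member select member-B pool pool-B     # member-B's data is migrated onto member-A first
    volume select lun-4tb pool pool-B      # then move the 4 TB LUN into the new pool

    # Option 2: keep the single default pool and pin each volume to a member
    volume select lun-12tb bind member-A
    volume select lun-4tb bind member-B

If I understand the pool move correctly, option 1 needs enough free space on the remaining member to absorb the evacuated data, which may be tight with 16 TB of LUNs across two PS4100s.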
kombayn
January 26th, 2018 15:00
Hello
Thanks for your answer. I'm using RAID 6, but the firmware is unfortunately not very recent (7.1.5).
Getting back to my considerations - if I bind a volume to a member (while keeping one default pool across the members), will that volume stay online if the other member goes down? I'm 99.999% sure it will (pool configuration is transparent to the hosts), but I'm just looking for confirmation. And on the other hand - if I had a large datastore composed of two volumes, either bound to separate members or spread across the members in the default pool, would the whole datastore go down if one member fails?
Lady Margaret
August 17th, 2019 11:00
Hi Don,
In relation to the conversation here, I was looking for potential MPIO solutions for my 1G fabric setup, which comprises 6x ESX hosts, 2x PS6210 members and 1x PS4100 member.
As per our conversation on the ESX/MEM thread, this implementation is constrained to 1G by the lowest common denominator, which is the PS4100 member. Everything else is 10G ready.
The issue is I can't feasibly break the group and discard the PS4100 array.
Is it possible to attach the high-IO-utilisation volumes to the PS6210 members and implement 10G connectivity? If I could cut the low-IO data loose and attach it to the PS4100, I'd be happy to take my chances with that, but I need to preserve I/O performance and integrity for the higher-utilisation volumes.
As an aside, the vSphere datastores are all VVols, so they are not monolithic volumes from an EQL perspective.
Thanks,
Greg
dwilliam62
August 17th, 2019 15:00
Hello Greg,
The problem is going to be 10GbE hosts connecting to a GbE array. That's usually not a good thing, as the 10GbE servers can overrun the array. Usually when you split it like that, you have a mix of GbE and 10GbE servers.
What firmware are you running on the arrays? Hopefully current?
Especially for VVols in multi-member groups: did you set the NO-OP timeout in addition to the login timeout, and is delayed ACK disabled?
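If it helps, those are usually set per iSCSI adapter with esxcli, roughly like this. Treat it as a sketch only - vmhba64 is a placeholder, and the exact parameter key names and recommended values vary between ESXi releases and the current Dell EQL best-practices documents, so verify both before applying:

    # List the current parameters first to confirm the key names on your build
    esxcli iscsi adapter param get --adapter=vmhba64

    # Values commonly recommended for EQL groups (check current Dell guidance)
    esxcli iscsi adapter param set --adapter=vmhba64 --key=LoginTimeout --value=60
    esxcli iscsi adapter param set --adapter=vmhba64 --key=NoopOutTimeout --value=30
    esxcli iscsi adapter param set --adapter=vmhba64 --key=DelayedAck --value=false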
The first thing I would look at is SANHQ. How hard are you hitting these arrays? If the load is well within what the GbE network can handle, then going to 10GbE isn't going to yield greater performance.
You can't bind Storage Containers or individual VVols to a member, so you would have to move the 4100 into its own pool.
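As a rough sketch of that pool move in the Group Manager CLI (the names are placeholders and the syntax is from memory, so check the CLI Reference for your firmware):

    pool create ps4100-pool
    member select ps4100-member pool ps4100-pool   # the 4100's data is migrated to the PS6210s first
    volume select low-io-vol pool ps4100-pool      # then move the low-IO volumes onto the 4100

As I recall, the group needs enough free space on the PS6210 members to absorb the data evacuated from the 4100 while the move runs.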
Regards,
Don