cwildermuth - November 13th, 2014 11:00
Question: Two arrays in a pool, load balancing, etc.
Hi,
So, I'm kind of confused about how this works. I know I can put two arrays that are in the same group into the same pool, and I know that I can create a volume that spans the two arrays and the EQL will spread the volume equally between the two boxes, etc.
My question, I guess, is about reliability... Am I missing something, or does this seem like a "RAID 0" type thing between two EQLs? I mean, if the volume is split between the two boxes, how do I ever reboot a box? What if one of the EQLs fails? Do I lose that whole volume?
What am I missing? Or is this how it really works?
I'm kind of afraid of putting anything into production where a failure on one device would kill everything like that.
Any explanation would be appreciated.



Origin3k - November 13th, 2014 11:00
The button is named "restart" and it always initiates a failover to the standby CM. There is no complete reboot option or task.
Well... it is the same "risk" as having a storage system with multiple heads and a couple of expansion shelves. If the storage pool or group spreads disks across shelves and you lose a complete shelf, you are in trouble.
In theory the risk goes up as you add more members to a pool, because a volume is split across 3 members by default. This is a soft limit, though, and you can adjust the volume distribution to more or fewer members. If a volume is so large that 3 members can't deliver its capacity, it can span 4 or more members. The biggest volume I have ever seen was spread across 8 members. Spreading volumes across members is how EQL scales out.
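To make that concrete, here is a rough sketch in plain Python (my own illustration, not the EqualLogic API; the distribute() helper, the member names and the sizes are made up) of how a volume's capacity gets spread over pool members and why losing any one hosting member takes the whole volume offline:

# Illustration only: a volume is spread over up to 3 members by default,
# with no parity between members, so every hosting member must stay online.
def distribute(volume_gb, members, max_spread=3):
    hosts = members[:max_spread]
    share = volume_gb / len(hosts)
    return {member: share for member in hosts}

layout = distribute(6000, ["EQL1", "EQL2", "EQL3", "EQL4"])
print(layout)                        # {'EQL1': 2000.0, 'EQL2': 2000.0, 'EQL3': 2000.0}

failed_member = "EQL2"
print(failed_member not in layout)   # False: the failed member hosts a slice, so the volume is offline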
If you don't like this behaviour, you have to disable this option or replicate your data with SyncRep.
Regards,
Joerg
Origin3k - November 13th, 2014 11:00
- An EQL offers 99.999% availability.
- You don't shut down a member of a multi-member pool. If you restart a member or do a firmware upgrade, the standby controller module becomes active.
If you ever lose a member, the affected volumes go offline. In 6 years with now 27 EQLs we have never lost a member or a volume.
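Just to put a number on that dependency (back-of-the-envelope only, assuming each member independently reaches the 99.999% figure above and that a spanned volume needs every hosting member online):

member_availability = 0.99999        # 99.999% per member, as stated above
for spread in (1, 2, 3):
    volume_availability = member_availability ** spread
    downtime_minutes = (1 - volume_availability) * 365 * 24 * 60
    print(f"spread over {spread} member(s): {volume_availability:.6%}, "
          f"~{downtime_minutes:.0f} min expected downtime per year")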
Regards,
Joerg
cwildermuth - November 13th, 2014 11:00
Hi Joerg, thanks for your prompt reply!
Ah, OK, so what you are saying is that when I choose to restart a member when there are two in the pool, I'm not actually rebooting the whole box; I'm rebooting one of the CMs first, and then the other (like a CM failover)?
But the issue still stands: if one of the members ever goes offline for ANY reason, you lose the whole volume, correct? We have dual power at our data center, and each CM is plugged into a separate switch... but I guess I am always afraid of Murphy's law, and if something CAN go wrong, it seems like it does.
Does this issue become less of a problem the more members you have? Like, if you had 3 members, would there be enough parity information available on 2 to have the volume survive if you lost the third one?
Or is this where SyncRep comes into play? Should I have 2 members in Pool1, 2 members in Pool2, and SyncRep between them?
Origin3k - November 13th, 2014 12:00
I assume you then have 4 identical members, and keep in mind that SyncRep has no automated failover right now. It is a manual task to perform a switch to the SyncAlternate.
Regards,
Joerg
cwildermuth - November 13th, 2014 12:00
I only have 2 right now, so I'll have to get 2 more. Somehow I think Dell plans this stuff this way on purpose! Hahaha. :-)
I'm ok with switching to the SyncAlternate manually; if my choice is "manual switch to SyncAlternate" or "lose a whole volume because one member went down".... I'm definitely ok with the manual switchover :-)
cwildermuth - November 13th, 2014 12:00
Yes, thanks. It sounds like my best bet is to have 4 members total, 2 in Pool1 and 2 in Pool2, and then SyncRep the volume between the two pools. (Sadly, this removes my ability to replicate offsite, though... *sigh*.)
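For completeness, here is the rough availability math behind that choice (a sketch only: the 99.999% per-member figure comes from Joerg's post, the rest is my own simplification, and it ignores the fact that the SyncRep switchover is a manual step):

A = 0.99999                                   # assumed availability of one member
pool_of_4 = A ** 4                            # one pool, volume spread over all 4 members
pool_of_2 = A ** 2                            # volume spread over both members of a 2-member pool
two_pools_syncrep = 1 - (1 - pool_of_2) ** 2  # a copy in each pool; the volume survives if either pool is intact

print(f"single 4-member pool: {pool_of_4:.7%}")
print(f"2 + 2 with SyncRep:   {two_pools_syncrep:.9%}")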