Start a Conversation


September 11th, 2012 18:00

Merging two members in a pool - how long should it take?

We have three members in our array - all three are PS6010.  

1) 2TB 7200 rpm SATA

2) 10K rpm 640 GB SAS

3) 15K rpm 640 GB SAS

For the longest time we had the three in individual pools - the 7200 rpm drives as RAID 5, and the SAS drives as RAID 50.

The 7200's store our archives, etc, and the SAS drives are used for our databases.

For performance reasons, we recently decided to rebuild the SAS drives as RAID 10 and merge the two members into one pool.  So what we did was:

1) Moved all production databases onto the 15K drives

2) Rebuilt the 10K drives as RAID 10

3) Moved the production databases onto the RAID 10 10K rpm member

4) Rebuilt the 15K drives as RAID 10

5) Merged the two members into one pool.

We completed this step (merging the two members) this afternoon, and the member stopped verifying about an hour ago.

Looking in SAN HQ, I can see that they are merged, and I can see that the pool has a significantly higher maximum IOPS.  But looking at the individual members, I still see the 10K drives pegged at 100%, while the 15K drives are at 0% usage.

I expected the data to begin to "migrate" (or expand, not sure the right word) almost immediately in order to spread the IOPS amongst all available drives.  

Am I wrong about this / just not being patient enough?  Is there something else that needs to be done to have the data "shared" between both members?

Many thanks in advance...

1 Rookie

 • 

62 Posts

September 12th, 2012 07:00

Also, just in case: check whether the load balancer is actually enabled in the group manager:

1. Click Group.

2. Click Group Configuration.

3. Click the Advanced tab to open the Load Balancing panel.

4. Select or deselect Enable performance load balancing in pools (Alt+E).

5. Click Save all changes (Ctrl+S).

I am not entirely sure whether it disables things like capacity load balancing, but give it a look at least.

7 Technologist

 • 

729 Posts

September 12th, 2012 07:00

The Capacity Load Balancing (CLB) in this case would be started when the rebalance plan (RBP) kicks in and should be immediate.

Things to check:

1. Inter-member communication: use ping/traceroute from each eth interface of member_A to each eth interface of member_B, and so on (see below for the ping/traceroute commands).

2. Free space on the first member (the one currently holding the volume data): we need at least 10% free space on that member in order to move the slices to the other member(s).
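The 10% headroom check in item 2 above can be sketched as a small helper. This is a minimal illustration, not an EqualLogic tool; the function name and the member sizes are hypothetical, and the 10% threshold comes from the advice in this post:

```python
# Minimal sketch: check whether a member has the ~10% free space
# the capacity load balancer needs before it will move slices.
# Sizes are in GB; helper name and example numbers are hypothetical.

def has_clb_headroom(total_gb: float, free_gb: float, min_free_frac: float = 0.10) -> bool:
    """Return True if free space meets the minimum fraction of total capacity."""
    return free_gb >= total_gb * min_free_frac

# Hypothetical PS6010 member capacities:
print(has_clb_headroom(total_gb=9600, free_gb=500))   # under 10% free - CLB may stall
print(has_clb_headroom(total_gb=9600, free_gb=1200))  # over 10% free - CLB can proceed
```

If the first member is below the threshold, freeing or moving some volume data before the merge gives the rebalance plan room to work.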

Testing Ping/Traceroute:

>> You should test all the members, start with the ones in the same pool, then test the other pool members as well <<

To Ping:

Telnet/SSH into one of your members

ping "-I source_ETH_IP dest_IP"

(that is a -I as in capital letter "eye"; make sure you use the quotes after the command and at the end of the dest_IP).  Use CTRL-C to quit.

The “source_ETH_IP” is one of the eth interfaces of the member you connected to (eth0, eth1, etc.)

The “dest_IP” is each of the eth interfaces of the other member(s)

To Traceroute:

Telnet/SSH in to the member (use one of the ETH interface IPs, not the group IP), then traceroute out of each specific ETH port:

GrpName>support

GrpName(support)>traceroute “-s [ETH port source IP] [destinationIP]”

The “ETH port source IP” is an eth interface of the member you connected to

The “destinationIP” is on the other member; test each eth combination (eth0, eth1, eth2, etc. against Member2's eth0, eth1, eth2, etc.).  Also note the quotes after the command and at the end of the destinationIP.

To get out of the support prompt, type: exit
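With several eth interfaces per member, the full matrix of combinations to test gets tedious to write by hand. Here is a minimal sketch that just prints the CLI command strings described above, one per source/destination pair; the IP addresses and interface count are hypothetical, so substitute your members' real eth interface addresses:

```python
# Build the full eth-to-eth test matrix described in the post:
# ping and traceroute from every eth interface on member_A to every
# eth interface on member_B. IPs below are hypothetical placeholders.

from itertools import product

member_a_eths = ["10.10.5.11", "10.10.5.12"]  # member_A eth0, eth1 (hypothetical)
member_b_eths = ["10.10.5.21", "10.10.5.22"]  # member_B eth0, eth1 (hypothetical)

def connectivity_commands(src_ips, dst_ips):
    """Return the quoted ping/traceroute command strings for each src/dst pair."""
    cmds = []
    for src, dst in product(src_ips, dst_ips):
        cmds.append(f'ping "-I {src} {dst}"')        # run at the group prompt
        cmds.append(f'traceroute "-s {src} {dst}"')  # run at the support prompt
    return cmds

for cmd in connectivity_commands(member_a_eths, member_b_eths):
    print(cmd)
```

Paste each printed line into the appropriate prompt on the member; any pair that fails points at the inter-member network path to investigate.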

If you checked these, I would suggest that you open a support case so they can have a closer look.

-joe

7 Technologist

 • 

729 Posts

September 12th, 2012 14:00

The setting (Enable/Disable Performance Load Balancing) is for the “Automatic Performance Load Balancer” (APLB) feature.  In this case, disabling this option will not do anything to cause the Capacity Load Balancer (CLB) to balance across to the other member.  Dell recommends that you leave this setting enabled, because disabling it will degrade SAN performance.

You can learn more about the CLB and APLB here: www.equallogic.com/.../DownloadAsset.aspx

Also, the “online” help has information too, just search for “performance load balance”

v5.x

psonlinehelp.equallogic.com/.../groupmanager.htm

v6.x

psonlinehelp.equallogic.com/.../groupmanager.htm

-joe

1 Rookie

 • 

62 Posts

September 12th, 2012 23:00

Ah, I learned something new then. :-)

The option has been there for a lot of versions, before APLB was even implemented.

Check, for example, this page psonlinehelp.equallogic.com/.../Enabling_or_Disabling_Pe.htm

or psonlinehelp.equallogic.com/.../controlling_performance_load_balancing.htm

Anyhow, didn't want to hijack the thread!

7 Technologist

 • 

729 Posts

September 13th, 2012 08:00

Yes, I can see the confusion; I’ll try to explain:

In both links you provided (FW v3.x and v4.x online help), the APLB feature (swapping HOT pages to members with low latency) didn’t exist yet, so in those earlier FW versions the option controlled Performance Load Balancing (PLB), which moves volumes to the member with a RAID configuration that is optimal for volume performance, based on the internal group performance metrics.

With FW v5.x and above, we introduced Automatic Performance Load Balancing (APLB), which in addition to the PLB, introduced HOT page movement to members in the same pool with low latency.  So not only did you have PLB (volumes), but also APLB (pages).  The option now turns both features on or off.

However, Capacity Load Balancing (volumes spanning multiple members) wasn’t affected by this option in any of those versions.

-joe
