
October 5th, 2010 09:00

NS120 FS Network and interface utilization

I have a fail-safe network (FSN) created using three of the four 1 Gb interfaces on my primary Data Mover. The problem is I only see traffic going across two of the three:

http://imagebin.ca/img/IZJK1aQe.png

I'm trying to understand why nothing is going across cge2. Here is how my network device is defined:

http://imagebin.ca/img/RkUvNd3.png

The box is primarily used for iSCSI, with some NFS, but during the day, when this shot was taken, it was 100% NFS traffic. The only thing I could think of is that since I only have two targets defined on the box, maybe each target sticks with a particular interface? That wouldn't seem right, but I'm looking for some advice:

http://imagebin.ca/img/iGC9nx.png

366 Posts

October 5th, 2010 10:00

Also, if you have only 7 clients accessing the Celerra, there is a real chance one link is not being used at all.
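As a rough back-of-the-envelope check of that claim, assuming the hash spreads each client uniformly over the 3 links: the chance that one particular link receives none of 7 client flows is (2/3)^7, about 6%, and the chance that at least one of the three links sits completely idle is about 3 x (2/3)^7 - 3 x (1/3)^7, roughly 17%. An idle port with only 7 clients is therefore quite plausible.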

2 Intern • 227 Posts

October 5th, 2010 10:00

[nasadmin@mi-mke-imccs ~]$ server_sysconfig server_2 -virtual -info trk0
server_2 :
*** Trunk trk0: Link is Up ***
*** Trunk trk0: Timeout is Short ***
*** Trunk trk0: Statistical Load Balancing is IP ***
Device     Local Grp   Remote Grp Link  LACP Duplex Speed
------------------------------------------------------------------------
cge0       10000       1280       Up    Up   Full     1000 Mbs
cge1       10000       1280       Up    Up   Full     1000 Mbs
cge2       10000       1280       Up    Up   Full     1000 Mbs

9 Legend • 20.4K Posts

October 5th, 2010 10:00

Gustavo,

But with 7 clients I would think it's very likely that the load would be distributed over the 3 links; that's what LACP is supposed to do, even at very low throughput.

366 Posts

October 5th, 2010 10:00

If you change the statistical load balancing to "TCP", you should see a more distributed load across all ports on the trunk.

See https://community.emc.com/docs/DOC-7034.

But, per your stats output above, it does not seem you are reaching the limit of the NICs, so I doubt this will improve your overall performance.
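For reference, a sketch of what that change might look like on the Control Station. The syntax here is reconstructed from memory rather than taken from this thread; the trunk generally has to be recreated (any interfaces and the fsn0 device built on trk0 must be torn down first, so this is disruptive), so verify against the server_sysconfig man page and the DOC-7034 article above before running anything:

server_sysconfig server_2 -virtual -delete trk0
server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1,cge2 protocol=lacp lb=tcp"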

Gustavo Barreto.

2 Intern • 227 Posts

October 5th, 2010 10:00

[nasadmin@mi-mke-imccs ~]$ server_sysconfig server_2 -v
server_2 :
Virtual devices:
fsn0    active=trk0 primary=trk0 standby=cge3
trk0    devices=cge0 cge1 cge2  :protocol=lacp
fsn    failsafe nic devices : fsn0
trk    trunking devices : trk0

2 Intern • 227 Posts

October 5th, 2010 10:00

Right now I have about 7 hosts that access a total of 14 file systems across two iSCSI targets, mostly SQL Server LUNs and VMFS LUNs. Is there a more optimal setup for a more distributed approach across all the NICs, or is it, like you said, not necessarily a problem, just how it's designed? I was looking into a possible performance issue, which is the main reason I noticed the infrequent use of the one NIC.

2 Intern • 227 Posts

October 5th, 2010 10:00

Any thoughts then, dynamox, on why I might not be seeing better distribution?

366 Posts

October 5th, 2010 10:00

Hi,

Please post the output of:

# server_sysconfig server_2 -v

Gustavo Barreto.

366 Posts

October 5th, 2010 10:00

Hi,

So, you have a trunk composed of three ports using LACP with "IP" load balancing, and one FSN with this trunk as the primary path and cge3 as the standby path.

Link aggregations provide more overall bandwidth, but any single client only communicates through one port and is limited to the bandwidth of that port.

Therefore, you might not see any traffic on one port, and this is not necessarily a problem.

Also, since the statistical load balancing is set to IP (which is the default), the Data Mover will use the source and destination IP addresses of the outbound packet to select the link. This is effective when remote hosts are on the local LANs or are reached through routers.
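As a concrete illustration (with hypothetical addresses): a client at 10.0.1.21 talking to a Data Mover interface at 10.0.1.100 presents the same source/destination pair on every packet, so all of its traffic hashes to the same cge port. With a single Data Mover IP, the link choice effectively collapses to a hash of the client address, which is why a handful of clients can easily pile onto two of the three links.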

How many clients do you have accessing the Celerra?


Gustavo Barreto.

2 Intern • 227 Posts

October 5th, 2010 10:00

I appreciate all the good information. So was I wrong to assume that creating multiple iSCSI targets would help distribute the incoming load? Or no?

9 Legend • 20.4K Posts

October 5th, 2010 10:00

if you run

server_sysconfig server_2 -virtual -info trk0

are all LACP links up?

9 Legend • 20.4K Posts

October 5th, 2010 11:00

Not sure. I have a 2-port LACP trunk and the load is distributed evenly, but I have a couple of hundred clients establishing CIFS connections.

366 Posts

October 5th, 2010 11:00

You only need one target, since you can have two different devices on the portal list.

For example, you could have two trunks of two ports each, with one IP interface on each trunk, and put these two IPs on the portal list of the iSCSI target.

If you put both IPs on the initiator, the traffic would be balanced across these two trunks (but remember, a single client would use only one port of each trunk).

Since you have only 4 NICs, you couldn't have an FSN in the above example.

Instead, you could have an FSN composed of a 2-port LACP trunk as the primary path and cge2 as the standby path, with another IP interface on cge3.

If you add both IPs (the LACP trunk's and cge3's) to the portal list on the Celerra, and also on the initiator, you would balance your traffic between the trunk and the single cge3 port.
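A rough sketch of that last layout in Control Station commands, for orientation only: the device names, interface names, IPs, and masks below are placeholders, the syntax is reconstructed from memory, and it should be checked against the server_sysconfig, server_ifconfig, and server_iscsi man pages before use:

server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1 protocol=lacp"
server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "device=trk0,cge2 primary=trk0"
server_ifconfig server_2 -create -Device fsn0 -name iscsi0 -protocol IP 10.0.0.10 255.255.255.0 10.0.0.255
server_ifconfig server_2 -create -Device cge3 -name iscsi1 -protocol IP 10.0.0.11 255.255.255.0 10.0.0.255
server_iscsi server_2 -target -alias target0 -create 1:np=10.0.0.10,10.0.0.11

Both portal IPs would then be added to each initiator so MPIO can spread sessions across them.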

Regarding the MPIO question, I am not sure what the best config is, but I use "Vendor Specific" and it works well.

Gustavo Barreto.

2 Intern • 227 Posts

October 5th, 2010 11:00

That's a good point, though. I decided to look at my NS502 and it shows similar behavior, and I also have several hundred CIFS connections along with iSCSI sessions and NFS mounts:

http://imagebin.ca/img/QQduoM1.png

It's just odd that cge1 is pretty quiet on the outbound side and cge2 is pretty quiet on the inbound side. Maybe I'm over-analyzing and looking for a problem, but I'm just trying to understand what's to be expected.

2 Intern • 227 Posts

October 5th, 2010 11:00

Thanks, you provided some valuable input. My last question: if I have two Cisco 3560 switches today, and my config for the primary Data Mover has 3 active links on switch one and 1 standby link on switch two (fsn0), is that a good practice? Right now I use MPIO on my iSCSI hosts, but I'm starting to question what benefit I'm really gaining from that when it's pointing at the same target.

http://imagebin.ca/img/k9EwhEmY.png

Just trying to make sure I have the best config for the hardware that I have. These two switches are used exclusively for iSCSI traffic, with one port being an uplink to our production network, which gives us the CIFS access pipe. Also, on the MPIO topic, what is the best config network-wise on the NS side?
