1 Rookie • 14 Posts

September 28th, 2021 00:00

PowerStore - Link Aggregation

Hi, nice to meet you.

We are designing port connectivity on the PowerStore T series, for example a 500T or 1000T.

On the network switch side:
- 2 x S4128 switches with SFP+ as ToR
- VLTi already in place over QSFP

Can we do the following options?

#option1
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster, NAS, and iSCSI traffic
The question: can we increase the throughput of the iSCSI connection to 20Gb with this option?

#option2
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster and NAS traffic
port2 (10Gb) & port3 (10Gb): for iSCSI
The question: can port2 & port3 be bonded or use LACP, thus yielding 20Gb for iSCSI?

#option3
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster and NAS traffic
port2 (10Gb) & port3 (10Gb), non-bonded ports: for iSCSI
The question: do port2 & port3 only support multipath, or can they be bonded through a network switch configuration, so we can increase iSCSI throughput to 20Gb?

Thank you,

 

1 Rookie • 54 Posts

October 1st, 2021 02:00

For PowerStore T, only ports 0 and 1 of the mezz card are configured as a system bond by default. Ports 2 and 3 do not support the system bond; use multipath for load balancing.

You can refer to the cabling examples in the Dell EMC PowerStore Networking Guide for PowerStore T Models:

https://www.dell.com/support/manuals/en-us/powerstore-1000t/pwrstrt-ntwkg/cable-the-base-enclosure-to-the-tor-switches?guid=guid-209dd57f-632e-4bc2-bf6d-a6f4e2608b4b&lang=en-us 

1 Rookie • 14 Posts

October 3rd, 2021 19:00

Hi Hoo,

Thanks for the guide info. So my assumptions are:

#option1
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster, NAS, and iSCSI traffic
Can we increase iSCSI throughput to 20Gb with this option?
Yes, because these ports are already bonded, and the bond can also carry iSCSI traffic.

#option2
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster and NAS traffic
port2 (10Gb) & port3 (10Gb): for iSCSI
Can port2 & port3 be bonded or use LACP, yielding 20Gb for iSCSI?
No, because port2 & port3 do not support bonding.

#option3
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster and NAS traffic
port2 (10Gb) & port3 (10Gb), non-bonded ports: for iSCSI
Do port2 & port3 only support multipath, or can they be bonded through a network switch configuration for 20Gb iSCSI?
Yes, with a multipath configuration, which supports failover and load balancing.

Please correct my assumptions.

Also, can you explain the port channel for iSCSI traffic? Can we configure port2 & port3 as a port channel, without configuring multipath, to get a 20Gb connection?

Thanks,

1 Rookie • 14 Posts

October 5th, 2021 02:00

Hi @Ooi Hoo Hong 

Right, thanks for the info. Please correct my assumptions:

#option1
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster, NAS, and iSCSI traffic
Can we increase iSCSI throughput to 20Gb with this option?
Yes, because these ports are already bonded, the bond can carry iSCSI traffic, and it supports a port channel.

#option2
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster and NAS traffic
port2 (10Gb) & port3 (10Gb): for iSCSI
Can port2 & port3 be bonded or use LACP, yielding 20Gb for iSCSI?
No: port2 & port3 do not support bonding, and they do not support a port channel.

#option3
port0 (10Gb) & port1 (10Gb), (default bonding or LACP): for cluster and NAS traffic
port2 (10Gb) & port3 (10Gb), non-bonded ports: for iSCSI
Do port2 & port3 only support multipath, or can they be bonded through a switch configuration for 20Gb iSCSI?
Yes, with a multipath configuration (failover and load balancing), but with no port channel support.

Thank you

1 Rookie • 54 Posts

October 5th, 2021 02:00

Hi @IbnuR ,

Port2 and port3 do not support a port-channel configuration because they are not configured as a system bond with LACP.

Only the first 2 ports of the 4-Port Card are aggregated into an LACP bond that supports a port-channel configuration.

Please refer to this doc for more information about system bond (page 31):

https://www.delltechnologies.com/asset/en-us/products/storage/industry-market/h18149-dell-emc-powerstore-platform-introduction.pdf 

Here is an example of the steps to configure the LACP port channels on the switch ports for the nodes:

https://www.dell.com/support/manuals/en-tt/powerstore-40u-rack/pwrstrt-ntwkg/configure-the-lacp-port-channels-on-the-switch-ports-for-the-nodes?guid=guid-da3248ca-637f-41dd-9d9f-22e9ef87c2cd&lang=en-us 
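For reference, a rough sketch of the switch-side config that guide describes, in Dell OS10 syntax (e.g. on an S4128). The port-channel numbers, interface names, and VLANs below are placeholders, not a validated config; check the networking guide for your exact cabling:

```
! Hypothetical OS10 example: one LACP port channel per PowerStore node,
! spanning both VLT peers for the system-bond (mezz ports 0/1) links.
interface port-channel 10
 description PowerStore-NodeA-bond
 switchport mode trunk
 switchport trunk allowed vlan 100,200
 vlt-port-channel 10
 no shutdown
!
interface ethernet 1/1/1
 description PowerStore-NodeA-port0
 channel-group 10 mode active
 no shutdown
```

The matching interface on the VLT peer switch (NodeA port1) would join the same `vlt-port-channel` number so the bond spans both ToR switches.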

1 Rookie • 54 Posts

October 6th, 2021 08:00

Hi @IbnuR 

Yes, your assumption is correct.
Please note that an LACP port channel provides high availability and potentially increased aggregate throughput; however, the maximum throughput between any two devices will not exceed the link speed of a single physical port. That means the maximum throughput of a single flow is still 10Gb.

#option2 - port2 and port3 can be used for iSCSI, but not configured as a port channel.
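To illustrate the single-flow limit mentioned above: LACP distributes traffic by hashing each flow (e.g. on its IP/port tuple) onto exactly one member link, so one iSCSI session never spans both 10Gb links. A rough Python sketch of that behavior (the hash policy here is a simplification, not the actual switch algorithm):

```python
# Simplified sketch of LACP-style flow hashing: each flow is pinned to one
# member link of the bond, so a single flow never exceeds one link's speed,
# while many distinct flows can spread across the whole 2-link LAG.

def pick_link(src_ip, src_port, dst_ip, dst_port, num_links=2):
    """Hash the flow tuple onto one member link (simplified xmit policy)."""
    return hash((src_ip, src_port, dst_ip, dst_port)) % num_links

# One iSCSI session (one flow) always lands on the same 10Gb link:
flow = ("10.0.0.10", 51000, "10.0.0.20", 3260)
links = {pick_link(*flow) for _ in range(100)}

# Many distinct flows (different source ports) spread across both links:
many = {pick_link("10.0.0.10", p, "10.0.0.20", 3260)
        for p in range(51000, 51100)}

print("links used by one flow:", links)
print("links used by 100 flows:", sorted(many))
```

This is why a bond gives ~20Gb aggregate across many initiators/sessions, but a single session still tops out at 10Gb.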

1 Rookie • 14 Posts

October 6th, 2021 18:00

Hi @Ooi Hoo Hong 

Ok, everything is clear and makes sense. Great discussion!

Thanks a lot.

I'll leave this post open for now in case someone wants to add something related to this discussion.

January 16th, 2022 07:00

Just be aware that LACP/LAG is NOT supported on the 500T's on-board IO module. And there is no RJ45 option for the mezzanine card, only optical.

January 25th, 2022 09:00

I also believe it to be true that the 500T does not support LACP/LAG on ports 0 and 1 as Dell says. We are trying to implement a 500T with iSCSI, ESXi, and Cisco switches. We cannot get LACP to come up on IO module ports 0 and 1. Dell says this is the proper config, and I disagree. Does anyone have a sample config for iSCSI, ESXi, and Cisco 3650 or 9300 switches? It seems to take rocket science to get Dell to help with this configuration. It would be great to see the Cisco config, storage config, and ESXi config someone has in production.
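Not a validated config, but a minimal Cisco IOS-style sketch of what an LACP port channel toward the LACP-capable ports (per the rest of this thread, the first two mezzanine/system-bond ports, not the IO module ports) might look like. Interface, channel, and VLAN numbers are placeholders:

```
! Hypothetical Catalyst 9300 example for the system-bond ports only.
! (The IO module ports would be configured as plain access/trunk ports
! and use MPIO on the host side instead of a port channel.)
interface Port-channel20
 description PowerStore-NodeA-bond
 switchport mode trunk
 switchport trunk allowed vlan 100
!
interface TenGigabitEthernet1/0/1
 description PowerStore-NodeA-mezz-port0
 switchport mode trunk
 switchport trunk allowed vlan 100
 channel-group 20 mode active
```

`mode active` is LACP; if the array side never negotiates (as reported here for the IO module ports), the channel stays down by design.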

2 Intern • 727 Posts

January 31st, 2022 20:00

PowerStore supports LACP bonding on the first two ports of the on-board mezzanine cards. Are you saying that the LACP bonds are not configurable on those first two ports? If so, I recommend working with Support to triage this further.

Note that PowerStore does not support LACP on the IO Module ports yet. This is a roadmap item.

February 4th, 2022 14:00

Ports 0 and 1 on the on-board 4-port I/O module (the one that ships with the array) do not support LACP on the 500T. I had the exact same issue and was told it only supports LACP/LAG with the mezzanine card, which is fiber only. I use Cisco C9300 switches. The only thing I could do was configure MPIO. Not what I was expecting, since Dell's literature makes you assume it's supported. NOPE.

See the picture I have attached. The red arrows point to the mezzanine slots; the on-board I/O ports are the ones with Cat cables plugged in.

[Attached photo of the rear of the array showing the mezzanine slots and the cabled on-board I/O ports]
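For the MPIO side, a hedged sketch of the usual ESXi software-iSCSI setup (port binding plus round-robin pathing); the vmhba/vmk names and the device identifier below are placeholders that vary per host:

```
# Hypothetical ESXi CLI sketch (not a validated procedure).
# Bind one vmkernel port per uplink to the software iSCSI adapter:
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Set round-robin on the PowerStore device so both paths carry I/O:
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
```

With round-robin across two bound paths, a single LUN can use both 10Gb links even without LACP, which is the standard alternative on the non-bonded ports.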

2 Intern • 727 Posts

February 7th, 2022 20:00

JNelson - in the picture, the cables are connected to IO module ports, and LACP is NOT supported on those ports at this time. Please send me pointers to any documentation that says LACP is supported on the IO module ports in the current software release.

To be clear, the 4-port embedded module does not appear in the picture you attached (red arrows). Its first two ports would be the only ones where LACP can be configured.

February 20th, 2022 18:00

That's exactly my point. And the mezz cards ARE ONLY fiber, not CatX. We were sold the 500T on the claim that it supported LACP, since at the time I only had 1Gb switches. That was a farce. LACP is supported on the IO slots by most every other vendor; no reason why the 500T should be any different.

2 Intern • 727 Posts

February 27th, 2022 19:00

I understand that you want to be able to configure LACP on the BaseT ports of the PowerStore 500T. We plan to support this as part of an upcoming release. You can work with your account team to get more details.

I assume that you plan to run File traffic on these LACP bonded ports. Is that correct?
