Unsolved
1 Rookie
•
35 Posts
0
2365
December 14th, 2021 14:00
PowerStore 1000T Port 0 and 1 Bond/LAG and using iSCSI
Since on the 1000T the first 2 ports on the 3 port card are always configured as a bond or LACP/LAG... and I think this cannot be changed and must always stay this way.
I have also heard, from many sources, that running iSCSI on ports that are part of a LAG or LACP is not recommended. So in this case, is it recommended to run iSCSI over ports 2 and 3 and just configure MPIO?
If iSCSI is the only protocol in use, do ports 0 and 1 even need to be connected to a switch? Or is there some node-to-node communication on those ports that must be supported?
leroyl
1 Rookie
•
35 Posts
0
December 16th, 2021 06:00
I see that in my post above I had some confusion in my port numbering, so let me repost with it clarified:
-----------------
Since on the 1000T the first 2 ports on the 4 port card are always configured as a bond or LACP/LAG... and I think this cannot be changed and must always stay this way.
I have also heard, from many sources, that running iSCSI on ports that are part of a LAG or LACP is not recommended. So in this case, is it recommended to run iSCSI over ports 3 and 4 and just configure MPIO?
If iSCSI is the only protocol in use, do ports 1 and 2 even need to be connected to a switch? Or is there some node-to-node communication on those ports that must be supported?
Kumar_A
2 Intern
•
727 Posts
0
December 25th, 2021 20:00
You are correct that the first two ports on the 4-port card are automatically bonded and that this cannot be changed. But there is no reason customers should stay away from using those ports for iSCSI traffic. Please let us know if you saw a recommendation in a Dell document not to use those ports for iSCSI.
To be clear, other traffic types, such as NAS or intra-cluster management traffic (in a multi-appliance PowerStore cluster), can also run on those two bonded ports. Customers may decide to run their iSCSI traffic on other ports in the system, setting aside the bonded ports for NAS or intra-cluster management traffic only. Whether to do so depends on the expected load on those two bonded ports from the other network traffic.
dpayne-2014
1 Rookie
•
3 Posts
0
February 21st, 2022 07:00
I believe they will also always show as down, and certain NAS features may not work without these being configured. There may be CLI commands to bypass this, but again, I think Avi is right.
Your concerns about LACP/LAG with iSCSI and VMware are limited to the host's connectivity to the switch and the configuration within VMware from there. If you are using software iSCSI adapters with a single subnet for multipathing, then you will need to bind each VMkernel port to a single uplink anyhow. I have found that having LACP at the switch for iSCSI on the PowerStore in this configuration has been fine, and VMware MPIO (Round Robin) works great.
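For reference, the VMkernel port binding and Round Robin setup described above can be sketched with esxcli. This is only an illustration; the adapter name (vmhba64), VMkernel port names (vmk1/vmk2), and the naa. device identifier are placeholders for whatever your environment actually uses:

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter.
# Adapter/vmk names are examples -- check "esxcli iscsi adapter list"
# and your vSwitch configuration for the real ones.
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Set the path selection policy to Round Robin for one volume.
# Replace the naa. identifier with your device's actual ID.
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
```

Note that each VMkernel port bound this way must have exactly one active uplink, which is why the uplinks get distributed one port at a time as described above.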
MPAIVA-CC
1 Rookie
•
2 Posts
0
September 26th, 2022 05:00
Hello, I have a question about mixing networks (iSCSI and NVMe). I have a PowerStore 1000T on firmware 2.1 with only the 4-port card per controller. The first two ports are used for the cluster network and NAS. Best practices (https://www.delltechnologies.com/asset/en-us/products/storage/industry-market/h18241-dell-powerstore-best-practices-guide.pdf, pages 11-12) do not recommend mixing NAS traffic with NVMe, so I decided to use ports 3 and 4 for iSCSI and NVMe. Following the networking guide (https://dl.dell.com/content/manual55542652-dell-emc-powerstore-networking-guide-for-powerstore-t-models.pdf?language=en-us&ps=true), I don't know how to configure the switch ports: with NVMe I must configure flowcontrol receive off, while with iSCSI I must configure flowcontrol receive on. What would be the best way?
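For illustration, the conflict being described would look something like this on a Cisco-style switch (the interface numbers are hypothetical). A given port can carry only one of the two settings, which is why running both protocols on the same physical ports is the sticking point:

```
! Port carrying iSCSI: flow control enabled, per the iSCSI guidance
interface Ethernet1/3
  flowcontrol receive on

! Port carrying NVMe: flow control disabled, per the NVMe guidance
interface Ethernet1/4
  flowcontrol receive off
```

Since the two recommendations are mutually exclusive per port, the practical options are to dedicate different physical ports to each protocol or to follow whichever guidance the Dell documentation gives for the combined case.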