August 7th, 2019 12:00
Linux bonding for iSCSI traffic
Hi, I have a specific situation where I'm using the OpenStack Cinder volume driver to integrate block storage directly with an EqualLogic iSCSI group. The problem is that the Cinder EQL driver for OpenStack does not support MPIO, so when I connect a block storage volume to my OpenStack nodes, only one iSCSI session is created between the host and the storage.

I'm thinking of creating a Linux bond to prevent downtime if the switch carrying that single iSCSI session to the EQL group portal fails, but I'm not sure whether this configuration is supported. Does anyone here use bonding with iSCSI? Is there any problem with this setup, or should iSCSI not be used with bonding (active/passive, mode=1)? In this case it would not be LACP, just an independent Linux bond (no switch LAG configured).

Regards.
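To illustrate what I have in mind, here is a rough sketch of the bond (interface names and addresses are only examples, and I'm assuming a NetworkManager-based host; adjust for your distribution):

nmcli con add type bond con-name bond0 ifname bond0 mode active-backup miimon 100
nmcli con add type bond-slave con-name bond0-p1 ifname eth2 master bond0
nmcli con add type bond-slave con-name bond0-p2 ifname eth3 master bond0
nmcli con mod bond0 ipv4.method manual ipv4.addresses 10.10.10.21/24
nmcli con up bond0

The two slave ports would be cabled to different (stacked) switches, so only one port carries the iSCSI traffic at a time and the other takes over if the active link or its switch fails.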



dwilliam62
August 7th, 2019 13:00
DMONTAGNA
August 7th, 2019 13:00
For me, having only a single iSCSI session between the host and the storage will not be a problem, since it is protected against a single NIC or switch failure (using bonding in this case, with stacked switches).
My only concern is whether connecting to the EQL group using Linux open-iscsi with bonding (active-passive), without MPIO, is a supported configuration.
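For reference, this is roughly what the attachment looks like from the host side; these are the standard open-iscsi commands that os-brick drives under the hood (the group IP and IQN below are placeholders, not my real ones):

iscsiadm -m discovery -t sendtargets -p 10.10.10.100:3260
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-example-volume -p 10.10.10.100:3260 --login
iscsiadm -m session

There is a single session per volume, and open-iscsi simply follows whatever route the kernel has to the group IP, which with this setup would be the bond interface.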
dwilliam62
August 7th, 2019 13:00
DMONTAGNA
August 7th, 2019 13:00
To complement my last response: the Cinder EQL driver does use the native Linux iSCSI initiator to log in to and rescan the iSCSI target, but it does not rely on native MPIO to create the /dev/mpathX devices; it uses the OpenStack connector library (os-brick) to create the iSCSI sessions.
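In practice that means an attached volume shows up as a plain /dev/sdX with one session behind it and nothing under /dev/mapper. A quick way to confirm this on a node, using only the standard tools (nothing EQL-specific):

iscsiadm -m session -P 3    # one session per attached volume
lsblk                       # the volume appears as a plain sdX device
multipath -ll               # no mpath device is created for the EQL volumes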
DMONTAGNA
August 7th, 2019 13:00
Hi Don,
The host itself is working normally with MPIO; the problem is that the Cinder EQL driver does not use native Linux MPIO when it creates the iSCSI sessions that attach volumes to hosts. That's why I need bonding in this case, to survive a failure of the switch the Cinder driver uses to reach the iSCSI target (EQL).
Other Cinder drivers for Dell storage products work like a charm; only the EQL driver does not support native MPIO for attaching volumes to nodes.
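For anyone searching later: with the other backends, multipath attachments are enabled through the standard OpenStack options below (option names as I understand them for the Queens release; please double-check against your version's documentation). In my environment they have no effect with the EQL driver.

# nova.conf on the compute nodes
[libvirt]
volume_use_multipath = True

# cinder.conf, in the backend section
use_multipath_for_image_xfer = True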
DMONTAGNA
August 7th, 2019 14:00
Thanks again, Don.
In the Mitaka release of OpenStack, nova/libvirt was working correctly with iSCSI multipath and the Cinder EQL driver; after Newton the functionality broke again. Since this driver is being deprecated by OpenStack (it will not be available after the Stein release and driver maintenance is over), I'm stuck with this setup, and my only alternative now is to try bonding to avoid this problem (I'm using the Queens release of OpenStack).
DMONTAGNA
August 7th, 2019 14:00
Thanks. In my case the switches are stacked, and even if they were not, I will not be using dynamic LACP, so that would not be a problem either. I will do some more research on the EQL support matrix for bonding to see if I find any downside to this. Thanks in advance, and sorry about my bad English :-)!!
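Once the bond is up, this is how I plan to check that failover really protects the single session (eth2, eth3 and bond0 are the example names from my earlier post):

cat /proc/net/bonding/bond0    # shows the mode, MII status and currently active slave
ip link set eth2 down          # simulate losing the active port or its switch
cat /proc/net/bonding/bond0    # the active slave should switch to eth3
iscsiadm -m session            # the single iSCSI session should stay logged in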
dwilliam62
August 7th, 2019 14:00
dwilliam62
August 7th, 2019 14:00
Hello,
Thanks for the update and information. That's sad to hear.
The only other downside of trunking is that on non-stacked switches there is typically no switch redundancy.
Don
DMONTAGNA
August 15th, 2019 05:00
dwilliam62
August 15th, 2019 18:00
Hello,
That's great to hear! Thanks for letting us know.
Don