Unsolved


1 Rookie

 • 

42 Posts


August 7th, 2019 12:00

Linux bonding for iSCSI traffic

Hi, I have a specific situation where I'm using the OpenStack Cinder volume driver to integrate block storage directly with an EqualLogic iSCSI group. The problem is that the Cinder EQL driver for OpenStack does not support MPIO, so when I connect a block storage volume to my OpenStack nodes, only one iSCSI session is created between the host and the storage.

I'm thinking of creating a Linux bond to prevent downtime if the switch carrying that single iSCSI session to the EQL group's iSCSI portal fails, but I'm not sure whether this configuration is supported. Does anyone here use bonding with iSCSI? Is there any problem with this setup, or should iSCSI not be used with bonding (active/passive, mode=1)? In this case it would not be LACP, just an independent Linux bond (no switch LAG configured). Regards.
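For reference, this is roughly the kind of active-backup bond I have in mind on a RHEL/CentOS-style host. It is only a sketch: bond0, em1/em2 and the SAN addresses are placeholders to adjust for your own environment.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (placeholder names and addresses)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"   # mode=1, link monitoring every 100 ms
IPADDR=10.10.10.21
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1  (repeat for em2)
DEVICE=em1
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```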

4 Operator

 • 

1.5K Posts

August 7th, 2019 13:00

Hello,

Interesting. It seems like it can still potentially hand off to multipathd?

from os_brick.initiator import connector

# What helper do you want to use to get root access?
root_helper = "sudo"
# The IP address of the host you are running on
my_ip = "192.168.1.1"
# Do you want to support multipath connections?
multipath = True
# Do you want to enforce that the multipath daemon is running?
enforce_multipath = False
initiator = connector.get_connector_properties(root_helper, my_ip, multipath, enforce_multipath)

From some quick searching, it seems it expects multiple discovery addresses on different IP subnets for MPIO. Open-iSCSI is capable of MPIO on single-subnet iSCSI storage like EQL.

https://specs.openstack.org/openstack/cinder-specs/specs/kilo/iscsi-multipath-enhancement.html

Regards,
Don
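As a pointer for anyone tuning this: the options below are where multipath attachments are normally requested in an OpenStack deployment. This is only a sketch — option names vary by release, the [eql-backend] section name is hypothetical, and (as discussed in this thread) the EQL driver may not honor the setting.

```
# nova.conf on the compute nodes (Queens option name; older releases used iscsi_use_multipath)
[libvirt]
volume_use_multipath = True

# cinder.conf, per-backend section ([eql-backend] is a hypothetical name);
# affects image <-> volume copies performed by cinder itself
[eql-backend]
use_multipath_for_image_xfer = True
```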

1 Rookie

 • 

42 Posts

August 7th, 2019 13:00

For me, having only a single iSCSI session between the host and the storage is not a problem, as long as it is protected against a single NIC or switch failure (using bonding in this case, with stacked switches).

My only concern is whether connecting to the EQL group with Linux open-iscsi over an active-passive bond, without MPIO, is a supported configuration.

4 Operator

 • 

1.5K Posts

August 7th, 2019 13:00

Hello,

First: you can technically trunk switch ports from Linux to the switch, but that would still be only one iSCSI session, and the trunk only goes from server to switch, not all the way to the storage.

Re: MPIO. I have not really worked with the Cinder product, but it's the iSCSI initiator on the Linux server that should be creating the iSCSI sessions from the defined ports, based on the open-iscsi configuration. If you manually create a volume and connect to it, does that properly create multiple sessions? I would expect Cinder to sit above the iSCSI transport layer, but again I don't know how the driver was written.

My expectation is that the driver creates the EQL volume with the proper ACL, then an initiator rescan is requested to pick up the new target. The iSCSI initiator should then do discovery and log in from all defined network ports. Multipathd would pick up those devices and create a /dev/mapper device to encompass them, and all I/O should be configured to use that MPIO device.

Regards,
Don
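A minimal way to do that manual check, using placeholder values for the group portal IP and target IQN:

```
# Discover targets on the EQL group discovery address (placeholder IP)
iscsiadm -m discovery -t sendtargets -p 10.10.10.10:3260

# Log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2001-05.com.equallogic:volume-example --login

# How many sessions were created, and did multipathd assemble a map?
iscsiadm -m session
multipath -ll
```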

1 Rookie

 • 

42 Posts

August 7th, 2019 13:00

To complement my last response: the Cinder EQL driver does use the native Linux iSCSI initiator to log in to and rescan the iSCSI target, but it does not rely on native MPIO to create the /dev/mpathX devices; it uses the OpenStack connector library (os-brick) to create the iSCSI sessions.
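For reference, this is roughly how a connector-level attach looks when multipath is requested through os-brick. It is a sketch only, with placeholder portal/IQN/LUN values, and it is not the exact code path the EQL driver takes (which is the whole problem here).

```python
from os_brick.initiator import connector

# Build an iSCSI connector that asks for multipath (sketch only).
conn = connector.InitiatorConnector.factory(
    'ISCSI', root_helper='sudo', use_multipath=True)

# Placeholder connection properties, as a backend driver would return them.
connection_properties = {
    'target_portal': '10.10.10.10:3260',                        # placeholder portal
    'target_iqn': 'iqn.2001-05.com.equallogic:volume-example',  # placeholder IQN
    'target_lun': 0,
}

# connect_volume logs in via iscsiadm and, with use_multipath=True,
# returns the multipath device rather than a single /dev/sdX path.
device_info = conn.connect_volume(connection_properties)
print(device_info)
```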

1 Rookie

 • 

42 Posts

August 7th, 2019 13:00

Hi Don,

The host itself works normally with MPIO; the problem is that the Cinder EQL driver does not use native Linux MPIO when creating iSCSI sessions to attach volumes to hosts. That's why I need a bond in this case: to survive a failure of the switch the Cinder driver uses to connect to the iSCSI target (EQL).

Other Cinder drivers for Dell storage products work like a charm; only the EQL driver lacks native MPIO support for attaching volumes to nodes.

1 Rookie

 • 

42 Posts

August 7th, 2019 14:00

Thanks again Don,

 

In the MITAKA release of OpenStack, nova/libvirt worked correctly with iSCSI multipath and the Cinder EQL driver; after NEWTON the functionality broke again. Since this driver is being deprecated by OpenStack (it will not be available after the STEIN release, and driver maintenance is over), I'm stuck with this setup, and my only alternative now is to try bonding to avoid this problem (I'm using the QUEENS release of OpenStack).

1 Rookie

 • 

42 Posts

August 7th, 2019 14:00

Thanks. In my case the switches are stacked, and even if they were not, I won't be using dynamic LACP, so that would not be a problem either. I will do some more research on the EQL support matrix for bonding to see if I find any downside to this. Thanks in advance, and sorry about my bad English :-)!!

4 Operator

 • 

1.5K Posts

August 7th, 2019 14:00

Hello,

I've used bonding in the past, before open-iSCSI supported single-subnet MPIO. Same with Solaris.

Re: English. Your English is great! I had no issues understanding you. Do you speak Spanish?

Regards,
Don
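For readers who can drive the initiator directly (outside the Cinder-managed attach path), single-subnet MPIO with open-iscsi is done by binding iSCSI ifaces to the NICs. A sketch with placeholder NIC names and group portal IP:

```
# Create one iSCSI iface per NIC and bind it (NIC names are placeholders)
iscsiadm -m iface -I ieth0 --op=new
iscsiadm -m iface -I ieth0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I ieth1 --op=new
iscsiadm -m iface -I ieth1 --op=update -n iface.net_ifacename -v eth1

# Discover through both ifaces against the group IP (placeholder), then log in
iscsiadm -m discovery -t sendtargets -p 10.10.10.10:3260 -I ieth0 -I ieth1
iscsiadm -m node --login

# Two sessions to the same target should now appear, and multipathd can map them
iscsiadm -m session
```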

4 Operator

 • 

1.5K Posts

August 7th, 2019 14:00

Hello,

Thanks for the update and information. That's sad to hear.

The only other downside of trunking is that on non-stacked switches there is typically no switch redundancy.

Don

1 Rookie

 • 

42 Posts

August 15th, 2019 05:00

Hi, just to give some feedback on this thread!!! Yesterday I tested the bonding with CentOS 7 and EQL and had no issues!! My tests involved disconnecting one interface from the bond (no issues detected on the EQL side), then restarting the active EQL controller (no issues observed on the Linux side), and last but not least, restarting one of the stacked switches this bond is connected to, also with no issues. Thanks !!
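For anyone repeating this test, these are the kinds of checks I mean (standard commands, with bond0 as the placeholder bond name):

```
# Which slave is active, and did failover happen after pulling a cable?
cat /proc/net/bonding/bond0

# Is the iSCSI session still logged in to the EQL portal?
iscsiadm -m session -P 1

# Watch for iSCSI connection errors / recovery messages during the test
dmesg | grep -i iscsi
```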

4 Operator

 • 

1.5K Posts

August 15th, 2019 18:00

Hello,

That's great to hear! Thanks for letting us know.

Don
