8 Posts
0
January 3rd, 2017 13:00
VMware with two EqualLogic groups - possible IQN issue
I have two EqualLogic groups: two modular arrays in a chassis in one group, and two external arrays in the other. The chassis blades are VMware hosts, using that storage as VM storage. I've recently uplinked my isolated external group into the chassis (on another VLAN) so I can also use this group for VM storage. On the VMware hosts I added a path to a new disk on the external group and mounted the disk. Immediately I started receiving disconnects on both my modular group disks and the new external group disk.
I believe the IQN target name prefix (iqn.2001-05.com.equallogic), being the same in both groups, is conflicting at this point. No other variables are the same, and the networks are logically separated. As far as I can tell, there is no way to change the target name prefix for future disks; am I wrong about this?
Thank you



dasheeown
8 Posts
0
January 3rd, 2017 14:00
M1000e chassis - two stacked M8024-Ks (with a 10GbE add-on card in each)
1x 10GbE uplink from each switch to my core switch
2x 10Gb SFP+ stacking cables in each switch
1x 10Gb SFP+ uplink cable from each switch into each PowerConnect 6224 (I have two 6224s stacked with 10Gb SFP+ for the external group)
The NICs into the M8024-Ks are fragmented, with 5Gb/s each (two per host) dedicated to the SAN VLAN.
The chassis SAN units are isolated to VLAN 20 on their own subnet. The uplinks to the external group are isolated to VLAN 21, also on their own subnet. The subnets do not overlap, and they're on separate VLANs.
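In case it helps, here's a minimal Python sketch of that separation check. The subnets are made up (placeholders for my real addressing); it just shows the non-overlap property I'm relying on:

```python
import ipaddress

# Placeholder subnets -- substitute the real SAN addressing.
vlan20_chassis_san = ipaddress.ip_network("10.20.0.0/24")   # in-chassis group, VLAN 20
vlan21_external_san = ipaddress.ip_network("10.21.0.0/24")  # external group, VLAN 21

# The two iSCSI networks sit on separate VLANs and must not overlap.
assert not vlan20_chassis_san.overlaps(vlan21_external_san), "SAN subnets overlap"
print("SAN subnets are distinct:", vlan20_chassis_san, vlan21_external_san)
```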
dasheeown
8 Posts
0
January 3rd, 2017 14:00
The NICs are fragmented on the hosts, and yes, 5Gb of the 10Gb is fragmented into a separate NIC for each switch, so 10Gb total with two paths to the SAN. This was the configuration recommended by the deployment team for my setup. The NIC is fragmented further, but I won't go into that here; it's the only connection into these servers, so we need both external and private networks for the cluster.
Two members in the external group:
PS4100X with 2x GbE into each controller (each controller uplinks into one 6224)
PS6100X with 4x GbE into each controller (each controller uplinks into one 6224)
In VMware, we have the two iSCSI HBAs (one for each 5Gb fragment). Within each HBA, I have one adapter on VLAN 20 with an IP for that iSCSI network, and one adapter on VLAN 21 with an IP for that iSCSI network. With both adapters connected to each network separately, I'm able to mount disks from each group using that group's IP and the IQN of the disk I want.
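To make that concrete, here's a rough sketch of how each adapter pairs with the group on its own subnet (the group IPs and adapter IPs below are made up):

```python
import ipaddress

# Placeholder addressing -- substitute real group IPs and adapter IPs.
group_portals = {
    "chassis-group":  ipaddress.ip_address("10.20.0.10"),   # group IP on VLAN 20
    "external-group": ipaddress.ip_address("10.21.0.10"),   # group IP on VLAN 21
}
host_adapters = {
    "iscsi-adapter-vlan20": ipaddress.ip_interface("10.20.0.51/24"),
    "iscsi-adapter-vlan21": ipaddress.ip_interface("10.21.0.51/24"),
}

# Each adapter can only reach the group portal that sits on its own subnet.
for adapter, iface in host_adapters.items():
    for group, portal in group_portals.items():
        if portal in iface.network:
            print(f"{adapter} ({iface.ip}) -> {group} portal {portal}:3260")
```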
The 6224s are behind on firmware updates; I haven't done them in a while. The M8024-Ks are fairly recent; I updated them about a month ago.
dasheeown
8 Posts
0
January 3rd, 2017 18:00
Don,
Appreciate all the help. I'm curious what you mean by oversubscribing the ports? *Side note: the issue is now fixed, based on my misconfiguration *cough* misunderstanding *cough* within VMware.* However, I'm still curious about your statements.

The 10Gb SFP+ ports on the rear modules are used for switch-to-switch iSCSI traffic only. As far as redundancy, each switch group is stacked (M8024-Ks stacked together and the 6224s stacked together), then one 10Gb SFP+ from each 6224 runs up to the corresponding M8024-K. Then, controller 1 of both external SAN units connects to the first 6224 and controller 2 of both external SAN units connects to the second 6224. If I lost a unit controller, the traffic would flow to the opposite switch. If I lost a 6224, the traffic would flow to the opposite M8024-K, using the 6224 stack to pull connections from the now-stunted 6224. And if I lost an M8024-K, the traffic would flow to the opposite M8024-K. I spent quite a bit of time on this to ensure I do have redundancy, but another set of eyes would be great, so I'll gladly take any opportunity for improvement.
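I also sanity-checked that failover reasoning with a quick sketch. It only models link-level connectivity between made-up labels for my own components (it ignores EqualLogic active/standby controller behavior), so treat it as a rough check rather than proof:

```python
from collections import defaultdict, deque

# Physical links as described above; the stacks count as links too.
links = [
    ("host", "M8024K-1"), ("host", "M8024K-2"),
    ("M8024K-1", "M8024K-2"),                        # blade switch stack
    ("M8024K-1", "6224-1"), ("M8024K-2", "6224-2"),  # one SFP+ uplink each
    ("6224-1", "6224-2"),                            # 6224 stack
    ("PS4100X-CM1", "6224-1"), ("PS4100X-CM2", "6224-2"),
    ("PS6100X-CM1", "6224-1"), ("PS6100X-CM2", "6224-2"),
]
arrays = {"PS4100X": {"PS4100X-CM1", "PS4100X-CM2"},
          "PS6100X": {"PS6100X-CM1", "PS6100X-CM2"}}

def reachable_from_host(failed):
    """Nodes still reachable from the host after one component fails."""
    graph = defaultdict(set)
    for a, b in links:
        if failed not in (a, b):
            graph[a].add(b)
            graph[b].add(a)
    seen, queue = {"host"}, deque(["host"])
    while queue:
        for nxt in graph[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

for failed in ["M8024K-1", "M8024K-2", "6224-1", "6224-2", "PS4100X-CM1"]:
    seen = reachable_from_host(failed)
    ok = all(seen & controllers for controllers in arrays.values())
    print(f"lose {failed:12} -> every array still reachable: {ok}")
```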
Onto my mistake within VMware. I was using the hardware iSCSI HBAs, which require port binding, which in turn requires that each network adapter added to the iSCSI HBA be on the same subnet/broadcast domain. After reverting these HBAs to serving only my in-chassis units, I turned to the software iSCSI initiator to serve the external unit. I don't see another way to use the hardware HBAs without converging the subnets; the software HBA does not require port binding and instead uses the route table to create the paths. I can take the 'small' performance hit with the software adapter, since the external arrays will be used for less intensive applications. My findings are reflected in this VMware article for future reference: here
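For anyone who hits the same thing, here's a rough sketch of the rule I tripped over, with made-up addresses: port binding is only valid when every bound adapter shares the target portal's subnet; otherwise you're routing, and the unbound software initiator is the way to go.

```python
import ipaddress

def port_binding_ok(bound_vmk_ifaces, target_portal):
    """Port binding requires every bound vmkernel port to share the
    target portal's subnet/broadcast domain."""
    portal = ipaddress.ip_address(target_portal)
    return all(portal in ipaddress.ip_interface(vmk).network
               for vmk in bound_vmk_ifaces)

# Hardware HBA bound to the VLAN 20 ports only -> fine for the in-chassis group.
print(port_binding_ok(["10.20.0.51/24", "10.20.0.52/24"], "10.20.0.10"))  # True

# The same bound ports cannot also serve the VLAN 21 external group via port binding.
print(port_binding_ok(["10.20.0.51/24", "10.20.0.52/24"], "10.21.0.10"))  # False
# -> the unbound software iSCSI initiator reaches 10.21.0.10 via the route table instead.
```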
Thanks again, Don, and if you have any suggestions on improving my configuration, please let me know.
Thanks,
Danny
dasheeown
8 Posts
0
January 4th, 2017 09:00
Don,
That's interesting. I'll have to reach out to my VMware rep and see if I can put up a test host to try it out.
In the meantime, I have both 10Gb adapters bound to the software iSCSI HBA now, and have manually added all six paths to the external group disk. With round robin configured I'm getting very decent speeds, and I can see within the EqualLogic statistics that I'm load balanced pretty evenly across all ports.
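Just to illustrate what round robin is doing across those six paths (the path labels below are made up, and the real VMware policy switches paths after a number of I/Os rather than per I/O):

```python
from itertools import cycle, islice

# Six paths to the external group disk: 2 bound adapters x 3 placeholder target ports.
paths = [f"vmk{v} -> eth{e}" for v in (1, 2) for e in (0, 1, 2)]

# Round robin simply rotates I/O across the paths, which is why the EqualLogic
# port counters end up roughly even.
for i, path in enumerate(islice(cycle(paths), 12)):
    print(f"I/O {i:2d} via {path}")
```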
Thanks again for all your help