
September 14th, 2016 06:00

Group Discovery

If I have 3 PS6210 arrays in a group connected to an ESXi 6 cluster via iSCSI, do the vmnics learn all EQ eth ports during the initial discovery? I know the group IP is used for initial discovery, so I'm looking for the relationship between the host NICs and the EQ NICs after that.
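In case it's useful, here's how I've been poking at it from the ESXi shell (vmhba33 is just a stand-in for whatever the software iSCSI adapter is named on the host):

    # Discovery should list only the EQ group IP as the send target
    esxcli iscsi adapter discovery sendtarget list --adapter=vmhba33

    # Each live connection shows the address it was redirected to after
    # logging in at the group IP (the RemoteAddress field), which as I
    # understand it is the host-to-EQ-eth-port relationship in question
    esxcli iscsi session connection list --adapter=vmhba33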

We're considering a live, controlled failover from our 1g switches to 10g switches, but we can't LAG the 10g switches to the 1g switches. All network connections are redundant (teamed NICs, etc.). I can give more details about the plan, but it depends on the answer about the eth relationships.

Thanks.

September 14th, 2016 09:00

That answers my question, but whether I fully understand it is another matter. Here is the context.


The plan would be to unplug eth0 on each inactive EQ controller from the 1g switch, plug it into the 10g switch, and then manually fail over the controllers on each EQ. Now I have a 1g and a 10g connection on each EQ. Since the 1g and 10g switches are not LAGged together, eth0 and eth1 can't see each other, so that is potential problem number 1. Also, on the host side, what happens when I then unplug one of the two teamed iSCSI NICs? They use Route Based on Originating Port ID. All connections should fail over to the one active vmnic, but will they know how to get to the single 1g connection on the EQ, or will the ones that were connected to eth1 on the EQ keep looking for eth1 and freak out when it can't be found?
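For what it's worth, this is how I was planning to sanity-check that step (the group IP 192.168.100.10 and the vmk/vmhba numbers are just placeholders for mine):

    # Before pulling a cable: confirm the surviving iSCSI vmkernel port
    # can still reach the group IP (192.168.100.10 is a placeholder)
    vmkping -I vmk2 192.168.100.10

    # After pulling it: watch the sessions re-login and note which EQ
    # eth port (RemoteAddress) each one lands on now
    esxcli iscsi session connection list --adapter=vmhba33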

September 14th, 2016 11:00

My apologies for the post. I know I'm giving info in dribs and drabs, so no worries if you don't have time to sort it all out. I did open a ticket with Dell and submitted my plan, and the engineer said it looks good, but I had this lingering doubt about the non-LAGged switches, and your explanations are always helpful.

In case you're still reading:

I'm set up per TR1091, and I have the Dell PSP module for MPIO. I have the failover order set up under the NIC Teaming tab on the host, so that's why I referred to teaming.
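In case it matters, this is what I'm looking at to confirm that setup ("iSCSI-1" is just what I named one of my port groups, and vmhba33 again stands in for the software iSCSI adapter):

    # Volumes should show the EqualLogic PSP from the MEM kit
    # (DELL_PSP_EQL_ROUTED) as the Path Selection Policy
    esxcli storage nmp device list | grep -i "Path Selection Policy"

    # Failover order (Active/Standby adapters) on one of the iSCSI
    # port groups
    esxcli network vswitch standard portgroup policy failover get -p "iSCSI-1"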

The hosts have 4 10g NICs each, 2 for iSCSI and 2 for vMotion. They auto-negotiate down to 1g on the 1g switches, just like the EQs. When I refer to changing from 1g to 10g, I just mean I'm moving from a port on the 1g switch to a port on the 10g switch.
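To confirm what actually negotiated at each step, I'm planning to just watch the NIC list (the Speed column should read 1000 on the old switch and 10000 after the move):

    # Speed column shows what each vmnic actually negotiated
    esxcli network nic list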

I thought I was getting around the non-LAGged issue between the different-speed switches this way:

1. Unplug one vmnic and make all connections move to the remaining vmnic, which is still connected via 1g. Wait a few minutes.
2. Plug the disconnected vmnic into the 10g switch and wait a few minutes before doing anything else. At that point, one of the vmnic pair is on 1g and one is on 10g, and the EQ has one 1g and one 10g connection.
3. If I haven't crashed anything at that point (the big question), unplug the remaining 1g vmnic, wait a few minutes, then plug it into the 10g switch. After a few minutes, both vmnics are communicating over 10g to the single 10g EQ port.
4. Move both of the inactive EQ controller ports to 10g and manually roll over to them (for all EQs in the group). At that point, all active connections are 10g.
5. Move the remaining EQ NIC on the now-inactive controllers to 10g.
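Between each of those steps, these are the checks I'd run before touching the next cable (same placeholder adapter name as above):

    # Every volume should still have at least one logged-in session
    esxcli iscsi session list --adapter=vmhba33

    # And the NMP paths should all report an active group state
    esxcli storage nmp path list | grep -i "Group State"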



September 14th, 2016 11:00

OK, thanks. The iSCSI VMkernel ports are each set up Active/Standby. My vMotion ones are set for Active/Unused. Is that still OK?

September 14th, 2016 13:00

Yes, the VMkernel is set to Active/Unused and the associated port group is set to Active/Standby. The iSCSI software adapter shows Compliant. I was looking at the port group on my vSwitch when I wrote before, so sorry about the confusion there.
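That compliance status is also visible from the command line, which is how I double-checked it (placeholder adapter name again):

    # Bound vmknics and their Compliant Status for the software iSCSI adapter
    esxcli iscsi networkportal list --adapter=vmhba33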

My vMotion vSwitch is set up with one VM port group and Active/Standby VMkernels.

Thanks.
