
July 24th, 2011 02:00

VNXe3100 with direct attached ESXi 4.1 hosts

Hi,

We're currently implementing a VNXe 3100 with two direct-attached ESXi 4.1 hosts accessing VMFS datastores on the 3100. Ideally we'd be using switches rather than direct-attaching the hosts to the VNXe, but the client's budget currently doesn't stretch to that (although that is under discussion). I'm having a few issues getting path failover working smoothly and haven't been able to find any documentation about direct-attaching ESXi hosts.

On the VNXe3100 I've created two iSCSI servers, one for each SP. I've also created two VMFS datastores, configured so that one datastore is attached to each iSCSI server - so one datastore is presented per SP. I've configured the VNXe3100 so that eth2 on each SP is in one subnet and eth3 on each SP is in another, different subnet - I believe this separation is necessary for fail-safe networking to work correctly.

The ESXi hosts each have two NICs for iSCSI traffic. ESXi host 1 is connected to eth2 on SPA and eth2 on SPB; host 2 is connected to eth3 on SPA and SPB. This means the two vmkernel IP addresses on ESXi host 1 are in the same subnet (the subnet used by the eth2 ports on the VNXe3100), and ESXi host 2's iSCSI NICs are in the same subnet as the eth3 ports on the VNXe3100.

So the VNXe and hosts are cabled:

SPA
Eth2: 192.168.102.1  <---> ESXi Host 1 iSCSI0: 192.168.102.100
Eth3: 192.168.103.1  <---> ESXi Host 2 iSCSI0: 192.168.103.100

SPB
Eth2: 192.168.102.2  <---> ESXi Host 1 iSCSI1: 192.168.102.101
Eth3: 192.168.103.2  <---> ESXi Host 2 iSCSI1: 192.168.103.101
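
For reference, here's roughly how the matching vmkernel ports were created on ESXi Host 1 from the console (the vSwitch and port group names here are illustrative, not necessarily what you'd use):

    esxcfg-vswitch -A iSCSI0 vSwitch1        # port group for the first iSCSI vmknic
    esxcfg-vmknic -a -i 192.168.102.100 -n 255.255.255.0 iSCSI0
    esxcfg-vswitch -A iSCSI1 vSwitch1        # port group for the second iSCSI vmknic
    esxcfg-vmknic -a -i 192.168.102.101 -n 255.255.255.0 iSCSI1
    esxcfg-vmknic -l                         # verify both vmknics are up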

The issues I'm seeing:

When I scan the iSCSI initiator on the hosts I see two VMFS datastores (which is correct), although each has only a single path, where I'd normally expect to see two paths (with only one active). Additionally, when I disconnect the cable from ESXi Host 1 iSCSI0 to SPA eth2, I lose access to the datastore that is accessed via SPA (probably because there is only a single path). However, if I then rescan the iSCSI HBA on ESXi Host 1, I pick the dropped datastore back up again (the VNXe3100 has failed the IP address over from SPA eth2 to SPB eth2, and the rescan picks the datastore up on SPB eth2). Obviously I'd like the failover to work without requiring a rescan, but I'm wondering whether the behaviour I'm seeing means I've misconfigured something, or whether this is expected behaviour with direct-attached hosts.
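
For what it's worth, these are the commands I've been using to check paths and force the rescan (vmhba33 stands in for the software iSCSI adapter name, which may differ on your hosts):

    esxcfg-mpath -b          # list each device with its paths - currently one path per datastore
    esxcfg-rescan vmhba33    # after a cable pull, this picks the datastore back up on the other SP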

Either way, any comments would be welcome.


Cheers,
Duncan

2 Intern • 727 Posts

July 25th, 2011 10:00

When I scan the iSCSI initiator on the hosts I see two VMFS datastores (which is correct), although each only has a single path when I'd normally expect to see two paths (but only one active).

Avi: At any point in time, the iSCSI server runs on only one SP. If that SP fails for whatever reason, the iSCSI server will automatically fail over to the other SP. In other words, only one SP knows about the existence of the iSCSI server (and its datastore) at any given time. This explains why you see only one path to the datastores.
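
You can confirm this from the host with the native multipathing commands (ESXi 4.x syntax; the device and adapter names will differ on your system):

    esxcli nmp device list   # shows each VNXe LUN with its (single) active path
    esxcli nmp path list     # lists all paths the host currently knows about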

Additionally when I disconnect the cable from ESXi Host 1 iSCSI0 to SPA eth2, I lose access to the datastore that is accessed via SPA (probably because there is only a single path). However, if I then rescan the iSCSI HBA on ESXi Host 1 I'll pick the dropped datastore back up again (as the VNXe3100 has failed the IP Address from SPA eth2 to SPB eth2 and the rescan picks the datastore up on SPB eth2).

Avi: When you say the datastores become unavailable, does that imply you actually lost access to the datastore (data unavailability, DU), or only that you need to rescan to see the datastore again? Was I/O running when you tested disconnecting the cable?

We've tested direct-attached configurations with iSCSI. In our cable-pull tests we see an approximately 30-second pause in I/O, with no loss of access from an application point of view. What kind of HBA card are you using?
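
You can check which adapter is in play from the host itself (ESXi 4.1 syntax; the software iSCSI initiator shows up with the iscsi_vmk driver):

    esxcfg-scsidevs -a   # lists storage adapters with their drivers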

11 Posts

July 27th, 2011 02:00

Hi Avi,

Thanks for the info. Based on that I've done some more testing of the ESXi iSCSI configuration and think I've figured out where I went wrong. I had set up a single vSwitch with two vmknics and two vmnics, and configured the iSCSI vmknics with explicit vmnic bindings, so each iSCSI vmknic had only one active adapter (the other adapter on the iSCSI vSwitch being set to unused).

Once I changed the other adapter from unused to standby, everything worked as expected: when I pulled the cable on the active adapter, the vmknic failed over to the standby adapter which, since it was plugged into the equivalent port on the other SP, could then see the expected VNXe IP.
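
For anyone hitting the same thing: the active/standby change itself is made in the vSphere Client (port group properties > NIC Teaming > failover order), but you can sanity-check the result from the console (names illustrative):

    esxcfg-vswitch -l   # confirm both vmnics are uplinks on the iSCSI vSwitch
    esxcfg-nics -l      # confirm the link state of each vmnic after a cable pull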

Cheers,

Duncan

1 Message

August 15th, 2011 15:00

Hi Duncan,

I have the same scenario, but failover didn't work with standby adapters - only with both NICs active, as shown on page 40 of the iSCSI SAN Configuration Guide. We're using the software initiator bound to the two physical NICs, but we see that all paths from vmhba35 (the software iSCSI adapter) go to just one SP.
My understanding is that, for failover to work, there should be paths to both SPs - is that right?

Another question: did you configure jumbo frames on both the VNXe and the ESXi hosts?

If yes, is it just a matter of setting the MTU to 9000 on the vmknics and the vSwitch, and on the VNXe going to Advanced Configuration for the I/O modules and changing the MTU size from 1500 to 9000?

My understanding is that an MTU of 9000 is highly recommended for VMware datastores - is that right?
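
To make sure I understand, is it something like this on the ESXi side? (Just a sketch with illustrative names; as I understand it, in ESXi 4.1 a vmknic's MTU can't be changed in place, so the vmknic has to be removed and re-added.)

    esxcfg-vswitch -m 9000 vSwitch1                                       # raise the vSwitch MTU first
    esxcfg-vmknic -d iSCSI0                                               # remove the existing vmknic
    esxcfg-vmknic -a -i 192.168.102.100 -n 255.255.255.0 -m 9000 iSCSI0   # re-add it with MTU 9000
    vmkping -d -s 8972 192.168.102.1   # verify end to end: 8972 = 9000 minus IP/ICMP headers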

Thank you.
