July 24th, 2011 02:00
VNXe3100 with direct attached ESXi 4.1 hosts
Hi,
We're currently implementing a VNXe 3100 with two direct-attached ESXi 4.1 hosts accessing VMFS datastores on the 3100. Ideally we'd be using switches rather than direct-attaching the hosts to the VNXe, but the client's budget currently doesn't stretch to that (although that is under discussion). I'm having a few issues getting path failover working smoothly and haven't been able to find any documentation on direct-attaching ESXi hosts.
On the VNXe3100 I've created two iSCSI servers, one on each SP. I've also created two VMFS datastores, configured so that one datastore is attached to each iSCSI server - i.e. one datastore is presented per SP. On the network side, eth2 on both SPs is in one subnet and eth3 on both SPs is in a different subnet - I believe this separation is necessary for fail-safe networking to work correctly.
The ESXi hosts each have two NICs dedicated to iSCSI traffic. ESXi host 1 is connected to eth2 on SPA and eth2 on SPB; host 2 is connected to eth3 on SPA and SPB. This means the two vmkernel IP addresses on ESXi host 1 are in the same subnet as the eth2 ports on the VNXe3100, and ESXi host 2's iSCSI vmkernel addresses are in the same subnet as the eth3 ports.
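For reference, this is roughly how the two iSCSI vmkernel ports can be bound to the software initiator from the ESXi 4.1 console - the vmk and vmhba numbers below are only examples and will differ per host:

    # list vmkernel ports and confirm which vmk numbers are the iSCSI ones
    esxcfg-vmknic -l
    # bind both iSCSI vmkernel ports to the software iSCSI adapter
    # (vmk1/vmk2 and vmhba33 are example names, not necessarily yours)
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    # confirm the bindings
    esxcli swiscsi nic list -d vmhba33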
So the VNXe and hosts are cabled:
SPA
Eth2: 192.168.102.1 <---> ESXi Host 1 iSCSI0: 192.168.102.100
Eth3: 192.168.103.1 <---> ESXi Host 2 iSCSI0: 192.168.103.100
SPB
Eth2: 192.168.102.2 <---> ESXi Host 1 iSCSI1: 192.168.102.101
Eth3: 192.168.103.2 <---> ESXi Host 2 iSCSI1: 192.168.103.101
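To rule out cabling, basic connectivity from each host to its pair of SP ports can be checked from the console with vmkping, e.g.:

    # from the ESXi Host 1 console - both SP ports on the eth2 subnet
    vmkping 192.168.102.1
    vmkping 192.168.102.2
    # from the ESXi Host 2 console - the eth3 subnet equivalents
    vmkping 192.168.103.1
    vmkping 192.168.103.2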
The issues I'm seeing:
When I rescan the iSCSI initiator on the hosts I see the two VMFS datastores (which is correct), but each shows only a single path, where I'd normally expect to see two paths (with only one active). If I disconnect the cable from ESXi Host 1 iSCSI0 to SPA eth2, I lose access to the datastore served via SPA - presumably because there is only that single path. If I then rescan the iSCSI HBA on ESXi Host 1, the dropped datastore comes back, because the VNXe3100 has failed the IP address over from SPA eth2 to SPB eth2 and the rescan picks the datastore up on SPB eth2. Obviously I'd like the failover to work without requiring a manual rescan, but I'm wondering whether the behaviour I'm seeing is down to something I've misconfigured or whether it's expected behaviour with direct-attached hosts.
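For what it's worth, these are the sort of console commands I'd use to check the paths per datastore and to force the rescan described above (vmhba33 is just an example adapter name):

    # show each device and which paths/SP ports it is using
    esxcfg-mpath -b
    # show the path selection policy and working paths per LUN
    esxcli nmp device list
    # rescan the software iSCSI adapter and refresh VMFS volumes
    # (this is the manual step I currently need after pulling a cable)
    esxcfg-rescan vmhba33
    vmkfstools -V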
Either way, any comments would be welcome.
Cheers,
Duncan