
Unsolved


July 5th, 2010 19:00

DataStore no longer visible

A few weeks ago I completed a 400-seat View implementation at a school district.  This was slightly unusual for me because of the network setup.  The equipment is set up in two separate rows of racks.  There are four vSphere servers, one Cisco 4948, and a core switch in each row.  The NS-120 is in one of the rows.  The two core Cisco switches route to each other.  My task was to make the NS-120 the iSCSI target for all the hosts.

Because I couldn’t EtherChannel across the core switches, I had to connect two ports to each 4948.  Each of these connections was set up as an EtherChannel device of two ports each.  I created two interfaces, one on each device, with two IP addresses.  Because iSCSI targets do not have the IP address embedded in their IQN, I didn’t see any issues with this.  I set both target IP addresses on each host and masked two LUNs to every host.  All seemed to go well at this point.  Because of the way this View implementation needed to work, I created the datastore from two 1.5TB LUNs.  Again everything went well.  I created the VDMs and VMs needed and tested the whole setup.  It all worked great, with some great I/O throughput.

Two weeks later…  For a reason I do not know, all the vSphere hosts can no longer see the datastore.  All the hosts can see both LUNs, but the datastore is gone.  If I wanted to create a new datastore, the LUNs are available.  Now I don’t want to do that, for obvious reasons.

A ticket has been opened with VMware.  They are pointing towards the NS-120.  I’ve asked the client to open a ticket with EMC.

Visio and jpg attached.

The disks directory looks like this:

[root@tc-esxvs02 disks]# ls
naa.600508e0000000001d4039277be5c801
naa.600508e0000000001d4039277be5c801:1
naa.600508e0000000001d4039277be5c801:2
naa.600508e0000000001d4039277be5c801:3
naa.600508e0000000001d4039277be5c801:5
naa.6006048c4a6f4d339ec336b5508a0cba
naa.6006048c678c98d1f865294f2e89d77e
naa.6006048c678c98d1f865294f2e89d77e:1
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:1
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:2
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:3
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:5
vml.02000000006006048c4a6f4d339ec336b5508a0cba43656c657272
vml.02000100006006048c678c98d1f865294f2e89d77e43656c657272
vml.02000100006006048c678c98d1f865294f2e89d77e43656c657272:1

There is only one local datastore and one SAN datastore.

So, my questions are: How do I get the datastore back?  Did I configure this solution incorrectly?  I could see no other way to make it work given the setup of the core switches.
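For anyone following along, this is the sort of thing I'd check from the service console first (a sketch only, using the classic ESX 4.x console commands; the vmhba number is a placeholder for your iSCSI adapter):

```shell
# Rescan the iSCSI adapter and the VMFS layer (adapter name is a placeholder).
esxcfg-rescan vmhba33
vmkfstools -V

# If the host is detecting the volume as a snapshot (VMFS signature mismatch),
# it will be listed here instead of mounting automatically.
esxcfg-volume -l

# A snapshot-detected volume can then be force-mounted by its UUID or label:
# esxcfg-volume -M <VMFS-UUID|label>
```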

It is possible that I will now have to re-implement this whole solution if VMware/EMC cannot retrieve the datastore.


9 Legend • 20.4K Posts

July 5th, 2010 22:00

Is this Celerra iSCSI or native CLARiiON iSCSI?

1 Rookie • 60 Posts

July 6th, 2010 05:00

Celerra iSCSI.

1 Rookie • 60 Posts

July 6th, 2010 06:00

From my network specialist:

The 4948s are providing Layer 2 services as an access switch only so the "show ip route" at that layer won't provide much information.  The source and targets are in the same VLAN so they don't need to cross a Layer 3 boundary.  They do pass an Etherchannel through a pair of Cisco 4507s that serve as the core.

Loops wouldn't be a problem, since spanning tree is designed to block them and allow redundancy.  Since they can ping each other and there are no packet-filtering devices between them, the network layer isn't a likely suspect at this point.

1 Rookie • 60 Posts

July 6th, 2010 08:00

Additional information: each host has two dynamic discovery IP addresses, and each host sees one target with two LUNs and four paths.
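In case it helps anyone comparing notes, the per-device path state can be listed from the console like this (a sketch; the device ID is the Celerra LUN from my listing above):

```shell
# List every path and its state for one device (naa ID from the listing above).
esxcfg-mpath -l -d naa.6006048c678c98d1f865294f2e89d77e

# Compact summary of all devices and their paths.
esxcfg-mpath -b
```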

1 Rookie • 60 Posts

July 6th, 2010 09:00

I have just finished a call with VMware tech support.  I will be posting sections from the logs shortly.  It definitely looks like a network/Celerra issue.  The hosts can see the LUNs but cannot read them.  If you try to add storage using one of the LUNs, it hangs trying to read the disk.  The logs show many SCSI reservation issues.
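Since support flagged the reservation conflicts: if it came to it, a stuck SCSI reservation can be released from one host with vmkfstools (use with care, and from one host only; this is a sketch, with the device path taken from my listing):

```shell
# Reset a stuck SCSI reservation on the LUN (run from a single host).
vmkfstools -L lunreset /vmfs/devices/disks/naa.6006048c678c98d1f865294f2e89d77e
```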

One thing that confused me was that the first LUN (1.5TB) was showing as sdd and the extent (1.4TB) was showing as sdc.  When we set up the system we only had the one LUN; the extent LUN was added later.  Are these devices the wrong way round?
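As I understand it, the /dev/sdX console names are handed out in discovery order and can change across rescans and reboots, so sdc/sdd swapping isn't by itself a fault; the stable identifier is the naa ID.  The mapping can be checked from the console (sketch):

```shell
# Map console device names (/dev/sdX) to their stable naa identifiers.
esxcfg-scsidevs -c
```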

1 Rookie • 60 Posts

July 6th, 2010 15:00

UPDATE.  The EMC support center failed the Data Mover over and back, and the datastore came back.  OK, so now I'm happy for a while.  However, what could have happened to make this occur?  Is there an issue with the design of the setup?  Is this going to happen again?

This is the current list of disks in the /vmfs/devices/disks directory.  Is this correct?


naa.600508e0000000001d4039277be5c801
naa.600508e0000000001d4039277be5c801:1
naa.600508e0000000001d4039277be5c801:2
naa.600508e0000000001d4039277be5c801:3
naa.600508e0000000001d4039277be5c801:5
naa.6006048c4a6f4d339ec336b5508a0cba
naa.6006048c4a6f4d339ec336b5508a0cba:1
naa.6006048c678c98d1f865294f2e89d77e
naa.6006048c678c98d1f865294f2e89d77e:1
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:1
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:2
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:3
vml.0200000000600508e0000000001d4039277be5c8014c6f67696361:5
vml.02000000006006048c4a6f4d339ec336b5508a0cba43656c657272
vml.02000000006006048c4a6f4d339ec336b5508a0cba43656c657272:1
vml.02000100006006048c678c98d1f865294f2e89d77e43656c657272
vml.02000100006006048c678c98d1f865294f2e89d77e43656c657272:1
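Side note, as far as I can tell: the trailing hex in those vml names decodes to ASCII and identifies the array vendor string, which makes it easy to tell the local device apart from the Celerra LUNs:

```shell
# The hex suffix of a vml ID decodes to part of the device's vendor string.
echo 4c6f67696361 | xxd -r -p; echo    # local device suffix  -> Logica
echo 43656c657272 | xxd -r -p; echo    # Celerra LUN suffix   -> Celerr
```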
