October 1st, 2013 08:00

c160t0l0 No such device or address

I started this process months ago and I'm just now getting back to it.  Forgive me if I'm all over the place.

The bottom line is I've got an NS480 running 6.0.70-4.  When I select "Rescan All Storage Systems", it results in "c160t0l0 No such device or address". 

The back story isn't so cut and dried.  I'm in the process of removing old storage pools and adding new ones.  I had an issue where I couldn't delete an empty pool; Support assisted in shrinking the pool to release v86, which allowed us to delete d7.  During this time, I added new LUNs to the NAS pool, but Celerra never renamed them in the Storage Group Properties on the Clariion side.  Coming back to it now, I added a new LUN to the NAS storage group and rescanned the storage systems.  The new LUN was renamed, but it doesn't show up when I run nas_disk.  I then deleted d13 using nas_disk, which reverted the LUN to the name I originally gave it.  I removed that LUN and three other LUNs that never got renamed to the Celerra convention, then performed the rescan.  That rescan results in "c160t0l0 No such device or address".

Any ideas on how to proceed?

Thanks,

Antonio

Update0:

nas_summary shows 6 disk volumes, and nas_disk shows 6 LUNs in use.  There are two LUNs with the Celerra naming convention in the NAS storage group.  Is it safe to remove them, seeing as they're not listed in nas_disk or nas_summary?
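One way I've been cross-checking this is to count the in-use disk volumes from the nas_disk listing and compare against nas_summary.  A rough sketch of that check, using a made-up fragment of `nas_disk -list`-style output (the sample data, storage ID, and column layout here are my assumptions, not my real output):

```shell
#!/bin/sh
# Hypothetical sample of `nas_disk -list` output; the column layout is
# assumed and the storage ID is a placeholder.
nas_disk_list='id   inuse  sizeMB   storageID-devID      type   name       servers
1    y      11263    APM00000000000-0000  CLSTD  root_disk  1,2
2    y      11263    APM00000000000-0001  CLSTD  root_ldisk 1,2
7    y      2047     APM00000000000-0010  CLSTD  d7         1,2'

# Count in-use disk volumes (skip the header line, check the inuse column).
inuse_count=$(echo "$nas_disk_list" | awk 'NR > 1 && $2 == "y" { n++ } END { print n }')
echo "$inuse_count"
```

On the real box you'd pipe `nas_disk -list` straight into the awk instead of using a saved sample; if the count disagrees with nas_summary, that's one more sign the database and the array are out of sync.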

Update1:

Removed the two LUNs mentioned in Update0.  The scan still results in "c160t0l0 No such device or address".

Update2:

"server_devconfig ALL -p -s -a" shows 13 chain IDs with diskerr= unmarked.  I believe this is why the storage scan is failing, and I assume I need to clear all 13 of those chains before the scan will complete.  Any ideas on how to clear them?  One of the chains:

  chain= 160, scsi-160
    stor_id= 000000000000  celerra_id=
    tid/lun= 0/0 type= disk sz= 960449 val= -99 info= KASHYA KBOX3_SCOTTSDALE350000000001000000000000 diskerr= unmarked
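To pull all 13 offending chains out of the probe output in one pass, a small awk filter over the `server_devconfig` output works; this is a sketch using the one chain quoted above as captured sample text (on the live system you'd pipe the command output in instead):

```shell
#!/bin/sh
# Sample probe output, copied from the post; in practice replace the
# here-string with:  server_devconfig ALL -p -s -a | awk ...
probe_out='chain= 160, scsi-160
  stor_id= 000000000000  celerra_id=
  tid/lun= 0/0 type= disk sz= 960449 val= -99 info= KASHYA KBOX3_SCOTTSDALE350000000001000000000000 diskerr= unmarked'

# Print each chain header followed by any of its devices flagged unmarked.
unmarked=$(echo "$probe_out" | awk '
  /^chain=/            { chain = $0 }
  /diskerr= unmarked/  { print chain; print $0 }')
echo "$unmarked"
```

That gives a compact list of exactly which chain/tid/lun combinations carry the unmarked flag, which is handy to hand to Support.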

Update3:

I opened an SR with Support, but they weren't able to get very far.  They did come across the DB errors below.

  $ ./dbchk -p
  Error: did not find disk volume for 'c16t5l10+5 04305A005A005ANI+' at index '70'.
  Error: did not find disk volume for 'c16t5l11+5 04305B005B005BNI+' at index '71'.
  Error: Supressing some error checks, correct previous errors and rerun.

The output from "server_devconfig ALL -probe -scsi -all" shows a chain 16 / tid 5 / lun 11, but not 5/10.

  chain= 16, scsi-16
    stor_id= APMxxxxxxxxx  celerra_id=
    tid/lun= 0/0 type= disk sz= 11263 val= -5 info= DGC RAID 5 04300000000000NI
    tid/lun= 0/1 type= disk sz= 11263 val= -5 info= DGC RAID 5 04300100010001NI
    tid/lun= 0/2 type= disk sz= 2047 val= -5 info= DGC RAID 5 04300200020002NI
    tid/lun= 0/3 type= disk sz= 2047 val= -5 info= DGC RAID 5 04300300030003NI
    tid/lun= 0/4 type= disk sz= 2047 val= -5 info= DGC RAID 5 04300400040004NI
    tid/lun= 0/5 type= disk sz= 32767 val= -5 info= DGC RAID 5 04300500050005NI
    tid/lun= 5/11 type= disk sz= 5242879 val= -5 info= DGC RAID 5 04305B005B005BNI
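To confirm which of dbchk's missing volumes the probe actually sees, I checked the tid/lun pairs it complained about (5/10 and 5/11) against the probe output.  A sketch of that check over an abridged copy of the output above (the abridgment is mine):

```shell
#!/bin/sh
# Abridged chain-16 probe output from above; dbchk complained about
# c16t5l10 and c16t5l11, so check which of those the probe reports.
probe_out='  tid/lun= 0/5 type= disk sz= 32767 val= -5 info= DGC RAID 5 04300500050005NI
  tid/lun= 5/11 type= disk sz= 5242879 val= -5 info= DGC RAID 5 04305B005B005BNI'

result=$(for pair in 5/10 5/11; do
  if echo "$probe_out" | grep -q "tid/lun= $pair "; then
    echo "$pair present"
  else
    echo "$pair missing"
  fi
done)
echo "$result"
```

This agrees with what I saw by eye: 5/11 is present in the probe but 5/10 is gone, while dbchk still expects both at indices 70 and 71.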

Update4:

server_snmpwalk results in "the lockbox stable value threshold was not met because the system fingerprint has changed."

I ran /nas/sbin/cst_setup -reset, as suggested in emc257327, and it cleared that up.

That didn't fix my dbchk -p or server_devconfig issues, but would it make sense that the issues I'm seeing are because changes were made on the Clariion side while the lockbox was off?
