September 29th, 2011 00:00
Celerra will not add new LUNs: RAID5(-1+1), doesn't match any storage profile
Hi,
I just added a CX-2PDAE0-FD DAE to our old Celerra NS20. I used USM to add the disk shelf, and since we run the Celerra back end at 2 Gbps (instead of 4 Gbps), everything went fine.
Once the DAE was added, I used Unisphere to connect to the Clariion (CX3-10) side of the Celerra and configured a RAID 5 (4+1) RAID group on 5 x 300 GB FC drives. I then created two LUNs of the same size, one on each SP, and added them to the NS20 storage group. All normal so far.
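For what it's worth, the same setup can be sanity-checked from the Navisphere CLI; something like the following, where the RG ID and storage group name are placeholders for mine:
naviseccli -h <SP_A_IP> getrg 10 (shows the RAID type and the member disks)
naviseccli -h <SP_A_IP> storagegroup -list -gname NS20_SG (confirms both LUNs made it into the group)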
When I rescan the Celerra, I get the following:
17716810659: server_2 c16t1l6 skipping unmarked disk with health check error,
CK200074600886 stor_dev=0x0016, RAID5(-1+1), doesn't match any storage profile
For some reason it cannot read the number of disks in the RAID group (it should be 4+1). I googled this and searched on Powerlink but have found nothing.
Has anyone seen this error and know of a fix?
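In case it matters, by "rescan" I mean the standard Celerra device probe from the Control Station (Unisphere's rescan does the equivalent); if I remember the syntax correctly:
server_devconfig server_2 -create -scsi -all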
Kind Regards.
davow
October 2nd, 2011 23:00
Hi,
I fixed this myself. For anyone who searches on this, the answer is:
I rebooted both storage processors on the Clariion side first (one at a time, obviously).
I then rebooted the standby Data Mover, and suddenly the Celerra could see the DAE and the disks properly. I did not reboot the primary Data Mover (although I will be doing that just for good measure).
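For anyone doing the same, the reboot and status check can be run from the Control Station along these lines (server_3 is my standby slot, yours may differ):
server_cpu server_3 -reboot -monitor now
/nas/sbin/getreason (shows the boot state of each slot once it comes back)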
Cheers
davow
September 29th, 2011 18:00
A bit more information: if I unbind the LUNs on the new DAE, remove the RAID group, and then run /nas/sbin/setup_clariion -init on the Celerra, I get the following:
Enclosure(s) 0_0,0_1 are installed in the system.
Enclosure info:
----------------------------------------------------------------
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
----------------------------------------------------------------
0_1:
INV NE NE NE NE NE NE NE NE NE NE NE NE NE NE NE UB
----------------------------------------------------------------
0_0: 146 146 146 146 146 146 300 300 300 300 300 300 300 300 300
FC 0 0 0 0 0 HS HS 1 1 1 1 1 1 1 1 MIX
----------------------------------------------------------------
"*" indicates a diskgroup/spare which will be configured
So the Celerra can see the enclosure but not the disks? What do NE and INV mean?
Do I need to reboot the Celerra/Data Movers? I have confirmed the DAE and disks are compatible with the Clariion CX3-10, so I cannot see how the Celerra would have anything against them. Is it possible the firmware on the drives is too old?
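On the firmware question, the per-disk details, including state and firmware revision, can be pulled from the Navisphere CLI with something like this (0_1_0 meaning bus 0, enclosure 1, disk 0):
naviseccli -h <SP_A_IP> getdisk 0_1_0
That should show whether a drive is down-rev or sitting in an odd state.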
Thanks.
dynamox
September 29th, 2011 18:00
What FLARE is on the array? Can you see these drives OK in Navisphere?
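If you're not sure of the FLARE release, naviseccli -h <SP_IP> getagent should report it; as far as I know the Revision field in the output is the FLARE version.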
davow
September 29th, 2011 19:00
Hi,
I'm using FLARE 3.26.010.5.027, which is the latest bar one or two updates (I think the latest is .029).
The DART code is 6.0.36.
I'm using Unisphere and yes, I can see everything as normal. Nothing looks amiss from the Clariion side, only from the Celerra side.
thanks
Rainer_EMC
September 30th, 2011 03:00
I think INV stands for "invalid".
You don't have to use setup_clariion; you can just as well use the storage provisioning wizard or create the LUNs manually with NaviSphere.
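From the CLI, a manual bind would look roughly like this (the LUN ID, RG ID, and capacity below are placeholders):
naviseccli -h <SP_A_IP> bind r5 20 -rg 10 -cap 100 -sq gb -sp a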
see http://nasweb/doc/custsvc/landing_page/common/post_install/Config_storage_FC_NaviManager.htm
I would suggest opening a service request for this; there are too many things to check for this to work effectively on a forum.
Rainer
davow
September 30th, 2011 05:00
Hi Rainer,
I created a ticket. What is that URL for? It does not work, since nasweb is not an FQDN.
I don't have any issues creating RAID groups or LUNs on the Clariion; I was simply providing some further information. The RAID groups and LUNs look fine in Unisphere on the Clariion side. The problem only appears when I rescan from the Celerra.
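For completeness, nas_disk -list on the Control Station shows which disk volumes the Celerra has actually marked, which is how I can tell the new LUNs are not coming through.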
Kind Regards,
David