
May 14th, 2014 02:00

CSA health check failed: "Blade 2 is booting up (rc=0)"

Hi,

I have an EMC Celerra NX4 system that was working fine until a few days ago. I deleted the LUNs on the system to clear all the data on it. After this, I was no longer able to access the Unisphere console.

I then rolled the system back to the pre-CSA-configured state following article no. 43174 (KB: Restoring the Celerra and client back to a pre-CSA-configured state in order to rerun the CSA wizard) and tried to configure the system via CSA. Now CSA does not get past the health check for Blade 2: it gives the error "Blade 2 is booting up (rc=0)" at 46% completion (screenshot attached). The blade's status is shown as "0 - slot_2 reset" in the output of /nasmcd/getreason, and the blade's fault LED blinks amber continuously at varying rates.
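For reference, this is roughly how I have been checking the blade state from the Control Station. The reason-code meanings below are from memory, so please treat them as an assumption rather than documentation:

# check the boot state of the Control Station and both blades
/nasmcd/getreason    # on some NAS releases the path is /nas/sbin/getreason
# as far as I know, "5 - slot_2 contacted" would mean the blade has booted fully,
# while "0 - slot_2 reset" means it never got past (or fell back to) reset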


We then tried to install DART on the system, referring to article no. 50056 (KB: Performing a backend cleanup and factory reinstallation of NAS on an NX4 Celerra). However, the clariion_mgmt -stop command gives the error "NAS_DB environment variable not defined" (screenshot attached). We also cleaned the system manually according to the same article, but the same errors persist.
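For completeness, this is roughly what we tried for the NAS_DB error. It assumes the standard NAS_DB location of /nas, which is a guess on our side, so please correct me if the KB expects something different:

# set the NAS environment that the clariion_mgmt script expects
export NAS_DB=/nas
export PATH=$PATH:/nas/bin:/nas/sbin
# then rerun the step from the KB (the full path to the script may differ by
# release, e.g. /nas/sbin or /nasmcd/sbin)
clariion_mgmt -stop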

Any idea how to get the system back?



May 14th, 2014 02:00

I hope you haven't deleted the control LUNs (6 LUNs) from the CLARiiON (backend). Ensure the cabling between the data movers and the SPs (SP-A and SP-B) is also good and that both SPs are up and running.

May 14th, 2014 03:00

All LUNs are destroyed; there are no LUNs, virtual disks, or RAID groups left on the CLARiiON (please find the screenshots attached).

But I can still see the output of df -h as below:

[root@SO-INMU-EMCCS ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/hda3             2.0G  1.6G  360M  82% /

/dev/hda1             259M   12M  237M   5% /boot

none                 1013M     0 1013M   0% /dev/shm

/dev/mapper/emc_vg_pri_ide-emc_lv_home

                      591M  285M  277M  51% /home

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backup

                      827M   97M  689M  13% /celerra/backup

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backendmonitor

                      7.8M  1.2M  6.3M  16% /celerra/backendmonitor

/dev/hda5             2.0G  1.2G  721M  63% /nas

[root@SO-INMU-EMCCS ~]#



May 14th, 2014 03:00

Have you checked for the Celerra control LUNs?

2 x 11 GB, 3 x 2 GB, and 1 x 32 GB (I believe for the NX4 the last one is 32 GB). They must be present for the Celerra to work.
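If the NAS commands are unavailable, a rough way to check from the backend side is with Navisphere CLI. This is only a sketch: the SP address is a placeholder, and it assumes a Navisphere security file is already configured (otherwise add -User/-Password/-Scope):

# list the LUNs and their capacities on the CLARiiON backend (replace <SP-A IP>)
naviseccli -h <SP-A IP> getlun -capacity
# the six Celerra control LUNs should show up here with roughly the sizes above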

May 14th, 2014 03:00

Hi,

The backend cleanup is done, and all the RAID pools are also destroyed.

There are no virtual disks or RAID groups on the CLARiiON.

The cabling is fine, and both SPs are up and running.

May 14th, 2014 05:00

Thanks for the reply

We tried booting from the Express Install DVD as well.

As soon as we reboot the Control Station as described in the KB, the boot menu is not displayed, i.e. the DVD does not boot.

The Control Station's default OS boots instead.

One more thing: the boot menu asks for a password when booting through the BIOS. How do I get that password? Is there a default password for this?

Is there any other workaround to boot from the DVD?


May 14th, 2014 05:00

As all LUNs are destroyed, the system is in a state well before the point where CSA can continue.

By the way, the data movers boot from the backend LUNs, which have been destroyed.

You need to reinstall NAS from scratch: use the install CD/DVD and perform a fresh install.

Continue from step 9 of KB 50056.
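Once the fresh install completes, a quick sanity check from the Control Station would look roughly like this (just a sketch; command paths can differ between NAS releases):

# confirm the Control Station and both blades report a healthy boot state
/nas/sbin/getreason
# run the built-in health check once the NAS services are up
/nas/bin/nas_checkup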


May 14th, 2014 23:00

Are you connected using the serial port of the CS?

May 15th, 2014 01:00

Yes, connected via a serial cable to the back of the CS.

May 27th, 2014 01:00

Hi,

We were able to boot the DVD and have also completed the factory reinstallation of NAS.

The 6 control LUNs were created on the backend (please find attached), but we are unable to see the NAS volumes.

Are these NAS volumes created as part of the CSA process, or do we need to do some configuration before CSA? (LUNs.jpg attached)
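For reference, this is roughly what we were running on the Control Station to look for the volumes (assuming the NAS environment is sourced; we may well be missing a step here):

# list the backend disks that NAS can see
nas_disk -list
# list the NAS volume table
nas_volume -list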

Regards,

Ashish Indoriya
