May 17th, 2012 16:00

Celerra Manager fails after a stripe command

Hello,

On my Celerra NS480, I cannot use the Celerra Manager GUI or the Control Station.

-> I created 3 RAID 0+1 LUNs on the CX480 backend for the X-Blades.

-> I rescanned the new disks in Celerra Manager; that was OK, I could see the new disks.

-> I created a stripe with these 3 disks (first step towards a volume).

-> Just after that, I lost the connection: impossible to open a new one.

-> On the Control Station, if I use the command

# nas_disk -list

the terminal freezes and I have to open another SSH connection.

I suspect that my stripe command was wrong, but why, and what can I do now?

Thanks if someone has an idea.

Bye
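For context, the stripe in that step is normally built on the Control Station with nas_volume. A minimal sketch; the disk names d7,d8,d9 and the 32 KB stripe depth below are assumptions, check your actual disk volumes with nas_disk -list first:

```
# List disk volumes; the d-names below are hypothetical examples
nas_disk -list

# Create a stripe volume across the three new disks
# (-Stripe takes the stripe depth in bytes, e.g. 32768 = 32 KB)
nas_volume -name stv1 -create -Stripe 32768 d7,d8,d9
```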

May 18th, 2012 01:00

Hi,

Can you please execute the following commands and paste the output:

1) /nasmcd/sbin/getreason

2) df -k

Thanks

Vanitha


May 18th, 2012 05:00

Hi,

Sure. I have:

[nasadmin@NS480P ~]$ /nasmcd/sbin/getreason

10 - slot_0 primary control station

5 - slot_2 contacted

5 - slot_3 contacted

[nasadmin@NS480P ~]$ df -k

Filesystem           1K-blocks      Used Available Use% Mounted on

/dev/hda3              2063536    999048    959664  52% /

/dev/hda1               124427      8789    109214   8% /boot

none                   1036784         0   1036784   0% /dev/shm

/dev/mapper/emc_vg_pri_ide-emc_lv_home

                        604736     16940    557076   3% /home

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backup

                        846632     52204    751420   7% /celerra/backup

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backendmonitor

                          7931      1554      5968  21% /celerra/backendmonitor

/dev/nde1              1739292    851744    799196  52% /nbsnas

/dev/hda5              2063504    658172   1300512  34% /nas

/dev/nda1               136368     56944     79424  42% /nbsnas/dos

/dev/mapper/emc_vg_lun_0-emc_lv_nbsnas_jserver

                       1427184    257904   1096784  20% /nbsnas/jserver

/dev/mapper/emc_vg_pri_ide-emc_lv_nas_jserver

                       1427184    260612   1094076  20% /nas/jserver

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var

                         99150      5669     88361   7% /nbsnas/var

/dev/mapper/emc_vg_lun_0-emc_lv_nas_var_dump

                       1705344     35356   1583360   3% /nbsnas/var/dump

/dev/mapper/emc_vg_lun_0-emc_lv_nas_var_auditing

                        118997      5664    107189   6% /nbsnas/var/auditing

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var_backup

                        846632     53448    750176   7% /nbsnas/var/backup

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var_emcsupport

                        564416    137076    398668  26% /nbsnas/var/emcsupport

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var_log

                        210215     15867    183494   8% /nbsnas/var/log

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_ccc

                        564416     16840    518904   4% /celerra/ccc

For example, I cannot even do an "ls" in /nas: no answer.

Strange, no?


May 18th, 2012 05:00

Hi,

Try rebooting the Control Station, then access the CLI again.

Make sure you add LUNs only from RAID group configurations supported by the Celerra.

See knowledge base article "emc138143" for the supported RG configurations.
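If the CLI session is wedged, the Control Station reboot is done from a fresh SSH session; a minimal sketch, assuming you can still log in as nasadmin:

```
# Become root on the Control Station
su -

# Reboot the Control Station; the Data Movers continue
# serving data while the Control Station restarts
/sbin/reboot
```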

Sameer Kulkarni
