
November 27th, 2010 08:00

FC pool used for celerra

hello all,

We have a new Celerra installed (an NS480, all RAID 5). One thing that struck me was seeing FC_Pool and the performance pool together. I am not sure about this FC_Pool. Was it created without AVM?

The other pool I am aware of is clarata, for SATA. Can someone please help me understand what FC_Pool, holding 10 TB of storage, could be?

[nasadmin@USMAN ~]$ nas_pool -l
id      inuse   acl     name
3       n       0       clar_r5_performance
42      y       0       FC_Pool

[nasadmin@USMAN ~]$ nas_disk -info d12
id        = 12
name      = d12
acl       = 0
in_use    = True
pool      = FC_Pool
size (MB) = 1099315
type      = CLSTD
protection= RAID5(4+1)
stor_id   = APM00101300887
stor_dev  = 0018
volume_name = d12
storage_profiles = clar_r5_performance
virtually_provisioned = False
mirrored  = False
servers   = server_2,server_3
   server = server_2          addr=c0t1l8
   server = server_2          addr=c16t1l8
   server = server_3          addr=c0t1l8
   server = server_3          addr=c16t1l8

thank you for reading


November 29th, 2010 16:00

Hi,

No. You can upgrade to 5.6 with LUN 5 still at 2 GB.

That rule applies only to new installs.

Gustavo Barreto.


November 27th, 2010 08:00

That is not a default system pool name; somebody must have defined it manually.


November 27th, 2010 11:00

Hi,

dynamox is correct. It was certainly manually created.

Use the following command to see its members and configuration:

nas_pool -info  FC_Pool

If you want to know when it was created, use :

cat /nas/log/cmd_log* |grep nas_pool

You might need to gunzip the older cmd_log* files under /nas/log, depending on when it was created.
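As an alternative to gunzipping by hand, zgrep (shipped with gzip) can search plain and compressed logs in one pass. A minimal sketch against a scratch directory, since /nas/log only exists on a Control Station; the log lines below are made up for the demo, and the real cmd_log entries will look different. On a real system you would just run the last line against /nas/log/cmd_log*:

```shell
# Demo in a scratch directory; substitute /nas/log/cmd_log* on a real
# Control Station. zgrep reads plain and gzip-rotated files transparently.
logdir=$(mktemp -d)
printf '%s\n' '2010-11-20 nasadmin : nas_pool -create FC_Pool' > "$logdir/cmd_log"
printf '%s\n' '2010-10-01 nasadmin : nas_disk -l' > "$logdir/cmd_log.1"
gzip "$logdir/cmd_log.1"

# -h suppresses filename prefixes so the output is just the matching lines
zgrep -h nas_pool "$logdir"/cmd_log*
```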

Gustavo Barreto.


November 28th, 2010 11:00

Hi,

I would really suggest taking the time to read the two AVM and MVM manuals to get a better understanding of what pools are and how they work.

Rainer


November 29th, 2010 11:00

Thanks Gustavo and dynamox ; and EMC_rainer .

I was fairly sure it was manually created, too. The reason I posted is that I observed two 11 GB LUNs and three 2 GB LUNs (instead of four 2 GB ones), with the fourth configured as 32 GB (HLU 5), among the first six user LUNs (HLU 0-5). That is not mentioned in the EMC rule book. I was not involved in configuring these, so I am not sure why it is set up this way.

Could it be that somebody tried to scan it and, when that failed, created a pool manually?

The pool listing shows two pools: clar_r5_performance at 1 TB and FC_Pool at 50 TB.


Moreover, all the disks are in use, which can only be the case once the Celerra has recognized the back-end storage.

id      inuse   acl     name
3       n       0       clar_r5_performance
42      y       0       FC_Pool

For clar_r5_performance
[nasadmin@ ~]$ nas_pool -size clar_r5_performance
id           = 3
name         = clar_r5_performance
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 1883372

It means only 1 TB is available as free disk. I am just wondering: how could a system LUN be configured as 32 GB?
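For a quick sanity check, the potential_mb figure above can be converted to binary terabytes (counting 1 TB as 1024 × 1024 MB), which actually puts the free capacity nearer 1.8 TB than 1 TB. A one-liner sketch:

```shell
# 1883372 MB from the nas_pool -size output above, divided down to TB
awk 'BEGIN { printf "%.2f\n", 1883372 / 1024 / 1024 }'   # prints 1.80
```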

Please suggest..


November 29th, 2010 11:00

Thanks, Gustavo. So does that mean an upgrade from 5.5 to 5.6 would require changing system LUN 5?

We had a code upgrade recently on an NS-704G, and we will be migrating from the NS-704G to the NS480.

The one in question is the NS480, so I believe it is configured the right way.


November 29th, 2010 11:00

Hi,

The size of system LUN 5 changed with the 5.6 code.

It was 2 GB on 5.5 and earlier codes, and it is now model-dependent:

32 GB on smaller systems, and 64 GB on larger ones (NS960 and NS-G8).
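That rule can be written down as a small shell helper. This is purely illustrative: `lun5_size_gb` is a hypothetical function name, not an EMC tool, and the model list only covers what this thread mentions:

```shell
# Hypothetical helper encoding the rule stated above; not an EMC utility.
# DART 5.5 and earlier: system LUN 5 is 2 GB. On 5.6 the size is model
# dependent: 64 GB on the larger NS960 / NS-G8, 32 GB on smaller systems.
lun5_size_gb() {
    dart="$1"    # DART code, e.g. 5.5 or 5.6
    model="$2"   # array model, e.g. NS480
    case "$dart" in
        5.[0-5]) echo 2 ;;
        *) case "$model" in
               NS960|NS-G8) echo 64 ;;
               *)           echo 32 ;;
           esac ;;
    esac
}

lun5_size_gb 5.6 NS480   # prints 32
```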

Gustavo Barreto.
