
Unsolved


February 16th, 2012 10:00

remove a storage pool to free up disks

We want to remove a storage pool on our Celerra so the underlying disks can later be used by the CX-4 to present iSCSI luns directly across 10g I/O module interfaces to our ESX cluster. 

The Celerra storage pool is RAID 6 provisioned across one full DAE using 14 disks and 1 as hot swap.  How can I remove this pool, and thus the entire associated DAE disk capacity, from the Celerra?  I assume that I will also need to perform some task on the CX-4 to disassociate the storage from being presented to the Celerra host also, so I am looking for a little help on what I need to do there as well.  Thank you to anyone that can provide some assistance.

9 Legend • 20.4K Posts

February 16th, 2012 10:00

Have all of the file systems that reside on those LUNs been deleted? Please post the output from:

nas_disk -l

9 Legend • 20.4K Posts

February 16th, 2012 12:00

How are you presenting storage to your ESX cluster: NFS, iSCSI, or FC block?

4 Posts

February 16th, 2012 12:00

I will be using svMotion over the next several days to move the current VMs off the LUNs that sit on that file system. Once the VMs are moved and no important data remains on that storage pool, should I delete the LUNs first, and then the file systems that house those LUNs? I am using Unisphere primarily to administer the Celerra.
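For reference, a minimal inventory pass from the Control Station CLI before deleting anything might look like this (just a sketch; the pool and file system names come from whatever your own system reports):

# list storage pools and note the id/name of the pool to be reclaimed
nas_pool -list

# list file systems and identify the ones carrying the iSCSI LUNs
nas_fs -list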

4 Posts

February 16th, 2012 13:00

Right now we are presenting iSCSI through the Data Movers. My goal is to clear just this one storage pool and free it from the Celerra, then provision some of that storage through the CLARiiON as block iSCSI via the 10GbE iSCSI I/O modules.

9 Legend • 20.4K Posts

February 16th, 2012 13:00

Do you have extra space to svMotion these VMs to while you are reconfiguring this pool? From the Celerra perspective you will need to delete your iSCSI LUNs, then delete the file systems. At that point you should be able to run nas_disk -l, and the LUNs you are trying to reclaim should show "n" in the inuse column.

28 n 1878344  APM00084901111-0049 CLATA d28       1,2

0049 is hexadecimal, so it's LUN ID 73 in Navisphere.
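If you want to double-check that conversion from the Control Station shell:

# hex 0x0049 = decimal 73
printf '%d\n' 0x0049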

Once you have verified that the disk is not in use, you can delete it:

nas_disk -delete d28 -perm -unbind
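Putting the Celerra-side teardown together (a sketch only; server_2 and the file system name are placeholders, the iSCSI LUNs are assumed to have been deleted already, e.g. in Unisphere, and as I understand it -unbind also unbinds the backing LUN on the CLARiiON):

# unmount the file system that backed the iSCSI LUNs, then delete it
server_umount server_2 -perm my_iscsi_fs
nas_fs -delete my_iscsi_fs

# confirm the freed disks now show inuse = n
nas_disk -l

# permanently delete each freed disk and unbind its LUN
nas_disk -delete d28 -perm -unbind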

26 Posts

October 25th, 2012 07:00

I have a similar situation: an NS80G connected to a CX340. I need to rebuild a RAID group from a RAID 5 archive to RAID 6. The LUNs in the RAID 5 archive group are assigned to the Celerra, so I need to migrate them to a RAID 6 group and then rebuild the RAID 5 group as RAID 6. If I do so, the Celerra clarata_archive storage pool will shrink and clarata_r6 will grow. I'm sure there is some disk cleanup work that will need to be performed.

id      inuse   acl     name
3       n       0       clar_r5_performance
4       y       0       clar_r5_economy
10      y       0       clarata_archive
18      y       0       clarata_r6

Your thoughts, please...

674 Posts

October 25th, 2012 22:00

Please check Knowledgebase Primus emc144545

from there:

CLARiiON LUN migrations are supported, but only under the following criteria. These restrictions MUST be followed carefully; otherwise data outages or data loss can be incurred.

A LUN migration is supported when migrating the control volumes as part of a CX to CX3 upgrade. This is normally a Professional Services engagement; an RPQ is no longer required, but a CCA is required.  Celerra boot LUNs 0-5 should reside on the CLARiiON Vault drives.

When LUNs in a Celerra storage group have a host LUN ID (HLU) greater than 15, the following rules apply to migrations of data LUNs:

  • The RAID type is the same (for example, RAID 5 -> RAID 5, or RAID 3 -> RAID 3; NOT RAID 5 -> RAID 3 or RAID 3 -> RAID 5).
  • The drive type is the same (FC to FC, ATA to ATA).
  • The number of physical drives in the source and target RAID groups is the same.
  • The source and target LUNs are identical in size (by block count, not MB).
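For a migration that meets all of the above, the Navisphere CLI invocation would be along these lines (a sketch; the SP address and LUN numbers are placeholders for your own values):

# start migrating source LUN 73 to destination LUN 120 at low priority
naviseccli -h spa_address migrate -start -source 73 -dest 120 -rate low

# check migration progress
naviseccli -h spa_address migrate -list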

A supported alternative would be to use IP Replicator to copy the file systems from the RAID 5 LUNs to RAID 6 LUNs.

9 Legend • 20.4K Posts

October 26th, 2012 05:00

Peter,

What about VNX, where data LUNs can reside in a pool (type: mixed, drives: mixed)? If you have multiple pools, can you use LUN Migrator to move Data Mover LUNs between them?

Thanks
