remove a storage pool to free up disks
February 16th, 2012 10:00
We want to remove a storage pool on our Celerra so the underlying disks can later be used by the CX-4 to present iSCSI LUNs directly to our ESX cluster across the 10 GbE I/O module interfaces.
The Celerra storage pool is RAID 6, provisioned across one full DAE using 14 disks plus 1 hot spare. How can I remove this pool, and with it the entire associated DAE capacity, from the Celerra? I assume I will also need to perform some task on the CX-4 to stop presenting that storage to the Celerra host, so I am looking for a little help on what I need to do there as well. Thank you to anyone who can provide some assistance.
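In case it helps to map things out first, here is a minimal sketch of how the pool's members can be identified from the Control Station before anything is deleted (<pool_name> is a placeholder, not a name from this thread):
# List the storage pools and note the one to be retired
nas_pool -list
# Show the attributes and member volumes of that pool
nas_pool -info <pool_name>
nas_pool -size <pool_name>
# List disk volumes; the inuse column shows which d# volumes still back file systems
nas_disk -list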
dynamox
9 Legend • 20.4K Posts
February 16th, 2012 10:00
Have all the file systems that reside on those LUNs been deleted? Please post the output from:
nas_disk -l
dynamox
9 Legend • 20.4K Posts
February 16th, 2012 12:00
How are you presenting storage to your ESX cluster: NFS, iSCSI, or FC block?
Bwwh
4 Posts
February 16th, 2012 12:00
I will be using svmotion to move the current VMs off the LUNs on that file system over the next several days. Once the VMs are moved and no important data remains on that storage pool, should I next delete the LUNs and then the file systems that house those LUNs? I am using Unisphere primarily to administer the Celerra.
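For what it's worth, here is a rough sketch of that order of operations from the Control Station CLI, assuming the LUN deletion itself happens in Unisphere as you describe (server_2 and <fs_name> are placeholders):
# List the iSCSI LUNs defined on the data mover and note which file systems back them
server_iscsi server_2 -lun -list
# After the LUNs have been deleted (e.g. in Unisphere), unmount and delete each backing file system
server_umount server_2 -perm <fs_name>
nas_fs -delete <fs_name>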
Bwwh
4 Posts
February 16th, 2012 13:00
Right now, we are using iSCSI through the data movers. My goal is to clear just this one storage pool and free it from the Celerra, then provision some of that storage through the CLARiiON as block iSCSI via the 10GbE iSCSI I/O modules.
dynamox
9 Legend • 20.4K Posts
February 16th, 2012 13:00
Do you have extra space to svmotion these VMs to while you are reconfiguring this pool? From the Celerra perspective you will need to delete your iSCSI LUNs, then delete the file systems. At that point you should be able to run nas_disk -l, and the LUNs you are trying to reclaim should show "n" in the inuse column.
0049 is hexadecimal, so it corresponds to LUN ID 73 (0x49 = 73) in Navisphere.
Once you have verified that the disk is not in use, you can delete it:
nas_disk -delete d28 -perm -unbind
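The -unbind option should also destroy the backend LUN on the CLARiiON, so what is usually left on the CX-4 side is just the now-empty RAID group. A rough naviseccli sketch for checking and cleaning up, using LUN 73 from the example above (the SP address, storage group name, HLU number and RAID group ID are placeholders):
# Confirm the LUN no longer exists (or see its properties if it does)
naviseccli -h <sp_address> getlun 73
# Review the Celerra storage group and remove a leftover HLU entry if one remains
naviseccli -h <sp_address> storagegroup -list -gname <celerra_storage_group>
naviseccli -h <sp_address> storagegroup -removehlu -gname <celerra_storage_group> -hlu <hlu_number> -o
# If the LUN still exists, unbind it, then destroy the empty RAID group (-o skips the confirmation prompt)
naviseccli -h <sp_address> unbind 73 -o
naviseccli -h <sp_address> removerg <raid_group_id>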
tocs_1T
26 Posts
October 25th, 2012 07:00
I have a similar situation: an NS80g connected to a CX3-40. I need to rebuild a RAID group, converting a RAID 5 archive group to RAID 6. The LUNs in the RAID 5 archive group are assigned to the Celerra, so I need to migrate those LUNs to a RAID 6 group and then rebuild the RAID 5 archive group as RAID 6. If I do so, the Celerra clarata_archive storage pool will shrink and clarata_r6 will grow. I'm sure there is some disk cleanup work that will need to be performed.
id  inuse  acl  name
3   n      0    clar_r5_performance
4   y      0    clar_r5_economy
10  y      0    clarata_archive
18  y      0    clarata_r6
Your thoughts, please...
Peter_EMC
674 Posts
October 25th, 2012 22:00
Please check Knowledgebase Primus emc144545
From there:
CLARiiON LUN migrations are supported, but only under the following criteria. These restrictions MUST be followed carefully; otherwise, data outages or data loss can occur.
A LUN migration is supported when migrating the control volumes as part of a CX to CX3 upgrade. This is normally a Professional Services engagement; an RPQ is no longer required, but a CCA is required. Celerra boot LUNs 0-5 should reside on the CLARiiON Vault drives.
When LUNs in a Celerra storage group have an HLU greater than 15 (data LUNs), the following rules apply to LUN migrations:
- No RAID 5 -> RAID 3 or RAID 3 -> RAID 5 migrations
- FC to FC and ATA to ATA are allowed (no mixing of drive types)
- The rules apply to both the source and target RAID groups
- Sizes are matched according to block count, not MB size
A supported alternative would be to use IP Replicator to copy the file systems from the RAID 5 LUNs to RAID 6 LUNs.
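For reference, a bare-bones sketch of what such a LUN migration looks like from the Navisphere CLI, assuming a destination LUN has already been bound in the RAID 6 group with the exact same block count (the SP address and LUN numbers are placeholders):
# Start migrating the source LUN onto the destination LUN
naviseccli -h <sp_address> migrate -start -source <source_lun_id> -dest <dest_lun_id> -rate medium -o
# Monitor progress; when complete, the destination takes over the source LUN's identity
naviseccli -h <sp_address> migrate -list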
dynamox
9 Legend • 20.4K Posts
October 26th, 2012 05:00
Peter,
What about VNX, where data LUNs could reside in a pool (type: mixed, drives: mixed)? If you have multiple pools, can you use LUN Migrator to move data mover LUNs between them?
Thanks