tocs_1T
26 Posts
0
October 25th, 2012 08:00
Removing LUNs from R5 storage pool, adding to R6 storage pool
I have a situation: I have an NS80g connected to a CX340, and I need to rebuild a RAID group from R5 archive to R6. The LUNs in the R5 archive are assigned to the Celerra, so I need to migrate those LUNs to an R6 RAID group and rebuild the R5 archive group as R6. If I do so, the Celerra clarata_archive storage pool will shrink in size and clarata_r6 will grow. I'm sure there is some disk cleanup work that will need to be performed to reflect the changes.
id   inuse  acl  name
3    n      0    clar_r5_performance
4    y      0    clar_r5_economy
10   y      0    clarata_archive
18   y      0    clarata_r6
What commands, if any, need to be run to clean up the disk entries?
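For reference, the listing above is the nas_pool output from the Control Station. This is roughly how I have been checking the current layout (just a sketch; the pool name is the one from the listing):

nas_pool -list                     # pools known to the Celerra
nas_pool -info clarata_archive     # member disk volumes backing the R5 archive pool
nas_disk -list                     # dvols, which pool they belong to, and whether they are in use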
tocs_1T
26 Posts
0
October 31st, 2012 09:00
Gents,
I resolved the issue by migrating all archive LUNs on the CX340 from the R5 archive to the R6 archive. My NS80g now reflects no R5 archive, and all of the R5 archive capacity now shows up as R6 archive capacity. After a rescan my NAS_DB is up to date, and I can now delete the empty R5 archive storage pool. BTW, I did expand a FS without issues!
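For anyone following along, the verification was roughly this (a sketch; the rescan syntax can differ slightly by DART release):

server_devconfig ALL -create -scsi -all    # rescan the backend after the LUN migrations
nas_pool -size clarata_archive             # should now show no usable capacity
nas_pool -size clarata_r6                  # should show the capacity that moved over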
tocs_1T
26 Posts
0
October 25th, 2012 08:00
So if I get through this activity without extending any FS during the LUN migrations, I should be OK going forward?
What do you think the Celerra storage pool profiles will reflect, and is there any corrective action required?
etaljic81
1K Posts
1
October 25th, 2012 08:00
You basically need to delete the clarata_archive pool, recreate those LUNs as RAID6 and grow the clarata_r6 from those new RAID6 LUNs, correct?
The first thing you need to do is migrate the filesystems created on the clarata_archive pool, delete the pool, and delete the disks that are being used on that pool. At that point you can delete the LUNs associated with that pool, delete the RAID group, and re-create it as a RAID6 RAID group.
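A very rough sketch of the backend part of that sequence using Navisphere CLI (the SP address, LUN and RAID group numbers, and disk positions are all placeholders; exact bind options depend on the FLARE release):

naviseccli -h SP_A_IP unbind 100 -o        # destroy the old R5 LUN (100 is an example LUN number)
naviseccli -h SP_A_IP removerg 10          # remove the now-empty RAID group
naviseccli -h SP_A_IP createrg 10 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9 0_0_10    # re-create it on six example disks
naviseccli -h SP_A_IP bind r6 100 -rg 10   # bind a new RAID6 LUN in its place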
etaljic81
1K Posts
1
October 25th, 2012 08:00
Ah yes, sorry for the confusion. Thanks Rainer. You don't delete the pool; make sure you migrate the filesystems from the pool and delete the disks that are used on that pool.
tocs_1T
26 Posts
0
October 25th, 2012 08:00
Well... I guess I left out the fact that I already successfully migrated several R5 archive Celerra LUNs on the backend (CX340) to an R6 RAID group. I did not consider migrating the associated file systems; I figured the Celerra would not know the difference as long as the block size was like for like. With that said, do you foresee an issue after the last LUN has been moved?
Rainer_EMC
4 Operator
•
8.6K Posts
0
October 25th, 2012 08:00
It is not supported to use LUN migration on LUNs that are used by the NAS side, especially when they have different storage profiles.
AVM can get confused, leading to problems when extending the file system or during other activity.
Rainer_EMC
4 Operator
•
8.6K Posts
1
October 25th, 2012 08:00
You never delete a system pool; at most you remove its members.
Once the LUNs are unused, see nas_disk -unbind.
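Something along these lines, I believe (d20 is just an example dvol name; check the nas_disk man page for your DART version, since the exact -perm/-unbind combination varies):

nas_disk -list                        # inuse=n means nothing is left on the dvol
nas_disk -delete d20 -perm -unbind    # remove the dvol from NAS_DB and unbind the backing LUN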
tocs_1T
26 Posts
0
October 25th, 2012 09:00
I looked at the file systems that were associated with some of those LUNs that were already moved and found that they do reflect the new R6 LUN/storage pool. Well, I have no choice but to forge ahead and hope for the best considering I'm 90% into this activity... I will let you know the end result either way. Thank you for your assistance...
I'm still curious as to what the R5 archive storage pool will reflect...
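For what it's worth, this is the kind of check I ran (fs01 is an example file system name):

nas_fs -info fs01 | grep -iE 'pool|stor|disk'    # shows the pool and the disk volumes backing the file system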
dynamox
9 Legend
•
20.4K Posts
0
October 25th, 2012 09:00
After you are done, try to expand one of those file systems.
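i.e. something like this, I would assume (file system name and size are placeholders; I am not certain the pool= option behaves the same on every DART release):

nas_fs -xtend fs01 size=1G pool=clarata_r6    # if AVM is confused, the extend fails or pulls space from the wrong pool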
Rainer_EMC
4 Operator
•
8.6K Posts
0
October 25th, 2012 09:00
No, you can get problems afterwards when NAS_DB and the actual configuration have conflicting info, because you moved LUNs behind the Celerra's back.
If you did this and file systems show up in the wrong pool and you have problems, you need to work with customer service.
An alternative corrective action would be to move them back and do it the proper way, at the file system level, not via LUN migration.
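A rough sketch of what doing it at the file system level could look like, assuming you create a new file system in the target pool and copy the data across (names and sizes are examples; Replicator/nas_copy could handle the copy if licensed):

nas_fs -name fs01_r6 -create size=500G pool=clarata_r6    # new file system on the RAID6 pool (name/size are examples)
server_mountpoint server_2 -create /fs01_r6               # create a mount point on the data mover
server_mount server_2 fs01_r6 /fs01_r6                    # mount it so the data can be copied over
# copy the data (host-side copy or Replicator), cut clients over, then retire the old file system
nas_fs -delete fs01                                       # once the copy is verified and fs01 is unmounted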
Peter_EMC
674 Posts
0
October 25th, 2012 22:00
Also take a look at this posting: https://community.emc.com/thread/132511