March 29th, 2012 13:00
Migrate filesystem to new storage pool while preserving shares & quotas
Is there a way to migrate filesystems to a new storage pool while preserving share permissions & quotas? We need to move several filesystems with tree quotas and their related CIFS shares off of an old storage pool (whose backend storage consists of 1TB drives on our VMAX) to a new storage pool (whose backend storage consists of 2TB drives on our VMAX) so we can remove the aforementioned 1TB drives from our VMAX. We tried Celerra Replicator, but when we tested it on a small filesystem we had to recreate all the shares to point to the new filesystem, and the tree quotas on the migrated filesystem became invalid. Is there a better way to do this without losing the shares and quotas?
Thanks!
Celerra NS-G8 running software version 6.0.41-3
fortec1
April 17th, 2012 10:00
Turns out that the quotas are filesystem dependent, so the only way to migrate the filesystem with shares AND quotas is to recreate the quotas on the new filesystem prior to setting up the replication. Below are the steps I used with a test filesystem:
Migrating Celerra Filesystems to new Storage Pool (Keeping Quota and Shares intact)
NOTE:
Current Filesystem Name: TESTFS
New (target) Filesystem Name will be TESTFS_NEW
Preferred DM/vdm: vdm01
New Storage Pool: symm_new_pool
GATHER INFORMATION:
Get Current FS Size information:
nas_fs -size TESTFS
nas_fs -info TESTFS
$ nas_fs -size TESTFS
total = 10044 avail = 4682 used = 5361 ( 53% ) (sizes in MB) ( blockcount = 20889600 )
volume: total = 10200 (sizes in MB) ( blockcount = 20889600 ) avail = 4683 used = 5517 ( 54% )
$ nas_fs -info TESTFS
id = 8070
name = TESTFS
acl = 0
in_use = True
type = uxfs
worm = off
volume = v12723
pool = symm_old_pool
member_of = root_avm_fs_group_21
rw_servers= server_2
ro_servers=
rw_vdms = vdm01
ro_vdms =
auto_ext = no,virtual_provision=no
deduplication = On
stor_devs = 000192602783-45B1,000192602783-45D1
disks = d12,d13
disk=d12 stor_dev=000192602783-45B1 addr=c16t1l6-117-1 server=server_2
disk=d13 stor_dev=000192602783-45D1 addr=c16t1l7-117-1 server=server_2
Get Current Tree Quota Information for FS:
nas_quotas -list -tree -fs TESTFS
nas_quotas -report -tree -fs TESTFS
$ nas_quotas -list -tree -fs TESTFS
+------------------------------------------------------------------------------+
| Quota trees for filesystem TESTFS mounted on /root_vdm_1/TESTFS:
+------+-----------------------------------------------------------------------+
|TreeId| Quota tree path (Comment) |
+------+-----------------------------------------------------------------------+
| 1 | /TESTFS (.Testing.) |
+------+-----------------------------------------------------------------------+
$ nas_quotas -report -tree -fs TESTFS
Report for tree quotas on filesystem TESTFS mounted on /root_vdm_1/TESTFS
+------------+-----------------------------------------------+-----------------------------------------------+
| Tree | Bytes Used (1K) | Files |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
| | Used | Soft | Hard |Timeleft| Used | Soft | Hard |Timeleft|
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
|#1 | 5487328| 5767168| 6291456| | 589| 0| 0| |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
CREATE TARGET FILESYSTEM
Create the new filesystem (sized to match the 10200 MB source volume reported above):
nas_fs -name TESTFS_NEW -type uxfs -create size=10200M pool=symm_new_pool storage=000192602783 -auto_extend yes -vp no -hwm 90% -max_size 10240M -option slice=yes
Create a mountpoint for the new filesystem and mount it on the preferred DM/vdm:
server_mountpoint vdm01 -create /TESTFS_NEW
server_mount vdm01 TESTFS_NEW /TESTFS_NEW
$ server_mountpoint vdm01 -create /TESTFS_NEW
vdm01 : done
$ server_mount vdm01 TESTFS_NEW /TESTFS_NEW
vdm01 : done
Create Quota for New Storage:
nas_quotas -on -tree -fs TESTFS_NEW -path /TESTFS -comment 'Testing'
nas_quotas -edit -tree -fs TESTFS_NEW -block 6291456:5767168 1
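(In the -edit command, the -block argument is the hard limit followed by the soft limit in 1-KB blocks, taken from the quota report gathered above; the trailing 1 is the TreeId shown by the -list output.)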
Remount the NEW FS as read-only (so it can be used as a replication destination):
server_mount vdm01 -Force -option ro TESTFS_NEW /TESTFS_NEW
Verify NEW FS is Read Only:
nas_fs -info TESTFS_NEW
REPLICATION:
Create Replication:
nas_replicate -create TESTFS_REPLICATION -source -fs TESTFS -destination -fs TESTFS_NEW -interconnect loopback -max_time_out_of_sync 5 -overwrite_destination
List replication session:
nas_replicate -info -all
When the replication has finished its initial copy (Current Transfer is Full Copy = No), switch over:
nas_replicate -switchover TESTFS_REPLICATION
Check FS status to make sure TESTFS is RO and TESTFS_NEW is RW:
nas_fs -info TESTFS
nas_fs -info TESTFS_NEW
Verify no checkpoints exist:
fs_ckpt TESTFS -list
fs_ckpt TESTFS_NEW -list
Delete Replication Session:
nas_replicate -delete TESTFS_REPLICATION -mode both
SWAP FILESYSTEMS
Unmount Filesystems:
server_umount ALL -perm /TESTFS
server_umount ALL -perm /TESTFS_NEW
Rename Filesystems:
nas_fs -rename TESTFS TESTFS_OLD
nas_fs -rename TESTFS_NEW TESTFS
Remove the server_2 mount for the new FS left over from the replication switchover (mount it briefly, then permanently unmount it):
server_mount server_2 -option ro,accesspolicy=NATIVE TESTFS /TESTFS
server_umount server_2 -perm /TESTFS
Mount the new FS on the preferred vdm:
server_mount vdm01 -option accesspolicy=NATIVE TESTFS /TESTFS
Verify that the shares work by accessing them, and run nas_quotas -report -tree -fs TESTFS to verify that the tree quotas now report against the new storage.
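For example, reusing the commands from earlier in this procedure:
$ nas_fs -info TESTFS
$ nas_quotas -report -tree -fs TESTFS
The -info output should now show pool = symm_new_pool, and the tree quota report should match the usage captured before the migration.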
CLEANUP
(If the original filesystem was on a VDM, there will be an orphaned mount/mountpoint on server_2 for the new filesystem. This process cleans up that orphaned mount/mountpoint.)
Mount the old FS to clear the server_2 mount left over from the replication switchover, then permanently unmount it:
server_mount server_2 -option ro,accesspolicy=NATIVE TESTFS_OLD /TESTFS_NEW
server_umount server_2 -perm /TESTFS_NEW
Delete old FS:
nas_fs -delete TESTFS_OLD -Force
dynamox
March 29th, 2012 13:00
take a look at this discussion, specifically the procedure that Christopher provided
https://community.emc.com/thread/130606
Rainer_EMC
March 30th, 2012 04:00
The shares and share ACLs are stored in the CIFS server config, so they won't "move" with the file system.
However, in your case the solution is simple: after you are done copying with Replicator, unmount both the old and the new fs.
Then mount the new fs at the same mountpoint where the old fs was.
Since the shares simply work on the path, they will work again as before.
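A minimal sketch of that sequence, reusing the commands and example names from fortec1's procedure above (your filesystem and VDM names will differ):
$ server_umount vdm01 -perm /TESTFS         # unmount the old fs
$ server_umount vdm01 -perm /TESTFS_NEW     # unmount the new fs
$ server_mount vdm01 TESTFS_NEW /TESTFS     # mount the new fs at the old fs's mountpoint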
Rainer
fortec1
April 2nd, 2012 14:00
Basically everything worked for the CIFS shares, but I lost the quotas. They were still referencing the original filesystem, and when I deleted the old filesystem the quotas were removed as well. How do I replicate/migrate the quotas for a filesystem?
fortec1
April 2nd, 2012 14:00
What about tree quotas? When I renamed the old FS, the filesystem name referenced by the quotas changed with it.
InsaneGeek
April 17th, 2012 11:00
Hmmm... my tree quotas on the original filesystems have come across on replication sessions (see below). A tree quota is basically a directory with a special inode; it's part of the filesystem, so a block-level copy should contain the tree quotas (nas_copy wouldn't, though).
Maybe you just need to poke the quota database for it to register. Last year we purchased the content portion of another company that had a Celerra as well; we created a private VPN and replicated from their 5.x to our 6.x Celerra system, and the tree quotas came across from their original filesystems (mixed CIFS & NFS environment). Originally the filesystems showed up with no quotas at all. I forget whether it was the nas_quotas -check or the quotadb command, but all of a sudden they appeared after poking it. Note that one odd thing was that the values were way off (a hard limit of 1TB in a 4TB consumed filesystem), but I simply changed them and they were then fine, so I didn't have to do a file-by-file copy to get tree quotas going.
The process I just did on my array running 6.0.43-1 to replicate:
Create two filesystems, one in an ATA pool and another in an EFD pool:
$ nas_fs -name test_quota -type uxfs -create size=10G pool=clarata_r6 -auto_extend yes -vp yes -max_size 100G
$ nas_fs -name target_quota -type uxfs -create size=10G pool=clarefd_r10 -auto_extend yes -vp yes -max_size 100G
Mount the filesystems (destination read-only):
$ server_mount server_8 test_quota /test_quota
$ server_mount server_8 -option ro target_quota /target_quota
Create a quota on the source (export it and create 3x 200MB files):
$ nas_quotas -on -tree -fs test_quota -path /test1
$ nas_quotas -edit -tree -fs test_quota -block 1048576 -inode 50 1
$ nas_quotas -report -tree -fs test_quota
Report for tree quotas on filesystem test_quota mounted on /test_quota
+------------+-----------------------------------------------+-----------------------------------------------+
| Tree | Bytes Used (1K) | Files |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
| | Used | Soft | Hard |Timeleft| Used | Soft | Hard |Timeleft|
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
|#1 | 614744| 0| 1048576| | 4| 0| 50| |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
Replicate & Switchover
$ nas_replicate -create rep_quota -source -fs test_quota -destination -fs target_quota -interconnect id=80001 -max_time_out_of_sync 5 -overwrite_destination
$ nas_replicate -switchover rep_quota
The quotas automatically appeared on the target filesystem:
$ nas_quotas -report -tree -fs target_quota
Report for tree quotas on filesystem target_quota mounted on /target_quota
+------------+-----------------------------------------------+-----------------------------------------------+
| Tree | Bytes Used (1K) | Files |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
| | Used | Soft | Hard |Timeleft| Used | Soft | Hard |Timeleft|
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
|#1 | 614744| 0| 1048576| | 4| 0| 50| |
+------------+------------+------------+------------+--------+------------+------------+------------+--------+
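If the limits come across skewed, as mentioned above, they can simply be corrected in place on the target with the same edit command used on the source, e.g. (values from this test):
$ nas_quotas -edit -tree -fs target_quota -block 1048576 -inode 50 1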