Unsolved
2 Intern
•
309 Posts
0
1525
October 14th, 2013 09:00
Rainfinity Issues With Gateway To VNX Migration
We need to migrate data from a gateway attached to CX3s over to a VNX5700. Our main issue is how to handle the Rainfinity device. We would like to handle this in one of two ways:
Upgrade the device from 7.4 to 9 and recall (re-hydrate) the data to eliminate the device altogether.
Or
Not bother with the upgrade and somehow migrate the stubs along with the archived data.
Could someone please provide some advice, or maybe a document, on how to handle this device?
There is no forum for Rainfinity, so I hope this one will work.
Thanks.
umichklewis
3 Apprentice
•
1.2K Posts
2
October 15th, 2013 05:00
VNX Replicator may be of some assistance here. First, what version of DART do you have installed on your gateway? If you're on the right version, you can replicate filesystems from the old gateway to your VNX. Replicator performs block-level replication, so the stubs in the filesystem get copied from one NAS to the other. If you re-configure the Rainfinity appliance (CTA) with credentials on the VNX, you can recall files from archive on the VNX.
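If you're not sure which version you're on, running nas_version on the gateway's Control Station should report the installed NAS code level (which tracks the DART version):
nas_version
I'm going from memory there, so verify against your own system.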
One idea might be to use Replicator to replicate the filesystems, configure the CTA with credentials on the VNX, recall all the data from archive, and then retire the CTA. That's one approach, if you don't want to use it any longer. Or you can simply continue to use it.
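Very roughly, the flow from the Control Stations would look something like this. All the names here are placeholders, it assumes the Data Mover interconnect between the two systems already exists (set up with nas_cel), it assumes an NFS archive (CIFS connections take different options), and the syntax is from memory, so check the nas_replicate and fs_dhsm man pages before running anything:
nas_replicate -create fs06_to_vnx -source -fs Celerra01_fs06 -destination -pool clar_r5_performance -interconnect cs_to_vnx_ic -max_time_out_of_sync 10
fs_dhsm -connection VNX_fs06 -create -type nfsv3 -secondary archive_nas:/archive_export
Once the CTA is reconfigured with credentials for the VNX, recalls should work against the new system, and at that point you can decide whether to bulk-recall and retire the appliance.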
For more documentation, visit support.emc.com, click on Support By Product, and enter Cloud Tiering Appliance (CTA). You can find all of the CTA documentation there. One useful document for you might be BEST PRACTICES FOR FILE MIGRATION WITH EMC CLOUD TIERING APPLIANCE – CELERRA TO VNX, which covers the basics of a migration from Celerra to VNX.
Let us know if that helps!
Karl
Sue5
13 Posts
0
October 17th, 2013 08:00
Just an FYI if you decide to recall all the archived data: we just did that and hit a small snag. There was a bug where the recall took many times the space the CTA indicated. The CTA was correct; it was a bug in how the Celerra calculated free space. Rebooting the data mover would show the correct free space again. There is a patch for the space issue: Celerra/NAS patch 6.0.70.406.
DanPJ
2 Intern
•
309 Posts
0
October 18th, 2013 05:00
Looks like we are at 5.6.51-3.
Sue5
13 Posts
1
October 18th, 2013 06:00
We did not try any recalls at 5.6. We were running 6.0.70 but at a lower patch level when we tried our first full recall.
We ran a fresh stub scanner on the CTA to figure out how much space was needed for the recall, then expanded our filesystem to have that much space (plus another 10% for free space). Then we issued: fs_dhsm -connection Celerra01_fs06 -info, where Celerra01_fs06 is our filesystem. We used this to check whether logging was on and to see the HSM connection.
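The expand itself was just a nas_fs extend from the Control Station, roughly like this (size and pool are example values, so check the nas_fs syntax on your system):
nas_fs -xtend Celerra01_fs06 size=500G pool=clar_r5_performance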
We turned logging on, if it wasn't already, with: fs_dhsm -modify Celerra01_fs06 -log on
Then we started the recall with: fs_dhsm -connection Celerra01_fs06 -delete -all -recall_policy yes
Then we watched the recalls for a while. For many hours the recall ran along slowly and the space usage was fine. Then, about 5 or 6 hours into the recall, the space climbed by hundreds of GB in under 30 minutes and filled the filesystem. We stopped the recall (or let it stop on its own if it had completely filled the filesystem).
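For what it's worth, we were watching the space and the Data Mover log with something along these lines, where server_2 is our Data Mover (commands from memory, so double-check them against the man pages):
server_df server_2 Celerra01_fs06
server_log server_2 | tail -50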
Once we applied the patch we had no issues; the recalled space all matched what the CTA said and the recalls ran smoothly. (Bear in mind, depending on how many orphans you have, there may be mismatches in space between what the CTA reports and what is actually recalled.)
Once the recalls were all done, we ran the stub scanner again to make sure all files had been recalled.