Devthechamp007
11 Posts
0
February 16th, 2015 22:00
Configuration/Expansion of a RecoverPoint Cluster
Hi Experts,
I need help expanding a RecoverPoint cluster (RP/EX). I have one 4-node RecoverPoint cluster (version 4.1) at the production site connected to a VMAX 10K (with an EX license), and one 4-node cluster (version 4.1) at the DR site connected to a VNX7600 (with an EX license).
There is also an RP/SE 2-node cluster (version 4.0) connected to a VNX7500 at the production site, and a 2-node cluster (version 4.0) connected to a VNX5300 at the DR site. I want to destroy these clusters (because they are not in production) and add their two nodes to each of the 4-node clusters mentioned above. I also want the VNX7500 and VNX5300 to be configured behind my new 6-node clusters after the expansion. I have purchased the required EX licenses for the VNX7500 and VNX5300.
I need help with the following points:
1.) How can I gracefully destroy my 2-node clusters and expand my 4-node clusters? What downtime is involved for my 4-node production cluster during the expansion?
2.) What changes need to be made on the VNX7500 (production site) and VNX5300 (DR site) for splitter configuration, so that I can replicate any LUN hosted on the VNX7500 to the VNX5300 using my new 6-node cluster?
3.) What is the maximum amount of data a single RecoverPoint cluster can replicate (an approximate value is fine)? I am looking to replicate around 70 TB using my 6-node cluster.
I have some technical understanding of all these questions; a high-level plan would also be helpful.
Thanks!
Idan
675 Posts
1
February 17th, 2015 00:00
Hi there,
1. In order to tear down a cluster, you will need to detach all RPAs of that cluster (in your case you have two clusters, so all RPAs in both 2-node clusters should be detached). This can be performed from the boxmgmt CLI by navigating to [4] Cluster operations -> [1] Detach RPA from cluster.
More importantly, make sure that no CGs are replicating or configured in the system, and that the settings are backed up just in case. The latter can be accomplished by saving the output of the admin CLI command save_settings (a sketch follows point 3 below).
There should not be any downtime involved on the 4-node RP clusters; RPAs can be added non-disruptively to existing clusters using Deployment Manager.
2. In terms of the changes to be made on those VNXs: first, after detaching all RPAs from those SE clusters, remove the RP storage groups. As for the RPAs that were configured on those VNXs, make sure that the zoning is accurate and, as a consequence, that the RPA initiator registration is accurate. The other four RPAs in each cluster will need to be zoned to the appropriate SP ports and registered with the RecoverPoint Appliance initiator type and failover mode 4, as well as having a storage group created for all RPAs (a command-line sketch also follows point 3 below). For more info, refer to the RecoverPoint Deploying with VNX and CLARiiON Arrays and Splitter Technical Notes.
3. 2 PB, so your planned 70 TB is well within the limit.
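To make point 1 concrete, here is a rough sketch of the sequence on each RPA being detached, assuming the menu numbering above (it can shift slightly between RP versions):
    # From the admin CLI, back up the settings first and copy
    # the output somewhere off the RPAs:
    save_settings
    # Then, from the boxmgmt CLI on each RPA being detached:
    [4] Cluster operations
        [1] Detach RPA from cluster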
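And for point 2, a minimal naviseccli sketch of the storage group and registration work on each VNX; the SP address, group name and initiator UID are placeholders, the exact flags can vary by VNX OE release, and the RecoverPoint Appliance initiator type can also be set in Unisphere:
    # Create a storage group for the RPAs (once per VNX):
    naviseccli -h <SPA-IP> storagegroup -create -gname RPA_SG
    # Register each RPA initiator with failover mode 4 and arraycommpath
    # enabled; repeat per initiator UID (WWNN:WWPN) and per zoned SP port:
    naviseccli -h <SPA-IP> storagegroup -setpath -gname RPA_SG -hbauid <WWNN:WWPN> -sp a -spport 0 -failovermode 4 -arraycommpath 1 -o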
Some other comments:
1) I'm not sure which 4.1 version you are running, but I would recommend upgrading to RP 4.1.1.1, as it's the target for upgrades in the 4.1 family.
2) As for the high-level process:
a. Detach all RPAs in the 2-node clusters, after backing up the settings as described above.
b. Zone the two detached RPAs at the production site to the appropriate FA ports on the VMAX 10K (keeping in mind that the goal is for all RPAs to see the same volumes), then create child initiator groups for those two RPAs and add them to the parent IG for all RPAs (if you're not using cascaded IGs, simply add the RPA initiators to the relevant IG); see the sketch after this list.
c. Make the required changes on the VNXs for the current four RPAs, as described above.
d. Add the two RPAs to each cluster using Deployment Manager (each cluster will involve a separate DM Add RPAs operation).
e. Add the EX licenses for the VNXs to the EX system and add the VNX splitters.
3) Make sure to size for an appropriate number of RPAs to sustain the required workload; capacity limitations are one thing, but the workload can be orthogonal. For a proper sizing exercise, one would also need to take into account the journal and replica performance needed to meet RPO requirements with the given production workload.
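For step b, a minimal symaccess sketch, assuming cascaded initiator groups; the SID, group names and WWN are illustrative, not taken from your environment:
    # Create a child IG for each new RPA and add its initiator:
    symaccess -sid <SID> create -name new_rpa_ig -type initiator
    symaccess -sid <SID> -name new_rpa_ig -type initiator add -wwn <RPA-WWPN>
    # Add the child IG to the existing parent IG that holds all RPAs:
    symaccess -sid <SID> -name rpa_parent_ig -type initiator add -ig new_rpa_ig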
Hope that helps,
Idan
idan.kentor@emc.com
Devthechamp007
11 Posts
0
February 17th, 2015 03:00
Thank you so much, Idan, for your immediate help and input.
Devthechamp007
11 Posts
0
February 18th, 2015 05:00
Just one question, Idan: you mentioned that "RPAs can be added non-disruptively to existing clusters using Deployment Manager." Does Deployment Manager exist for RP/EX clusters as well, or do we need to add nodes using the CLI? If DM exists, do we need to select the zoning as "Manual" during the storage configuration wizard in Deployment Manager?
Devthechamp007
11 Posts
0
February 18th, 2015 05:00
I read some EMC documents and also tried the deployment wizard with the "Add New RPAs Wizard"; it shows a storage configuration step there. Am I doing something wrong?
Idan
675 Posts
0
February 18th, 2015 05:00
Hi,
DM can operate on all RP licensing editions, including EX. You should add the RPAs using the "Add New RPAs Wizard" in DM. There shouldn't be an option to select the zoning type, since DM performs the zoning only with SE installs.
Regards,
Idan
Idan
675 Posts
0
February 18th, 2015 14:00
No, you are not doing anything wrong. There is a Storage Configuration step in the Add New RPAs Wizard in DM. In this step, you will be instructed to perform the zoning manually and will be given a list of the RPAs' WWNs to zone against.
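As an illustration only, the manual zoning for one new RPA port might look like this on a Brocade fabric (zone and config names and WWNs are placeholders; the equivalent Cisco commands differ):
    # Create a zone with the RPA port and the array front-end port:
    zonecreate "new_rpa_to_array", "<RPA-WWPN>; <ARRAY-FE-WWPN>"
    # Add it to the active zoning configuration and enable:
    cfgadd "prod_cfg", "new_rpa_to_array"
    cfgsave
    cfgenable "prod_cfg"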
Regards,
Idan
idan.kentor@emc.com