

July 27th, 2013 08:00

Recoverpoint 3.4 to 3.5 SP2 upgrade

Just had the EMC remote folks do the upgrade. Curious, as they seem clueless: why do all my consistency groups display in an init state now that the upgrade is complete? The way they explained it, nothing should impact replication while the upgrade took place, but now I feel like the appliances are spending all kinds of time figuring out what needs to be replicated, and it's preventing anything new from getting synced.

[screenshot attached]

zLwcW

2 Intern

 • 

1.1K Posts

July 29th, 2013 06:00

This won't necessarily be the case. It's not unusual to see CGs in an init state during and after an upgrade, because the RPAs are upgraded in parallel pairs, which means that CGs are moved onto other RPAs in the cluster. That move alone can cause a short init. Moreover, if the remaining RPAs have to handle their existing workload plus the workload from the RPAs being upgraded, the combined throughput can fill the splitter backlog, at which point the splitter starts marking incoming data using metadata. This leads to an init state, because the marked data then has to be read from the array by the splitter and sent to the RPA.
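The backlog-to-marking transition described above can be sketched as a toy model. To be clear, `ToySplitter`, the capacity threshold, and the data structures here are illustrative assumptions, not actual RecoverPoint internals:

```python
# Toy model of the splitter behavior described above: while the backlog
# (writes buffered for the RPA) fits, data is replicated as-is; once it
# overflows, the splitter falls back to marking dirty regions in metadata,
# and those regions must later be re-read from the array (an init).

class ToySplitter:
    def __init__(self, backlog_capacity):
        self.backlog_capacity = backlog_capacity
        self.backlog = []            # buffered writes awaiting transfer
        self.marked_regions = set()  # regions tracked only as "dirty"

    def write(self, region, data):
        if len(self.backlog) < self.backlog_capacity:
            self.backlog.append((region, data))  # normal replication path
        else:
            self.marked_regions.add(region)      # backlog full: mark only

    def needs_init(self):
        # Any marked region forces an init: the splitter must read the
        # current contents back from the array and send them to the RPA.
        return bool(self.marked_regions)

s = ToySplitter(backlog_capacity=2)
for i in range(5):
    s.write(region=i, data=b"x")
print(s.needs_init())  # True: regions 2-4 were only marked, so init is needed
```

The point of the sketch is only that marking loses the data itself, keeping just "where" it changed, which is why recovery requires reading back from the array.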

1K Posts

July 28th, 2013 18:00

If everything was in an Active state before the upgrade, then it should be in an Active state after the upgrade. Sounds like the upgrade didn't go as expected. The init state is RP comparing the source and target volumes and replicating the changes, if any.
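That compare-and-replicate behavior can be illustrated with a minimal sketch. The block granularity and the use of SHA-256 checksums are assumptions for illustration, not RecoverPoint's actual mechanism:

```python
# Minimal illustration of an "init"-style sync: compare source and target
# volumes block by block and copy only the blocks that differ.
import hashlib

def init_sync(source_blocks, target_blocks):
    """Return the target contents after replicating only changed blocks."""
    synced = list(target_blocks)
    copied = 0
    for i, (src, tgt) in enumerate(zip(source_blocks, target_blocks)):
        if hashlib.sha256(src).digest() != hashlib.sha256(tgt).digest():
            synced[i] = src
            copied += 1
    return synced, copied

source = [b"aaa", b"bbb", b"ccc"]
target = [b"aaa", b"xxx", b"ccc"]  # one block diverged
synced, copied = init_sync(source, target)
print(copied)  # 1 block needed replication
```

If source and target were already identical, `copied` would be 0 and the init would complete almost immediately, which is why a long init after an upgrade is a red flag.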

1K Posts

July 29th, 2013 06:00

Great point forshr. My assumption was that the init state has been there for a while. Thank you for the correction.

duhaas, can you check the journal volume of a CG that has finished the init process? Curious whether you see PITs in the journal from before the upgrade occurred.

117 Posts

August 9th, 2013 20:00

Richard makes a very good point which highlights the importance of proper sizing and capacity planning. 

With any solution there are sizing and capacity planning considerations for an NDU process. In the case of RecoverPoint, the NDU will vacate all CGs off one RPA at a time, removing that workload and taking the RPA pair (one at each site) out of the resource pool for replication. If you are familiar with VMware, they have a similar process for patch management, whereby they vacate all VMs from the ESX host that is getting the upgrade. In both cases you need n+1 capacity to handle the entire workload, or there will be degradation in service: with one of your nodes out of the resource pool, you now must run all your workload on the remaining resources.

An RP NDU will typically cause short inits as workloads are transitioned from one RPA to another, but long initializations are not the expected behavior as a result of an NDU unless the environment is undersized.
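The n+1 point reduces to simple arithmetic. The throughput and workload numbers below are made up for illustration, not measured from any real cluster:

```python
# During an NDU, one RPA (per site) is out of the pool, so the remaining
# RPAs must absorb the entire replication workload. Hypothetical numbers:
rpas = 4                 # RPAs per cluster
per_rpa_throughput = 80  # MB/s each RPA can sustain (assumed)
workload = 250           # MB/s total incoming write workload (assumed)

capacity_during_ndu = (rpas - 1) * per_rpa_throughput
print(capacity_during_ndu)              # 240 MB/s left during the upgrade
print(workload <= capacity_during_ndu)  # False: undersized -> backlog, init
```

In this made-up case the cluster is fine with all four RPAs (320 MB/s) but undersized the moment one is vacated, which is exactly the scenario that turns a short init into a long one.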

-rick
