40 Posts
0
1181
August 13th, 2020 07:00
Is it possible to run two clone actions towards one clone pool in a workflow?
The reason to have two clone actions is that the backup data is kept for different purposes with different retention times. Normally the two clone actions have different schedules, but on some specific dates both clone actions happen to run one after the other. I tested such a workflow with one backup action followed by two clone actions. If the destination clone pool is the same, the second clone action keeps running forever and the workflow never finishes. If the two clone actions have different destination clone pools, it works. Why?
We try to use only one clone pool, as this seems to be a best practice for a Data Domain DD Boost device: DD Boost supports a large number of concurrent sessions. I don't know whether using more clone pools has any impact on performance. BTW, according to the NetWorker documentation, with an old version of NetWorker like 8.1, a DD Boost device supported 60 concurrent sessions or save streams. With NetWorker 18.2, the concurrent session number was increased to 120. But with NetWorker 19.2, the number becomes 60 again. Does somebody know why?
bingo.1
2.4K Posts
0
August 13th, 2020 10:00
So you want to have 2 different retention times. Then it makes sense to use 2 different pools. This makes it also easier to identify the backups on the volumes. So I suggest you stay with 2 clone pools.
You can even run two clone jobs on the same backup simultaneously. This would reduce the number of workflows.
I cannot say why the default number of max concurrent sessions on a DD Boost device has changed (with the programmer of the day?), but I just verified with NW 19.3 that you can still increase it to 120.
bingo.1
2.4K Posts
0
August 13th, 2020 12:00
Just name the clone pools for their retention time policy and use appropriate labels.
Honestly - my personal favorite is scripted cloning:
- You decouple cloning from a bigger action list
- You can run cloning when all your backups have finished.
- You have full control from the command line where you can also specify a specific retention date.
- You can modify scripts easily whenever you need to change parameters
May I convince you to use PowerShell for that purpose?
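As an illustration of that "full control from the command line," here is a minimal sketch that only assembles an `nsrclone` argument vector with an explicit retention date. The flag names (`-s` server, `-b` destination pool, `-y` retention time, `-S` save set IDs) are my assumptions based on the NetWorker 9+ CLI, so verify them against your release's Command Reference Guide before relying on them:

```python
# Hypothetical helper: build (but do not execute) an nsrclone command
# line with an explicit retention date. Flag names are assumptions --
# check them against your NetWorker release.
from typing import List

def build_nsrclone_cmd(server: str, pool: str,
                       retention: str, ssids: List[str]) -> List[str]:
    cmd = ["nsrclone", "-s", server, "-b", pool, "-y", retention, "-S"]
    cmd.extend(ssids)          # one or more save set IDs to clone
    return cmd

cmd = build_nsrclone_cmd("nwserver", "LongTermClone",
                         "12/31/2025", ["4283997297"])
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Wrapping the call like this (in Python, PowerShell, or plain shell) is what gives you the per-run retention control a workflow action does not expose.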
ChiXC
40 Posts
0
August 13th, 2020 12:00
Thanks for the reply and help. Yes, we can live with multiple clone pools; it just seems that using a minimal number of pools on a DD Boost device is a best practice. And I am not sure whether multiple pools would help in identifying backups. They are backups (clones) of the same hosts, and their retention dates will differ anyway because they are created on different dates. For browsable or recoverable backups, you should be able to see all backups for one specific client/VM regardless of which pool they are in.
We do put two clone actions (in fact, we now need three) in one workflow. Putting multiple clone actions in the same workflow ensures they cover the same group. Otherwise we would have to create multiple groups for the same client or use a Save Set Query group, as one group can only be assigned to one workflow.
Regarding the max concurrent sessions on a DD Boost device, I don't know what the default number is. I just read "DD Boost can run up to 120 concurrent sessions or save streams on each DD Boost device for backup and recovery." in the document "Dell EMC NetWorker Version 18.1 Data Domain Boost Integration Guide". But that number comes down to 60 again in the same document for 19.1.x and 19.3.
ChiXC
40 Posts
0
August 13th, 2020 13:00
In fact, I like scripting too. I am now testing a script to modify the action overrides in many workflow definitions in one run.
However, with NetWorker, how do you trigger the execution of a script after a backup action completes? Is there a way to set up before and after scripts? I would hate to decouple the backup/clone/clone workflow and have to run an external Windows scheduler for the script. It would make it very difficult for somebody else to understand what you are doing. I would like to see everything defined in one console, or at least a reference to an external script. I know other backup software lets you set up before and after scripts for a backup job. NetWorker has a Probe action, but it can only be executed on a client, and with VMware backup using vProxy, you can no longer set up a Probe action at all.
bingo.1
2.4K Posts
0
August 13th, 2020 23:00
I admit you cannot do that with NW; you need an external scheduler to trigger the scripts. But consider what decoupling the processes buys you.
Let's say you prepare a cloning script that runs daily at the same time. When you query for clone candidate save sets, you can also ask for the number of instances and add only the save sets which have not been cloned yet. Even if a clone fails for whatever reason, the script will simply pick the save set up again the next day and still produce a reasonable save set list. You do not even have to intervene.
But if a cloning action fails (especially for only a fraction of the save sets), repairing that situation is much more work.
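The "only add the save sets which have not been cloned yet" step can be sketched in a few lines. This is just an illustration: the input mimics `mminfo -r "ssid,copies"` output, and the threshold of 2 total copies (original plus one clone) is an assumption you would adjust to your setup:

```python
# Sketch of the idempotent selection step: from the candidate save
# sets, keep only those that have not been cloned yet. Input lines
# mimic hypothetical `mminfo -r "ssid,copies"` output.

def select_uncloned(mminfo_lines, wanted_copies=2):
    """Return ssids whose copy count is below the wanted total."""
    ssids = []
    for line in mminfo_lines:
        parts = line.split()
        if len(parts) != 2 or not parts[1].isdigit():
            continue  # skip the header row and malformed lines
        ssid, copies = parts[0], int(parts[1])
        if copies < wanted_copies:
            ssids.append(ssid)
    return ssids

sample = ["ssid copies", "4283997297 1", "4283997298 2", "4283997299 1"]
print(select_uncloned(sample))  # -> ['4283997297', '4283997299']
```

Because already-cloned save sets are filtered out, running the script again after a failure (or twice by accident) does no harm.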
ChiXC
40 Posts
0
August 15th, 2020 18:00
Yes, you are absolutely correct. Handling failures is the most difficult part of backup and will generally need scripting; there are so many possibilities for something to go wrong. Here is a similar incident I saw before: a policy/workflow backs up and clones a folder of VMs; the backups of all the VMs succeeded, and the clones were generated for all but one VM. What can you do with the NetWorker Administration GUI? Rerun the entire policy/workflow and generate extra backups and clones just to correct that single clone failure? Some fairly complex scripts would be needed here.
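A sketch of what such a repair script could start from: an `mminfo` query listing only the save sets of the affected VM that still have a single copy, whose resulting ssids are then fed to `nsrclone` one by one. The query attributes (`client`, `savetime`, `copies`) are my reading of the `mminfo` man page, not something verified against a specific release:

```python
# Hypothetical repair helper: list save sets of one client since a
# given date that still have no clone (copies=1). Attribute names are
# assumptions -- check `man mminfo` on your NetWorker server.
from typing import List

def build_mminfo_query(client: str, since: str) -> List[str]:
    query = f"client={client},savetime>={since},copies=1"
    return ["mminfo", "-q", query, "-r", "ssid"]

print(" ".join(build_mminfo_query("vm042", "08/14/2020")))
# Each ssid that mminfo prints can then be recloned individually,
# e.g. with: nsrclone -b <clone pool> -S <ssid>
```

That way only the one missed clone is redone, instead of rerunning the whole policy.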
Anyway we digressed from the original topic of this thread. Thanks for your opinion and help.