Unsolved
1 Rookie
27 Posts
February 1st, 2021 05:00
Networker concurrent cloning action
Hi,
I am relatively new to Networker, so I am pretty sure this question can be answered easily. I am running Networker 19.2.
My workflow backs up all my VMware VMs to a DataDomain daily. Once a month, we clone the latest backup to LTO tape drives for offsite backups. So I have 2 pools (DataDomain and LTO7 tape).
Currently, this workflow is configured as follows:
- First step: back up daily to DataDomain.
- Second step: clone to tape drives, but it is only set to "exec last friday every month". All other days of the month are set to Skip.
The problem I am facing is that the cloning process takes 3-4 days, and while it is cloning, the workflow does nothing else. The workflow does not start again until the cloning process is finished, so my daily backups do not run until the cloning is done. This causes my daily backups to be skipped for a few days. I'm pretty sure these actions could run concurrently since they use 2 different pools.
Looking at the Policy Action wizard for the cloning action, I found a Concurrent option. From what I understand of the Admin guide, if the Concurrent option is checked, the cloning action starts at the same time as the Backup action, which is not exactly what I want to achieve. I would like my cloning action to start after the last Friday's backup, but run concurrently/in the background so that my daily backups can run on the following days while the cloning is in progress.
Should the cloning action be part of another workflow to achieve this?
Thank you,
Guillaume.
bingo.1
2.4K Posts
February 1st, 2021 06:00
My straight answer: you should decouple the processes.
I recommend scripted cloning. Just create an appropriate script in which you can define and refine all selection criteria to get the list of ssids, plus output parameters (for instance, if you want to set another retention date). Then let it run via Task Scheduler/cron whenever you want it to happen. Even if it has to be interrupted, you can restart it at any time once you have verified the number of existing save set copies.
guillaume.leonard
1 Rookie
27 Posts
February 1st, 2021 10:00
Thank you bingo.1.
Being pretty new to Networker, I wouldn't really know where to start for such a script. Would you happen to have an example?
bingo.1
2.4K Posts
February 1st, 2021 13:00
First of all, you need to get used to NW. Sorry, but there is no other way. For this purpose, may I suggest that you install NW on a (virtual) host. You do not need a license or additional hardware.
For a trained person this will take about 15 minutes - for a newbie, calculate about 1 hour.
Next, you must familiarize yourself with the NW commands. You will find all the information in the NW Command Line Reference. This doc is huge, but in fact you only need to remember a few commands. In your case, these are the two commands you need.
The script of course depends on the OS you work with - Linux or Windows. For Windows, I recommend PowerShell. You can make the script pretty simple or very elegant - I leave this up to you. But the two core commands will always be the same:
mminfo -q "level=full,!incomplete,!suspect,copies=1,pool=DataDomain,savetime>=start_date,savetime<end_date" -r "ssid" > textfile
nsrclone -F -b TapePool -S -f textfile
Here is another way to specify the save time for a relative period:
mminfo -q "level=full,!incomplete,!suspect,copies=1,pool=DataDomain,savetime>='-1week'" -r "ssid" > textfile
nsrclone -F -b TapePool -S -f textfile
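If you later wrap these two commands in a script, a minimal sketch might look like this (Python shown for brevity, though the PowerShell suggestion above works the same way; the pool names, file name, and helper functions are only examples, not NetWorker APIs):

```python
import subprocess

def build_query(source_pool, period="-1week"):
    # Query string matching the mminfo commands above: complete,
    # non-suspect full save sets in the source pool with only one copy.
    return ("level=full,!incomplete,!suspect,copies=1,"
            f"pool={source_pool},savetime>='{period}'")

def clone_full_savesets(source_pool="DataDomain", tape_pool="TapePool",
                        ssid_file="ssids.txt"):
    # Ask mminfo for the save set IDs matching the query.
    result = subprocess.run(["mminfo", "-q", build_query(source_pool),
                             "-r", "ssid"],
                            capture_output=True, text=True, check=True)
    ssids = result.stdout.split()
    if not ssids:
        return 0  # nothing left to clone (e.g. after a restart)
    with open(ssid_file, "w") as f:
        f.write("\n".join(ssids) + "\n")
    # Clone the listed save sets to the tape pool.
    subprocess.run(["nsrclone", "-F", "-b", tape_pool, "-S", "-f", ssid_file],
                   check=True)
    return len(ssids)

# On the NetWorker server you would simply call:
# clone_full_savesets("DataDomain", "TapePool")
```

Because the query only selects save sets with copies=1, rerunning the script after an interruption picks up exactly the save sets that were not cloned yet.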
I encourage you to sit down and 'play' with NW - this is the best way to learn it.
If ready, prepare the OS to run the script automatically.
Have fun and good luck ...
Andy_Fahy
2 Posts
February 3rd, 2021 04:00
Hello Guillaume
You can run the clone separately from the backup by creating a new 'Group' with a 'Save Set Query' that you define to select all the data in your recent backup, then clone the data found by that query.
Please see the section in the 19.2 Administration guide named "Protection groups for a cloning workflow".
All documentation for NetWorker can be found here.
https://www.dell.com/support/home/en-uk/product-support/product/networker/docs
Choose the correct version of Networker that you use and search for the administration guide.
You can remove the concurrent clone action in the original workflow and run the backup and clone separately.
Thus the backup can run daily and the long running clone will not hold up the original backup.
There is no need to create and run scripts.
MarcosCarraro
9 Posts
February 12th, 2021 03:00
@guillaume.leonard
Hi, you can create one workflow per VM. That way, if a workflow fails, only that job is broken, and if one VM is very large, the other jobs are not affected by its long-running backup.
barry_beckers
393 Posts
March 10th, 2021 06:00
That is not really flexible or scalable, is it?
Dell advises in its own documentation against cloning right after the backup from within the same workflow, mainly in larger environments, as cloning then becomes rather ineffective with multiple workflows competing over the same clone devices/pools.
Depending on the complexity, it is indeed better to set up cloning separately from the backup workflow (also to prevent a hung or long-running clone from interfering with the next backup runs), either via a workflow with a save set query or, for more complex selection, via scripting. Using a separate workflow might get you started quickly enough and give you more time to work out whether scripting is required.
Also, depending on the current setup, you might want to re-evaluate it to simplify cloning, which can make it less error prone.
For example, if you had to select specific workflows/policies/backup actions to be cloned, you would have to make sure those are added to the workflows used for cloning. If the data to be cloned can be backed up to a specific pool, you can simply have the query select all data from the last x days in that pool that still needs to be cloned. (I look back 3 or 7 days, so that even if some clone jobs fail, missed backups are still picked up - although this puts more strain on the backup server during the query.)
This still requires, of course, that the backups are actually configured to use that specific pool, but then the clone workflow does not need any reconfiguration when you add new workflows/backups: it simply picks up everything in that pool that has not been cloned in the last x days.
bingo.1
2.4K Posts
March 10th, 2021 14:00
It is ... but it is not flexible. For instance, you cannot restart just the clone action, only the whole workflow. That's why I recommend decoupling cloning from backups. And you can start the clone jobs at any time, which gives you more flexibility if other tasks are more important or interrupt the daily business, like updates.
And rather than setting up a specific clone action (which is also possible), I personally prefer scripts, which are more flexible since the GUI does not provide all possible parameters.
guillaume.leonard
1 Rookie
27 Posts
July 5th, 2021 13:00
I have been revisiting this in the past days due to cloning failures and having to manually restart the entire workflow every time I wanted to retry cloning.
So far, I got most parts of the script done, but I am missing something and I was hoping to pick your brain about this.
We run full monthly backups every last Friday in Networker, and dailies are incremental. Currently, the workflow in Networker is to run full backups on Last Friday and start cloning right after.
The script is pretty much like bingo.1 posted: mminfo + nsrclone.
My script would run from Windows Task Scheduler, but I am wondering how to start it properly - what the trigger would be. In Networker, backups are set to start on "Last Friday", but since they are full backups, they run until Saturday morning, and only then can the cloning start. Therefore I can't start my script on Last Friday, since the backups will not be done.
In Task Scheduler, I could set the script to start on "Last Saturday" or "First Saturday", but this is not really flexible since the day after "Last Friday" could be either one. Also, if the backups take too long, the script won't work properly. I don't like the idea of having a start time set in stone.
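For what it's worth, one way around the fixed date is to schedule the script daily and let it decide for itself whether yesterday was the last Friday of the month - a small sketch (Python here just to illustrate the date logic; the same check can be written in PowerShell, and the function names are made up):

```python
import datetime

def is_last_friday(d):
    # Monday is weekday 0, so Friday is 4. A Friday is the last one
    # of the month when the same weekday a week later falls in the
    # next month.
    return d.weekday() == 4 and (d + datetime.timedelta(days=7)).month != d.month

def should_clone_today(today=None):
    # Run this check daily: it returns True on the day after the last
    # Friday, i.e. the Saturday morning after the monthly full backup.
    today = today or datetime.date.today()
    return is_last_friday(today - datetime.timedelta(days=1))
```

This removes the "Last Saturday vs. First Saturday" ambiguity, though it still assumes the backups finish overnight.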
I was thinking of something more dynamic and flexible that could use something like jobquery / nsradmin. The script could start on "Last Friday" and monitor Networker backups through jobquery (or something else that can monitor Networker jobs/workflows). Once the backups are done, the script continues, generates the text file, and then runs nsrclone.
I'm not very familiar with jobquery and I'm trying to look around for examples, but I thought I would post here at the same time to maybe get some help.
Thinking about it, another option could be to start the workflow using nsrworkflow from the script. A quick test seems to indicate that the nsrworkflow process does not exit until the backup has finished. So the script could wait for nsrworkflow to exit before continuing with mminfo and nsrclone.
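That nsrworkflow idea could be sketched roughly like this (Python; the policy and workflow names are placeholders you would replace with your own):

```python
import subprocess

def workflow_command(policy, workflow):
    # nsrworkflow blocks until the workflow completes, so a plain
    # blocking subprocess call doubles as the "wait for backups" step.
    return ["nsrworkflow", "-p", policy, "-w", workflow]

def backup_then_clone(policy="Backup", workflow="VMware"):
    # 1. Start the backup workflow and wait for it to finish.
    subprocess.run(workflow_command(policy, workflow), check=True)
    # 2. Only then run the mminfo + nsrclone steps
    #    (as in the commands posted earlier in the thread).
```

With check=True, a failed workflow raises an exception, so the clone step never runs against an incomplete backup.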
FYI, when I say "the script", it can be multiple scripts. I'm thinking of multiple PowerShell scripts to separate at least mminfo and nsrclone. My goal is to have all the actions more flexible than with the NMC and easy to start on their own.
bingo.1
2.4K Posts
July 6th, 2021 01:00
I wonder why the posts are obviously not correctly ordered by their dates - but this is another story.
The key point here is that you define your mminfo command cleverly enough.
For instance, you could ask for save sets like: mminfo -q "level=full,savetime>'-1week',!incomplete,copies=1, ..." ....
The result is that you can run the script every day, for instance, and it will still only clone the fulls that are not older than 1 week and have not yet been cloned. This also keeps you prepared in case of a clone failure.
guillaume.leonard
1 Rookie
27 Posts
July 6th, 2021 06:00
I agree with you bingo.1; I think I am over-complicating things.
Most of my environment is backed up by the vProxy (VMware), and this seems to complicate things: even though the backup policy for VMs is set to incremental daily and full on Last Friday, all my VMs are backed up as full. Reading the Networker VMware Integration Guide (p. 77), it is mentioned that when the target device is a Data Domain, a synthetic full backup is created every time:
Note: Since the backup is performed to Data Domain, the resulting backup on the target device is a new full backup because NetWorker uses Data Domain virtual synthetics technology to create a synthetic full backup.
When querying with mminfo, there doesn't seem to be a way to distinguish synthetic fulls from traditional fulls: the syntheticfull flag in mminfo matches traditional fulls as well. The result is that if the mminfo query asks for savetime>-1week, it returns all the vProxy dailies too.
I could use 2 mminfo queries: one for VMs with savetime>-1day and one for physical clients with savetime>-1week. The only hiccup is that the VM ssids and the physical ssids might not share the same backup day. If I run mminfo+nsrclone manually on the Monday morning following Last Friday, my physical backups would be dated Last Friday but my VM backups would be dated yesterday (Sunday). Technically, this shouldn't be an issue; we're just used to having a clone with the same date for all the backups together.
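A sketch of that two-query variant (Python; the pool name, group names, and lookback windows are assumptions - adjust them to your actual setup, e.g. filtering by client= instead of group= if that fits better):

```python
def build_query(pool, period):
    # Complete fulls with only one copy in the given pool.
    return ("level=full,!incomplete,!suspect,copies=1,"
            f"pool={pool},savetime>='{period}'")

# One query per backup type. Run mminfo once per query, append all
# resulting ssids to the same text file, then call nsrclone once on it.
QUERIES = [
    # vProxy VMs: yesterday's synthetic fulls only (hypothetical group name)
    build_query("DataDomain", "-1day") + ",group=VMware",
    # physical clients: last Friday's fulls (hypothetical group name)
    build_query("DataDomain", "-1week") + ",group=Physical",
]
```

Feeding both result sets into a single nsrclone run keeps one clone session even though the save times differ.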