ChiXC · 40 Posts · December 28th, 2020 08:00
When creating a DDBoost device, which storage node should be associated with the new device?
In our NetWorker environment we have one NetWorker server, two storage nodes at two sites, and two Data Domain systems at two sites. When creating DDBoost devices on the Data Domain systems, which storage node should be associated with the new DDBoost device: the NetWorker server (which is also a storage node), the storage node at the same site, or the storage node at the remote site? According to the NetWorker online help, "By default, NetWorker associates all devices with the NetWorker server." But considering workload balancing and a possible DR situation, we usually associate the storage node at the same site with a new DDBoost device.
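For reference, this is roughly how we check which storage node owns a device from the command line with nsradmin (the hostnames and device names below are made up):

  nsradmin -s nw-server.example.com        # connect to the NetWorker server
  nsradmin> print type: NSR device         # list all device resources
  # devices created on a remote storage node carry an "rd=<storagenode>:" prefix
  # in their name, e.g.  name: rd=sn-site1.example.com:DDBoost_Dev01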
Associating the same-site storage node worked without any issue until we wanted to migrate save sets with 'nsrstage' from a volume on one DDBoost device to a volume on another DDBoost device in the same pool or a different pool. Strangely, the migration works if the destination volume's device is associated with the NetWorker server or with one of the two storage nodes, but if the destination device is associated with the other storage node, the staging operation fails with these error messages:
175297:nsrstage: Unable to set up the direct save with server 'NetworkerServer': no matching devices for save of client `NetworkerServer'; check storage nodes, devices or pools.
11390:nsrd: no matching devices for save of client `NetworkerServer'; check storage nodes, devices or pools
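For context, the staging we attempt looks roughly like this (volume name, pool name and save set IDs are placeholders):

  # list the save set IDs on the source volume
  mminfo -q "volume=DD_Backup_001" -r "ssid"
  # stage (migrate) those save sets to a device in the destination pool
  nsrstage -b "DD_NewPool" -m -S <ssid> [<ssid> ...]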
So is it good practice to associate a storage node, instead of the NetWorker server, with a DDBoost device?
BTW, we considered temporarily changing the storage node associated with the destination device just for the staging, and found a thread about changing the storage node in this forum (https://www.dell.com/community/NetWorker/Change-the-Networker-Storage-Node-that-handles-a-DDBoost-Storage/m-p/7170066#M61611). But I am not sure I fully understand the change process, and I don't know whether it is safe to do.
bingo.1 · 2.4K Posts · December 28th, 2020 15:00
As the NW server today has a lot of administrative/reporting tasks to fulfill, it is in fact best practice to associate DDBoost devices with the remote storage node(s). The only rule you have to follow is that backups of the NW internal databases (especially the bootstrap) must be directed to a device local to the NW server. This is the most obvious explanation for your error messages.
A second criterion is that you may have associated devices with certain pools (probably from an older installation). This is possible but normally unnecessary if you ensure that the volumes have been labeled into the correct pools. Selecting more criteria than necessary is most likely why the filter (you have in fact set one up) becomes too narrow, so that not all criteria match properly.
BTW - if you are backing up the bootstrap to a DDBoost device, may I suggest that you clone the most recent version to a local File Type Device (FTD). This makes it much easier whenever you must run a disaster recovery of your NW server, as you do not have to care about all the connection attributes.
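A rough sketch of what I mean (the pool name is only an example; it must be a pool whose volume is mounted on the local FTD):

  # identify the most recent bootstrap save set and its ssid
  mminfo -B
  # clone that bootstrap to the pool backed by the local file type device
  nsrclone -b "Bootstrap_FTD_Pool" -S <bootstrap-ssid>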
ChiXC · 40 Posts · December 29th, 2020 11:00
Thanks bingo.1 for your response. The volume we want to migrate from does not contain any NetWorker system data, including the bootstrap. The local backup and remote clone volumes used by the workflows/actions in the 'Server Protection' policy are dedicated volumes whose devices have the NetWorker server as their storage node. The volumes we want to migrate from contain only backup or clone data from regular clients backed up with the file-system agent; no image-based backups are involved.
I am not sure I fully understand your second criterion. I tested with a destination volume in a different pool and with one in the same pool as the source volume, and got the same result both times: it worked when the destination volume's device is associated with the NetWorker server or with the storage node at the same site as the NetWorker server, and failed when the device is associated with the remote storage node.
The reason we want to migrate the data to different volumes is that we would like to separate backup data and clone data onto DDBoost devices backed by different DDBoost storage units. So far, different volumes on the same DDBoost storage unit have been used for backup data and clone data. Someone told me that using different storage units/Mtrees helps improve performance by increasing concurrent data streams. In addition, we would like to enable Data Domain Retention Lock only on the storage units/Mtrees holding the clone data, to simulate off-site tape protection. We could simply create new volumes in new pools and leave the current data in the old volumes/pools until it expires, but it would be great if we could migrate it into the new volumes and keep using the same pools.
bingo.1 · 2.4K Posts · December 30th, 2020 02:00
The key issue is that, with the few facts you have given us, we can only try to answer your question in general terms.
The origin of your trouble might be a configuration error (but we do not know those details); it might also be a NW version issue (we do not know your version either).
One thing you should definitely look at is the server option "Disable RPS Clone", which has caused problems in the past and might still be responsible.
The second suggestion is that you run "nsrclone -v ..." for a single save set, where you can define all the other necessary attributes/parameters as required. Verify the output - it might tell you more.
If you migrate to the same pool, make sure that you set your source volumes to "read-only".
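Combining these two points, a rough sketch (server name, pool, volume and save set ID are placeholders):

  # verbose clone of one save set, with server and destination pool given explicitly;
  # the -v output should tell you which devices/storage nodes get selected
  nsrclone -v -s nw-server -b "Destination_Pool" -S <ssid>
  # when staying in the same pool, set the source volume to read-only first
  nsrmm -o readonly DD_Backup_001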
As you can see, I am a big fan of scripted cloning. I have also tested it along with the DD Retention Lock feature. Please remember that such save sets must be stored in a separate pool.
Unfortunately, I had to discover that the attribute 'ddrltype' will only be set by a workflow or an action, but not when you save/clone the save set manually. At least this was the case with NW 19.3.x - it might have changed with NW 19.4 - fellow contributors, could you please comment on this?
ChiXC · 40 Posts · December 30th, 2020 14:00
bingo.1 · 2.4K Posts · December 31st, 2020 00:00
Sorry, but your statement needs some correction.
First, I would never use "nsrstage" - about 10 years ago I was told that development of this command would be discontinued. I do not know the current status, but since then "nsrclone -m" has worked fine for me.
Second, migration with "nsrclone -m" works even within the same pool (see below). I have just verified that again with NW 19.3.0.1 on Windows.
Unfortunately, the forum software did not let me paste the command-line output. The error message was: "You used a bad word, "P i p e" in the body of your post. Please clean up the body before posting.".
Very bad - I do not find this funny
So please excuse this 'short' response.
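Roughly, the sequence I mean looks like this (volume name, pool and save set ID are placeholders, since I cannot post the real output):

  # protect the source volume, then migrate the save set within the same pool
  nsrmm -o readonly DD_Backup_001
  nsrclone -m -b "Backup_Pool" -S <ssid>
  # afterwards the save set should only be listed on the destination volume
  mminfo -q "ssid=<ssid>" -r "volume,client,name,savetime"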
ChiXC · 40 Posts · December 31st, 2020 15:00
Wow! I didn't know that 'nsrclone -m' can be used to migrate save sets to a backup pool. In the NetWorker Command Reference Guide, the explanation of the '-b' option of 'nsrclone (1m)' is:
"Specifies the media pool to which the destination clones should be sent. The pool may be any pool currently registered with nsrd(1m) that has its status set to clone."
I tested migrating backup save sets to a volume in another backup pool and got mixed results. It worked for save sets created by the regular backup agents, but failed for save sets created by the VMware vProxies. The VM backups are synthetic fulls. It seems 'nsrclone -m' launches an 'nsrrecopy' job to move the data, and a similar error, "175297:nsrrecopy: Unable to set up the direct save with server 'NetworkerServer': Timed out.", occurred again. Is there any special setting to allow 'nsrclone -m' to be used to migrate VM backup save sets as well? Thanks
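For completeness, this is roughly how I tell the two kinds of save sets apart before migrating (the volume name is a placeholder; the vProxy save sets stand out by their name and client columns):

  # list everything on the source volume; vProxy/VM save sets are identified by name
  mminfo -q "volume=DD_Backup_001" -r "ssid,client,name,savetime,level"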
ChiXC · 40 Posts · January 16th, 2021 20:00
Although I still don't fully understand why assigning a different storage node to a DDBoost device affects whether save sets can be migrated onto the device with 'nsrstage' or 'nsrclone -m', I think I should close this old thread by marking bingo.1's answer as the solution, as it did answer the question of which storage node should be assigned. Thanks bingo.1.