
July 10th, 2019 05:00

Storage Node OS change

We have a NetWorker 18.2.0.1 install with one central server and two storage nodes. The central server is now running Windows Server 2016, but the storage nodes are still running Windows Server 2008 R2.

The storage nodes are currently physical and serve double duty, hosting both Data Domain devices and physical tape libraries; as part of this process we intend to separate the DD hosting onto a virtual machine.

 

This morning I tested creating a new device on the existing Data Domain from a new Windows 2019 VM. This worked, and I was able to carry out a backup to it, but it caused all of the DD devices on the existing storage node to be flagged as 'Suspected'.

I've tried to confirm whether a Data Domain appliance can only be accessed by a single storage node, but my Google skills have failed me. Can anyone confirm this, or should it have worked, meaning I've done something wrong?

 

Also any general advice on how I might tackle this upgrade/migration would be very welcome.

I previously upgraded/migrated the central server by building a new VM, copying the NetWorker folders, and renaming and re-IPing everything to match, but I was sure there was a less drastic way for the storage nodes.

June 10th, 2020 07:00

Fast forward several months....

 

I've since upgraded the firmware on our DD appliances from 5.7 to 6.2 (via 6.1) successfully, and upgraded NetWorker from 18.2 to 19.2.

 

I tried this same process again today, and it worked immediately, with no suspect devices flagged. So it's now just a process of some further testing, switching backups to the new storage nodes, and waiting out the retention times.

I strongly suspect it was the very old DD OS that was the source of the original issue, but I'm very happy the issue is fixed regardless of the cause.


July 10th, 2019 05:00

I do not know what exactly you have done, but yes, I can confirm that a single DD can be used by multiple storage nodes. We have a pretty similar setup.

But did you install a 2019 storage node or a server? And do not forget that a mixed environment of Windows 2016 and 2008 R2 might cause problems if a Windows 2019 SN is installed, which must run NW 19.1.

BTW, it is the media and/or the save sets that may be set to 'suspect'; I doubt that such a status is also valid for the device.

 

 

July 10th, 2019 06:00

Thanks for the speedy reply,

I installed the NetWorker 18.2.0.1 storage node components on a Windows 2019 server. I've double-checked the support matrix, and this does show as a supported configuration. (I really miss the web app compatibility guide for the latest versions...)

 

Are you using Secure Multi-Tenancy to get multiple storage nodes onto a single DD?

I haven't configured that, but I'm reading up on it now in case that's what I'm missing. It looks straightforward enough, but I don't want to use it if it's unnecessary.

 

I didn't fully look into why the devices had been flagged as suspect, but it appeared to me that the new storage node was locking out access. Once I unmounted and disabled the new device I'd created, I was able to unmount, disable, re-enable and mount the existing devices to clear the suspect flags and get backups rolling again.
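For anyone repeating this, the recovery cycle above maps onto the standard NetWorker CLI. This is a hedged sketch rather than a tested procedure: `nwserver`, `sn1` and the device name are placeholders, and the script defaults to a dry run (RUN=echo) that only prints the commands, so it can be reviewed before touching a live datazone.

```shell
#!/bin/sh
# Sketch of the unmount / disable / re-enable / mount cycle described above.
# All names are placeholders -- substitute your own NetWorker server,
# storage node and device. RUN=echo makes this a dry run that just prints
# each command; set RUN= (empty) on a NetWorker host to actually run them.
RUN="echo"
NSR_SERVER="nwserver"          # placeholder NetWorker server name
DEVICE="rd=sn1:dd01_dev01"     # placeholder remote (storage-node) device

# 1. Unmount the volume from the device.
$RUN nsrmm -s "$NSR_SERVER" -u -f "$DEVICE"

# 2. Disable the device by updating its NSR device resource via nsradmin.
$RUN nsradmin -s "$NSR_SERVER" -i /dev/stdin <<'EOF'
. type: NSR device; name: rd=sn1:dd01_dev01
update enabled: No
EOF

# 3. Re-enable it the same way.
$RUN nsradmin -s "$NSR_SERVER" -i /dev/stdin <<'EOF'
. type: NSR device; name: rd=sn1:dd01_dev01
update enabled: Yes
EOF

# 4. Mount the volume on the device again.
$RUN nsrmm -s "$NSR_SERVER" -m -f "$DEVICE"

# If individual save sets were also marked suspect,
# "nsrmm -o notsuspect -S <ssid>" clears the flag per save set.
```

The exact nsradmin attribute syntax should be checked against the man pages for your NetWorker release before running this for real.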

 

Thanks for your insight, it's much appreciated.


July 10th, 2019 06:00

You are right with respect to the supported config.

In our environments we have separate data zones - sorry, no multi-tenancy. So all SNs are controlled by the same NW server.

 


July 10th, 2019 20:00

Multi-tenancy is not really needed for you, but it does no harm either. I would consider multi-tenancy only if I wanted to use a different mtree/SU, as by default NetWorker tends to store everything under the same mtree/SU. But there is no real advantage when it comes to performance.
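To illustrate the mtree/SU point: on a DD Boost device, the storage unit is part of the device access information path, so two storage nodes can be pointed at the same SU or at separate ones. A hedged sketch of the relevant NSR device attributes follows; the hostnames, storage-unit and folder names are all placeholders, and you should compare against your own resources in nsradmin before copying anything:

```
# Device on storage node sn1, writing into storage unit SU_NW:
type: NSR device;
name: rd=sn1:dd01_dev01;
media type: Data Domain;
device access information: dd01:/SU_NW/sn1_dev01;

# Device on storage node sn2 -- same DD and, by default, the same SU.
# Pointing it at a different SU (e.g. dd01:/SU_NW2/...) would give it
# a separate mtree on the appliance:
type: NSR device;
name: rd=sn2:dd01_dev02;
media type: Data Domain;
device access information: dd01:/SU_NW/sn2_dev02;
```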

Devices are usually marked as suspect when the storage node is not communicating with the backup server, so it's more likely a connectivity issue.

July 11th, 2019 07:00

@bingo.1 

Ahh, that might make some sense then.

 

When I look at our Data Domain, the MTree is named for our sole data zone. I wonder if having multiple data zones is the key difference and why it isn't working for me as I'd like.

July 11th, 2019 07:00

@crazyrov 

Both storage nodes definitely have full access to the DD appliance without any connectivity issues; it was when both storage nodes had devices active on the same DD that the suspect flag got waved.

Unfortunately, it was the devices for our live backups that were suspect, rather than my new test device, so I had to roll back fairly quickly, which left little time for fault finding.

 

This week then got insanely busy unexpectedly, so I haven't had a chance to investigate much further, but both your and Bingo's input has been most helpful. Hopefully I can get to the bottom of it.

 

cheers!
