December 27th, 2016 12:00
NetWorker 8.2.1.6 issue with backing up the backup server itself
Good day,
I have been handed a test environment to test updating to the later versions of NetWorker. Reading over the release notes for 8.2.1.8, I started by trying a bootstrap and it failed miserably (more on how I checked that below). So, for now, I am first trying to at least get a backup of the server itself. That, too, is failing. Here are the details of our setup at this time:
NetWorker version is 8.2.1.6
Running on RHEL 6.6
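Regarding the failed bootstrap: this is roughly how I have been checking whether a bootstrap ever landed in the media database, and how I re-ran just the bootstrap/index portion of the group (a sketch using the standard NetWorker CLI on the server; Lee_Test is the group described below):

    # Report the bootstrap save sets recorded in the media database
    mminfo -B

    # Re-run only the client indexes and the server bootstrap for the group
    savegrp -O Lee_Test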
I have set up the client, we'll call it lee.com, with the following settings (there is an nsradmin sketch for double-checking these after the list):
Virtualization - None
Index Management - Browse and Retention policies are both Month
Archive Management - File inactivity threshold and File inactivity alert threshold are both set to 0
Checkpoint Restart - None
Backup:
- Scheduled Backup is checked
- Client direct is checked
- Block based backup is not checked
- Directive is Unix standard directives
- Save set is All (incidentally, I cannot browse for any other save sets)
- Group is set to "Lee_Test"
- Pool is empty
- Schedule is "Full Every Wednesday"
- Backup renamed directories is not checked.
On the Apps & Modules tab, nothing is checked or filled in other than No proxy backup
Globals (1 of 2) tab uses a Parallelism of 12 and a Priority of 500, with Save session distribution set to "max sessions"
Globals (2 of 2) tab is all empty other than Storage Nodes, which is set to dd01Test, dd02Test and nsrserverhost
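In case I have misread any of those GUI fields, this is roughly how I have been dumping the client resource from the command line to double-check them (a sketch using nsradmin on the backup server; lee.com is the client name from above):

    nsradmin -s lee.com
    nsradmin> print type: NSR client; name: lee.com

That prints every attribute of the client resource (save set, parallelism, storage nodes, and so on) exactly as the server stores them.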
I have the group Lee_Test set up as follows (the commands I use to run it by hand are sketched after the Configuration list):
Setup - Start time is 0:00 and the group is Enabled
Clones are not selected
No Output is configured
Snapshots are not enabled
Client Overrides have the following settings:
- Interval: 24:00
- Force incremental is checked
- Browse policy is Month
- Retention policy is Month
- File inactivity threshold and File inactivity alert threshold are set to 0
Probe is not selected
Configuration is set up with the following settings:
- Autorestart is disabled
- Restart window is 12:00
- Success threshold is Warning
- Client retries is 1
- Client retry delay is 0
- Inactivity timeout is 0
- Savegrp parallelism is 4
- Options are Manual restart, Verify synthetic full and Revert to full when synthetic full fails.
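When testing, I have mostly been kicking the group off by hand so I can watch the output, roughly like this (a sketch; run as root on the backup server, with Lee_Test as the group above):

    # Run the group manually with verbose output
    savegrp -v Lee_Test

    # Probe only: report what would be backed up without writing any data
    savegrp -pv Lee_Test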
We currently have two Data Domain devices attached to the backup server as well. We will call them dd01test and dd02test.
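To confirm both devices are labeled and mounted before a run, I have been checking them along these lines (a sketch; nsrmm with no options lists devices and their mounted volumes, and the nsradmin query prints the device resources):

    nsrmm
    nsradmin -s lee.com
    nsradmin> print type: NSR device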
When I start the backup, it appears to run fine. Four save sets load into the Currently Running section and the rest sit in the Waiting to Run section of the backup details, and they all fail with similar errors (sanitized) like this:
86704:save: Successfully established DDCL session for save-set ID '3949114980' (server.com:/opt/oralocal).
Termination request was sent to job 96038 as requested; Reason given: Inactive
Unable to find any full backups of the save set 'server.com:/opt/oralocal' in the media database. Performing a full backup.
server.com:/opt/oralocal: retried 1 times.
server.com:/opt/oralocal aborted, inactivity timeout has been reached.
And the whole time nothing is saved. The rate (KB/s) never changes, and there are no messages in the Devices section about any writing being started.
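To double-check that "Unable to find any full backups" message, I queried the media database along these lines (a sketch; server.com:/opt/oralocal is the sanitized save set from the log above):

    # Look for any full-level save sets for that client and path
    mminfo -avot -q "client=server.com,name=/opt/oralocal,level=full"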
I made one change by adding in the 2nd storage node to the client and, coincidentally, did see some writing done on the dd01test storage node; however, that finished rather quickly. I restarted the backup after making the change, and right now the save set is sitting at 7% complete, with the first 4 save sets currently running and the remaining 7 save sets waiting to run. The backup has now been running for over 40 minutes. I have been trying to figure this out almost all day, so any assistance would be greatly appreciated.
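As a further isolation step, I am considering a single manual save of one path straight from the command line, bypassing the group scheduler entirely, something like this (a sketch; Default is a placeholder pool name and the path is just an example):

    # Manual save of one directory to the named pool
    save -s lee.com -b Default /opt/oralocal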
I don't want to start an upgrade path if I cannot recover to the last known good point.
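For what it's worth, my understanding is that on 8.2.x, recovering to the last known good point means restoring the media database and resource files from the most recent good bootstrap with mmrecov, which prompts interactively for the device and the bootstrap save set ID reported by mminfo -B:

    mmrecov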
Sincerely,
Lee