
1 Rookie • 107 Posts

September 10th, 2015 10:00

Isilon Snapshots from other clusters.

Just want to validate what I already assume:

If cluster A holds data with snapshots going back 6 months and I migrate to cluster B, then as long as I also migrate the snapshots off A to B, I should still be able to restore as far back as 6 months on B from the snapshots taken on A.  Correct?

1.  Cluster A has 500TB of data, 6 months of snaps.

2.  SyncIQ data AND snapshots to Cluster B.

3.  A snap-restore request is made on cluster B for data from 4 months ago.

I shouldn't need to go back to cluster A to retrieve the snaps; I should be able to restore locally on cluster B.  Correct?

We haven't had this scenario occur before, but it did happen the other day, and we had to pull the data from cluster A because the snaps were not replicated to cluster B.  So if I SyncIQ the snaps from A to B (which is all the same data pointing to the same files), I should have no problem restoring files from snaps as far back as 6 months, correct?

104 Posts

September 15th, 2015 13:00

Brian_Coulombe_Disney,

Spot on, this would be an RFE (request for enhancement). I would recommend working with your sales team to have one opened.

104 Posts

September 10th, 2015 13:00

Brian_Coulombe_Disney,

No, you will not be able to simply move snapshots from cluster A to cluster B and use snap-restore (snaprevert) to restore snapshots taken on a different cluster.

A few points as to why:

Snap-revert only works on the directory where the snapshot was taken, so a different directory name (e.g., /ifs/data/sync_source on cluster A vs. /ifs/data/sync_target on cluster B) will simply not allow a snapshot revert to work, because the target path for the revert does not match the path recorded in the snapshot.

Also, since snapshots use LINs (logical inode numbers) and block references to restore previous versions, these will not line up between the two clusters.

E.g., /ifs/data/sync_source on cluster A will not have the same LIN as /ifs/data/sync_source on cluster B.
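If you want to see this for yourself, compare the LIN of the same file on both clusters. This is a rough sketch (the file name is hypothetical, and the exact output of isi get -D varies by OneFS version, so treat it as illustrative):

On cluster A:
# isi get -D /ifs/data/sync_source/somefile.txt | grep -i LIN
On cluster B:
# isi get -D /ifs/data/sync_target/somefile.txt | grep -i LIN

The two LINs will differ, because each cluster assigns them independently, so a snapshot taken on A has nothing valid to reference on B.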

9 Legend • 20.4K Posts

September 10th, 2015 13:00

Brian,

are you migrating from cluster A to cluster B for good, or is this some kind of DR exercise?

1 Rookie • 107 Posts

September 11th, 2015 04:00

@Shane:  If we're migrating off one cluster to another, the paths remain the same, and the snapshots come over with the SyncIQ data, I can't think of a logical reason why it wouldn't work.  The snapshot refers to the same data and the same paths, just on a different cluster.

@Dynamox:  Yes, migrating from cluster A to cluster B.  I don't want to have to keep cluster A around for 6 months just to hold snapshots when the cluster needs to be decommissioned.


Appreciate the feedback.

9 Legend • 20.4K Posts

September 11th, 2015 07:00

how many of these "monthly" snapshots do you have that need to be preserved?  Do you have a dedupe license on cluster B?

104 Posts

September 11th, 2015 07:00

Brian_Coulombe_Disney,

Same data, same paths, but the block references and LINs still will not match up, as explained above.

Moving files from cluster A to cluster B using SyncIQ does not guarantee they will get the same LINs or be written to the same blocks.

Therefore, a LIN referenced in a snapshot taken on cluster A will not equate to the same LIN on cluster B.


Also, you can't write files to /ifs/.snapshot, as it's a virtual (read-only) directory.
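For example (hypothetical snapshot and file names; the exact error text varies by OneFS version), any attempt to write into the snapshot tree is rejected:

# touch /ifs/.snapshot/daily_2015-03-10/newfile.txt
touch: /ifs/.snapshot/daily_2015-03-10/newfile.txt: Read-only file system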

1 Rookie • 107 Posts

September 11th, 2015 09:00

@Shane:  So in essence, what you're saying is that whenever you upgrade (migrate) to a new cluster, you lose your snapshot history on the new cluster and MUST keep your old cluster around until the snapshots expire.  Not what I wanted to hear...  That's a feature enhancement request right there!  Also, /ifs/.snapshot not being migratable is not good either.  You'd think SyncIQ could be configured to write to the same LIN/block on the target cluster *IF* it's a brand-new cluster without existing data.  That would solve the problem, no?

@Dynamox:  We cannot use dedupe right now so that's not an option.  Wish we could.

(also, why are my posts being moderated?!)

104 Posts

September 11th, 2015 14:00

Dynamox,

I apologize, please try again.

9 Legend • 20.4K Posts

September 11th, 2015 14:00

Shane,

Can't open that link; is it customer viewable?  I am getting some weird Salesforce error message.

104 Posts

September 11th, 2015 14:00

Brian_Coulombe_Disney

Yes, the way this cluster was migrated using SyncIQ is not the way to maintain your snapshots.

Since you are retiring the old nodes, the following process should have been used, as it would have maintained all the snapshots:

https://support.emc.com/kb/16450

1 Rookie • 107 Posts

September 14th, 2015 10:00

I cannot view it either.

Object type not accessible. Please check permissions and make sure the object is not in development mode: Insufficient access rights: you cannot access draft articles..

104 Posts

September 14th, 2015 14:00

Brian_Coulombe_Disney,

It should be customer viewable; the request was made a while ago.  For now I will supply a text version of the article's content.  It's a little sloppy, as it didn't come over from the .pdf as well as I expected.  However, the link should be up and working in the next few days, as I just validated that it has been approved.

KNOWLEDGE BASE

Knowledge Base Article: 000016450

OneFS 6.5 and later: How to replace nodes in a cluster with nodes of a different type using SmartPools (000016450)

Version: 0
Audience: Level 30 = Customers
Article Type: How To
Last Published: Mon Oct 06 18:32:51 GMT 2014
Validation Status: In Technical Review

Introduction
This article explains how to replace old nodes in a cluster with new nodes of a different type. The phases to do this are:

1.  Join the new nodes to the cluster.

2.  Migrate the data from the old nodes to the newly added nodes.

3.  Migrate client connections from the old nodes to the new nodes.

4.  Smartfail and remove the old nodes from the cluster.

This article applies to OneFS 6.5 and later.

Requisite tools or skills:

The following prerequisites must be met:
- All new nodes are a different node type from the existing cluster nodes.
- Three or more nodes are required for each SmartPool.
- A valid SmartPools license (trial or permanent) is required.
- A sufficient number of internal InfiniBand IP addresses is available for all new nodes.
- Adequate disk space is available on the new disk pool/node pool to receive the existing data from the old nodes. Note: In OneFS 7.0, the "disk pool" terminology was renamed to "node pool".
- The cluster is running at least the minimum supported version for all existing nodes and the nodes to be added. For more information, see the Isilon Supportability and Compatibility Guide.
- All new nodes must be racked, connected to the InfiniBand network, and powered on.
- The Onsite Verification Test (OVT) has completed on all new nodes without errors.

Resolution:
Procedure
Phase 1: Join the new nodes to the cluster

Join the new nodes to the existing cluster through the OneFS web administration interface, command-line interface, or front panel. The new nodes are added to a new, separate disk pool/node pool. For additional information, refer to the OneFS Web Administration Guide.

Phase 2: Migrate the data from the old nodes to the newly added nodes
Follow the procedure for your version of OneFS:
OneFS 7.1
OneFS 7.0
OneFS 6.5
OneFS 7.1

1.  Log in to the OneFS web administration interface.

2.  Click File System Management > SmartPools > File Pool Policies.

3.  In the File Pool Policies section, click Modify defaults in the Actions column for the Default policy.

4.  In the Default File Pool Protection Settings section, choose the pool that contains the new nodes from the  Data storage target drop-down list.

5.  Click Submit.

6.  The SmartPools job automatically starts at 10 PM that day and begins migrating data from the existing nodes to the newly added nodes.


To manually start the SmartPools job, do the following:

1.  Click Cluster Management > Job Operations > Job Types.

2.  Under the Actions column for the SmartPools job type, select Start Job from the More drop-down list.

3.  In the Start a Job window, click Start Job.

4.  Click Cluster Management > Job Operations > Job Summary to confirm that the SmartPools job is running and to monitor the status of the data migration. If the SmartPools job is no longer listed, click Job Events to confirm that the job completed successfully.

5.  After the job completes, all data can be accessed on the node pool that contains the new nodes.
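If you prefer to drive this from an SSH session instead of the web interface, a hedged equivalent using the OneFS 7.1 job engine syntax (earlier releases use different commands, so verify with isi job --help on your cluster) is:

# isi job jobs start SmartPools
# isi job jobs list

The first command kicks off the migration; the second confirms the SmartPools job is running.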

OneFS 7.0

1.  Log in to the OneFS web administration interface.

2.  Click File System Management > SmartPools > File Pool Policies > Settings.

3.  In the Default File Pool Protection Settings section, choose the pool that contains the new nodes from the Data storage target drop-down list.

4.  Click Submit.

5.  The SmartPools job automatically starts at 10 PM that day and begins migrating data from the old nodes to the newly added nodes.


To manually start the SmartPools job, do the following:

1.  Click Cluster Management > Operations.

2.  Under the Running Jobs section, click Start Job.

3.  In the Start Job section, choose SmartPools from the Job drop-down list.

4.  Click Start.


5.  Click Cluster Management > Operations > Operations Summary to confirm that the SmartPools job is running and to monitor the status of the data migration.

6.  Review the Recent Job History section on the Operations Summary page to confirm that the SmartPools job completes successfully.

7.  After the job completes, all data can be accessed on the node pool that contains the new nodes.

OneFS 6.5

1.  Log in to the OneFS web administration interface.

2.  Click File System > SmartPools > File Pool Policies.

3.  In the Actions column for the Default policy, click Modify defaults.

4.  In the Default File Pool Policy Settings section, under Default Protection Settings, select the pool that contains the new nodes from the Data pool drop-down list.

5.  Click Submit.

6.  The SmartPools job automatically starts at 10 PM that day and begins migrating data from the old nodes to the newly added nodes.


To manually start the SmartPools job, do the following:

1.  Click Cluster > Operations > Summary.

2.  Under the Running Jobs section, click Start job.

3.  Select SmartPools from the Job drop-down list.

4.  Click Start.

5.  Click Cluster > Operations > Summary to monitor the status of the data migration and to confirm that the SmartPools job is running, or has recently completed.

6.  Review the Recent Job History section on the Summary page to confirm that the SmartPools job completes successfully.

7.  After the job completes, all data can be accessed on the disk pool that contains the new nodes.


Phase 3: Migrate client connections from the old nodes to the new nodes.
If the existing IP address pool is large enough to accommodate the new nodes, suspend the old nodes in SmartConnect so that new client connections only go to the new nodes:

1.  Open an SSH connection on any node in the cluster and log on using the "root" account.

2.  Run the following command to suspend a node from SmartConnect, where <lnn> is the logical node number of the node to suspend. This command prevents new connections to the node and does not interrupt clients that are already connected.
# isi networks modify pool --sc-suspend-node <lnn>
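For example, to suspend logical node 3 (a hypothetical invocation following the article's syntax; confirm the exact arguments with isi networks modify pool --help on your release, and use isi status -q to look up logical node numbers):

# isi status -q
# isi networks modify pool --sc-suspend-node 3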

If the existing IP address pool is not large enough to accommodate the new nodes, suspend each old node from SmartConnect and reallocate its IP address to a new node once client connections to the old node have been terminated.

1.  Open an SSH connection on any node in the cluster and log on using the "root" account.

2.  Run the following command, where <lnn> is the logical node number of one of the nodes to suspend:
# isi networks modify pool --sc-suspend-node <lnn>

3.  In the OneFS web administration interface, do one of the following, depending on your version of OneFS:
In OneFS 7.0 and later: Click Cluster Management > Network Configuration.
In OneFS 6.5: Click Cluster > Networking.

4.  Under the Subnets section, select the subnet where the old nodes are located.

5.  In the IP Address Pools section, click Edit next to the Pool members section of the pool where the old nodes are located.

6.  From the Interfaces in current pool box, drag the old node's interface to the Available interfaces box.

7.  From the Available interfaces box, drag the desired interface of one of the new nodes to the  Interfaces in current pool box.

8.  Click Submit. The new node should pick up the next available IP address.

9.  Repeat this procedure for all remaining old and new nodes.
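Before reclaiming an old node's IP address, it is worth confirming that its client connections have actually drained. One hedged way to check on OneFS 7.x (flags and output vary by release) is to list active client sessions and watch for the suspended node to go quiet:

# isi statistics client

Once the suspended node no longer shows active sessions, its interface can be swapped out of the pool safely.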


Phase 4: Smartfail and remove the old nodes from the cluster
NOTE: Smartfail only one node at a time.


1.  In the OneFS web administration interface, do one of the following, depending on your version of OneFS:
In OneFS 7.0 and later: Click Cluster Management > Hardware Configuration > Remove Nodes.
In OneFS 6.5: Click Cluster > Cluster Management > Remove Node.

2.  Click the radio button next to an old node and click Submit to begin the Smartfail process and removal from the cluster. Once the node has been successfully removed, it disappears from the list of nodes on the main cluster status page.

3.  Once the FlexProtect job completes and the node is removed from the cluster, repeat the previous steps for the remaining old nodes.
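The smartfail can also be started from the CLI. A rough sketch for OneFS 7.x, where <lnn> is the logical node number of the node being retired (the syntax changed in later releases, so verify with isi devices --help):

# isi devices -a smartfail -d <lnn>

Watch FlexProtect progress while the node drains with:

# isi status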

Product: Isilon NL-Series, Isilon X-Series, Isilon S-Series, Isilon OneFS, Isilon OneFS 6.5.0, Isilon OneFS 6.5.1, Isilon OneFS 6.5.2, Isilon OneFS 6.5.3, Isilon OneFS 6.5.5.0, Isilon, Isilon SmartPools

External Source: Primus

Primus/Webtop solution ID: emc14003419

1 Rookie • 107 Posts

September 15th, 2015 04:00

Hi Shane,

Just adding nodes, moving data to the new pool, and then smartfailing out the old nodes is the IDEAL way to go.  In this case we had a new cluster being used for a different purpose, which then became "multi-tenant": three new nodes were added to create another pool of data for NAS, while the other nodes were used as a backup target.  We could not bring down the NAS cluster (we can never just shut down NAS; we cannot take any outages), so SyncIQ was the only option.

Trust me, if that was an option for us, I wouldn't have needed to come here and post this!

(PS: I have merged old and new nodes and smartfailed many a node over the years, but we really do need a way to migrate snapshots along with the source data that doesn't rely 100% on merge/smartfail.  We should be able to do this with SyncIQ, so I digress back to the same statement: "Feature Enhancement?")

4 Operator • 1.2K Posts

September 15th, 2015 21:00

As a practical solution for the time being, have you considered "poor man's snapshots" via some fancy rsync juggling with --compare-dest DIR and --link-dest DIR?  A minimal sketch follows below.
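A minimal sketch of the idea, assuming hypothetical paths under /ifs and an rsync new enough to support --link-dest: each run copies only changed files into a dated directory and hard-links everything unchanged against the previous run, so every dated tree reads like a full snapshot but only costs the delta.

#!/bin/sh
# Poor man's snapshots: hard-link unchanged files against the previous run.
SRC=/ifs/data/sync_source/      # hypothetical source tree (trailing slash: copy contents)
DEST=/ifs/data/rsync_snaps      # hypothetical snapshot store
TODAY=$(date +%Y-%m-%d)

mkdir -p "$DEST"
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"

# Re-point "latest" at the newest run so the next invocation links against it.
rm -f "$DEST/latest"
ln -s "$DEST/$TODAY" "$DEST/latest"

Restoring a file from four months back is then just a plain copy out of the matching dated directory.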

-- Peter

1 Rookie • 107 Posts

September 16th, 2015 04:00

Peter,


I haven't done that yet, but if you have the steps, I'd love to get your take on how to do it.  Sometimes I have to juggle whatever options are available, because technically we cannot have any downtime.

Appreciate it!
