January 14th, 2014 11:00

Network Configuration for iSCSI using NS120 datamovers

Anyone have a best practice document for network configuration of the NS120 datamovers?

We want to re-purpose our NS120 for DR use.

The CX4-120 "half" has FCoE connections; the Celerra "half" has two datamovers, each with two copper 1 Gb ports (cge0 and cge1) and two fiber 10 Gb ports (fx0, fx1).

I would like to understand what is the best practice for configuring the Celerra "half".

We would like to utilize the copper connections only.

Datamovers will be split over two switches for HA.


January 14th, 2014 21:00

Please consider moving this question as-is (no need to recreate it) to the proper forum for maximum visibility. Questions posted to a user's own "Discussions" space don't get the same attention and can go unanswered for a long time.

You can do so by selecting "Move" under ACTIONS along the upper-right.  Then search for and select: "Celerra Support Forum".


DAndrys wrote:

We want to re-purpose our NS120 for DR use.

Quick question: you mention this will be used for "DR". What will the source be? Another Celerra, or maybe a VNX? If so, just a reminder: if the source array is serving block storage from its SPs, there is no array-based solution that replicates to a peer array's data movers (not even iSCSI). The exceptions would be something like a VNXe presenting iSCSI storage that you want to replicate, or a plan to use host-based replication or a storage virtualization solution.

Anyone have a best practice document for network configuration of the NS120 datamovers?

When you ask about the "network configuration", I assume the question is whether to do any link aggregation across the 2x cge ports on the active data mover. No; the best practice for iSCSI targets is not to aggregate the component devices (cge, in this case). Instead, present them to the host as individual paths/interfaces, each with its own unique IP, and let the host-based MPIO solution manage load balancing and path failover.
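As a rough sketch of what that looks like on the Control Station, each cge port gets its own interface and IP rather than being bundled into a trunk. The data mover name (server_2), interface names, and all addresses below are placeholders for your environment, not values from this thread:

```shell
# Sketch only -- server name, interface names, and IPs are assumptions.
# Give each copper port on the active data mover its own interface/IP,
# ideally on separate subnets so each is a distinct iSCSI path:
server_ifconfig server_2 -create -Device cge0 -name cge0-iscsi \
    -protocol IP 192.168.10.10 255.255.255.0 192.168.10.255
server_ifconfig server_2 -create -Device cge1 -name cge1-iscsi \
    -protocol IP 192.168.20.10 255.255.255.0 192.168.20.255

# Verify the interfaces are up:
server_ifconfig server_2 -all
```

With the two ports cabled to your two switches, each interface becomes an independent target portal, and path failover happens on the host side rather than via aggregation at the switch.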

For instance, with Windows, iSCSI is best served by multiple paths: let a host-based multipathing solution such as PowerPath (configured for Celerra), or MC/S (Multiple Connections per Session) configured in the Microsoft software iSCSI initiator, manage the load balancing and failover, rather than doing it at the switch level via link aggregation.
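If you go the native Microsoft route instead of PowerPath, the host side amounts to enabling the MPIO feature and letting it claim the iSCSI disks. A minimal sketch, assuming a Windows Server host with the MPIO feature installed (a reboot is typically required after claiming):

```shell
:: Sketch only -- run from an elevated prompt on the Windows host.
:: Tell MPIO to claim disks presented over the Microsoft iSCSI bus:
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: After reboot, confirm the claimed devices and their paths:
mpclaim -s -d
```

From there you would log in to each target portal (one per data-mover interface) from the iSCSI Initiator, enabling multipath on each session, so both cge paths are active to the host.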
