July 2nd, 2012 09:00

NX4, Network Design question

I have an existing NX4 serving CIFS to Windows clients (significant CIFS use in the environment) and NFS to a VMware environment (about 30 guests across 3 hosts).

The current NX4 design has 2 cge ports configured with LACP supporting CIFS in one VLAN and the other 2 cge ports configured with LACP supporting NFS in a different VLAN. All switch ports are access ports. Jumbo Frames are enabled on the NFS ports.
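For reference, the two LACP devices on the Data Mover were created along these lines (trunk and cge device names are from memory, so treat this as a sketch rather than the exact config):

  server_sysconfig server_2 -virtual -name trk_cifs -create trk -option "device=cge0,cge1 protocol=lacp"
  server_sysconfig server_2 -virtual -name trk_nfs -create trk -option "device=cge2,cge3 protocol=lacp"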

I now have to introduce iSCSI into the picture, as the customer is virtualizing Exchange, and NFS datastores with Exchange VMs are not supported.

I have available network ports on the ESX side to dedicate to iSCSI traffic, so I can dedicate them to vmkernel ports, attach the ESXi software iSCSI initiator, enable MPIO, etc., so no problem there.
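For what it's worth, the ESXi side should just be the usual software initiator setup, something like this (the vmhba/vmk names and the target address are placeholders for my environment):

  # enable the software iSCSI initiator
  esxcli iscsi software set --enabled=true
  # bind the two dedicated vmkernel ports to it (vmhba33/vmk1/vmk2 are placeholders)
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
  # point it at the NX4 target portal (placeholder address)
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.50.10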

My challenge is how to best introduce iSCSI networking from the NX4 side.  

Ideally, I'd have two NX4 network ports not configured in an LACP group, to do end-to-end native MPIO. In order to do that, I would have to move my NFS traffic over to the CIFS LACP group, change the CIFS switch ports to trunk ports, and have CIFS and NFS traffic traverse the same NX4 physical links. Clearly, this will push the utilization of these network links up, and I am not sure what impact enabling jumbo frames on these ports will have on CIFS traffic that is not using jumbo frames. So, not ideal.
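If I did go that way, the NX4 side would look roughly like this, moving the NFS interface onto the CIFS trunk once those switch ports are trunked (the trunk/interface names, VLAN ID, and addresses are all placeholders, and the syntax is from memory of server_ifconfig, so please double-check):

  # recreate the NFS interface on the CIFS LACP device
  server_ifconfig server_2 -create -Device trk_cifs -name nfs_int -protocol IP 192.168.20.10 255.255.255.0 192.168.20.255
  # tag it for the NFS VLAN
  server_ifconfig server_2 nfs_int vlan=20
  # enable jumbo frames on it; this is the part I'm unsure about with CIFS sharing the links
  server_ifconfig server_2 nfs_int mtu=9000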

The other option would be to combine NFS and iSCSI traffic on the current NFS LACP connection. I would add an iSCSI target with multiple IPs and still use VMware native NMP on the ESX side, but it would be LACP on the NX4 side, so again not ideal. On the other hand, there would be no jumbo frame issues, and I think I would have a better balance of network load.
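On the NX4 side, that option would be roughly the following (the alias, portal addresses, LUN size, and file system name are placeholders, and the server_iscsi syntax is from memory, so verify against the docs; I also still need to confirm whether the two portals belong in one portal group or separate ones for NMP to see two paths):

  # make sure the iSCSI service is running on the Data Mover
  server_iscsi server_2 -service -start
  # create a target with two network portals on the NFS LACP interfaces
  server_iscsi server_2 -target -alias esx_tgt -create 1:np=192.168.20.11,192.168.20.12
  # carve a LUN for the Exchange datastore out of an existing file system
  server_iscsi server_2 -lun -number 0 -create esx_tgt -size 512000 -fs fs_iscsi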

Are there other options that I should consider? What would be the preferred option in your opinion?

Second question: on the NX4, with iSCSI being served from the Data Movers, with either approach, can the iSCSI connections reside in the same subnet, or is it required that they be in separate subnets, or does it not matter?

Thanks

Jim

296 Posts

July 11th, 2012 02:00

Hi Jim,

Let me answer your second question first. The iSCSI connections can reside on the same or different subnets; it makes no difference as long as the connectivity is fine.

I find the second option for implementing iSCSI more useful, but I am not sure what exactly you mean when you say "I would add an iSCSI target with multiple IPs and still use VMware native NMP on the ESX side, but it would be LACP on the NX4 side, so again not ideal."

As far as I understand, LACP would provide high availability for the iSCSI connections here, but I am not sure about VMware native NMP on the ESX side.

Sameer Kulkarni

35 Posts

July 12th, 2012 07:00

Sameer,

The recommendation for iSCSI implementations is not to use LACP connections, but rather to use single network connections in conjunction with MPIO for high availability.
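On the ESX side that works out to port binding plus round robin on the device, something like this (the naa ID is just a placeholder for the NX4 LUN):

  # set the path selection policy for the NX4 LUN to round robin
  esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
  # confirm the policy and the paths it sees
  esxcli storage nmp device list --device=naa.xxxxxxxxxxxxxxxx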

The recommendation for NFS implementations is to use LACP for high availability.

So, to rephrase the question for clarification: when doing both, and using the same pair of NICs, is it acceptable to use a non-LACP connection on the ESX side with MPIO and an LACP connection on the NX4 side with multiple IPs?

I have been advised by EMC through other channels NOT to mix iSCSI and NFS traffic on the same NICs for performance reasons; a better approach is to mix the CIFS and NFS traffic.

That has me worried from a network utilization perspective, so I have a new plan.

We are going to move all of the VM storage off NFS and onto iSCSI, eliminating NFS entirely. At the end, we will have two NICs doing CIFS, and the other two doing iSCSI with jumbo frames and MPIO.
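For the jumbo frame piece on the ESXi side, it should just be a matter of raising the MTU on the iSCSI vSwitch and its vmkernel ports, along these lines (vSwitch and vmk names are placeholders, and the NX4 interfaces and switch ports obviously need a matching MTU):

  esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000
  esxcli network ip interface set --interface-name=vmk2 --mtu=9000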

A lot of work, but with Storage vMotion, not so bad

Jim

296 Posts

July 12th, 2012 08:00

Thanks for the update, Jim.
