
February 24th, 2015 01:00

Dell MD3620i connect to VMware - best practices

Hello Community,


I've purchased a Dell MD3620i with 2 x 10GBase-T Ethernet ports on each controller (2 controllers).
My VMware environment consists of 2 ESXi hosts (each with 2 x 1GBase-T ports) and an HP LeftHand storage array (also 1GBase-T). The switches I have are Cisco 3750s, which have only 1GBase-T Ethernet ports.
I'm going to replace this HP storage with the Dell storage.
As I have never worked with Dell storage arrays, I need your help to answer my questions:

1. What is the best practice to connect VMware hosts to the Dell MD3620i?
2. What is the process to create LUNs?
3. Can I create multiple LUNs on a single disk group, or is the best practice to create one LUN per disk group?
4. How do I set the iSCSI 10GBase-T ports to work on a 1Gbps switch?
5. Is it best practice to connect the Dell MD3620i directly to the VMware hosts, without a switch?
6. The old iSCSI network on the HP storage is a different network. Can I vMotion all virtual machines from one iSCSI network to the other and then change the iSCSI IP addresses on the VMware hosts without interrupting the virtual machines?
7. Can I bond two iSCSI ports into one 2Gbps interface and connect it to the switch? I'm using two switches, so I want to connect each controller to each switch by bonding its interfaces to 2Gbps. My question is: would the controller fail over to the other controller if the Ethernet link on the switch goes down (e.g. while one switch is rebooting)?


Thanks in advance!

4 Operator

9.3K Posts

April 4th, 2015 17:00

TCP/IP basics: a computer cannot connect to 2 different (isolated) networks (e.g. 2 directly-attached cables between the server and a SAN's iSCSI ports) that share the same subnet.

Data corruption is highly unlikely if you share the same VLAN for iSCSI; however, performance and overall reliability would suffer.
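To illustrate the subnet rule, here is a small Python sketch using the factory-default MD3620i subnets from the scenarios below (the /24 masks and the `same_subnet` helper are my assumptions for illustration, not Dell tooling). Each host NIC can only reach the SAN port that sits in its own subnet:

```python
import ipaddress

# Factory-default MD3620i iSCSI subnets; /24 masks are an assumption here.
SUBNETS = [ipaddress.ip_network(f"192.168.{n}.0/24") for n in (130, 131, 132, 133)]

SAN_PORTS = ["192.168.130.101", "192.168.131.101",
             "192.168.132.101", "192.168.133.101"]
HOST_NICS = ["192.168.130.110", "192.168.131.110",
             "192.168.132.110", "192.168.133.110"]

def same_subnet(a: str, b: str) -> bool:
    """True if both addresses fall inside one of the configured subnets."""
    return any(ipaddress.ip_address(a) in s and ipaddress.ip_address(b) in s
               for s in SUBNETS)

# Each host NIC reaches exactly one SAN port: the one in its own subnet.
for nic in HOST_NICS:
    reachable = [p for p in SAN_PORTS if same_subnet(nic, p)]
    print(nic, "->", reachable)
```

With isolated subnets like this, each NIC-to-port path is unambiguous, which is exactly why the layouts below never reuse a subnet on two links.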

With an MD3620i, here are a few setup scenarios using the factory-default subnets (for the direct-attached setup I had to add 4 additional subnets):

Single switch (not recommended as the switch becomes your single point of failure):

Controller 0:

iSCSI port 0: 192.168.130.101

iSCSI port 1: 192.168.131.101

iSCSI port 2: 192.168.132.101

iSCSI port 3: 192.168.133.101

Controller 1:

iSCSI port 0: 192.168.130.102

iSCSI port 1: 192.168.131.102

iSCSI port 2: 192.168.132.102

iSCSI port 3: 192.168.133.102

Server 1:

iSCSI NIC 0: 192.168.130.110

iSCSI NIC 1: 192.168.131.110

iSCSI NIC 2: 192.168.132.110

iSCSI NIC 3: 192.168.133.110

Server 2:

All ports plug into that 1 switch (obviously).

If you only want to use 2 NICs for iSCSI, have server 1 use the 130 and 131 subnets, and server 2 use 132 and 133; server 3 then uses 130 and 131 again. This spreads the IO load between the iSCSI ports on the SAN.
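That rotation rule can be sketched as a tiny helper (a hypothetical function of my own, not Dell tooling): odd-numbered servers get the 130/131 pair, even-numbered servers get 132/133, and the cycle repeats.

```python
# Hypothetical helper (not Dell tooling) illustrating the 2-NIC
# subnet-pair rotation described above.
SUBNET_PAIRS = [(130, 131), (132, 133)]

def subnet_pair(server_number: int) -> tuple:
    """Return the 192.168.x.0 subnet pair for a server (numbering starts at 1)."""
    return SUBNET_PAIRS[(server_number - 1) % len(SUBNET_PAIRS)]

for n in (1, 2, 3):
    a, b = subnet_pair(n)
    print(f"Server {n}: 192.168.{a}.x and 192.168.{b}.x")
```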

Dual switches (one VLAN for all the iSCSI ports on that switch though):

NOTE: Do NOT link the switches together. This helps prevent issues that occur on one switch from affecting the other switch.

Controller 0:

iSCSI port 0: 192.168.130.101 -> To Switch 1

iSCSI port 1: 192.168.131.101 -> To Switch 2

iSCSI port 2: 192.168.132.101 -> To Switch 1

iSCSI port 3: 192.168.133.101 -> To Switch 2

Controller 1:

iSCSI port 0: 192.168.130.102 -> To Switch 1

iSCSI port 1: 192.168.131.102 -> To Switch 2

iSCSI port 2: 192.168.132.102 -> To Switch 1

iSCSI port 3: 192.168.133.102 -> To Switch 2

Server 1:

iSCSI NIC 0: 192.168.130.110 -> To Switch 1

iSCSI NIC 1: 192.168.131.110 -> To Switch 2

iSCSI NIC 2: 192.168.132.110 -> To Switch 1

iSCSI NIC 3: 192.168.133.110 -> To Switch 2

Server 2:

Same note applies about using just 2 NICs per server for iSCSI. In this setup each server still uses both switches, so a switch failure should not take down any server's iSCSI connectivity.
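The redundancy claim can be checked with a short sketch. In the dual-switch layout above, subnets 130 and 132 go to switch 1 while 131 and 133 go to switch 2, so either recommended 2-NIC subnet pairing still spans both switches (the mapping dictionary below is just a restatement of that layout):

```python
# Subnet -> switch mapping taken from the dual-switch layout above.
SWITCH_FOR_SUBNET = {130: 1, 131: 2, 132: 1, 133: 2}

def switches_used(pair):
    """Set of switches a server touches, given its two iSCSI subnets."""
    return {SWITCH_FOR_SUBNET[s] for s in pair}

# Both recommended 2-NIC pairings span both switches, so losing one
# switch never cuts a server off from the SAN.
assert switches_used((130, 131)) == {1, 2}
assert switches_used((132, 133)) == {1, 2}
```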

Quad switches (or 2 VLANs on each of the 2 switches above):

Controller 0:

iSCSI port 0: 192.168.130.101 -> To Switch 1

iSCSI port 1: 192.168.131.101 -> To Switch 2

iSCSI port 2: 192.168.132.101 -> To Switch 3

iSCSI port 3: 192.168.133.101 -> To Switch 4

Controller 1:

iSCSI port 0: 192.168.130.102 -> To Switch 1

iSCSI port 1: 192.168.131.102 -> To Switch 2

iSCSI port 2: 192.168.132.102 -> To Switch 3

iSCSI port 3: 192.168.133.102 -> To Switch 4

Server 1:

iSCSI NIC 0: 192.168.130.110 -> To Switch 1

iSCSI NIC 1: 192.168.131.110 -> To Switch 2

iSCSI NIC 2: 192.168.132.110 -> To Switch 3

iSCSI NIC 3: 192.168.133.110 -> To Switch 4

Server 2:

In this case using 2 NICs per server means the first server uses the first 2 switches and the second server uses the second set of switches.

Direct attach:

Controller 0:

iSCSI port 0: 192.168.130.101 -> To server iSCSI NIC 1 (on an example IP of 192.168.130.110)

iSCSI port 1: 192.168.131.101 -> To server iSCSI NIC 2 (on an example IP of 192.168.131.110)

iSCSI port 2: 192.168.132.101 -> To server iSCSI NIC 3 (on an example IP of 192.168.132.110)

iSCSI port 3: 192.168.133.101 -> To server iSCSI NIC 4 (on an example IP of 192.168.133.110)

Controller 1:

iSCSI port 0: 192.168.134.102 -> To server iSCSI NIC 5 (on an example IP of 192.168.134.110)

iSCSI port 1: 192.168.135.102 -> To server iSCSI NIC 6 (on an example IP of 192.168.135.110)

iSCSI port 2: 192.168.136.102 -> To server iSCSI NIC 7 (on an example IP of 192.168.136.110)

iSCSI port 3: 192.168.137.102 -> To server iSCSI NIC 8 (on an example IP of 192.168.137.110)

I left controller 1 on the "102" IPs to make it easier to change back to just 4 subnets later.
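The eight-subnet direct-attach plan above follows a regular pattern, which a short sketch can generate (the generator function is my own illustration; the IPs match the post, with ports numbered 0-3 here):

```python
def direct_attach_plan():
    """Build the 8-subnet direct-attach map described above: controller 0
    keeps the factory 130-133 subnets, controller 1 moves to 134-137; SAN
    ports end in .101/.102 and server NICs end in .110."""
    plan = []
    for controller, base in ((0, 130), (1, 134)):
        for port in range(4):
            subnet = base + port
            san_ip = f"192.168.{subnet}.{101 + controller}"
            nic_ip = f"192.168.{subnet}.110"
            plan.append((controller, port, san_ip, nic_ip))
    return plan

for controller, port, san_ip, nic_ip in direct_attach_plan():
    print(f"Controller {controller} port {port}: {san_ip} <-> server NIC {nic_ip}")
```

Every link gets its own subnet, so no two cables ever share one, which is the whole point of the direct-attach layout.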

Moderator

7.6K Posts

February 27th, 2015 09:00

Hello 4000dell,

Here are the answers to your questions:

  1. Here is a link to the deployment guide for the MD3620i; page 20 shows how you will want to connect the MD3620i to your switches. http://downloads.dell.com/Manuals/Common/powervault-md3600i_Deployment%20Guide_en-us.pdf
  2. You will use MDSM (Modular Disk Storage Manager) to create virtual disks & manage your MD3620i. Here is a link to the Administrator's Guide; page 63 explains how to create them. http://downloads.dell.com/Manuals/Common/powervault-md3200_Administrator%20Guide_en-us.pdf
  3. You can do either. It really depends on what RAID type you will be using and what the virtual disk will contain.
  4. By default the MD3620i ports are set for auto-sensing, so if your switch is only running 1Gb they should adjust. But if you want to force the ports to 1Gb, you should be able to do so in MDSM.
  5. You can do either direct connect or use switches. The one thing you will need to make sure of is that you install the VMware plugin. Here is a link to the plugin & the doc on how to install it. http://downloads.dell.com/Manuals/Common/powervault-md3200i_User%27s%20Guide4_en-us.pdf

Plugin: http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=Y2YY0&fileId=3365313701&osCode=XI55&productCode=powervault-md3620i&languageCode=EN&categoryId=AP

6. As long as you are able to see the storage on the system, you can move the VMs over.

7. No, the MD3620i doesn't do NIC teaming on the iSCSI ports. All iSCSI ports on the MD3620i should be in their own VLAN, as shown on page 59 of the deployment guide.

Please let us know if you have any other questions.

23 Posts

March 3rd, 2015 06:00

It seems iSCSI does not work across different broadcast domains (kb.vmware.com/.../search.do;cmd=displayKC&externalId=2017084).

So I can't move data around.

I have a question about item 7:

7. No, the MD3620i doesn't do NIC teaming on the iSCSI ports. All iSCSI ports on the MD3620i should be in their own VLAN, as shown on page 59 of the deployment guide.

Must I really put all connections in different VLANs? Why can't I do it all in one VLAN and in the same subnet?

Thanks

Moderator

7.6K Posts

March 4th, 2015 11:00

Hello 4000dell,

You have to put each port into its own VLAN; if you don't, you risk data corruption. If you were to put all ports in the same VLAN, you would get really bad performance and fragmented data, as there would be no check that all the packets made it. If you have one of our EqualLogic systems this is different, as you can put all ports into a single VLAN.

Please let us know if you have any other questions.

23 Posts

March 6th, 2015 13:00

No, I don't have an EqualLogic system. I have a Dell MD3620i.

Last question:

Even if I connect the SAN directly (without a switch) to my two ESXi hosts, should I use different IP segments for each connection?

For example:

ESXi1-Port1 -10.0.1.0/24- SAN-Controller1-Port1

ESXi1-Port2 -10.0.2.0/24- SAN-Controller2-Port1

ESXi2-Port1 -10.0.3.0/24- SAN-Controller1-Port2

ESXi2-Port2 -10.0.4.0/24- SAN-Controller2-Port2

Or, for direct connections, can I put all ports on the same subnet?

ESXi1-Port1 -10.0.1.0/24- SAN-Controller1-Port1

ESXi1-Port2 -10.0.1.0/24- SAN-Controller2-Port1

ESXi2-Port1 -10.0.1.0/24- SAN-Controller1-Port2

ESXi2-Port2 -10.0.1.0/24- SAN-Controller2-Port2

Which of these two options would be better?

Moderator

7.6K Posts

March 10th, 2015 11:00

Hello 4000dell,

Whether you are using a switch or direct-connecting your MD3620i to your host, all ports on the MD3620i should be in their own VLAN and subnet. If you put all connections in the same VLAN, you will have performance issues and may get data corruption.

Please let us know if you have any other questions.

23 Posts

March 20th, 2015 08:00

Hello DELL-Sam L,

I'm still confused after reading the VMware KB about using different subnets when connecting to storage over iSCSI:

kb.vmware.com/.../search.do;cmd=displayKC&externalId=2017084

If I use different subnets, will ESXi work correctly?

Thanks

Moderator

7.6K Posts

March 23rd, 2015 09:00

Hello 4000dell,

Yes, the MD iSCSI connections will still work correctly in different subnets. What you need to make sure of is that you also install the MD VMware plugin as well.

Please let us know if you have any other questions.

23 Posts

March 25th, 2015 05:00

Ok, I understand how to use it with a switch.

But I think I will use it without a switch. What IP address configuration should I apply on the storage and the VMware ESXi hosts? Just to confirm:

Without a switch - ESXi connected directly to the SAN:

ESXi1-Port1 -10.0.1.0/24- SAN-Controller1-Port1

ESXi1-Port2 -10.0.2.0/24- SAN-Controller2-Port1

ESXi2-Port1 -10.0.3.0/24- SAN-Controller1-Port2

ESXi2-Port2 -10.0.4.0/24- SAN-Controller2-Port2

Is this correct?

Moderator

7.6K Posts

March 27th, 2015 12:00

Hello 4000dell,

The setup is the same whether you use a switch or not. So you can use the deployment guide and configure your ESXi hosts as if you were going to connect them to a switch.

Please let us know if you have any other questions.

23 Posts

March 29th, 2015 08:00

The deployment guide shows only two subnets in use:

ESXi1-Port1 -192.168.130.0/24- SAN-Controller1-Port1

ESXi1-Port2 -192.168.131.0/24- SAN-Controller2-Port2

ESXi2-Port1 -192.168.130.0/24- SAN-Controller2-Port1

ESXi2-Port2 -192.168.131.0/24- SAN-Controller1-Port2

Would the configuration above be correct for direct-attached ESXi servers?

Another question: I currently have 1Gbps ports on my ESXi servers, but I'm planning to upgrade them to 10Gbps.

Can I switch the Dell to 10Gbps after configuring LUNs and installing virtual machines? Will the LUNs survive? Because at the same time I will replug the ports on the ESXi servers.

Moderator

7.6K Posts

March 30th, 2015 12:00

Hello 4000dell,

So the setup you have listed is correct if you are going to use only 2 connections from each controller to your ESXi host.  

The SFPs in the controllers of your MD3620F are 8Gb, so you can upgrade your ESXi hosts to 10Gb and get better performance. However, the SFPs in your MD3620F cannot be upgraded. When you upgrade your ESXi host controllers to 10Gb you will need to go back into MDSM and redo the host mappings so that the new HBA is seen & configured. If you don't, no connections from your MD3620F will be used until that has been done.

Please let us know if you have any other questions.

23 Posts

April 1st, 2015 06:00

Hello DELL-Sam L,

I probably started the conversation wrong and didn't mention which Dell model I have.

So I have an MD3620i, which has only 10Gb Ethernet ports. I don't use Fibre Channel at all.

That was my question: if I start with a 1Gbps configuration now, will I be able to switch to 10Gbps after upgrading my ESXi servers with newer network cards, without losing any data on the Dell storage?

Thanks

Moderator

7.6K Posts

April 2nd, 2015 10:00

Hello 4000dell,

Yes, you can change the HBAs in your server from 1Gb to 10Gb and you will not lose any data. What you will have to do is configure the new HBAs so that the MD sees the new connections. Since all you are changing is the connections to the virtual disks, not the virtual disks themselves, all the data will still be there.

Please let us know if you have any other questions.
