Unsolved

1 Rookie • 3 Posts • January 18th, 2023 02:00

VMware - iSCSI configuration on Dell R650xs and ME5024 [many screens]

I read a bit (in fact a lot :]), googled a bit, and set up a very simple environment based on 2 servers and a disk array by trial and error. Everything works in my tests, but I am not convinced the whole configuration is correct, especially when it comes to performance after disconnecting one controller for testing.

Here is my simple setup and environment description.

  • 2x Dell R650xs hosts (2x 1GbE LAN: port 1 – management, port 2 – temporarily unused; 2x 25GbE – to the disk array; 2x 10GbE SFP+ – to the switch for the VM network)
  • 1x Dell ME5024 disk array (SSD)
  • Disk array <-> hosts connected directly over iSCSI – no switch in between (direct attach, I believe the professional term is DAC), 25Gb/s SFP28 cables
    • 1 volume, 1 disk group (linear, for better performance), RAID-6
    • No CHAP configured (in direct-attach mode there is no need to secure and authorize the link, IMO)
  • VMware vSphere 8
  • VLAN: 1; environment of about 15-20 VMs (Windows & Linux) / 100+ users
  • Ignore the link-down status for port 2 (10Gb/s) on the hosts (damaged cables are awaiting replacement)

 

Connections (a quick verification sketch follows the list):

Host1

  • Port1 -> ME5024 A0
  • Port2 -> ME5024 B0

Host2

  • Port1 -> ME5024 A1
  • Port2 -> ME5024 B1
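
A quick way to check what ESXi actually sees over those cables; the adapter name (vmhba64) is just what my lab shows, yours may differ:

    # software iSCSI adapter name (e.g. vmhba64) and its active sessions
    esxcli iscsi adapter list
    esxcli iscsi session list --adapter=vmhba64

    # each host should show one path per cabled array port (A0 and B0 on Host1)
    esxcli storage core path list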

============================

  1. In your opinion, should the array-to-server connections use ports A0, A1 or A0, A3? Or is there some other, better combination?
  2. Does the configuration have any weaknesses or mistakes?
    • Even though I set jumbo frames to 8900 as suggested by Dell, my read and write tests against the array from a VM come out as shown below. Can I get more?
    • When one of the links to the disk array is disconnected, performance drops dramatically (from 2 GB/s to 200 MB/s) even though the remaining link is still 25 Gb/s. Can this be improved somehow?
  3. Are the gateway and IP settings in the ME port configuration OK? It works, but did I do it right? I read somewhere in the documentation for the previous generation that to take advantage of multipathing the interfaces should be on separate subnets (here 10 and 11 in the third octet).
  4. What is the correct order when configuring VMware: install VMware on the hosts and configure iSCSI on them first, then install vCenter and add the hosts; or install vCenter on a local host datastore first, add the hosts, and only then configure the iSCSI connection to the array in vCenter? (A rough esxcli sketch of what I actually did is after this list.)
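
For question 4, this is roughly what I ended up doing on each host from the ESXi shell before adding it to vCenter. The adapter name (vmhba64), vmkernel ports (vmk1/vmk2) and target addresses are placeholders from my lab, so treat it as a sketch rather than a recipe:

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # bind the two iSCSI vmkernel ports to the software adapter
    # (some direct-attach setups with one subnet per port skip the binding step)
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

    # add the array ports as dynamic (send target) discovery addresses
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.200:3260
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.11.200:3260

    # rescan so the ME5024 volume shows up
    esxcli storage core adapter rescan --adapter=vmhba64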

 

According to the saying "One picture is worth a thousand words", below are some screenshots.

YoungAdmin_0-1674037583594.png
YoungAdmin_1-1674037628272.png
YoungAdmin_2-1674037628318.png
YoungAdmin_3-1674037628344.png
YoungAdmin_4-1674037628371.png
YoungAdmin_5-1674037628422.png
YoungAdmin_6-1674037628458.png
YoungAdmin_7-1674037628497.png
YoungAdmin_8-1674037628532.png

1 Rookie • 3 Posts • January 18th, 2023 02:00

More images (screenshots) coming soon...

1 Message • January 18th, 2023 02:00

YoungAdm_0-1674039162017.png
YoungAdm_1-1674039162046.png
YoungAdm_2-1674039162086.png
YoungAdm_3-1674039162126.png
YoungAdm_4-1674039162177.png
YoungAdm_5-1674039162220.png

Storage:

YoungAdm_6-1674039162245.png
YoungAdm_7-1674039162318.png
YoungAdm_8-1674039162366.png
YoungAdm_9-1674039162392.png

 

1 Message • January 18th, 2023 02:00

 

NIC:

YAdmin_0-1674038423682.png
YAdmin_1-1674038423707.png

VMK:

YAdmin_2-1674038423735.png
YAdmin_3-1674038423793.png
YAdmin_4-1674038423853.png
YAdmin_5-1674038423914.png

vSwitch:

YAdmin_6-1674038423938.png
YAdmin_7-1674038423989.png
YAdmin_8-1674038424040.png
YAdmin_9-1674038424091.png

 

(14 more screenshots coming soon)

1 Message • January 18th, 2023 03:00

YoungA_0-1674040301454.png

YoungA_1-1674040301490.png

 

Two 25Gb/s links up:

YoungA_2-1674040301686.png

 

With only one 25Gb/s link up, the transfer drops from 2 GB/s to 200 MB/s (source and target both on the ME5024):

YoungA_3-1674040301697.png

 

That's all of the screenshots. Please help.

Moderator • 4.7K Posts • January 18th, 2023 07:00

Hello YoungAdmin,

 

Thank you for your post. I would like to let you know that initial deployment is not supported on the forum.

We have Deployment Services you can contract if needed; please contact Sales for a quote.

 

Performance tuning is also not supported. Anyone in the community is welcome to share their experience.

 

I'll share a few observations:

• Jumbo frames depend on the usage – sometimes good, sometimes bad.
• Performance is based on the configuration of the storage and the testing methods.
• "When one of the links to the disk array is disconnected, the performance drops dramatically": that will happen; disk groups/volumes only live on one controller at a time.
• With direct connect, you are set up fine.
• The best way to set up and configure the hosts for VMware is to install VMware first, then vCenter, then configure iSCSI; the vCenter interface is much easier to use.

Supplemental:

Best Practices guide for all settings, including MPIO: https://dell.to/3ZEL9OJ
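
As a generic illustration only (the guide above is the authoritative reference), the round robin path policy is typically set per device from the ESXi shell like this; the naa identifier is a placeholder you would replace with your volume's ID:

    # list devices and their current path selection policy
    esxcli storage nmp device list

    # set Round Robin on the ME5024 volume
    esxcli storage nmp device set --device=naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

    # optional: lower the IOPS-per-path limit before switching paths
    # (use the value the best practices guide recommends)
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --type=iops --iops=1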

 

You may already have these links but I want to provide them for you:

PowerEdge R650xs support page : https://dell.to/3ZOgDlh

Dell EMC PowerVault ME5024 support page : https://dell.to/3iLGPMK

 

1 Rookie • 3 Posts • January 18th, 2023 13:00

Thanks for the reply and links.
In response, I will add that this is not an initial deployment, only testing of the environment to assess the possibilities and the rules for taking over the operation of the production environment. I want to know how much I can get out of, and build on, this equipment. If the test results turn out to be unsatisfactory, the equipment will go to the lab and not into production.
The production environment is more complex, but here I have simplified it to test the limits of what is possible and how it performs.
Of course, I am counting on comments and responses from the community.

 

Responding to your answers:
It seems to me that the controllers in this disk array are not Active-Active, but I could be wrong. However, if they are Active-Passive, then on one 25Gb/s link we can theoretically get about 3,100 MB/s of transfer, which is roughly what my CrystalDiskMark tests show (I got similar values, around 2 GB/s write, when copying from one partition to another where the source and destination are the same location, i.e. the disk array, so reading and writing happen simultaneously). Going further: in view of the above, why does the transfer drop so drastically after switching to the spare controller, if we still have one link with a bandwidth of up to 25Gb/s?
However, if these controllers work in Active-Active mode, then in theory I should get at least about 5 GB/s (with multipathing), and in the case of a link loss, half of that. Or, more likely, one of the parameters is set incorrectly.
Let me remind you that in my simple tests, the loss of a link and the "automatic reconnection" to the second one came with a drop in speed not of 50% but of 90%.
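
For reference, this is how I look at the path states after pulling a cable; maybe it will show whether the remaining path is the non-optimized one through the partner controller (the naa identifier below is a placeholder for my volume's ID):

    # path states as the multipathing plugin sees them (active vs. standby, which PSP)
    esxcli storage nmp path list --device=naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # per-path details, including the group state (active vs. active unoptimized)
    esxcli storage core path list --device=naa.6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

My suspicion is that with one cable to each controller per host, losing the path to the owning controller leaves only the path through the other controller, which is the slower, non-optimized route; that would roughly match the drop I am seeing.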

What do you mean by "Jumbo frames depends on the usage, sometimes good or bad"? Can you expand on that?
In my case jumbo frames did increase throughput: MTU=1500 gave 2200 MB/s R/W, and MTU=9800 gave 3000 MB/s.
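
For completeness, the ESXi side of the MTU change was done roughly like this; vSwitch1, vmk1/vmk2 and the 9000 value are examples from my lab (the array-side MTU is set separately in the ME5024 web interface):

    # raise the MTU on the iSCSI vSwitch and on its vmkernel ports
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000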

Based on this documentation: Dell EMC PowerVault ME4 Series and VMware vSphere
Physical port selection
In a system configured to use all FC or all iSCSI, but where only two ports are needed, use ports 0 and 2 or ports 1 and 3 to ensure better I/O balance on the front end. This is because ports 0 and 1 share a converged network controller chip, and ports 2 and 3 share a separate converged network controller chip.

Does the same apply to my test setup and will changing the ports improve performance?

Moderator • 4.1K Posts • January 18th, 2023 21:00

Hi @YoungAdmin,

 

The downside of jumbo frames is that they must be supported by all interfaces (all network components in the data path). Other than that, I don't see any issues. But you mentioned that your MTU is already at 9800, so jumbo frames are enabled.
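
A quick way to confirm jumbo frames work end to end is a don't-fragment ping from the host to the array port. The vmk name and address below are examples; the payload size is the MTU minus 28 bytes of IP/ICMP headers, so adjust it to your MTU:

    # 8972 = 9000-byte MTU minus 28 bytes of headers; this must succeed on every iSCSI vmk
    vmkping -I vmk1 -d -s 8972 192.168.10.200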

 

When there are 2 controllers installed, the redundancy mode is set automatically to Active-Active (see page 7): https://dell.to/3QRM8XR

 
