
1 Rookie • 41 Posts


September 8th, 2014 03:00

6100XV vs 6210X: Difference in write speed/latency

Hi,

In November 2011 we purchased a 6100XV, and recently a 6210X has been racked to be placed in the same group as the old unit. For testing purposes both units are running in their own group for now, until I'm happy with what I see.

The setup:
As storage we have a PS6100XV (4x 1 Gbit + MGT) and a 6210X (2x 10 Gbit (copper or fiber) + MGT), both running FW 7.07 and connected to our iSCSI switches.

Our storage network consists of two M6348s placed in our M1000e chassis. The switches are stacked and running firmware version 5.1.3.7.

The processing power for our vSphere cluster comes from the M1000e chassis with 4 blades (M710/M820). Each blade runs vSphere 5.5 (build 1892794) with MEM 1.2 installed and is connected with 4x 1 Gbit to the internal M6348 ports, using 4 paths.


The 'problem':
When dd-ing 2 GB of data from /dev/zero to disk within the VM (roughly the command sketched below the results) I see different results. Please note that I'm using the same VMware host, network, etc.; only the EqualLogic differs:
    6100XV    : 101 MB/s
    6210X    : 239 MB/s
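
Roughly what that dd looks like; the mount point and the direct-I/O flag here are placeholders rather than the exact invocation:

    # Hypothetical reproduction of the test inside the guest.
    # /mnt/test is an assumed mount point; oflag=direct bypasses the guest
    # page cache so the result reflects the array rather than guest RAM.
    dd if=/dev/zero of=/mnt/test/ddtest.bin bs=1M count=2048 oflag=direct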


Looking in esxtop I see the following on my iSCSI ports during the 2GB write:
6100XV:
    Latency rises to 7 ms (seen by SanHQ as well)
    We see an equally spread load across all 4 links of ~256 Mbit

Esxtop - network
   PORT-ID              USED-BY  TEAM-PNIC DNAME              PKTTX/s  MbTX/s    PKTRX/s  MbRX/s %DRPTX %DRPRX
  50331658                 vmk2     vmnic4 vSwitch1           2182.96  256.54    3934.29    4.20   0.00   0.00
  50331659                 vmk3     vmnic5 vSwitch1           2249.15  265.84    4087.83    4.37   0.00   0.00
  50331660                 vmk4     vmnic8 vSwitch1           2182.39  269.51    4053.12    4.44   0.00   0.00
  50331661                 vmk5     vmnic9 vSwitch1           2155.30  267.10    4032.52    4.43   0.00   0.00

Esxtop - virtual disk
     GID VMNAME           VDEVNAME NVDISK   CMDS/s  READS/s WRITES/s MBREAD/s MBWRTN/s LAT/rd LAT/wr
    7115 mail38                  -      2   576.02    69.81   506.21     1.06   118.68  22.27   7.01

6210X:
    Latency rises to 74 ms (seen by SanHQ as well)
    We see a nearly maxed-out load across both links of 900+ Mbit

Esxtop - network
   PORT-ID              USED-BY  TEAM-PNIC DNAME              PKTTX/s  MbTX/s    PKTRX/s  MbRX/s %DRPTX %DRPRX
  50331658                 vmk2     vmnic4 vSwitch1          10995.86  918.20   16286.85   14.44   0.00   0.00
  50331659                 vmk3     vmnic5 vSwitch1          11036.87  945.63   16856.19   11.83   0.00   0.00
  50331660                 vmk4     vmnic8 vSwitch1            247.96    8.55     251.77    3.59   0.00   0.00
  50331661                 vmk5     vmnic9 vSwitch1            258.45    8.35     251.77    4.51   0.00   0.00

Esxtop - virtual disk
     GID VMNAME           VDEVNAME NVDISK   CMDS/s  READS/s WRITES/s MBREAD/s MBWRTN/s LAT/rd LAT/wr
 3477742 eql02bench01            -      1   436.24     2.15   434.09     0.01   216.48  11.68  74.16
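
If it helps reproduce this, the esxtop counters above can also be recorded in batch mode while the dd runs; the interval and sample count below are just an example:

    # Capture esxtop every 5 seconds, 24 samples (~2 minutes), so the
    # network and virtual-disk stats can be compared offline afterwards.
    esxtop -b -d 5 -n 24 > /tmp/esxtop-capture.csv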


As the 6210X only has two active ports it only utilises two paths, but as you can see it can combine both links into a 2 Gbit connection, where the 6100XV cannot combine all four links into a 4 Gbit connection. Besides the bandwidth 'issue' I'm seeing a huge difference in latency during the writes between both units.
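
For completeness, this is how I look at which paths MEM actually claims per volume from the ESXi shell; the device ID below is a placeholder and the PSP name is from memory:

    # With MEM installed the EqualLogic volumes should be claimed by the
    # EqualLogic PSP (shown as something like DELL_PSP_EQL_ROUTED).
    esxcli storage nmp device list
    esxcli storage core path list -d naa.6090a0xxxxxxxx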


Questions:
1) What is the difference between the two units that causes one to be limited to 1 Gbit and the other not?
2) Why is the write latency so high? Both controllers are in the green, so write-back should be active.

Regards,

- Henk

1 Rookie • 41 Posts

September 9th, 2014 03:00

Both units are running RAID 10, both VMs are using a 4K block size on their FS, and the EQLs are using the default block sizes (AFAIK 4K).

I'm aware of the major differences between the units and I don't expect them to be equal in performance. I do however expect that the new unit has the better hardware and shouldn't show 70 ms latency when writing ~238 MB/s to it. I'm under the assumption that such latency spikes could be bad for overall performance once we take the 6210 into production.

As far as I know I'm following all current best practices (a quick way to double-check some of these from the ESXi shell is sketched after the list):

VMware 5.5

jumbo frames

removed the storage heartbeat VMKs during the last VMware updates

using Dell Multipath Extension Module (MEM 1.2)

Delayed ACK is disabled

Large receive offload is disabled

Storage I/O control isn't used
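
The quick checks I run on each host for some of the items above; exact setting names can differ per ESXi build, so treat these as examples rather than gospel (Delayed ACK I verify in the iSCSI adapter's advanced options instead):

    # MTU column should read 9000 on the iSCSI vmnics
    esxcli network nic list
    # don't-fragment ping with a jumbo payload proves jumbo frames end to end
    vmkping -d -s 8972 <group IP>
    # value 0 means large receive offload is disabled
    esxcli system settings advanced list -o /Net/TcpipDefLROEnabled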

Meanwhile I've opened a case (900308914) referring to this topic as well. I'll keep the thread updated as the case moves forward.

1 Rookie • 41 Posts

September 10th, 2014 09:00

Hi Don,

I don't have a Windows machine running on the 6210, so IOmeter isn't an option. I've played a bit with bonnie++, which produced the high write latency in the first place. Since I could reproduce the latency spike with the 'dd' command, I haven't used it any more.
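
For reference, the bonnie++ run was along these lines; the directory, size and user are from memory, so take them as an example:

    # 4 GB test set; -s should be at least twice the VM's RAM so the
    # streaming phases hit the disk instead of the page cache.
    bonnie++ -d /mnt/test -s 4096 -u root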


The 'dd' command has been run with a block size of 1M and of 4k; both yield the same results (200+ MB/s and 60+ ms write latency).


The 200+ MB/s is actually achieved and visible in SanHQ as well:

https://www.dropbox.com/s/u3hsytgxti18vil/Selection_081.png?dl=0



1 Rookie • 41 Posts

September 11th, 2014 04:00

Hi Don,

We're mostly a Linux shop (Ubuntu), but yesterday I quickly installed a Windows machine; when you have a virtual environment, why not make use of it, right?

After some tinkering I managed to get a nice sequential write I/O going using 4, 32, 64 and 256 KB blocks, and (of course) you're right; the high latency was gone:

 4KB yielded 150 MB/s throughput with < 1.0 ms latency

32KB yielded 215 MB/s throughput with < 1.0 ms latency

64KB yielded 215 MB/s throughput with < 1.0 ms latency

256KB yielded 219 MB/s throughput with 65 ms latency

I know that bigger chunks of data result in higher latency, but I didn't realise it mattered this much, since I didn't experience the same 'issue' on both EqualLogics. Just because I'm curious: do you have any hypothesis as to what could cause the difference?
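
To put a rough number on it (taking the 256 KB throughput and latency above at face value and nothing more):

    219 MB/s ÷ 256 KB               ≈ 875 writes/s
    875 writes/s × 0.065 s          ≈ 57 I/Os in flight (Little's law)
    32 KB case: 215 MB/s ÷ 32 KB    ≈ 6,880 writes/s × 0.001 s ≈ 7 in flight

So the 256 KB run simply keeps far more data outstanding at the same ~2 Gbit line rate, which shows up as latency even though the array itself isn't any slower.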

With the latency question being clear, I have only one question remaining:

Why is the 6210 (connected over multiple 1 Gbit host ports) not limited to 1 Gbit throughput the way the 6100 is? Which (as you mentioned) is awkward behaviour in our case.

- Henk

1 Rookie • 41 Posts

September 15th, 2014 01:00

I've been out of the office for a couple of days so I hadn't tested it yet, but:

You're right again. Doing the same test on the 6100 yields a 1 Gbit+ write speed; to be exact:

 4KB yielded   76 MB/s throughput with < 3 ms latency

64KB yielded 168 MB/s throughput with < 3 ms latency

I think I'm starting to like the 6210... very much ;)

Thank you for the help. I hope that Dell is aware of the very helpful information you're providing here; if not, please send me your manager's address. I assume you can find my info through the case number (SR900308914) ;)
