Unsolved

June 21st, 2012 10:00

EqualLogic - W2K8 latency with I/O larger than 64KB

Hello,

I was wondering if anybody has experienced an issue like this. EQL firmware tested was 5.2.2 and now 5.2.4. I have tried with both a physical and a virtual machine, with the same results.

Client - Windows 7 64-bit

Server - W2K8 Standard R2

EQL PS6100XV

Whenever I copy a large file such as a DVD ISO, I see around 100 ms latency and the queue depth jumps to 30-40. I/O size is 128 KB in SANHQ. Throughput is around 100 MB/s.

If I set the max disk I/O size under ESX (Disk.DiskMaxIOSize) to 64 KB from the default of 32 MB, latency is basically zero with the same throughput.
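
For reference, this is how I set it from the ESXi shell (ESXi 5.x; the value is in KB, and the default is 32767, i.e. roughly 32 MB):

esxcli system settings advanced list -o /Disk/DiskMaxIOSize

(shows the current value)

esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 64

(caps I/O issued to the array at 64 KB; larger guest I/Os get split)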

I have been searching for where to set this with W2K8. Is this latency to be expected with W2K8? I guess SMB2 will send larger I/O sizes; is this overwhelming the EQL?

Thank you

7 Technologist • 729 Posts

June 21st, 2012 12:00

As far as I know the array can handle the SMB2 I/O traffic and keep up. Typically we see the bottleneck on the host, NIC/HBA, or switch (drivers, cache, settings, etc.).

I don't know of any way to change the max disk I/O size in Windows either; I'm not sure if that is even an option in the OS or if it would be a NIC feature.

You can check or try the following:

Switches and NICs - enable flow control: switches = receive on; NICs = send on (or transmit enabled).

You can also try:

netsh int tcp show global

(Take a screen shot or copy for reference to reset back if needed)

netsh interface tcp set global autotuninglevel=disabled

To verify that it is disabled:

netsh interface tcp show global

To set back to the default Windows behavior:

netsh interface tcp set global autotuninglevel=normal

You can also try disabling TCP chimney offload:

netsh interface tcp set global chimney=disabled

To set it back:

netsh int tcp set global chimney=enabled (to set enabled mode)

netsh int tcp set global chimney=automatic (to set automatic mode; available only in Windows Server 2008 R2 and Windows 7)

or

netsh int tcp set global chimney=default
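
As a quick sanity check (assuming Vista/2008 or newer), you can see whether chimney offload is actually kicking in on live connections:

netstat -t

(the Offload State column shows InHost or Offloaded per connection)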

Also not sure if you took a look at this link:

www.speedguide.net/.../windows-7-vista-2008-tweaks-2574

Disable jumbo frames (at least on your NIC) and then add them back once you are satisfied the performance is where it should be.
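
To see whether jumbo frames are actually in effect on the Windows side, you can check the IP MTU (the interface name below is just an example; use your iSCSI NIC's name):

netsh interface ipv4 show subinterfaces

(an MTU of 9000+ means jumbo is on; the NIC driver's own jumbo-frame property is set in Device Manager)

netsh interface ipv4 set subinterface "Local Area Connection 2" mtu=1500 store=persistent

(drops that interface back to the standard 1500-byte MTU)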

-joe

11 Posts

June 21st, 2012 13:00

Hi, I have tried those suggestions. It is going through a pair of stacked Force10 S25Ns from a DL2200 with 24 GB of RAM and 8 cores, over a single gigabit iSCSI connection. I had to set DiskMaxIOSize in ESX to 64 KB or our latencies would go through the roof on large sequential writes. The EqualLogic seems to struggle a bit with anything over 64 KB I/O. 2003 and Linux are fine. Is the latency we are seeing to be expected?

7 Technologist • 729 Posts

June 21st, 2012 14:00

For the Force10 S25N:

Minimum firmware is 7.7.2.0. Version 8.2.1.0 had (or has) DHCP issues, so for now use 7.8.1.0 or something higher than 8.2.1.0.

Flow control should be tweaked as follows: "flowcontrol rx on tx on threshold 2047 2013 2013" for all ports that connect to the array(s).
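
To double-check the switch side from the S25N CLI (syntax from memory, so treat this as a sketch):

show version

(confirms the running FTOS release)

show running-config interface gigabitethernet 0/1

(confirms that flow control and the MTU actually took effect on an array-facing port)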

What NIC/HBA are you using?

-joe

11 Posts

June 22nd, 2012 07:00

Thanks Joe. The NIC is a Broadcom BCM5716c with the latest driver. The S25N switches are at firmware 8.3.2.0. The switch ports are configured like this. Tried it again and I am still getting the latency.

interface GigabitEthernet 0/1

no ip address

mtu 9252

switchport

flowcontrol rx on tx on threshold 2047 2013 2013

spanning-tree rstp edge-port

no shutdown

7 Technologist • 729 Posts

June 22nd, 2012 09:00

Take a look at the attached file.

-joe

1 Attachment

7 Technologist • 729 Posts

June 22nd, 2012 09:00

Here is additional information I have on the Broadcom NIC; you can try these settings as well.

 

-joe

1 Attachment

11 Posts

June 25th, 2012 07:00

Thanks Joe. I will check out those documents. My situation must be unique; I assume nobody else is getting this type of latency with a W2K8 file transfer.

7 Technologist • 729 Posts

June 26th, 2012 12:00

If you are still stuck on this, please open a support case and ask to be escalated to the EqualLogic performance experts.

-joe

11 Posts

June 26th, 2012 13:00

Thanks Joe. I have a case open and I'm waiting to hear back.

11 Posts

July 3rd, 2012 08:00

Thanks Don. There are 3 iSCSI connections from the ESX server with MEM 1.10 working. I've been looking everywhere for comparable stats on a file copy between a Windows 7 and a 2008 R2 machine, and the associated latencies, to compare against. Generally we don't have many large sequential writes, so it would probably not be an issue in practice. ESX is throwing off latency warnings for those datastores while the transfer is happening.

11 Posts

July 3rd, 2012 13:00

I am on ESX 5.0.0, build 721882. I have tried those recommended settings and still have the latency.

11 Posts

July 3rd, 2012 13:00

No problem :)  

vmkiscsid --dump-db | grep Delayed | more

iSCSI MASTER Database opened. (0xffd71008)

  node.conn[0].iscsi.DelayedAck='0'

  (the same DelayedAck='0' entry repeats for all 20 iSCSI sessions)

~ #

vmkiscsid --dump-db | grep login_timeout | more

iSCSI MASTER Database opened. (0xffd71008)

  node.conn[0].timeo.login_timeout='60'

  (repeats for every session; all entries are '60' except the discovery one below)

  discovery.sendtargets.timeo.login_timeout='5'

I've also opened a ticket with Force10 so they can review the logs, but I will have to wait to hear back.

11 Posts

July 3rd, 2012 13:00

There are 3 physical NICs connecting to the iSCSI network on this host. I have gone through everything here and I'm still experiencing the 100 ms latency. The SAN itself is still in a testing phase and pretty much unused. This VM has its own volume and one VMDK. MEM is working and showing 3 active paths for I/O, and I can verify in vCenter that those 3 NICs are being utilized evenly. Perhaps this is just an expected latency.
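
For reference, this is roughly how I confirmed the paths from the ESXi shell (the naa ID below is a placeholder for the volume's device ID):

esxcli storage core path list -d naa.XXXXXXXXXXXXXXXX

(lists each path and its state for that volume)

esxcli storage nmp device list -d naa.XXXXXXXXXXXXXXXX

(shows which multipathing plugin/PSP owns the device)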

11 Posts

July 3rd, 2012 14:00

I had Force10 look through the configs and they verified that they were OK. The case is SR# 857701220. I have tested with VMDK, RDM, a physical box, and a raw LUN through the guest iSCSI initiator with the latest HIT and multipathing, and all are relatively close in results for the 2008 R2 file transfers. I have provided those logs and sent them in with the case.

11 Posts

July 4th, 2012 07:00

I was wondering if anybody could post a screenshot of what their SANHQ graph looks like when transferring a large file from Win 7 to a 2008 R2 VM. Thank you.
