
January 3rd, 2014 08:00

EQL Array Performance Single Host

Hello,


We have a 3 member array running the 6.0.4 firmware.  (Members: PS5000, PS6000, PS6000-SSD-RAID10)  All links on all members are 1Gbps, set to use jumbo frames, and connected via a VLAN directly to a 4507E with a Sup 7E.  The hosts and the arrays are both directly connected to the switch, and flow control is enabled on the switch.
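For reference, here's the quick sanity check we use to confirm jumbo frames actually pass end to end (assuming a 9000-byte MTU everywhere; the array address below is a placeholder):

```shell
# A 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header) bytes
# of ping payload that must pass unfragmented.
payload=$((9000 - 20 - 8))
echo "max unfragmented ping payload: $payload bytes"

# With the don't-fragment bit set, a full-size ping to the array's iSCSI
# address proves every hop in the path passes jumbo frames:
#   Linux guest:  ping -M do -s 8972 <array-ip>
#   ESXi host:    vmkping -d -s 8972 <array-ip>
```

If the 8972-byte ping fails but a small one works, some device in the path is still at a 1500-byte MTU.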

We have 2 ESXi 4.1 servers, each running a single VM for our ERP system.  We're not getting the performance out of these systems that we think we should, so I've looked into tuning some kernel parameters in Linux to help.  One thing I've noticed is that I can't seem to get great disk performance.

We use the MEM and have verified every best practice we can find.  (Disable LRO, disable delayed ACK, etc.)   We're using the Software iSCSI initiator and I see that the LUNs in question both have 4 paths.  The volumes are load balanced across the array. 

Is there anything I'm missing?  My goal is to get the best performance out of a Single VM running on a Single Host. 


(We have 6 total hosts that connect to the array members.  Three of the other hosts serve about 10-15 VMs running on other volumes, but those see very little use compared to the ERP hosts.)

Thank you!

January 3rd, 2014 10:00

Hello,

Thank you for the response.

We'll look into upgrading but our experiences with upgrading have been less than positive.  The failover times are just too long in most cases. 

We have the 4748 line cards, which are non-blocking.  To further minimize any chance of competing for buffers, we also have the connections spread out across the switch.


I reviewed the guide you linked to and we've set all the settings according to best practices.

We use 2 VMDKs and have the second on the paravirtual (PVSCSI) controller.

I've got the read-ahead set to 16384 right now; that number seemed to work well on our test server.  I'll do some testing with 8192 to see if it helps.
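For anyone comparing notes: `blockdev` read-ahead values are in 512-byte sectors, so the two numbers above translate as follows (`/dev/sdb` below is just a placeholder for the ERP data disk):

```shell
# Read-ahead is specified in 512-byte sectors, so the settings map to:
for ra in 8192 16384; do
  echo "--setra $ra  => $((ra * 512 / 1024)) KiB of read-ahead"
done

# On the Linux guest itself:
#   blockdev --getra /dev/sdb        # show the current setting (sectors)
#   blockdev --setra 16384 /dev/sdb  # 8 MiB of read-ahead
```

Note the setting is per-device and doesn't persist across reboots unless it's reapplied (e.g. via a udev rule or init script).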


Our plan is to upgrade the servers to 5.1U1, but we haven't had time to take a maintenance window to do it.  Our 4.1 servers were recently patched.

We have SANHQ installed and monitor it often.  The problem is that we don't see a lot of IOPS (~1500) or high throughput when the server is doing large reads.  The SAN seems fairly quiet most of the time.
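Back-of-envelope, whether ~1500 IOPS is "low" depends entirely on the I/O size; assuming a single active 1Gbps path (roughly 112 MiB/s practical after protocol overhead), the implied throughput is:

```shell
# Throughput implied by ~1500 IOPS at a few common I/O sizes,
# compared against the ~112 MiB/s practical ceiling of one 1Gbps path.
iops=1500
for kib in 4 8 64; do
  echo "${kib} KiB IOs: ~$((iops * kib / 1024)) MiB/s"
done
```

So at 64 KiB per I/O, ~1500 IOPS is already near a single 1Gbps link's limit, while at 4-8 KiB it would point at latency or queue depth rather than bandwidth.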


Thank you.
