September 20th, 2011 16:00

MD3620i with VMware 4.1, directly connected ESXi servers

We've recently purchased an MD3620i with (20x) 300GB 10K SAS (Production disk group) and (4x) 1TB 7.2K NL-SAS (IT/Test/WDS disk group). The storage array is directly connected to (2x) R610's, each with (2x) dual port Intel 10GBaseT cards. Each server has two connections to the MD3620i, one to each storage controller (two IP subnets, with each SC having an IP on both nets). ESXi 4.1 is currently set to use MRU for the path config.

I was just wondering if RoundRobin was possible for better MPIO and if anyone had any other suggestions for tweaking the performance of this kind of setup. (We would have preferred 10GBaseT switches in-between but the cost was prohibitive.)
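For reference, the current path selection policy per device can be checked from the ESXi 4.1 console or vCLI. This is just a sketch; the naa device identifier below is a placeholder, not one from this array:

```shell
# List all devices claimed by the NMP with their current Path Selection
# Policy (PSP) -- the MRU policy shows up as VMW_PSP_MRU
esxcli nmp device list

# Show the individual paths for one device (replace the naa ID with your own)
esxcli nmp path list --device naa.60080e5000000000000000000000abcd
```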

Thanks in advance for any insight.

2 Intern • 847 Posts

September 21st, 2011 13:00

"We've recently purchased an MD3620i ... I was just wondering if RoundRobin was possible for better MPIO ..."
You'll find, as you add VMs using all sorts of different LUNs, that Round Robin actually ends up being pretty darn good MPIO. I'm not sure it even matters with 10GbE, though; at that point, even with only one path, you're likely already able to push the controllers/drives pretty hard. The drives are your bottleneck here. 10K and 7.2K drives with 10GbE is an interesting config, to say the least. Any SSD on this one?
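If you do want to try Round Robin, here's a minimal sketch of switching a device from MRU on ESXi 4.1. The naa identifier is a placeholder, and the default-PSP line assumes the MD36xxi LUNs are claimed by the VMW_SATP_LSI plugin (check `esxcli nmp device list` on your hosts first):

```shell
# Switch one device to Round Robin (replace the naa ID with your own)
esxcli nmp device setpolicy --device naa.60080e5000000000000000000000abcd --psp VMW_PSP_RR

# Optionally make Round Robin the default for everything claimed by the
# LSI SATP, so newly presented MD3620i LUNs pick it up automatically
esxcli nmp satp setdefaultpsp --satp VMW_SATP_LSI --psp VMW_PSP_RR

# Verify the policy took effect
esxcli nmp device list
```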

2 Posts

September 21st, 2011 15:00

"You will find as you add VM's using all sorts of different luns that Round Robin actually ends up being pretty darn good MPIO ... Any SSD on this one?"
Thanks for the info! It's nice to be able to chat with other Dell storage users; I'm glad I happened across this forum. :)

We have approx. 14 VMs running across our cluster at present, mostly a mixture of Windows 2003/2008 R2 with some XP/7 test machines. We're also trialing a vSphere 5 vCSA. Performance has been very impressive compared to our old EMC AX4-5i (especially during high-load situations such as multi-VM auto-start). The 10Gbit Cat6a interconnects also do a great vMotion evacuation job when dropping an ESXi host into Maintenance mode: each VM migration completes in less than 10 seconds.

The 7.2K drive group is (mainly) for pushing WDS images down to clients and for IT to test server upgrades. We needed capacity rather than outright performance (hence NL-SAS to keep costs down), and the basic idea behind it was to avoid needless repeated thrashing of the production disk group.

"Dell 149GB Solid State Drive SAS 2.5in HotPlug Hard Drive [$4,599.00]"
The moment I read the end of that line, any chance of even a RAID-1 SSD config was consigned to fantasy land, I'm sad to say. :(

2 Intern • 847 Posts

September 22nd, 2011 09:00

I hear you on cost... we had to shelve any idea of 10GbE ourselves for the same reason.