March 1st, 2013 08:00
MD3600f poor performance
Hi,
first off, a description of our setup:
We have an MD3600f array with 12 2TB 7.2K NL-SAS disks. Each controller of the array is connected through two 8Gbit ports to the same switch. The client is an R410 server with a QLogic QLE2562 FC HBA, connected to the same switch with both of its ports. The server runs SLES 11 SP1; multipath is configured using scsi_dh_rdac according to the Dell docs and appears to be configured correctly.
There are two volume groups on the array: a RAID10 group consisting of 4 drives and a RAID5 group consisting of 6 drives; one drive is a hot spare and the remaining one is currently unused. Each volume group holds a single volume spanning all of its space, and each volume is owned by a different controller.
Now the problem:
The problem is that both volumes are really slow. We achieve around 60MB/s on the RAID5 volume and 40MB/s on the RAID10 volume when reading the raw device sequentially with 512k blocks in sync, non-direct mode using the fio benchmark. When reading both devices simultaneously, we only achieve about 90MB/s combined. Dynamic cache prefetch is enabled. I would expect numbers around 400MB/s for the RAID10 and the same for the RAID5; after all, a single disk should be able to do around 100MB/s!
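For reference, the fio workload I describe can be expressed as a job file roughly like this (the device path and runtime are from our setup; this is a reconstruction, not the exact job we ran):

```ini
; sequential read, 512k blocks, buffered (non-direct), synchronous I/O
; against the raw multipath device
[seqread-raid10]
filename=/dev/mapper/raid10
rw=read
bs=512k
direct=0
ioengine=sync
runtime=60
time_based=1
```

The same job with filename=/dev/mapper/raid5 gives the RAID5 number; running both jobs at once gives the combined figure.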
We also confirmed that the problem is not in the interconnect: we were able to achieve 800MB/s (which seems to be the limit of one controller) when repeatedly reading the first 200MB of a volume (which got cached on the controller) in direct mode, and 1600MB/s when reading from both volumes, since both controllers were used in that case.
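The cached-read check amounted to re-reading a small region in direct mode, along these lines (again a sketch of the job, with assumed loop count):

```ini
; re-read the first 200MB in direct mode; after the first pass the data
; is served from controller cache, so this measures the FC path, not the disks
[cached-read]
filename=/dev/mapper/raid10
rw=read
bs=512k
direct=1
ioengine=sync
size=200m
loops=20
```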
This makes me think the problem is either in the hard drives themselves or in how the controller drives them. Is such poor speed expected, or should I be able to achieve much more? Any advice is much appreciated!
We do not have the high performance license.
Best regards
Jiri Horky
JiriHorky
March 1st, 2013 13:00
Hi Don,
here we go:
tfrontend1:~ # blockdev --getra /dev/mapper/raid10
1024
tfrontend1:~ # blockdev --getra /dev/mapper/raid5
1024
rr_min_io is 100; I tried changing it to 1000 with no performance difference.
I should also mention that when I use a /dev/sdX device directly, I get the same poor speed, so I would rule out multipath as the cause of the trouble.
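Concretely, I ran the same sequential-read job against a single underlying path instead of the dm device (sdX here is a placeholder for one of the paths listed by multipath -ll):

```ini
; same sequential read as before, but against one raw SCSI path
; rather than the multipath device, to take dm-multipath out of the picture
[single-path]
; replace sdX with an actual path device of the raid10 volume
filename=/dev/sdX
rw=read
bs=512k
direct=0
ioengine=sync
runtime=60
time_based=1
```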
Regards
Jiri Horky