April 22nd, 2012 12:00

SRDF performance problems

Dear all,

Recently we've been confronted with a problem with an Oracle database (11) running on AIX, connected to an IBM DS8K array, that has to be migrated to a Symmetrix DMX4. It is very sensitive to response times, which should not exceed 4 ms. Synchronous replication is another must. On the current DS8K array it performs brilliantly, with a response time of 2 ms. Our first migration tests, however, revealed unacceptable performance on the DMX - about 6 ms. With the SRDF pairs split, response time improves roughly threefold.

We thought the reason might be the single outstanding IO per queue and the serialization done by SRDF. We therefore tested presenting hypers directly to the OS and striping with LVM at the OS level rather than using meta devices. We didn't get conclusive results, though, despite experimenting with different LVM stripe sizes.

Our observation is that the majority of the critical DB transactions are 8 KB, and the problem with this DB is the logging - i.e., the writes.

Has anyone bumped into such problems? Is a response time of 3 ms feasible at all on a DMX? Any best practices?

Thanks in advance

1.3K Posts

April 22nd, 2012 14:00

RDF will add about 1 ms to the writes, on top of the propagation latency over the link distance.

What stripe sizes did you try? How many devices and FA paths? What code level is on the DMX?

You should get one of the local EMC performance gurus involved to help out.

1.3K Posts

April 22nd, 2012 14:00

BTW, non-RDF writes should be about 300 µs for a smallish write. So expect a bit over 1 ms at zero distance with RDF for writes smaller than 64 KB.

108 Posts

April 23rd, 2012 02:00

How many SRDF links are available? Try setting up two more links.

2 Posts

April 23rd, 2012 03:00

Hi, thanks for your answers. So, to clarify additionally:

- we have escalated to EMC, but so far they have failed to resolve the problem or explain the poor SRDF performance. We are desperately trying to find a workaround, since a huge investment in the DMX4 has already been made.

- the DMX configuration is RAID5 7+1 on 450 GB FC drives, with 18 GB hypers.

- the stripe sizes we tried on LVM varied from 8 KB to 256 KB, but again with no consistent behavior.

- there is a 32 Gbps DWDM pipe between the sites, and utilization barely reaches 5%. The distance is 40 km.

- the code level is 5773-175
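For what it's worth, a back-of-the-envelope latency budget for one synchronous write over 40 km can be sketched as below. This is a rough estimate, not a measurement: the ~5 µs/km fiber propagation figure and the assumption of a single SRDF round trip per write are assumptions of mine; the 300 µs local write and ~1 ms RDF overhead come from the earlier replies.

```python
# Toy latency budget for one synchronous 8 KB write over SRDF/S.
# Assumptions: light travels ~5 us per km in fiber, and one round
# trip to the remote array is needed per write.
DISTANCE_KM = 40          # site separation stated in the thread
US_PER_KM = 5             # one-way fiber propagation, approximate
LOCAL_WRITE_US = 300      # smallish cache write, per the reply above
RDF_OVERHEAD_US = 1000    # rough SRDF cost, per the reply above

round_trip_us = 2 * DISTANCE_KM * US_PER_KM
expected_us = LOCAL_WRITE_US + RDF_OVERHEAD_US + round_trip_us
print(f"round trip : {round_trip_us} us")           # 400 us
print(f"expected RT: {expected_us / 1000:.1f} ms")  # 1.7 ms
```

If that rough budget holds, the observed 6 ms cannot be explained by distance alone, which points at queueing or configuration rather than the link itself.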

1.3K Posts

April 23rd, 2012 04:00

You did not say how many front-end paths you have, or which path manager you are using. We have seen path managers that send several hundred or even thousands of IOs down one path before switching to another. In that case you may want a lower switching threshold - as low as one when using 64 KB writes.
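To illustrate why that switching threshold matters, here is a toy model (not measured data; the 400 µs per-IO service time is a made-up figure). If a path manager fires a burst of B writes down one path before switching, the average IO in the burst queues behind roughly half the burst:

```python
# Toy model of per-path queueing when a path manager sends a burst
# of back-to-back IOs down a single path before switching paths.
SERVICE_US = 400  # hypothetical per-IO service time on one path

def avg_wait_us(burst: int, service_us: int = SERVICE_US) -> float:
    # For an instantaneous burst, IO number i waits i * service_us;
    # the average over the burst is service_us * (burst - 1) / 2.
    return service_us * (burst - 1) / 2

for burst in (1, 100, 1000):
    print(f"burst={burst:5d}  avg wait={avg_wait_us(burst) / 1000:.1f} ms")
```

Even with a fast per-IO service time, a burst of 1000 IOs down one path makes the average queueing delay dwarf the SRDF link latency, which would match the inconsistent results seen with different stripe sizes.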
