
March 30th, 2012 13:00

Re-transmissions only during replication

I assume I am getting re-transmissions during replication because the network between my originating group and the target group is not configured end-to-end for jumbo frames, flow control, etc.?

I see a steady retransmit rate of between 1.1% and 1.3% during replication, and on rare occasions it persists beyond an hour. The replication itself always finishes without any reported issue.

I am just wondering if there is something I can do to prevent this, beyond making sure the whole path (iSCSI VLAN, data center LAN, WAN between sites, receiving-side data center LAN, receiving-side iSCSI VLAN) is set per EQL recommendations (jumbo frames, flow control, unicast, etc.)?

 

Thanks for any help!


April 2nd, 2012 07:00

Only during replication do I get the emails below. Once replication is done, the re-transmissions stop. Also, replication finishes without issue.

This message was generated by the Dell EqualLogic SAN HeadQuarters service (version 2.1.100.5884) running on DHQSVRDPM001.dierbergs.net

The following conditions on Group EQL-HQ (172.24.250.10) have generated an alert notification:

Caution conditions:

• 4/2/2012 3:02:26 AM to 4/2/2012 4:02:27 AM

o Member EQ5 TCP retransmit percentage of 1.5%. If this trend persists for an hour or more, this could be indicative of a problem on the member's SAN network, resulting in an e-mail notification.

▪ TCP retransmit rate greater than 1% should be investigated. Check the network connections and switch settings.

o TCP outbound packet counts for polling period: 27,939,774

o TCP retransmit packet counts for polling period: 434,638

o eth0 send rate for polling period: 10.4 MB/sec

o eth1 send rate for polling period: 1.9 MB/sec

o eth2 send rate for polling period: 0 KB/sec

o Member EQ2 TCP retransmit percentage of 1.7%. If this trend persists for an hour or more, this could be indicative of a problem on the member's SAN network, resulting in an e-mail notification.

▪ TCP retransmit rate greater than 1% should be investigated. Check the network connections and switch settings.

▪ Condition already generated an e-mail message. If the condition persists, additional messages will be sent approximately every 6 hours.

o TCP outbound packet counts for polling period: 38,449,077

o TCP retransmit packet counts for polling period: 655,128

o eth0 send rate for polling period: 5.9 MB/sec

o eth1 send rate for polling period: 9.7 MB/sec

o eth2 send rate for polling period: 0 KB/sec


April 2nd, 2012 07:00

The DR site is about 15 miles away. We have a dedicated 200 Mb circuit for replication (Charter Fiber). I have a route set up that forces traffic through this circuit, so I must ping from a member of the primary site to the secondary site or it will go over a different circuit. When I do, I get the following...

----172.19.250.10 PING Statistics----

20 packets transmitted, 0 packets received, 100.0% packet loss

DR-EQL> ping 172.19.250.11

PING 172.19.250.11 (172.19.250.11): 56 data bytes

64 bytes from 172.19.250.11: icmp_seq=0 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=1 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=2 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=3 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=4 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=5 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=6 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=7 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=8 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=9 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=10 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=11 ttl=255 time=0.000 ms

64 bytes from 172.19.250.11: icmp_seq=12 ttl=255 time=0.000 ms

^C

----172.19.250.11 PING Statistics----

13 packets transmitted, 13 packets received, 0.0% packet loss

round-trip min/avg/max/stddev = 0.000/0.000/0.000/0.000 ms

Everything is on 5.2.2.

Thanks,

-Chris


April 2nd, 2012 07:00

3601:39:DR-EQ2:netmgtd: 2-Apr-2012 08:55:23.833937:rcc_util.c:753:INFO:25.2.9:CLI: Login to account grpadmin succeeded, using local authentication. User privilege is group-admin.

DR-EQL> ping "-s 1500 172.19.250.11"

PING 172.19.250.11 (172.19.250.11): 1500 data bytes

1508 bytes from 172.19.250.11: icmp_seq=0 ttl=255 time=0.000 ms

1508 bytes from 172.19.250.11: icmp_seq=1 ttl=255 time=0.000 ms

1508 bytes from 172.19.250.11: icmp_seq=2 ttl=255 time=0.000 ms

1508 bytes from 172.19.250.11: icmp_seq=3 ttl=255 time=0.000 ms

1508 bytes from 172.19.250.11: icmp_seq=4 ttl=255 time=0.000 ms

1508 bytes from 172.19.250.11: icmp_seq=5 ttl=255 time=0.000 ms

1508 bytes from 172.19.250.11: icmp_seq=6 ttl=255 time=0.000 ms

1508 bytes from 172.19.250.11: icmp_seq=7 ttl=255 time=0.000 ms

^C

----172.19.250.11 PING Statistics----

8 packets transmitted, 8 packets received, 0.0% packet loss

round-trip min/avg/max/stddev = 0.000/0.000/0.000/0.000 ms

DR-EQL> ping "-s 9000 172.19.250.11"

PING 172.19.250.11 (172.19.250.11): 9000 data bytes

9008 bytes from 172.19.250.11: icmp_seq=0 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=1 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=2 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=3 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=4 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=5 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=6 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=7 ttl=255 time=0.000 ms

9008 bytes from 172.19.250.11: icmp_seq=8 ttl=255 time=0.000 ms

^C

----172.19.250.11 PING Statistics----

9 packets transmitted, 9 packets received, 0.0% packet loss

round-trip min/avg/max/stddev = 0.000/0.000/0.000/0.000 ms

DR-EQL>
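One caveat on the ping tests above: a ping with a 9000-byte payload can succeed even on a non-jumbo path if the datagram is fragmented along the way. To actually prove the path carries jumbo frames, size the payload so the whole packet fits in a single frame and, where the tool supports it, set the don't-fragment bit. The payload arithmetic (assuming an IP header with no options) is:

```python
# Largest ICMP payload that fits in one frame at a given MTU,
# assuming a 20-byte IP header (no options) and 8-byte ICMP header.
IP_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu: int) -> int:
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(9000))  # 8972 -> e.g. ping "-s 8972 ..." for jumbo paths
print(max_ping_payload(1500))  # 1472 for a standard-MTU path
```

So a clean response to a payload of 8972 with fragmentation disallowed is better evidence of an end-to-end 9000-byte MTU than the 9000-byte payload tests above.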


April 5th, 2012 07:00

Thanks for your help. I opened a case and sent them all my logs.


November 15th, 2012 14:00

We do have a WAN accelerator between the sites, and the situation is the same as described above. Could the Riverbed WAN accelerator be affecting this?
