* Recurring trace from tcp_fragment()
From: Grant Zhang @ 2015-05-29 19:21 UTC
To: netdev
We have multiple machines running into the following trace repeatedly. The trace shows up every couple of seconds on our production machines.
May 29 18:14:04 cache-fra1230 kernel:[3080455.796143] WARNING: CPU: 7 PID: 0 at net/ipv4/tcp_output.c:1082 tcp_fragment+0x2e4/0x2f0()
May 29 18:14:04 cache-fra1230 kernel:[3080455.796144] Modules linked in: xt_TEE xt_dscp xt_DSCP macvlan gpio_ich x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel microcode ipmi_watchdog ipmi_devintf sb_edac edac_core lpc_ich mfd_core tpm_tis tpm ipmi_si ipmi_msghandler igb i2c_algo_bit ixgbe ptp isci libsas pps_core mdio
May 29 18:14:04 cache-fra1230 kernel:[3080455.796165] CPU: 7 PID: 0 Comm: swapper/7 Tainted: G W 3.14.39-fastly #6
May 29 18:14:04 cache-fra1230 kernel:[3080455.796166] Hardware name: Supermicro X9DRi-LN4+/X9DR3-LN4+/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.0 07/05/2013
May 29 18:14:04 cache-fra1230 kernel:[3080455.796168] 000000000000043a ffff885effce38f8 ffffffff81718716 0000000000000007
May 29 18:14:04 cache-fra1230 kernel:[3080455.796170] 0000000000000000 ffff885effce3938 ffffffff8106b54c ffff885e00000001
May 29 18:14:04 cache-fra1230 kernel:[3080455.796172] ffff8864910c5c00 0000000000000144 ffff882400e2f000 0000000000000400
May 29 18:14:04 cache-fra1230 kernel:[3080455.796174] Call Trace:
May 29 18:14:04 cache-fra1230 kernel:[3080455.796175] <IRQ> [<ffffffff81718716>] dump_stack+0x46/0x58
May 29 18:14:04 cache-fra1230 kernel:[3080455.796184] [<ffffffff8106b54c>] warn_slowpath_common+0x8c/0xc0
May 29 18:14:04 cache-fra1230 kernel:[3080455.796186] [<ffffffff8106b59a>] warn_slowpath_null+0x1a/0x20
May 29 18:14:04 cache-fra1230 kernel:[3080455.796188] [<ffffffff81678174>] tcp_fragment+0x2e4/0x2f0
May 29 18:14:04 cache-fra1230 kernel:[3080455.796191] [<ffffffff8166edeb>] tcp_mark_head_lost+0xeb/0x290
May 29 18:14:04 cache-fra1230 kernel:[3080455.796193] [<ffffffff8166fe28>] tcp_update_scoreboard+0x58/0x90
May 29 18:14:04 cache-fra1230 kernel:[3080455.796195] [<ffffffff8167422d>] tcp_fastretrans_alert+0x75d/0xb30
May 29 18:14:04 cache-fra1230 kernel:[3080455.796197] [<ffffffff816750d5>] tcp_ack+0xa15/0xf50
May 29 18:14:04 cache-fra1230 kernel:[3080455.796199] [<ffffffff816765db>] tcp_rcv_state_process+0x25b/0xd60
May 29 18:14:04 cache-fra1230 kernel:[3080455.796202] [<ffffffff8167f7e0>] tcp_v4_do_rcv+0x230/0x490
May 29 18:14:04 cache-fra1230 kernel:[3080455.796206] [<ffffffff8165cbb0>] ? ip_rcv_finish+0x380/0x380
May 29 18:14:04 cache-fra1230 kernel:[3080455.796208] [<ffffffff81681a93>] tcp_v4_rcv+0x803/0x850
May 29 18:14:04 cache-fra1230 kernel:[3080455.796210] [<ffffffff8165cbb0>] ? ip_rcv_finish+0x380/0x380
May 29 18:14:04 cache-fra1230 kernel:[3080455.796214] [<ffffffff8163c61d>] ? nf_hook_slow+0x7d/0x150
May 29 18:14:04 cache-fra1230 kernel:[3080455.796216] [<ffffffff8165cbb0>] ? ip_rcv_finish+0x380/0x380
May 29 18:14:04 cache-fra1230 kernel:[3080455.796219] [<ffffffff8165cc58>] ip_local_deliver_finish+0xa8/0x220
May 29 18:14:04 cache-fra1230 kernel:[3080455.796221] [<ffffffff8165cf5b>] ip_local_deliver+0x4b/0x90
May 29 18:14:04 cache-fra1230 kernel:[3080455.796223] [<ffffffff8165c951>] ip_rcv_finish+0x121/0x380
May 29 18:14:04 cache-fra1230 kernel:[3080455.796225] [<ffffffff8165d226>] ip_rcv+0x286/0x380
May 29 18:14:04 cache-fra1230 kernel:[3080455.796228] [<ffffffff8160c1d2>] __netif_receive_skb_core+0x512/0x640
May 29 18:14:04 cache-fra1230 kernel:[3080455.796230] [<ffffffff8160c321>] __netif_receive_skb+0x21/0x70
May 29 18:14:04 cache-fra1230 kernel:[3080455.796232] [<ffffffff8160c40b>] process_backlog+0x9b/0x170
May 29 18:14:04 cache-fra1230 kernel:[3080455.796234] [<ffffffff8160c851>] net_rx_action+0x111/0x210
May 29 18:14:04 cache-fra1230 kernel:[3080455.796237] [<ffffffff8106ff6f>] __do_softirq+0xef/0x2e0
May 29 18:14:04 cache-fra1230 kernel:[3080455.796239] [<ffffffff81070335>] irq_exit+0x55/0x60
May 29 18:14:04 cache-fra1230 kernel:[3080455.796243] [<ffffffff81729127>] do_IRQ+0x67/0x110
May 29 18:14:04 cache-fra1230 kernel:[3080455.796246] [<ffffffff8171edaa>] common_interrupt+0x6a/0x6a
May 29 18:14:04 cache-fra1230 kernel:[3080455.796246] <EOI> [<ffffffff8100b8c0>] ? default_idle+0x20/0xe0
May 29 18:14:04 cache-fra1230 kernel:[3080455.796253] [<ffffffff8100c07f>] arch_cpu_idle+0xf/0x20
May 29 18:14:04 cache-fra1230 kernel:[3080455.796256] [<ffffffff810b5f60>] cpu_startup_entry+0x80/0x240
May 29 18:14:04 cache-fra1230 kernel:[3080455.796260] [<ffffffff810c9846>] ? clockevents_config_and_register+0x26/0x30
May 29 18:14:04 cache-fra1230 kernel:[3080455.796264] [<ffffffff8102d480>] start_secondary+0x190/0x1f0
May 29 18:14:04 cache-fra1230 kernel:[3080455.796265] ---[ end trace 707a3e5aca13730c ]---
We have seen the trace on both 3.10 and 3.14 kernels. A Google search also turns up the same issue on the 3.16 kernel:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771224
Sorry, I haven't tried the latest upstream kernel--we only see the traces in our production environment, and we stay a bit behind the latest kernel there.
Using probe_events, I can see that tcp_fragment() is asked to fragment an skb which is already smaller than "len" (passed in by the caller, tcp_mark_head_lost()), and as a result tcp_fragment() returns -EINVAL at line 1071.
<idle>-0 [005] d.s. 1267970.050938: myprobe: (tcp_fragment+0x0/0x2f0) skb_len=0x150 len=0x200 mss_now=0x200
<idle>-0 [005] dNs. 1267970.051069: myretprobe: (tcp_mark_head_lost+0xeb/0x290 <- tcp_fragment) arg1=0xffffffea
<idle>-0 [005] dNs. 1267970.051097: myprobe: (tcp_fragment+0x0/0x2f0) skb_len=0x150 len=0x400 mss_now=0x200
<idle>-0 [005] dNs. 1267970.051183: myretprobe: (tcp_mark_head_lost+0xeb/0x290 <- tcp_fragment) arg1=0xffffffea
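(Decoding those probe values: skb_len=0x150 is 336 bytes, while
len=0x200 is 512 bytes and mss_now=0x200 is 512 bytes -- i.e.
tcp_fragment() is asked to split one full MSS off an skb holding only
336 bytes, and the second call even asks for two MSS, 0x400 = 1024
bytes. The return value arg1=0xffffffea is -22, i.e. -EINVAL.)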
1061 int tcp_fragment(struct sock *sk, struct sk_buff *skb, u32 len,
1062 unsigned int mss_now)
1063 {
1064 struct tcp_sock *tp = tcp_sk(sk);
1065 struct sk_buff *buff;
1066 int nsize, old_factor;
1067 int nlen;
1068 u8 flags;
1069
1070 if (WARN_ON(len > skb->len))
1071 return -EINVAL;
1072
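For context, here is a rough sketch of the call site in
tcp_mark_head_lost() that computes the "len" argument (paraphrased from
net/ipv4/tcp_input.c in v3.14; abridged, so the surrounding control
flow is approximate):

        /* This skb straddles the loss boundary: split off the part
         * that should be marked lost.  "packets" is the total number
         * of segments to mark lost, and "oldcnt" is how many were
         * counted before this skb.
         */
        mss = skb_shinfo(skb)->gso_size;
        err = tcp_fragment(sk, skb, (packets - oldcnt) * mss, mss);
        if (err < 0)
                break;

If tcp_skb_pcount(skb) overstates how many real segments the skb
holds, (packets - oldcnt) * mss can exceed skb->len, and the WARN_ON
above fires.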
The system otherwise runs fine, though. Digging into the git log, it seems the original BUG_ON was added long ago:
commit 3c05d92ed49f644d1f5a960fa48637d63b946016
Date: Wed Sep 14 20:50:35 2005 -0700
[TCP]: Compute in_sacked properly when we split up a TSO frame.
The problem is that the SACK fragmenting code may incorrectly call
tcp_fragment() with a length larger than the skb->len. This happens
when the skb on the transmit queue completely falls to the LHS of the
SACK.
And add a BUG() check to tcp_fragment() so we can spot this kind of
error more quickly in the future.
---------------------------- net/ipv4/tcp_output.c ----------------------------
index c10e443..b018e31 100644
@@ -435,6 +435,8 @@ int tcp_fragment(struct sock *sk, struct sk_buff *skb, u32 len, unsigned int mss
int nsize, old_factor;
u16 flags;
+ BUG_ON(len >= skb->len);
+
nsize = skb_headlen(skb) - len;
if (nsize < 0)
nsize = 0;
My questions are:
1. Is this trace still related to the SACK fragmenting code?
2. If this trace is benign, is it OK to remove or rate-limit the kernel message?
Thanks for any input,
Grant
* Re: Recurring trace from tcp_fragment()
From: Neal Cardwell @ 2015-05-29 19:46 UTC
To: Grant Zhang; +Cc: Netdev, Yuchung Cheng, Eric Dumazet
On Fri, May 29, 2015 at 3:21 PM, Grant Zhang <gzhang@fastly.com> wrote:
> We have multiple machines running into the following trace repeatedly. The trace shows up every couple of seconds on our production machines.
>
...
> May 29 18:14:04 cache-fra1230 kernel:[3080455.796188] [<ffffffff81678174>] tcp_fragment+0x2e4/0x2f0
> May 29 18:14:04 cache-fra1230 kernel:[3080455.796191] [<ffffffff8166edeb>] tcp_mark_head_lost+0xeb/0x290
> May 29 18:14:04 cache-fra1230 kernel:[3080455.796193] [<ffffffff8166fe28>] tcp_update_scoreboard+0x58/0x90
> May 29 18:14:04 cache-fra1230 kernel:[3080455.796195] [<ffffffff8167422d>] tcp_fastretrans_alert+0x75d/0xb30
> May 29 18:14:04 cache-fra1230 kernel:[3080455.796197] [<ffffffff816750d5>] tcp_ack+0xa15/0xf50
I'm glad you are seeing this so often. :-) We see this warning too,
but only very occasionally.
Our team has a proposed fix, but we have so far been unable to test it
thoroughly since it happens so rarely in our environment. Would you be
willing to test a small patch aimed at fixing this warning?
neal
* Re: Recurring trace from tcp_fragment()
From: Grant Zhang @ 2015-05-29 19:53 UTC
To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet
Hi Neal,
I will be more than happy to test the patch. Please send it my way.
Thanks,
Grant
> On May 29, 2015, at 12:46 PM, Neal Cardwell <ncardwell@google.com> wrote:
>
> On Fri, May 29, 2015 at 3:21 PM, Grant Zhang <gzhang@fastly.com> wrote:
>> We have multiple machines running into the following trace repeatedly. The trace shows up every couple of seconds on our production machines.
>>
> ...
>> May 29 18:14:04 cache-fra1230 kernel:[3080455.796188] [<ffffffff81678174>] tcp_fragment+0x2e4/0x2f0
>> May 29 18:14:04 cache-fra1230 kernel:[3080455.796191] [<ffffffff8166edeb>] tcp_mark_head_lost+0xeb/0x290
>> May 29 18:14:04 cache-fra1230 kernel:[3080455.796193] [<ffffffff8166fe28>] tcp_update_scoreboard+0x58/0x90
>> May 29 18:14:04 cache-fra1230 kernel:[3080455.796195] [<ffffffff8167422d>] tcp_fastretrans_alert+0x75d/0xb30
>> May 29 18:14:04 cache-fra1230 kernel:[3080455.796197] [<ffffffff816750d5>] tcp_ack+0xa15/0xf50
>
> I'm glad you are seeing this so often. :-) We see this warning too,
> but only very occasionally.
>
> Our team has a proposed fix, but we have so far been unable to test it
> thoroughly since it happens so rarely in our environment. Would you be
> willing to test a small patch aimed at fixing this warning?
>
> neal
* Re: Recurring trace from tcp_fragment()
From: Neal Cardwell @ 2015-05-30 17:29 UTC
To: Grant Zhang; +Cc: Netdev, Yuchung Cheng, Eric Dumazet
[-- Attachment #1: Type: text/plain, Size: 800 bytes --]
On Fri, May 29, 2015 at 3:53 PM, Grant Zhang <gzhang@fastly.com> wrote:
> Hi Neal,
>
> I will be more than happy to test the patch. Please send it my way.
Great. Thank you so much for being willing to do this. Attached is a
patch for testing. I generated it and tested it relative to Linux
v3.14.39, since your stack trace seemed to suggest that you were
seeing this on some variant of v3.14.39. (Newer kernels would need a
slightly different patch, since the reneging code path has changed a
little since 3.14.)
Can you please try it out and see if it makes that warning go away?
Also, I would be interested in seeing the value of your
TcpExtTCPSACKReneging counter, and some sense of how fast that value
is increasing, on a machine that's seeing this issue:
nstat -z -a | grep Reneg
Thanks!
neal
[-- Attachment #2: 0001-RFC-for-tests-on-v3.14.39-tcp-resegment-skbs-that-we.patch --]
[-- Type: application/octet-stream, Size: 4569 bytes --]
From 28f179e004adcaa2397c77882836fd7111ef61aa Mon Sep 17 00:00:00 2001
From: Neal Cardwell <ncardwell@google.com>
Date: Fri, 29 May 2015 20:05:23 -0400
Subject: [PATCH] [RFC for tests on v3.14.39] tcp: resegment skbs that we mark
un-SACKed due to reneging
[This patch is for Linux v3.14.39 and is for testing a proposed fix
for the issue reported in the netdev thread "Recurring trace from
tcp_fragment()" from May 29, 2015. A slightly different patch would be
needed for more recent kernels.]
If we are removing a SACK mark due to reneging then we should check to
see if the pcount needs to be sanitized, since tcp_shifted_skb()
can join together SACKed skbs in a way that makes their pcount
unrepresentative of the length of the packet.
This is aimed at fixing scenarios like the one where
tcp_mark_head_lost() calls tcp_fragment() and we fire the following
warning:
if (WARN_ON(len > skb->len))
return -EINVAL;
Here is a theory as to how this could happen...
Suppose the MSS=1000, for simplicity.
(1) send packet A, 1001 bytes, pcount 2
(2) send packet B, 1001 bytes, pcount 2
(3) receive SACK for A
(4) receive SACK for A and B, shift B onto A.
When we shift B onto A, tcp_shifted_skb() just adds the pcounts of A
and B, so now A's pcount is 2+2=4. But its skb->len is 1001+1001 =
2002 bytes. Now normally we would expect an skb with a pcount of 4 to
have somewhere between 3*MSS + 1 byte and 4*MSS (between 3001 and 4000
bytes). And tcp_mark_head_lost() and tcp_match_skb_to_sack()
implicitly assume this.
Suppose there is then SACK reneging, and we remove the SACKed bit from
this weird skb A with pcount 4 and skb->len 2002. Then we get more
SACKs for packets beyond A, and the loss-marking rules say we should
be able to mark 3 packets starting at A as lost. Then we try to chop
3MSS worth of bytes off of packet A, which only has 2.002MSS of data.
And the warning fires.
Signed-off-by: Neal Cardwell <ncardwell@google.com>
---
include/net/tcp.h | 2 ++
net/ipv4/tcp_input.c | 9 ++++++++-
net/ipv4/tcp_output.c | 14 ++++++++++++++
3 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 1f0d847..4464312 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -543,6 +543,8 @@ void tcp_xmit_retransmit_queue(struct sock *);
void tcp_simple_retransmit(struct sock *);
int tcp_trim_head(struct sock *, struct sk_buff *, u32);
int tcp_fragment(struct sock *, struct sk_buff *, u32, unsigned int);
+int tcp_reset_skb_tso_segs(struct sock *sk, struct sk_buff *skb,
+ unsigned int mss_now);
void tcp_send_probe0(struct sock *);
void tcp_send_partial(struct sock *);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 2291791..804713b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1915,6 +1915,7 @@ void tcp_enter_loss(struct sock *sk, int how)
struct tcp_sock *tp = tcp_sk(sk);
struct sk_buff *skb;
bool new_recovery = false;
+ bool was_sacked;
/* Reduce ssthresh if it has not yet been made inside this window. */
if (icsk->icsk_ca_state <= TCP_CA_Disorder ||
@@ -1949,11 +1950,17 @@ void tcp_enter_loss(struct sock *sk, int how)
tp->undo_marker = 0;
TCP_SKB_CB(skb)->sacked &= (~TCPCB_TAGBITS)|TCPCB_SACKED_ACKED;
- if (!(TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_ACKED) || how) {
+ was_sacked = TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED;
+ if (!was_sacked || how) {
TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_ACKED;
TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
tp->lost_out += tcp_skb_pcount(skb);
tp->retransmit_high = TCP_SKB_CB(skb)->end_seq;
+
+ /* Clean up weird pcounts from tcp_shifted_skb(). */
+ if (was_sacked)
+ tcp_reset_skb_tso_segs(sk, skb,
+ tcp_current_mss(sk));
}
}
tcp_verify_left_out(tp);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 96f64e5..74c8757 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1501,6 +1501,20 @@ static int tcp_init_tso_segs(const struct sock *sk, struct sk_buff *skb,
return tso_segs;
}
+/* Recompute the TSO segmentation for an skb that was already sent. */
+int tcp_reset_skb_tso_segs(struct sock *sk, struct sk_buff *skb,
+ unsigned int mss_now)
+{
+ int oldpcount = tcp_skb_pcount(skb);
+
+ if (skb_unclone(skb, GFP_ATOMIC))
+ return -ENOMEM;
+
+ tcp_set_skb_tso_segs(sk, skb, mss_now);
+ tcp_adjust_pcount(sk, skb, oldpcount - tcp_skb_pcount(skb));
+
+ return 0;
+}
/* Return true if the Nagle test allows this packet to be
* sent now.
--
2.2.0.rc0.207.ga3a616c
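The tcp_shifted_skb() behavior this theory leans on is the pcount
merge, which simply adds the two gso_segs counts without checking that
the combined skb->len still matches pcount * mss. Abridged from
net/ipv4/tcp_input.c (v3.14):

        /* tcp_shifted_skb(): "pcount" segments worth of data were
         * shifted from skb onto prev; the segment counts are added
         * as-is.
         */
        TCP_SKB_CB(prev)->end_seq += shifted;
        TCP_SKB_CB(skb)->seq += shifted;

        skb_shinfo(prev)->gso_segs += pcount;
        BUG_ON(skb_shinfo(skb)->gso_segs < pcount);
        skb_shinfo(skb)->gso_segs -= pcount;

So after step (4) of the theory above, the merged skb can end up with
gso_segs = 4 but skb->len = 2002 bytes.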
* Re: Recurring trace from tcp_fragment()
From: Grant Zhang @ 2015-05-30 18:52 UTC
To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet
[-- Attachment #1: Type: text/plain, Size: 227 bytes --]
Thank you Neal. Most likely I will test the patch on Monday and report
back the result.
As for the TcpExtTCPSACKReneging counter, attached is the counter
value captured at a 1-second interval for 10 minutes.
Thanks,
Grant
[-- Attachment #2: reneg.log --]
[-- Type: application/octet-stream, Size: 33055 bytes --]
TcpExtTCPSACKReneging 3029149 0.0
TcpExtTCPSACKReneging 3029149 0.0
TcpExtTCPSACKReneging 3029150 0.0
TcpExtTCPSACKReneging 3029151 0.0
TcpExtTCPSACKReneging 3029151 0.0
TcpExtTCPSACKReneging 3029152 0.0
TcpExtTCPSACKReneging 3029152 0.0
TcpExtTCPSACKReneging 3029152 0.0
TcpExtTCPSACKReneging 3029155 0.0
TcpExtTCPSACKReneging 3029155 0.0
TcpExtTCPSACKReneging 3029156 0.0
TcpExtTCPSACKReneging 3029157 0.0
TcpExtTCPSACKReneging 3029157 0.0
TcpExtTCPSACKReneging 3029159 0.0
TcpExtTCPSACKReneging 3029160 0.0
TcpExtTCPSACKReneging 3029160 0.0
TcpExtTCPSACKReneging 3029160 0.0
TcpExtTCPSACKReneging 3029160 0.0
TcpExtTCPSACKReneging 3029160 0.0
TcpExtTCPSACKReneging 3029160 0.0
TcpExtTCPSACKReneging 3029160 0.0
TcpExtTCPSACKReneging 3029161 0.0
TcpExtTCPSACKReneging 3029164 0.0
TcpExtTCPSACKReneging 3029165 0.0
TcpExtTCPSACKReneging 3029166 0.0
TcpExtTCPSACKReneging 3029166 0.0
TcpExtTCPSACKReneging 3029168 0.0
TcpExtTCPSACKReneging 3029168 0.0
TcpExtTCPSACKReneging 3029169 0.0
TcpExtTCPSACKReneging 3029169 0.0
TcpExtTCPSACKReneging 3029170 0.0
TcpExtTCPSACKReneging 3029170 0.0
TcpExtTCPSACKReneging 3029172 0.0
TcpExtTCPSACKReneging 3029172 0.0
TcpExtTCPSACKReneging 3029173 0.0
TcpExtTCPSACKReneging 3029173 0.0
TcpExtTCPSACKReneging 3029173 0.0
TcpExtTCPSACKReneging 3029173 0.0
TcpExtTCPSACKReneging 3029177 0.0
TcpExtTCPSACKReneging 3029179 0.0
TcpExtTCPSACKReneging 3029179 0.0
TcpExtTCPSACKReneging 3029181 0.0
TcpExtTCPSACKReneging 3029182 0.0
TcpExtTCPSACKReneging 3029182 0.0
TcpExtTCPSACKReneging 3029185 0.0
TcpExtTCPSACKReneging 3029185 0.0
TcpExtTCPSACKReneging 3029185 0.0
TcpExtTCPSACKReneging 3029186 0.0
TcpExtTCPSACKReneging 3029186 0.0
TcpExtTCPSACKReneging 3029186 0.0
TcpExtTCPSACKReneging 3029187 0.0
TcpExtTCPSACKReneging 3029187 0.0
TcpExtTCPSACKReneging 3029190 0.0
TcpExtTCPSACKReneging 3029190 0.0
TcpExtTCPSACKReneging 3029191 0.0
TcpExtTCPSACKReneging 3029192 0.0
TcpExtTCPSACKReneging 3029194 0.0
TcpExtTCPSACKReneging 3029194 0.0
TcpExtTCPSACKReneging 3029195 0.0
TcpExtTCPSACKReneging 3029198 0.0
TcpExtTCPSACKReneging 3029199 0.0
TcpExtTCPSACKReneging 3029204 0.0
TcpExtTCPSACKReneging 3029208 0.0
TcpExtTCPSACKReneging 3029211 0.0
TcpExtTCPSACKReneging 3029214 0.0
TcpExtTCPSACKReneging 3029219 0.0
TcpExtTCPSACKReneging 3029226 0.0
TcpExtTCPSACKReneging 3029230 0.0
TcpExtTCPSACKReneging 3029233 0.0
TcpExtTCPSACKReneging 3029237 0.0
TcpExtTCPSACKReneging 3029240 0.0
TcpExtTCPSACKReneging 3029243 0.0
TcpExtTCPSACKReneging 3029243 0.0
TcpExtTCPSACKReneging 3029244 0.0
TcpExtTCPSACKReneging 3029246 0.0
TcpExtTCPSACKReneging 3029247 0.0
TcpExtTCPSACKReneging 3029248 0.0
TcpExtTCPSACKReneging 3029252 0.0
TcpExtTCPSACKReneging 3029252 0.0
TcpExtTCPSACKReneging 3029252 0.0
TcpExtTCPSACKReneging 3029253 0.0
TcpExtTCPSACKReneging 3029255 0.0
TcpExtTCPSACKReneging 3029256 0.0
TcpExtTCPSACKReneging 3029257 0.0
TcpExtTCPSACKReneging 3029258 0.0
TcpExtTCPSACKReneging 3029259 0.0
TcpExtTCPSACKReneging 3029259 0.0
TcpExtTCPSACKReneging 3029259 0.0
TcpExtTCPSACKReneging 3029259 0.0
TcpExtTCPSACKReneging 3029259 0.0
TcpExtTCPSACKReneging 3029259 0.0
TcpExtTCPSACKReneging 3029259 0.0
TcpExtTCPSACKReneging 3029261 0.0
TcpExtTCPSACKReneging 3029261 0.0
TcpExtTCPSACKReneging 3029263 0.0
TcpExtTCPSACKReneging 3029264 0.0
TcpExtTCPSACKReneging 3029265 0.0
TcpExtTCPSACKReneging 3029265 0.0
TcpExtTCPSACKReneging 3029267 0.0
TcpExtTCPSACKReneging 3029268 0.0
TcpExtTCPSACKReneging 3029271 0.0
TcpExtTCPSACKReneging 3029271 0.0
TcpExtTCPSACKReneging 3029273 0.0
TcpExtTCPSACKReneging 3029273 0.0
TcpExtTCPSACKReneging 3029274 0.0
TcpExtTCPSACKReneging 3029275 0.0
TcpExtTCPSACKReneging 3029275 0.0
TcpExtTCPSACKReneging 3029275 0.0
TcpExtTCPSACKReneging 3029275 0.0
TcpExtTCPSACKReneging 3029275 0.0
TcpExtTCPSACKReneging 3029276 0.0
TcpExtTCPSACKReneging 3029276 0.0
TcpExtTCPSACKReneging 3029277 0.0
TcpExtTCPSACKReneging 3029277 0.0
TcpExtTCPSACKReneging 3029277 0.0
TcpExtTCPSACKReneging 3029277 0.0
TcpExtTCPSACKReneging 3029278 0.0
TcpExtTCPSACKReneging 3029278 0.0
TcpExtTCPSACKReneging 3029279 0.0
TcpExtTCPSACKReneging 3029280 0.0
TcpExtTCPSACKReneging 3029281 0.0
TcpExtTCPSACKReneging 3029281 0.0
TcpExtTCPSACKReneging 3029281 0.0
TcpExtTCPSACKReneging 3029282 0.0
TcpExtTCPSACKReneging 3029284 0.0
TcpExtTCPSACKReneging 3029285 0.0
TcpExtTCPSACKReneging 3029287 0.0
TcpExtTCPSACKReneging 3029287 0.0
TcpExtTCPSACKReneging 3029288 0.0
TcpExtTCPSACKReneging 3029288 0.0
TcpExtTCPSACKReneging 3029288 0.0
TcpExtTCPSACKReneging 3029289 0.0
TcpExtTCPSACKReneging 3029290 0.0
TcpExtTCPSACKReneging 3029294 0.0
TcpExtTCPSACKReneging 3029294 0.0
TcpExtTCPSACKReneging 3029295 0.0
TcpExtTCPSACKReneging 3029297 0.0
TcpExtTCPSACKReneging 3029298 0.0
TcpExtTCPSACKReneging 3029299 0.0
TcpExtTCPSACKReneging 3029301 0.0
TcpExtTCPSACKReneging 3029303 0.0
TcpExtTCPSACKReneging 3029306 0.0
TcpExtTCPSACKReneging 3029308 0.0
TcpExtTCPSACKReneging 3029309 0.0
TcpExtTCPSACKReneging 3029310 0.0
TcpExtTCPSACKReneging 3029313 0.0
TcpExtTCPSACKReneging 3029314 0.0
TcpExtTCPSACKReneging 3029315 0.0
TcpExtTCPSACKReneging 3029315 0.0
TcpExtTCPSACKReneging 3029318 0.0
TcpExtTCPSACKReneging 3029318 0.0
TcpExtTCPSACKReneging 3029319 0.0
TcpExtTCPSACKReneging 3029320 0.0
TcpExtTCPSACKReneging 3029321 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029324 0.0
TcpExtTCPSACKReneging 3029325 0.0
TcpExtTCPSACKReneging 3029327 0.0
TcpExtTCPSACKReneging 3029330 0.0
TcpExtTCPSACKReneging 3029332 0.0
TcpExtTCPSACKReneging 3029332 0.0
TcpExtTCPSACKReneging 3029332 0.0
TcpExtTCPSACKReneging 3029332 0.0
TcpExtTCPSACKReneging 3029334 0.0
TcpExtTCPSACKReneging 3029338 0.0
TcpExtTCPSACKReneging 3029341 0.0
TcpExtTCPSACKReneging 3029343 0.0
TcpExtTCPSACKReneging 3029347 0.0
TcpExtTCPSACKReneging 3029350 0.0
TcpExtTCPSACKReneging 3029353 0.0
TcpExtTCPSACKReneging 3029355 0.0
TcpExtTCPSACKReneging 3029355 0.0
TcpExtTCPSACKReneging 3029357 0.0
TcpExtTCPSACKReneging 3029357 0.0
TcpExtTCPSACKReneging 3029357 0.0
TcpExtTCPSACKReneging 3029358 0.0
TcpExtTCPSACKReneging 3029358 0.0
TcpExtTCPSACKReneging 3029361 0.0
TcpExtTCPSACKReneging 3029365 0.0
TcpExtTCPSACKReneging 3029366 0.0
TcpExtTCPSACKReneging 3029367 0.0
TcpExtTCPSACKReneging 3029369 0.0
TcpExtTCPSACKReneging 3029369 0.0
TcpExtTCPSACKReneging 3029369 0.0
TcpExtTCPSACKReneging 3029371 0.0
TcpExtTCPSACKReneging 3029372 0.0
TcpExtTCPSACKReneging 3029375 0.0
TcpExtTCPSACKReneging 3029377 0.0
TcpExtTCPSACKReneging 3029377 0.0
TcpExtTCPSACKReneging 3029378 0.0
TcpExtTCPSACKReneging 3029379 0.0
TcpExtTCPSACKReneging 3029381 0.0
TcpExtTCPSACKReneging 3029384 0.0
TcpExtTCPSACKReneging 3029385 0.0
TcpExtTCPSACKReneging 3029388 0.0
TcpExtTCPSACKReneging 3029390 0.0
TcpExtTCPSACKReneging 3029390 0.0
TcpExtTCPSACKReneging 3029393 0.0
TcpExtTCPSACKReneging 3029395 0.0
TcpExtTCPSACKReneging 3029395 0.0
TcpExtTCPSACKReneging 3029397 0.0
TcpExtTCPSACKReneging 3029397 0.0
TcpExtTCPSACKReneging 3029397 0.0
TcpExtTCPSACKReneging 3029397 0.0
TcpExtTCPSACKReneging 3029399 0.0
TcpExtTCPSACKReneging 3029404 0.0
TcpExtTCPSACKReneging 3029405 0.0
TcpExtTCPSACKReneging 3029405 0.0
TcpExtTCPSACKReneging 3029406 0.0
TcpExtTCPSACKReneging 3029408 0.0
TcpExtTCPSACKReneging 3029410 0.0
TcpExtTCPSACKReneging 3029411 0.0
TcpExtTCPSACKReneging 3029413 0.0
TcpExtTCPSACKReneging 3029413 0.0
TcpExtTCPSACKReneging 3029415 0.0
TcpExtTCPSACKReneging 3029417 0.0
TcpExtTCPSACKReneging 3029418 0.0
TcpExtTCPSACKReneging 3029418 0.0
TcpExtTCPSACKReneging 3029421 0.0
TcpExtTCPSACKReneging 3029423 0.0
TcpExtTCPSACKReneging 3029424 0.0
TcpExtTCPSACKReneging 3029425 0.0
TcpExtTCPSACKReneging 3029425 0.0
TcpExtTCPSACKReneging 3029426 0.0
TcpExtTCPSACKReneging 3029426 0.0
TcpExtTCPSACKReneging 3029426 0.0
TcpExtTCPSACKReneging 3029427 0.0
TcpExtTCPSACKReneging 3029427 0.0
TcpExtTCPSACKReneging 3029428 0.0
TcpExtTCPSACKReneging 3029429 0.0
TcpExtTCPSACKReneging 3029430 0.0
TcpExtTCPSACKReneging 3029431 0.0
TcpExtTCPSACKReneging 3029431 0.0
TcpExtTCPSACKReneging 3029433 0.0
TcpExtTCPSACKReneging 3029434 0.0
TcpExtTCPSACKReneging 3029437 0.0
TcpExtTCPSACKReneging 3029438 0.0
TcpExtTCPSACKReneging 3029438 0.0
TcpExtTCPSACKReneging 3029438 0.0
TcpExtTCPSACKReneging 3029439 0.0
TcpExtTCPSACKReneging 3029441 0.0
TcpExtTCPSACKReneging 3029441 0.0
TcpExtTCPSACKReneging 3029443 0.0
TcpExtTCPSACKReneging 3029443 0.0
TcpExtTCPSACKReneging 3029444 0.0
TcpExtTCPSACKReneging 3029445 0.0
TcpExtTCPSACKReneging 3029446 0.0
TcpExtTCPSACKReneging 3029448 0.0
TcpExtTCPSACKReneging 3029449 0.0
TcpExtTCPSACKReneging 3029450 0.0
TcpExtTCPSACKReneging 3029450 0.0
TcpExtTCPSACKReneging 3029452 0.0
TcpExtTCPSACKReneging 3029453 0.0
TcpExtTCPSACKReneging 3029453 0.0
TcpExtTCPSACKReneging 3029455 0.0
TcpExtTCPSACKReneging 3029455 0.0
TcpExtTCPSACKReneging 3029457 0.0
TcpExtTCPSACKReneging 3029458 0.0
TcpExtTCPSACKReneging 3029460 0.0
TcpExtTCPSACKReneging 3029462 0.0
TcpExtTCPSACKReneging 3029462 0.0
TcpExtTCPSACKReneging 3029465 0.0
TcpExtTCPSACKReneging 3029473 0.0
TcpExtTCPSACKReneging 3029483 0.0
TcpExtTCPSACKReneging 3029485 0.0
TcpExtTCPSACKReneging 3029485 0.0
TcpExtTCPSACKReneging 3029486 0.0
TcpExtTCPSACKReneging 3029486 0.0
TcpExtTCPSACKReneging 3029487 0.0
TcpExtTCPSACKReneging 3029487 0.0
TcpExtTCPSACKReneging 3029488 0.0
TcpExtTCPSACKReneging 3029493 0.0
TcpExtTCPSACKReneging 3029495 0.0
TcpExtTCPSACKReneging 3029496 0.0
TcpExtTCPSACKReneging 3029497 0.0
TcpExtTCPSACKReneging 3029499 0.0
TcpExtTCPSACKReneging 3029501 0.0
TcpExtTCPSACKReneging 3029504 0.0
TcpExtTCPSACKReneging 3029505 0.0
TcpExtTCPSACKReneging 3029506 0.0
TcpExtTCPSACKReneging 3029511 0.0
TcpExtTCPSACKReneging 3029512 0.0
TcpExtTCPSACKReneging 3029514 0.0
TcpExtTCPSACKReneging 3029515 0.0
TcpExtTCPSACKReneging 3029517 0.0
TcpExtTCPSACKReneging 3029521 0.0
TcpExtTCPSACKReneging 3029525 0.0
TcpExtTCPSACKReneging 3029528 0.0
TcpExtTCPSACKReneging 3029530 0.0
TcpExtTCPSACKReneging 3029532 0.0
TcpExtTCPSACKReneging 3029534 0.0
TcpExtTCPSACKReneging 3029534 0.0
TcpExtTCPSACKReneging 3029536 0.0
TcpExtTCPSACKReneging 3029537 0.0
TcpExtTCPSACKReneging 3029539 0.0
TcpExtTCPSACKReneging 3029544 0.0
TcpExtTCPSACKReneging 3029547 0.0
TcpExtTCPSACKReneging 3029548 0.0
TcpExtTCPSACKReneging 3029553 0.0
TcpExtTCPSACKReneging 3029554 0.0
TcpExtTCPSACKReneging 3029557 0.0
TcpExtTCPSACKReneging 3029557 0.0
TcpExtTCPSACKReneging 3029558 0.0
TcpExtTCPSACKReneging 3029560 0.0
TcpExtTCPSACKReneging 3029560 0.0
TcpExtTCPSACKReneging 3029562 0.0
TcpExtTCPSACKReneging 3029562 0.0
TcpExtTCPSACKReneging 3029562 0.0
TcpExtTCPSACKReneging 3029563 0.0
TcpExtTCPSACKReneging 3029564 0.0
TcpExtTCPSACKReneging 3029565 0.0
TcpExtTCPSACKReneging 3029567 0.0
TcpExtTCPSACKReneging 3029568 0.0
TcpExtTCPSACKReneging 3029575 0.0
TcpExtTCPSACKReneging 3029575 0.0
TcpExtTCPSACKReneging 3029575 0.0
TcpExtTCPSACKReneging 3029575 0.0
TcpExtTCPSACKReneging 3029575 0.0
TcpExtTCPSACKReneging 3029575 0.0
TcpExtTCPSACKReneging 3029576 0.0
TcpExtTCPSACKReneging 3029577 0.0
TcpExtTCPSACKReneging 3029580 0.0
TcpExtTCPSACKReneging 3029581 0.0
TcpExtTCPSACKReneging 3029581 0.0
TcpExtTCPSACKReneging 3029581 0.0
TcpExtTCPSACKReneging 3029582 0.0
TcpExtTCPSACKReneging 3029582 0.0
TcpExtTCPSACKReneging 3029582 0.0
TcpExtTCPSACKReneging 3029582 0.0
TcpExtTCPSACKReneging 3029583 0.0
TcpExtTCPSACKReneging 3029584 0.0
TcpExtTCPSACKReneging 3029584 0.0
TcpExtTCPSACKReneging 3029584 0.0
TcpExtTCPSACKReneging 3029586 0.0
TcpExtTCPSACKReneging 3029587 0.0
TcpExtTCPSACKReneging 3029588 0.0
TcpExtTCPSACKReneging 3029589 0.0
TcpExtTCPSACKReneging 3029591 0.0
TcpExtTCPSACKReneging 3029592 0.0
TcpExtTCPSACKReneging 3029592 0.0
TcpExtTCPSACKReneging 3029592 0.0
TcpExtTCPSACKReneging 3029594 0.0
TcpExtTCPSACKReneging 3029596 0.0
TcpExtTCPSACKReneging 3029597 0.0
TcpExtTCPSACKReneging 3029598 0.0
TcpExtTCPSACKReneging 3029598 0.0
TcpExtTCPSACKReneging 3029598 0.0
TcpExtTCPSACKReneging 3029598 0.0
TcpExtTCPSACKReneging 3029598 0.0
TcpExtTCPSACKReneging 3029601 0.0
TcpExtTCPSACKReneging 3029604 0.0
TcpExtTCPSACKReneging 3029605 0.0
TcpExtTCPSACKReneging 3029605 0.0
TcpExtTCPSACKReneging 3029605 0.0
TcpExtTCPSACKReneging 3029606 0.0
TcpExtTCPSACKReneging 3029607 0.0
TcpExtTCPSACKReneging 3029609 0.0
TcpExtTCPSACKReneging 3029609 0.0
TcpExtTCPSACKReneging 3029610 0.0
TcpExtTCPSACKReneging 3029610 0.0
TcpExtTCPSACKReneging 3029612 0.0
TcpExtTCPSACKReneging 3029612 0.0
TcpExtTCPSACKReneging 3029612 0.0
TcpExtTCPSACKReneging 3029612 0.0
TcpExtTCPSACKReneging 3029613 0.0
TcpExtTCPSACKReneging 3029613 0.0
TcpExtTCPSACKReneging 3029614 0.0
TcpExtTCPSACKReneging 3029615 0.0
TcpExtTCPSACKReneging 3029617 0.0
TcpExtTCPSACKReneging 3029617 0.0
TcpExtTCPSACKReneging 3029618 0.0
TcpExtTCPSACKReneging 3029620 0.0
TcpExtTCPSACKReneging 3029620 0.0
TcpExtTCPSACKReneging 3029621 0.0
TcpExtTCPSACKReneging 3029625 0.0
TcpExtTCPSACKReneging 3029625 0.0
TcpExtTCPSACKReneging 3029625 0.0
TcpExtTCPSACKReneging 3029625 0.0
TcpExtTCPSACKReneging 3029626 0.0
TcpExtTCPSACKReneging 3029627 0.0
TcpExtTCPSACKReneging 3029629 0.0
TcpExtTCPSACKReneging 3029631 0.0
TcpExtTCPSACKReneging 3029632 0.0
TcpExtTCPSACKReneging 3029633 0.0
TcpExtTCPSACKReneging 3029633 0.0
TcpExtTCPSACKReneging 3029634 0.0
TcpExtTCPSACKReneging 3029634 0.0
TcpExtTCPSACKReneging 3029635 0.0
TcpExtTCPSACKReneging 3029643 0.0
TcpExtTCPSACKReneging 3029645 0.0
TcpExtTCPSACKReneging 3029646 0.0
TcpExtTCPSACKReneging 3029646 0.0
TcpExtTCPSACKReneging 3029647 0.0
TcpExtTCPSACKReneging 3029656 0.0
TcpExtTCPSACKReneging 3029664 0.0
TcpExtTCPSACKReneging 3029667 0.0
TcpExtTCPSACKReneging 3029669 0.0
TcpExtTCPSACKReneging 3029671 0.0
TcpExtTCPSACKReneging 3029673 0.0
TcpExtTCPSACKReneging 3029678 0.0
TcpExtTCPSACKReneging 3029680 0.0
TcpExtTCPSACKReneging 3029683 0.0
TcpExtTCPSACKReneging 3029683 0.0
TcpExtTCPSACKReneging 3029686 0.0
TcpExtTCPSACKReneging 3029691 0.0
TcpExtTCPSACKReneging 3029694 0.0
TcpExtTCPSACKReneging 3029699 0.0
TcpExtTCPSACKReneging 3029705 0.0
TcpExtTCPSACKReneging 3029706 0.0
TcpExtTCPSACKReneging 3029708 0.0
TcpExtTCPSACKReneging 3029710 0.0
TcpExtTCPSACKReneging 3029712 0.0
TcpExtTCPSACKReneging 3029713 0.0
TcpExtTCPSACKReneging 3029718 0.0
TcpExtTCPSACKReneging 3029720 0.0
TcpExtTCPSACKReneging 3029722 0.0
TcpExtTCPSACKReneging 3029723 0.0
TcpExtTCPSACKReneging 3029724 0.0
TcpExtTCPSACKReneging 3029726 0.0
TcpExtTCPSACKReneging 3029730 0.0
TcpExtTCPSACKReneging 3029732 0.0
TcpExtTCPSACKReneging 3029736 0.0
TcpExtTCPSACKReneging 3029738 0.0
TcpExtTCPSACKReneging 3029745 0.0
TcpExtTCPSACKReneging 3029745 0.0
TcpExtTCPSACKReneging 3029745 0.0
TcpExtTCPSACKReneging 3029746 0.0
TcpExtTCPSACKReneging 3029747 0.0
TcpExtTCPSACKReneging 3029747 0.0
TcpExtTCPSACKReneging 3029748 0.0
TcpExtTCPSACKReneging 3029753 0.0
TcpExtTCPSACKReneging 3029753 0.0
TcpExtTCPSACKReneging 3029753 0.0
TcpExtTCPSACKReneging 3029753 0.0
TcpExtTCPSACKReneging 3029754 0.0
TcpExtTCPSACKReneging 3029755 0.0
TcpExtTCPSACKReneging 3029756 0.0
TcpExtTCPSACKReneging 3029758 0.0
TcpExtTCPSACKReneging 3029759 0.0
TcpExtTCPSACKReneging 3029762 0.0
TcpExtTCPSACKReneging 3029762 0.0
TcpExtTCPSACKReneging 3029762 0.0
TcpExtTCPSACKReneging 3029764 0.0
TcpExtTCPSACKReneging 3029764 0.0
TcpExtTCPSACKReneging 3029766 0.0
TcpExtTCPSACKReneging 3029768 0.0
TcpExtTCPSACKReneging 3029771 0.0
TcpExtTCPSACKReneging 3029772 0.0
TcpExtTCPSACKReneging 3029773 0.0
TcpExtTCPSACKReneging 3029776 0.0
TcpExtTCPSACKReneging 3029779 0.0
TcpExtTCPSACKReneging 3029780 0.0
TcpExtTCPSACKReneging 3029782 0.0
TcpExtTCPSACKReneging 3029786 0.0
TcpExtTCPSACKReneging 3029788 0.0
TcpExtTCPSACKReneging 3029791 0.0
TcpExtTCPSACKReneging 3029794 0.0
TcpExtTCPSACKReneging 3029795 0.0
TcpExtTCPSACKReneging 3029796 0.0
TcpExtTCPSACKReneging 3029798 0.0
TcpExtTCPSACKReneging 3029800 0.0
TcpExtTCPSACKReneging 3029802 0.0
TcpExtTCPSACKReneging 3029802 0.0
TcpExtTCPSACKReneging 3029804 0.0
TcpExtTCPSACKReneging 3029804 0.0
TcpExtTCPSACKReneging 3029807 0.0
TcpExtTCPSACKReneging 3029810 0.0
TcpExtTCPSACKReneging 3029810 0.0
TcpExtTCPSACKReneging 3029813 0.0
TcpExtTCPSACKReneging 3029818 0.0
TcpExtTCPSACKReneging 3029819 0.0
TcpExtTCPSACKReneging 3029824 0.0
TcpExtTCPSACKReneging 3029826 0.0
TcpExtTCPSACKReneging 3029827 0.0
TcpExtTCPSACKReneging 3029832 0.0
TcpExtTCPSACKReneging 3029840 0.0
TcpExtTCPSACKReneging 3029842 0.0
TcpExtTCPSACKReneging 3029843 0.0
TcpExtTCPSACKReneging 3029843 0.0
TcpExtTCPSACKReneging 3029845 0.0
TcpExtTCPSACKReneging 3029848 0.0
TcpExtTCPSACKReneging 3029849 0.0
TcpExtTCPSACKReneging 3029852 0.0
TcpExtTCPSACKReneging 3029856 0.0
TcpExtTCPSACKReneging 3029859 0.0
TcpExtTCPSACKReneging 3029860 0.0
TcpExtTCPSACKReneging 3029862 0.0
TcpExtTCPSACKReneging 3029863 0.0
TcpExtTCPSACKReneging 3029865 0.0
TcpExtTCPSACKReneging 3029867 0.0
TcpExtTCPSACKReneging 3029868 0.0
TcpExtTCPSACKReneging 3029868 0.0
TcpExtTCPSACKReneging 3029869 0.0
TcpExtTCPSACKReneging 3029870 0.0
TcpExtTCPSACKReneging 3029871 0.0
TcpExtTCPSACKReneging 3029875 0.0
TcpExtTCPSACKReneging 3029877 0.0
TcpExtTCPSACKReneging 3029879 0.0
TcpExtTCPSACKReneging 3029882 0.0
TcpExtTCPSACKReneging 3029885 0.0
TcpExtTCPSACKReneging 3029887 0.0
TcpExtTCPSACKReneging 3029893 0.0
TcpExtTCPSACKReneging 3029895 0.0
TcpExtTCPSACKReneging 3029897 0.0
TcpExtTCPSACKReneging 3029898 0.0
TcpExtTCPSACKReneging 3029899 0.0
TcpExtTCPSACKReneging 3029899 0.0
TcpExtTCPSACKReneging 3029902 0.0
TcpExtTCPSACKReneging 3029902 0.0
TcpExtTCPSACKReneging 3029905 0.0
TcpExtTCPSACKReneging 3029907 0.0
TcpExtTCPSACKReneging 3029907 0.0
TcpExtTCPSACKReneging 3029908 0.0
TcpExtTCPSACKReneging 3029910 0.0
TcpExtTCPSACKReneging 3029911 0.0
TcpExtTCPSACKReneging 3029913 0.0
TcpExtTCPSACKReneging 3029915 0.0
TcpExtTCPSACKReneging 3029919 0.0
TcpExtTCPSACKReneging 3029922 0.0
TcpExtTCPSACKReneging 3029924 0.0
TcpExtTCPSACKReneging 3029925 0.0
TcpExtTCPSACKReneging 3029926 0.0
TcpExtTCPSACKReneging 3029927 0.0
TcpExtTCPSACKReneging 3029930 0.0
TcpExtTCPSACKReneging 3029933 0.0
TcpExtTCPSACKReneging 3029934 0.0
TcpExtTCPSACKReneging 3029934 0.0
TcpExtTCPSACKReneging 3029936 0.0
TcpExtTCPSACKReneging 3029937 0.0
TcpExtTCPSACKReneging 3029938 0.0
TcpExtTCPSACKReneging 3029939 0.0
TcpExtTCPSACKReneging 3029939 0.0
TcpExtTCPSACKReneging 3029940 0.0
TcpExtTCPSACKReneging 3029940 0.0
TcpExtTCPSACKReneging 3029940 0.0
TcpExtTCPSACKReneging 3029940 0.0
TcpExtTCPSACKReneging 3029940 0.0
TcpExtTCPSACKReneging 3029941 0.0
TcpExtTCPSACKReneging 3029941 0.0
TcpExtTCPSACKReneging 3029941 0.0
TcpExtTCPSACKReneging 3029942 0.0
TcpExtTCPSACKReneging 3029943 0.0
TcpExtTCPSACKReneging 3029944 0.0
TcpExtTCPSACKReneging 3029944 0.0
TcpExtTCPSACKReneging 3029945 0.0
TcpExtTCPSACKReneging 3029946 0.0
TcpExtTCPSACKReneging 3029946 0.0
TcpExtTCPSACKReneging 3029948 0.0
TcpExtTCPSACKReneging 3029950 0.0
TcpExtTCPSACKReneging 3029950 0.0
TcpExtTCPSACKReneging 3029950 0.0
TcpExtTCPSACKReneging 3029950 0.0
TcpExtTCPSACKReneging 3029951 0.0
TcpExtTCPSACKReneging 3029951 0.0
TcpExtTCPSACKReneging 3029954 0.0
TcpExtTCPSACKReneging 3029957 0.0
TcpExtTCPSACKReneging 3029958 0.0
TcpExtTCPSACKReneging 3029959 0.0
TcpExtTCPSACKReneging 3029960 0.0
TcpExtTCPSACKReneging 3029960 0.0
TcpExtTCPSACKReneging 3029961 0.0
TcpExtTCPSACKReneging 3029963 0.0
TcpExtTCPSACKReneging 3029965 0.0
TcpExtTCPSACKReneging 3029967 0.0
TcpExtTCPSACKReneging 3029968 0.0
TcpExtTCPSACKReneging 3029969 0.0
TcpExtTCPSACKReneging 3029970 0.0
TcpExtTCPSACKReneging 3029971 0.0
TcpExtTCPSACKReneging 3029972 0.0
TcpExtTCPSACKReneging 3029972 0.0
TcpExtTCPSACKReneging 3029973 0.0
TcpExtTCPSACKReneging 3029975 0.0
TcpExtTCPSACKReneging 3029975 0.0
TcpExtTCPSACKReneging 3029976 0.0
TcpExtTCPSACKReneging 3029976 0.0
TcpExtTCPSACKReneging 3029977 0.0
TcpExtTCPSACKReneging 3029977 0.0
TcpExtTCPSACKReneging 3029977 0.0
TcpExtTCPSACKReneging 3029978 0.0
TcpExtTCPSACKReneging 3029978 0.0
TcpExtTCPSACKReneging 3029981 0.0
TcpExtTCPSACKReneging 3029981 0.0
TcpExtTCPSACKReneging 3029981 0.0
TcpExtTCPSACKReneging 3029982 0.0
TcpExtTCPSACKReneging 3029984 0.0
TcpExtTCPSACKReneging 3029986 0.0
TcpExtTCPSACKReneging 3029988 0.0
TcpExtTCPSACKReneging 3029990 0.0
TcpExtTCPSACKReneging 3029992 0.0
TcpExtTCPSACKReneging 3029993 0.0
TcpExtTCPSACKReneging 3029994 0.0
TcpExtTCPSACKReneging 3029996 0.0
TcpExtTCPSACKReneging 3029997 0.0
TcpExtTCPSACKReneging 3029998 0.0
TcpExtTCPSACKReneging 3029998 0.0
[-- Attachment #3: Type: text/plain, Size: 993 bytes --]
> On May 30, 2015, at 10:29 AM, Neal Cardwell <ncardwell@google.com> wrote:
>
> On Fri, May 29, 2015 at 3:53 PM, Grant Zhang <gzhang@fastly.com> wrote:
>> Hi Neal,
>>
>> I will be more than happy to test the patch. Please send it my way.
>
> Great. Thank you so much for being willing to do this. Attached is a
> patch for testing. I generated it and tested it relative to Linux
> v3.14.39, since your stack trace seemed to suggest that you were
> seeing this on some variant of v3.14.39. (Newer kernels would need a
> slightly different patch, since the reneging code path has changed a
> little since 3.14.)
>
> Can you please try it out and see if it makes that warning go away?
>
> Also, I would be interested in seeing the value of your
> TcpExtTCPSACKReneging counter, and some sense of how fast that value
> is increasing, on a machine that's seeing this issue:
> nstat -z -a | grep Reneg
>
> Thanks!
>
> neal
> <0001-RFC-for-tests-on-v3.14.39-tcp-resegment-skbs-that-we.patch>
* Re: Recurring trace from tcp_fragment()
From: Neal Cardwell @ 2015-05-30 23:08 UTC
To: Grant Zhang; +Cc: Netdev, Yuchung Cheng, Eric Dumazet
On Sat, May 30, 2015 at 2:52 PM, Grant Zhang <gzhang@fastly.com> wrote:
> Thank you Neal. Most likely I will test the patch on Monday and report
> back the result.
>
> As for the TcpExtTCPSACKReneging counter, attached is the counter
> value captured at a 1-second interval for 10 minutes.
OK, great. Those TcpExtTCPSACKReneging values look consistent with the
theory underlying the patch, so that's a good sign.
Thanks!
neal
* Re: Recurring trace from tcp_fragment()
From: Grant Zhang @ 2015-06-04 16:35 UTC
To: Neal Cardwell; +Cc: Netdev, Yuchung Cheng, Eric Dumazet
[-- Attachment #1: Type: text/plain, Size: 819 bytes --]
Hi Neal,
Unfortunately, with the patch we still see the same stack trace. Attached
is the TcpExtTCPSACKReneging counter with the patch, captured at a
60-second interval. Its value increments at a similar rate as before,
about 30/minute.
If you want to collect any other data, please feel free to let me know.
Thanks,
Grant
On 5/30/15 4:08 PM, Neal Cardwell wrote:
> On Sat, May 30, 2015 at 2:52 PM, Grant Zhang <gzhang@fastly.com> wrote:
>> Thank you Neal. Most likely I will test the patch on Monday and report
>> back the result.
>>
>> As for the TcpExtTCPSACKReneging counter, attached is the counter
>> value captured at a 1-second interval for 10 minutes.
>
> OK, great. Those TcpExtTCPSACKReneging values look consistent with the
> theory underlying the patch, so that's a good sign.
>
> Thanks!
>
> neal
>
[-- Attachment #2: reneg_patch.txt --]
[-- Type: text/plain, Size: 5280 bytes --]
TcpExtTCPSACKReneging 489 0.0
TcpExtTCPSACKReneging 504 0.0
TcpExtTCPSACKReneging 523 0.0
TcpExtTCPSACKReneging 544 0.0
TcpExtTCPSACKReneging 565 0.0
TcpExtTCPSACKReneging 573 0.0
TcpExtTCPSACKReneging 589 0.0
TcpExtTCPSACKReneging 621 0.0
TcpExtTCPSACKReneging 632 0.0
TcpExtTCPSACKReneging 646 0.0
TcpExtTCPSACKReneging 657 0.0
TcpExtTCPSACKReneging 696 0.0
TcpExtTCPSACKReneging 720 0.0
TcpExtTCPSACKReneging 735 0.0
TcpExtTCPSACKReneging 758 0.0
TcpExtTCPSACKReneging 784 0.0
TcpExtTCPSACKReneging 823 0.0
TcpExtTCPSACKReneging 833 0.0
TcpExtTCPSACKReneging 857 0.0
TcpExtTCPSACKReneging 882 0.0
TcpExtTCPSACKReneging 916 0.0
TcpExtTCPSACKReneging 925 0.0
TcpExtTCPSACKReneging 947 0.0
TcpExtTCPSACKReneging 960 0.0
TcpExtTCPSACKReneging 1011 0.0
TcpExtTCPSACKReneging 1044 0.0
TcpExtTCPSACKReneging 1070 0.0
TcpExtTCPSACKReneging 1079 0.0
TcpExtTCPSACKReneging 1096 0.0
TcpExtTCPSACKReneging 1122 0.0
TcpExtTCPSACKReneging 1127 0.0
TcpExtTCPSACKReneging 1164 0.0
TcpExtTCPSACKReneging 1197 0.0
TcpExtTCPSACKReneging 1216 0.0
TcpExtTCPSACKReneging 1233 0.0
TcpExtTCPSACKReneging 1287 0.0
TcpExtTCPSACKReneging 1317 0.0
TcpExtTCPSACKReneging 1330 0.0
TcpExtTCPSACKReneging 1343 0.0
TcpExtTCPSACKReneging 1369 0.0
TcpExtTCPSACKReneging 1392 0.0
TcpExtTCPSACKReneging 1411 0.0
TcpExtTCPSACKReneging 1426 0.0
TcpExtTCPSACKReneging 1471 0.0
TcpExtTCPSACKReneging 1482 0.0
TcpExtTCPSACKReneging 1501 0.0
TcpExtTCPSACKReneging 1541 0.0
TcpExtTCPSACKReneging 1558 0.0
TcpExtTCPSACKReneging 1575 0.0
TcpExtTCPSACKReneging 1652 0.0
TcpExtTCPSACKReneging 1737 0.0
TcpExtTCPSACKReneging 1765 0.0
TcpExtTCPSACKReneging 1789 0.0
TcpExtTCPSACKReneging 1823 0.0
TcpExtTCPSACKReneging 1862 0.0
TcpExtTCPSACKReneging 1887 0.0
TcpExtTCPSACKReneging 1901 0.0
TcpExtTCPSACKReneging 1919 0.0
TcpExtTCPSACKReneging 1941 0.0
TcpExtTCPSACKReneging 1962 0.0
TcpExtTCPSACKReneging 1989 0.0
TcpExtTCPSACKReneging 2019 0.0
TcpExtTCPSACKReneging 2057 0.0
TcpExtTCPSACKReneging 2078 0.0
TcpExtTCPSACKReneging 2100 0.0
TcpExtTCPSACKReneging 2185 0.0
TcpExtTCPSACKReneging 2246 0.0
TcpExtTCPSACKReneging 2285 0.0
TcpExtTCPSACKReneging 2313 0.0
TcpExtTCPSACKReneging 2337 0.0
TcpExtTCPSACKReneging 2363 0.0
TcpExtTCPSACKReneging 2414 0.0
TcpExtTCPSACKReneging 2559 0.0
TcpExtTCPSACKReneging 2593 0.0
TcpExtTCPSACKReneging 2607 0.0
TcpExtTCPSACKReneging 2714 0.0
TcpExtTCPSACKReneging 2830 0.0
TcpExtTCPSACKReneging 2953 0.0
TcpExtTCPSACKReneging 2986 0.0
TcpExtTCPSACKReneging 3030 0.0
TcpExtTCPSACKReneging 3049 0.0
TcpExtTCPSACKReneging 3073 0.0
TcpExtTCPSACKReneging 3094 0.0
TcpExtTCPSACKReneging 3141 0.0
TcpExtTCPSACKReneging 3195 0.0
TcpExtTCPSACKReneging 3263 0.0
TcpExtTCPSACKReneging 3300 0.0
TcpExtTCPSACKReneging 3348 0.0
TcpExtTCPSACKReneging 3412 0.0
TcpExtTCPSACKReneging 3475 0.0
TcpExtTCPSACKReneging 3522 0.0
TcpExtTCPSACKReneging 3581 0.0
TcpExtTCPSACKReneging 3627 0.0
TcpExtTCPSACKReneging 3674 0.0
TcpExtTCPSACKReneging 3732 0.0
TcpExtTCPSACKReneging 3785 0.0
* Re: Recurring trace from tcp_fragment()
From: Neal Cardwell @ 2015-06-04 19:34 UTC
To: Grant Zhang; +Cc: Netdev, Yuchung Cheng, Eric Dumazet
On Thu, Jun 4, 2015 at 12:35 PM, Grant Zhang <gzhang@fastly.com> wrote:
> Hi Neal,
>
> Unfortunately, with the patch we still see the same stack trace. Attached is
> the TcpExtTCPSACKReneging counter with the patch, captured at a 60-second
> interval. Its value increments at a similar rate as before, about 30/minute.
>
> If you want to collect any other data, please feel free to let me know.
OK, very interesting. Thank you so much for testing that patch. We
will work on coming up with another patch to try to address this.
Thanks!
neal
* Re: Recurring trace from tcp_fragment()
From: Martin KaFai Lau @ 2015-06-04 19:38 UTC
To: Grant Zhang
Cc: netdev, Neal Cardwell, Yuchung Cheng, Eric Dumazet, Kernel Team
Hi Grant,
On Thu, Jun 04, 2015 at 09:35:04AM -0700, Grant Zhang wrote:
> Hi Neal,
>
> Unfortunately, with the patch we still see the same stack trace.
> Attached is the TcpExtTCPSACKReneging counter with the patch, captured
> at a 60-second interval. Its value increments at a similar rate as
> before, about 30/minute.
>
> If you want to collect any other data, please feel free to let me know.
>
We are also seeing similar WARN_ON stack in our kernel 4.0 testing.
What is your net.ipv4.tcp_mtu_probing setting? I am currently testing some
code changes and waiting for some more data. If it is 1 or 2,
can you help to check whether turning it off (by setting it to 0) will stop the
WARN_ON or not in your environment? Note that after setting it to 0, you
may need to wait for a while (like a few mins) for the existing probing
activities to quiet down before observing the WARN_ON output.
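(For reference, the knob can be flipped at runtime with
"sysctl -w net.ipv4.tcp_mtu_probing=0", or by writing 0 to
/proc/sys/net/ipv4/tcp_mtu_probing.)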
Thanks,
--Martin
* Re: Recurring trace from tcp_fragment()
From: Grant Zhang @ 2015-06-04 20:10 UTC
To: Martin KaFai Lau
Cc: netdev, Neal Cardwell, Yuchung Cheng, Eric Dumazet, Kernel Team
Hi Martin,
Thank you! My net.ipv4.tcp_mtu_probing is 1. After turning it off, the
WARN_ON stack is gone.
Could you elaborate a bit on why this setting relates to the WARN_ON
trace? And what are the pros/cons of disabling mtu_probing?
Thanks,
Grant
On 6/4/15 12:38 PM, Martin KaFai Lau wrote:
> Hi Grant,
>
> On Thu, Jun 04, 2015 at 09:35:04AM -0700, Grant Zhang wrote:
>> Hi Neal,
>>
>> Unfortunately, with the patch we still see the same stack trace.
>> Attached is the TcpExtTCPSACKReneging counter with the patch, captured
>> at a 60-second interval. Its value increments at a similar rate as
>> before, about 30/minute.
>>
>> If you want to collect any other data, please feel free to let me know.
>>
>
> We are also seeing similar WARN_ON stack in our kernel 4.0 testing.
>
> What is your net.ipv4.tcp_mtu_probing setting? I am currently testing some
> code changes and waiting for some more data. If it is 1 or 2,
> can you help to check whether turning it off (by setting it to 0) will stop the
> WARN_ON or not in your environment? Note that after setting it to 0, you
> may need to wait for a while (like a few mins) for the existing probing
> activities to quiet down before observing the WARN_ON output.
>
> Thanks,
> --Martin
>
* Re: Recurring trace from tcp_fragment()
From: Martin KaFai Lau @ 2015-06-04 20:56 UTC
To: Grant Zhang
Cc: netdev, Neal Cardwell, Yuchung Cheng, Eric Dumazet, Kernel Team
On Thu, Jun 04, 2015 at 01:10:26PM -0700, Grant Zhang wrote:
> Hi Martin,
>
> Thank you! My net.ipv4.tcp_mtu_probing is 1. After turning it off,
> the WARN_ON stack is gone.
Thanks for confirming it.
> Could you elaborate a bit on why this setting relates to the WARN_ON
> trace?
The WARN_ON is complaining that tcp_fragment() is being asked to slice
an skb whose skb->len is too short.
When doing MTU probing, the kernel may slice the skb. In some cases
(which I have also failed to reproduce in packetdrill), it does not
update some related skb fields and then confuses tcp_fragment()
later on.
> And what are the pros/cons for disabling mtu_probing?
It depends on your traffic, I guess. However, turning it off is not the
right fix.
FYI, here is the change I am trying.
Thanks,
--Martin
diff --git i/net/ipv4/tcp_output.c w/net/ipv4/tcp_output.c
index acec745..e767e53 100644
--- i/net/ipv4/tcp_output.c
+++ w/net/ipv4/tcp_output.c
@@ -1920,6 +1920,8 @@ static int tcp_mtu_probe(struct sock *sk)
~(TCPHDR_FIN|TCPHDR_PSH);
if (!skb_shinfo(skb)->nr_frags) {
skb_pull(skb, copy);
+ if (tcp_skb_pcount(skb) > 1)
+ tcp_set_skb_tso_segs(sk, skb, mss_now);
if (skb->ip_summed != CHECKSUM_PARTIAL)
skb->csum = csum_partial(skb->data,
skb->len, 0);
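(The idea behind the change: in this path tcp_mtu_probe() trims "copy"
bytes off the head of skb with skb_pull(), which shrinks skb->len but
leaves skb_shinfo(skb)->gso_segs untouched. Recomputing the TSO segment
count with tcp_set_skb_tso_segs() keeps tcp_skb_pcount(skb) consistent
with the new, shorter skb->len, so a later tcp_fragment() call sized
from the pcount should no longer ask for more bytes than the skb
holds.)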
* Re: Recurring trace from tcp_fragment()
From: Yuchung Cheng @ 2015-06-04 21:29 UTC
To: Martin KaFai Lau
Cc: Grant Zhang, netdev, Neal Cardwell, Eric Dumazet, Kernel Team
On Thu, Jun 4, 2015 at 1:56 PM, Martin KaFai Lau <kafai@fb.com> wrote:
>
> On Thu, Jun 04, 2015 at 01:10:26PM -0700, Grant Zhang wrote:
> > Hi Martin,
> >
> > Thank you! My net.ipv4.tcp_mtu_probing is 1. After turning it off,
> > the WARN_ON stack is gone.
> Thanks for confirming it.
>
> > Could you elaborate a bit on why this setting relates to the WARN_ON
> > trace?
> The WARN_ON is complaining that tcp_fragment() is being asked to slice
> an skb whose skb->len is too short.
>
> When doing MTU probing, the kernel may slice the skb. In some cases
> (which I have also failed to reproduce in packetdrill), it does not
> update some related skb fields and then confuses tcp_fragment()
> later on.
>
> > And what are the pros/cons for disabling mtu_probing?
> It depends on your traffic, I guess. However, turning it off is not the
> right fix.
There might be two bugs. We saw the warning with mtu_probing=0. This
would explain why Neal's fix did not work.
>
> FYI, here is the change I am trying.
>
> Thanks,
> --Martin
>
> diff --git i/net/ipv4/tcp_output.c w/net/ipv4/tcp_output.c
> index acec745..e767e53 100644
> --- i/net/ipv4/tcp_output.c
> +++ w/net/ipv4/tcp_output.c
> @@ -1920,6 +1920,8 @@ static int tcp_mtu_probe(struct sock *sk)
> ~(TCPHDR_FIN|TCPHDR_PSH);
> if (!skb_shinfo(skb)->nr_frags) {
> skb_pull(skb, copy);
> + if (tcp_skb_pcount(skb) > 1)
> + tcp_set_skb_tso_segs(sk, skb, mss_now);
> if (skb->ip_summed != CHECKSUM_PARTIAL)
> skb->csum = csum_partial(skb->data,
> skb->len, 0);