* [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded
@ 2007-09-24 10:28 Ilpo Järvinen
2007-09-24 10:28 ` [PATCH 1/5] [TCP]: Create tcp_sacktag_state Ilpo Järvinen
2007-10-10 9:58 ` [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded David Miller
0 siblings, 2 replies; 9+ messages in thread
From: Ilpo Järvinen @ 2007-09-24 10:28 UTC (permalink / raw)
To: David Miller, Stephen Hemminger, SANGTAE HA, Tom Quetchenbach,
Baruch Even <bar
Cc: netdev
Hi all,
After a couple of wrong-way before()/after()s and one infinitely
looping version, here's the current trial version of the sacktag
cache usage recode...
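(For context: before()/after() are the kernel's wrap-safe u32
sequence-number comparisons, and they are easy to get backwards.
A minimal standalone sketch of their semantics:)

#include <stdio.h>

typedef unsigned int u32;

/* Wrap-safe comparisons: valid even when sequence numbers cross
 * the 2^32 boundary, unlike a plain '<' on u32.
 */
static int before(u32 seq1, u32 seq2) { return (int)(seq1 - seq2) < 0; }
static int after(u32 seq1, u32 seq2)  { return before(seq2, seq1); }

int main(void)
{
        /* 0x10 is "later" than 0xfffffff0 once the space wraps. */
        printf("%d %d\n", before(0xfffffff0u, 0x10), after(0x10, 0xfffffff0u));
        return 0;
}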
The first two patches come from tcp-2.6 (rebased and rotated).
This series applies cleanly only on top of the other three patch
series I posted earlier today. The last, debug-only patch
provides some statistics for those interested enough.
Dave, please DO NOT apply! ...Some thoughts would be nice
though :-).
It should considerably improve the processing of such likely
events as cumulative ACKs and a new SACK block above a forward
hole, because a full walk is no longer necessary (the old code
could have been tweaked to cover them, but it's better to drop
each piece of special-case handling altogether and do a generic
solution). The redundancy of the fastpath hints and the
highest_sack stuff is also addressed; however, that might have a
slight effect, as the hint could occasionally point to something
less than highest_sack; whether that's significant remains to be
seen... In all cases except a hint below highest_sack, the new
solution should perform at least as well as the old code (with a
somewhat larger constant, though no additional cache misses)
because the SACK blocks the old code chose not to process should
all fall into the LINUX_MIB_TCP_FULLSKIP category.
It would be easy to improve this further with the RB-tree stuff,
though this version is based on code using linked lists. I'm not
yet too sure that I got everything 100% correct, as I "tweak"
start/end_seqs and exit skb loops in a way that is prone to
off-by-one errors and could miss an skb here and there. I'll
probably also recode DSACK handling to avoid recursion.
Stephen, Sangtae, and others experiencing those unexpected RTOs
during recovery of large-windowed TCP: could you please give it
a try and see if it helps any...
--
i.
ps. Our net-2.6.24 (and mainline?) DSACK processing code could
be broken, btw. A DSACK in the middle of another SACK block
seems not to be processed at all during the in-order walk, as
the earlier (sorted) block has already caused skb to advance
past it. (This just occurred to me; I'll see what I can do about
it if I have enough time.)
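(To make the suspected bug concrete, with hypothetical sequence
numbers: suppose an ACK carries DSACK 2000-2500 together with
SACK 1000-4000. After sorting by start_seq, the 1000-4000 block
is walked first, advancing the cached skb past 4000; when the
walk for the 2000-2500 DSACK block then starts from that skb,
the in-order short-circuit fires immediately and the DSACK is
never tagged.)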
* [PATCH 1/5] [TCP]: Create tcp_sacktag_state.
2007-09-24 10:28 [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded Ilpo Järvinen
@ 2007-09-24 10:28 ` Ilpo Järvinen
2007-09-24 10:28 ` [PATCH 2/5] [TCP]: Create tcp_sacktag_one() Ilpo Järvinen
2007-10-10 9:58 ` [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded David Miller
1 sibling, 1 reply; 9+ messages in thread
From: Ilpo Järvinen @ 2007-09-24 10:28 UTC (permalink / raw)
To: David Miller, Stephen Hemminger, SANGTAE HA, Tom Quetchenbach,
Baruch Even <bar
Cc: netdev, David S. Miller
From: David S. Miller <davem@davemloft.net>
It is difficult to break out the inner logic of
tcp_sacktag_write_queue() into worker functions because
so many local variables get updated in place.
Start to overcome this by creating a structure of
state variables that can be passed around to
worker routines.
[I made minor tweaks due to rebase/reordering of stuff
in tcp-2.6 tree, and dropped found_dup_sack & dup_sack
from state -ij]
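As an illustration of the pattern outside kernel context (a
minimal standalone sketch; the members and the worker below are
simplified stand-ins, not the actual kernel code):

#include <stdio.h>

/* State that previously lived in the locals of one big function. */
struct sacktag_state {
        unsigned int flag;      /* accumulated FLAG_* bits */
        int reord;              /* lowest fack count where reordering was seen */
};

/* A worker can now update the shared state through one pointer
 * instead of the caller juggling many in-place locals.
 */
static void tag_one(struct sacktag_state *state, int fack_count)
{
        if (fack_count < state->reord)
                state->reord = fack_count;
        state->flag |= 0x1;     /* stands in for e.g. FLAG_DATA_SACKED */
}

int main(void)
{
        struct sacktag_state state = { .flag = 0, .reord = 100 };
        int fc;

        for (fc = 7; fc >= 5; fc--)     /* stands in for the skb walk */
                tag_one(&state, fc);

        printf("flag=%u reord=%d\n", state.flag, state.reord);
        return 0;
}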
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
net/ipv4/tcp_input.c | 89 ++++++++++++++++++++++++++-----------------------
1 files changed, 47 insertions(+), 42 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 259f517..04ff465 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1102,6 +1102,13 @@ static int tcp_is_sackblock_valid(struct tcp_sock *tp, int is_dsack,
return !before(start_seq, end_seq - tp->max_window);
}
+struct tcp_sacktag_state {
+ unsigned int flag;
+ int reord;
+ int prior_fackets;
+ u32 lost_retrans;
+ int first_sack_index;
+};
static int tcp_check_dsack(struct tcp_sock *tp, struct sk_buff *ack_skb,
struct tcp_sack_block_wire *sp, int num_sacks,
@@ -1146,25 +1153,22 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
struct tcp_sack_block_wire *sp = (struct tcp_sack_block_wire *)(ptr+2);
struct sk_buff *cached_skb;
int num_sacks = (ptr[1] - TCPOLEN_SACK_BASE)>>3;
- int reord = tp->packets_out;
- int prior_fackets;
- u32 lost_retrans = 0;
- int flag = 0;
- int found_dup_sack = 0;
+ struct tcp_sacktag_state state;
+ int found_dup_sack;
int cached_fack_count;
int i;
- int first_sack_index;
+ int force_one_sack;
+
+ state.flag = 0;
if (!tp->sacked_out) {
tp->fackets_out = 0;
tp->highest_sack = tp->snd_una;
}
- prior_fackets = tp->fackets_out;
- found_dup_sack = tcp_check_dsack(tp, ack_skb, sp,
- num_sacks, prior_snd_una);
+ found_dup_sack = tcp_check_dsack(tp, ack_skb, sp, num_sacks, prior_snd_una);
if (found_dup_sack)
- flag |= FLAG_DSACKING_ACK;
+ state.flag |= FLAG_DSACKING_ACK;
/* Eliminate too old ACKs, but take into
* account more or less fresh ones, they can
@@ -1177,18 +1181,18 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
* if the only SACK change is the increase of the end_seq of
* the first block then only apply that SACK block
* and use retrans queue hinting otherwise slowpath */
- flag = 1;
+ force_one_sack = 1;
for (i = 0; i < num_sacks; i++) {
__be32 start_seq = sp[i].start_seq;
__be32 end_seq = sp[i].end_seq;
if (i == 0) {
if (tp->recv_sack_cache[i].start_seq != start_seq)
- flag = 0;
+ force_one_sack = 0;
} else {
if ((tp->recv_sack_cache[i].start_seq != start_seq) ||
(tp->recv_sack_cache[i].end_seq != end_seq))
- flag = 0;
+ force_one_sack = 0;
}
tp->recv_sack_cache[i].start_seq = start_seq;
tp->recv_sack_cache[i].end_seq = end_seq;
@@ -1199,8 +1203,8 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
tp->recv_sack_cache[i].end_seq = 0;
}
- first_sack_index = 0;
- if (flag)
+ state.first_sack_index = 0;
+ if (force_one_sack)
num_sacks = 1;
else {
int j;
@@ -1218,17 +1222,14 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
sp[j+1] = tmp;
/* Track where the first SACK block goes to */
- if (j == first_sack_index)
- first_sack_index = j+1;
+ if (j == state.first_sack_index)
+ state.first_sack_index = j+1;
}
}
}
}
- /* clear flag as used for different purpose in following code */
- flag = 0;
-
/* Use SACK fastpath hint if valid */
cached_skb = tp->fastpath_skb_hint;
cached_fack_count = tp->fastpath_cnt_hint;
@@ -1237,12 +1238,16 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
cached_fack_count = 0;
}
+ state.reord = tp->packets_out;
+ state.prior_fackets = tp->fackets_out;
+ state.lost_retrans = 0;
+
for (i=0; i<num_sacks; i++, sp++) {
struct sk_buff *skb;
__u32 start_seq = ntohl(sp->start_seq);
__u32 end_seq = ntohl(sp->end_seq);
int fack_count;
- int dup_sack = (found_dup_sack && (i == first_sack_index));
+ int dup_sack = (found_dup_sack && (i == state.first_sack_index));
if (!tcp_is_sackblock_valid(tp, dup_sack, start_seq, end_seq)) {
if (dup_sack) {
@@ -1265,7 +1270,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
/* Event "B" in the comment above. */
if (after(end_seq, tp->high_seq))
- flag |= FLAG_DATA_LOST;
+ state.flag |= FLAG_DATA_LOST;
tcp_for_write_queue_from(skb, sk) {
int in_sack, pcount;
@@ -1276,7 +1281,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
cached_skb = skb;
cached_fack_count = fack_count;
- if (i == first_sack_index) {
+ if (i == state.first_sack_index) {
tp->fastpath_skb_hint = skb;
tp->fastpath_cnt_hint = fack_count;
}
@@ -1325,12 +1330,12 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
if (sacked&TCPCB_RETRANS) {
if ((dup_sack && in_sack) &&
(sacked&TCPCB_SACKED_ACKED))
- reord = min(fack_count, reord);
+ state.reord = min(fack_count, state.reord);
} else {
/* If it was in a hole, we detected reordering. */
- if (fack_count < prior_fackets &&
+ if (fack_count < state.prior_fackets &&
!(sacked&TCPCB_SACKED_ACKED))
- reord = min(fack_count, reord);
+ state.reord = min(fack_count, state.reord);
}
/* Nothing to do; acked frame is about to be dropped. */
@@ -1339,8 +1344,8 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
if ((sacked&TCPCB_SACKED_RETRANS) &&
after(end_seq, TCP_SKB_CB(skb)->ack_seq) &&
- (!lost_retrans || after(end_seq, lost_retrans)))
- lost_retrans = end_seq;
+ (!state.lost_retrans || after(end_seq, state.lost_retrans)))
+ state.lost_retrans = end_seq;
if (!in_sack)
continue;
@@ -1364,8 +1369,8 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
* which was in hole. It is reordering.
*/
if (!(sacked & TCPCB_RETRANS) &&
- fack_count < prior_fackets)
- reord = min(fack_count, reord);
+ fack_count < state.prior_fackets)
+ state.reord = min(fack_count, state.reord);
if (sacked & TCPCB_LOST) {
TCP_SKB_CB(skb)->sacked &= ~TCPCB_LOST;
@@ -1381,15 +1386,15 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
* Clearing correct due to in-order walk
*/
if (after(end_seq, tp->frto_highmark)) {
- flag &= ~FLAG_ONLY_ORIG_SACKED;
+ state.flag &= ~FLAG_ONLY_ORIG_SACKED;
} else {
if (!(sacked & TCPCB_RETRANS))
- flag |= FLAG_ONLY_ORIG_SACKED;
+ state.flag |= FLAG_ONLY_ORIG_SACKED;
}
}
TCP_SKB_CB(skb)->sacked |= TCPCB_SACKED_ACKED;
- flag |= FLAG_DATA_SACKED;
+ state.flag |= FLAG_DATA_SACKED;
tp->sacked_out += tcp_skb_pcount(skb);
if (fack_count > tp->fackets_out)
@@ -1400,7 +1405,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
tp->highest_sack = TCP_SKB_CB(skb)->seq;
} else {
if (dup_sack && (sacked&TCPCB_RETRANS))
- reord = min(fack_count, reord);
+ state.reord = min(fack_count, state.reord);
}
/* D-SACK. We can detect redundant retransmission
@@ -1423,20 +1428,20 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
* we have to account for reordering! Ugly,
* but should help.
*/
- if (lost_retrans && icsk->icsk_ca_state == TCP_CA_Recovery) {
+ if (state.lost_retrans && icsk->icsk_ca_state == TCP_CA_Recovery) {
struct sk_buff *skb;
tcp_for_write_queue(skb, sk) {
if (skb == tcp_send_head(sk))
break;
- if (after(TCP_SKB_CB(skb)->seq, lost_retrans))
+ if (after(TCP_SKB_CB(skb)->seq, state.lost_retrans))
break;
if (!after(TCP_SKB_CB(skb)->end_seq, tp->snd_una))
continue;
if ((TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_RETRANS) &&
- after(lost_retrans, TCP_SKB_CB(skb)->ack_seq) &&
+ after(state.lost_retrans, TCP_SKB_CB(skb)->ack_seq) &&
(tcp_is_fack(tp) ||
- !before(lost_retrans,
+ !before(state.lost_retrans,
TCP_SKB_CB(skb)->ack_seq + tp->reordering *
tp->mss_cache))) {
TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_RETRANS;
@@ -1448,7 +1453,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
if (!(TCP_SKB_CB(skb)->sacked&(TCPCB_LOST|TCPCB_SACKED_ACKED))) {
tp->lost_out += tcp_skb_pcount(skb);
TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
- flag |= FLAG_DATA_SACKED;
+ state.flag |= FLAG_DATA_SACKED;
NET_INC_STATS_BH(LINUX_MIB_TCPLOSTRETRANSMIT);
}
}
@@ -1457,9 +1462,9 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
tcp_verify_left_out(tp);
- if ((reord < tp->fackets_out) && icsk->icsk_ca_state != TCP_CA_Loss &&
+ if ((state.reord < tp->fackets_out) && icsk->icsk_ca_state != TCP_CA_Loss &&
(!tp->frto_highmark || after(tp->snd_una, tp->frto_highmark)))
- tcp_update_reordering(sk, ((tp->fackets_out + 1) - reord), 0);
+ tcp_update_reordering(sk, ((tp->fackets_out + 1) - state.reord), 0);
#if FASTRETRANS_DEBUG > 0
BUG_TRAP((int)tp->sacked_out >= 0);
@@ -1467,7 +1472,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
BUG_TRAP((int)tp->retrans_out >= 0);
BUG_TRAP((int)tcp_packets_in_flight(tp) >= 0);
#endif
- return flag;
+ return state.flag;
}
/* F-RTO can only be used if TCP has never retransmitted anything other than
--
1.5.0.6
* [PATCH 2/5] [TCP]: Create tcp_sacktag_one().
2007-09-24 10:28 ` [PATCH 1/5] [TCP]: Create tcp_sacktag_state Ilpo Järvinen
@ 2007-09-24 10:28 ` Ilpo Järvinen
2007-09-24 10:28 ` [PATCH 3/5] [TCP]: Convert highest_sack to sk_buff to allow direct access Ilpo Järvinen
0 siblings, 1 reply; 9+ messages in thread
From: Ilpo Järvinen @ 2007-09-24 10:28 UTC (permalink / raw)
To: David Miller, Stephen Hemminger, SANGTAE HA, Tom Quetchenbach,
Baruch Even <bar
Cc: netdev, David S. Miller
From: David S. Miller <davem@sunset.davemloft.net>
Worker function that implements the main logic of
the inner-most loop of tcp_sacktag_write_queue().
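One mechanical detail of such a hoist: 'continue' statements in
the old loop body become early 'return's in the worker. A
standalone sketch of just that transformation (names are
illustrative, not the kernel's):

#include <stdio.h>

/* The extracted loop body; 'continue' in the old loop maps to
 * an early 'return' here.
 */
static void tag_one(int in_sack, int *tagged)
{
        if (!in_sack)
                return;         /* was: continue */
        (*tagged)++;
}

int main(void)
{
        int in_sack[] = { 1, 0, 1, 1 };
        int tagged = 0;
        unsigned int i;

        for (i = 0; i < sizeof(in_sack) / sizeof(in_sack[0]); i++)
                tag_one(in_sack[i], &tagged);

        printf("tagged=%d\n", tagged);  /* 3 */
        return 0;
}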
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
net/ipv4/tcp_input.c | 213 ++++++++++++++++++++++++++------------------------
1 files changed, 110 insertions(+), 103 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 04ff465..76e9c9b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1143,6 +1143,114 @@ static int tcp_check_dsack(struct tcp_sock *tp, struct sk_buff *ack_skb,
return dup_sack;
}
+static void tcp_sacktag_one(struct sk_buff *skb, struct tcp_sock *tp,
+ struct tcp_sacktag_state *state, int in_sack,
+ int dup_sack, int fack_count, u32 end_seq)
+{
+ u8 sacked = TCP_SKB_CB(skb)->sacked;
+
+ /* Account D-SACK for retransmitted packet. */
+ if ((dup_sack && in_sack) &&
+ (sacked & TCPCB_RETRANS) &&
+ after(TCP_SKB_CB(skb)->end_seq, tp->undo_marker))
+ tp->undo_retrans--;
+
+ /* The frame is ACKed. */
+ if (!after(TCP_SKB_CB(skb)->end_seq, tp->snd_una)) {
+ if (sacked & TCPCB_RETRANS) {
+ if ((dup_sack && in_sack) &&
+ (sacked & TCPCB_SACKED_ACKED))
+ state->reord = min(fack_count, state->reord);
+ } else {
+ /* If it was in a hole, we detected reordering. */
+ if (fack_count < state->prior_fackets &&
+ !(sacked & TCPCB_SACKED_ACKED))
+ state->reord = min(fack_count, state->reord);
+ }
+
+ /* Nothing to do; acked frame is about to be dropped. */
+ return;
+ }
+
+ if ((sacked & TCPCB_SACKED_RETRANS) &&
+ after(end_seq, TCP_SKB_CB(skb)->ack_seq) &&
+ (!state->lost_retrans || after(end_seq, state->lost_retrans)))
+ state->lost_retrans = end_seq;
+
+ if (!in_sack)
+ return;
+
+ if (!(sacked & TCPCB_SACKED_ACKED)) {
+ if (sacked & TCPCB_SACKED_RETRANS) {
+ /* If the segment is not tagged as lost,
+ * we do not clear RETRANS, believing
+ * that retransmission is still in flight.
+ */
+ if (sacked & TCPCB_LOST) {
+ TCP_SKB_CB(skb)->sacked &=
+ ~(TCPCB_LOST|TCPCB_SACKED_RETRANS);
+ tp->lost_out -= tcp_skb_pcount(skb);
+ tp->retrans_out -= tcp_skb_pcount(skb);
+
+ /* clear lost hint */
+ tp->retransmit_skb_hint = NULL;
+ }
+ } else {
+ /* New sack for not retransmitted frame,
+ * which was in hole. It is reordering.
+ */
+ if (!(sacked & TCPCB_RETRANS) &&
+ fack_count < state->prior_fackets)
+ state->reord = min(fack_count, state->reord);
+
+ if (sacked & TCPCB_LOST) {
+ TCP_SKB_CB(skb)->sacked &= ~TCPCB_LOST;
+ tp->lost_out -= tcp_skb_pcount(skb);
+
+ /* clear lost hint */
+ tp->retransmit_skb_hint = NULL;
+ }
+ /* SACK enhanced F-RTO detection.
+ * Set flag if and only if non-rexmitted
+ * segments below frto_highmark are
+ * SACKed (RFC4138; Appendix B).
+ * Clearing correct due to in-order walk
+ */
+ if (after(end_seq, tp->frto_highmark)) {
+ state->flag &= ~FLAG_ONLY_ORIG_SACKED;
+ } else {
+ if (!(sacked & TCPCB_RETRANS))
+ state->flag |= FLAG_ONLY_ORIG_SACKED;
+ }
+ }
+
+ TCP_SKB_CB(skb)->sacked |= TCPCB_SACKED_ACKED;
+ state->flag |= FLAG_DATA_SACKED;
+ tp->sacked_out += tcp_skb_pcount(skb);
+
+ if (fack_count > tp->fackets_out)
+ tp->fackets_out = fack_count;
+
+ if (after(TCP_SKB_CB(skb)->seq,
+ tp->highest_sack))
+ tp->highest_sack = TCP_SKB_CB(skb)->seq;
+ } else {
+ if (dup_sack && (sacked&TCPCB_RETRANS))
+ state->reord = min(fack_count, state->reord);
+ }
+
+ /* D-SACK. We can detect redundant retransmission
+ * in S|R and plain R frames and clear it.
+ * undo_retrans is decreased above, L|R frames
+ * are accounted above as well.
+ */
+ if (dup_sack && (TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_RETRANS)) {
+ TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_RETRANS;
+ tp->retrans_out -= tcp_skb_pcount(skb);
+ tp->retransmit_skb_hint = NULL;
+ }
+}
+
static int
tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_una)
{
@@ -1274,7 +1382,6 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
tcp_for_write_queue_from(skb, sk) {
int in_sack, pcount;
- u8 sacked;
if (skb == tcp_send_head(sk))
break;
@@ -1317,108 +1424,8 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
fack_count += pcount;
- sacked = TCP_SKB_CB(skb)->sacked;
-
- /* Account D-SACK for retransmitted packet. */
- if ((dup_sack && in_sack) &&
- (sacked & TCPCB_RETRANS) &&
- after(TCP_SKB_CB(skb)->end_seq, tp->undo_marker))
- tp->undo_retrans--;
-
- /* The frame is ACKed. */
- if (!after(TCP_SKB_CB(skb)->end_seq, tp->snd_una)) {
- if (sacked&TCPCB_RETRANS) {
- if ((dup_sack && in_sack) &&
- (sacked&TCPCB_SACKED_ACKED))
- state.reord = min(fack_count, state.reord);
- } else {
- /* If it was in a hole, we detected reordering. */
- if (fack_count < state.prior_fackets &&
- !(sacked&TCPCB_SACKED_ACKED))
- state.reord = min(fack_count, state.reord);
- }
-
- /* Nothing to do; acked frame is about to be dropped. */
- continue;
- }
-
- if ((sacked&TCPCB_SACKED_RETRANS) &&
- after(end_seq, TCP_SKB_CB(skb)->ack_seq) &&
- (!state.lost_retrans || after(end_seq, state.lost_retrans)))
- state.lost_retrans = end_seq;
-
- if (!in_sack)
- continue;
-
- if (!(sacked&TCPCB_SACKED_ACKED)) {
- if (sacked & TCPCB_SACKED_RETRANS) {
- /* If the segment is not tagged as lost,
- * we do not clear RETRANS, believing
- * that retransmission is still in flight.
- */
- if (sacked & TCPCB_LOST) {
- TCP_SKB_CB(skb)->sacked &= ~(TCPCB_LOST|TCPCB_SACKED_RETRANS);
- tp->lost_out -= tcp_skb_pcount(skb);
- tp->retrans_out -= tcp_skb_pcount(skb);
-
- /* clear lost hint */
- tp->retransmit_skb_hint = NULL;
- }
- } else {
- /* New sack for not retransmitted frame,
- * which was in hole. It is reordering.
- */
- if (!(sacked & TCPCB_RETRANS) &&
- fack_count < state.prior_fackets)
- state.reord = min(fack_count, state.reord);
-
- if (sacked & TCPCB_LOST) {
- TCP_SKB_CB(skb)->sacked &= ~TCPCB_LOST;
- tp->lost_out -= tcp_skb_pcount(skb);
-
- /* clear lost hint */
- tp->retransmit_skb_hint = NULL;
- }
- /* SACK enhanced F-RTO detection.
- * Set flag if and only if non-rexmitted
- * segments below frto_highmark are
- * SACKed (RFC4138; Appendix B).
- * Clearing correct due to in-order walk
- */
- if (after(end_seq, tp->frto_highmark)) {
- state.flag &= ~FLAG_ONLY_ORIG_SACKED;
- } else {
- if (!(sacked & TCPCB_RETRANS))
- state.flag |= FLAG_ONLY_ORIG_SACKED;
- }
- }
-
- TCP_SKB_CB(skb)->sacked |= TCPCB_SACKED_ACKED;
- state.flag |= FLAG_DATA_SACKED;
- tp->sacked_out += tcp_skb_pcount(skb);
-
- if (fack_count > tp->fackets_out)
- tp->fackets_out = fack_count;
-
- if (after(TCP_SKB_CB(skb)->seq,
- tp->highest_sack))
- tp->highest_sack = TCP_SKB_CB(skb)->seq;
- } else {
- if (dup_sack && (sacked&TCPCB_RETRANS))
- state.reord = min(fack_count, state.reord);
- }
-
- /* D-SACK. We can detect redundant retransmission
- * in S|R and plain R frames and clear it.
- * undo_retrans is decreased above, L|R frames
- * are accounted above as well.
- */
- if (dup_sack &&
- (TCP_SKB_CB(skb)->sacked&TCPCB_SACKED_RETRANS)) {
- TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_RETRANS;
- tp->retrans_out -= tcp_skb_pcount(skb);
- tp->retransmit_skb_hint = NULL;
- }
+ tcp_sacktag_one(skb, tp, &state, in_sack,
+ dup_sack, fack_count, end_seq);
}
}
--
1.5.0.6
* [PATCH 3/5] [TCP]: Convert highest_sack to sk_buff to allow direct access
2007-09-24 10:28 ` [PATCH 2/5] [TCP]: Create tcp_sacktag_one() Ilpo Järvinen
@ 2007-09-24 10:28 ` Ilpo Järvinen
2007-09-24 10:28 ` [RFC PATCH 4/5] [TCP]: Rewrite sack_recv_cache (WIP) Ilpo Järvinen
0 siblings, 1 reply; 9+ messages in thread
From: Ilpo Järvinen @ 2007-09-24 10:28 UTC (permalink / raw)
To: David Miller, Stephen Hemminger, SANGTAE HA, Tom Quetchenbach,
Baruch Even <bar
Cc: netdev, Ilpo Järvinen
From: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
It is going to replace the sack fastpath hint quite soon... :-)
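The benefit is that callers can both read the start sequence and
touch the skb itself. A standalone sketch of the accessor idea
(types and the fallback value are simplified stand-ins for the
kernel's):

#include <stdio.h>

typedef unsigned int u32;

struct skb { u32 seq; };

struct sock_state {
        int sacked_out;
        struct skb *highest_sack;       /* pointer instead of a copied seq */
        u32 snd_una;
};

/* Derive the sequence number on demand; the pointer is only
 * guaranteed valid while sacked_out > 0, so fall back otherwise.
 */
static u32 highest_sack_seq(const struct sock_state *s)
{
        if (!s->sacked_out)
                return s->snd_una;
        return s->highest_sack->seq;
}

int main(void)
{
        struct skb skb = { .seq = 4321 };
        struct sock_state s = { .sacked_out = 1, .highest_sack = &skb,
                                .snd_una = 1000 };

        printf("%u\n", highest_sack_seq(&s));   /* 4321 */
        s.sacked_out = 0;
        printf("%u\n", highest_sack_seq(&s));   /* 1000 */
        return 0;
}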
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
include/linux/tcp.h | 6 ++++--
include/net/tcp.h | 13 +++++++++++++
net/ipv4/tcp_input.c | 12 ++++++------
net/ipv4/tcp_output.c | 19 ++++++++++---------
4 files changed, 33 insertions(+), 17 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index f8cf090..1d6be2a 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -332,8 +332,10 @@ struct tcp_sock {
struct tcp_sack_block_wire recv_sack_cache[4];
- u32 highest_sack; /* Start seq of globally highest revd SACK
- * (validity guaranteed only if sacked_out > 0) */
+ struct sk_buff *highest_sack; /* highest skb with SACK received
+ * (validity guaranteed only if
+ * sacked_out > 0)
+ */
/* from STCP, retrans queue hinting */
struct sk_buff* lost_skb_hint;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 991ccdc..8bc64b7 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1308,6 +1308,19 @@ static inline int tcp_write_queue_empty(struct sock *sk)
return skb_queue_empty(&sk->sk_write_queue);
}
+/* Start sequence of the highest skb with SACKed bit, valid only if
+ * sacked > 0 or when the caller has ensured validity by itself.
+ */
+static inline u32 tcp_highest_sack_seq(struct sock *sk)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ if (WARN_ON(!tp->sacked_out &&
+ tp->highest_sack != tcp_write_queue_head(sk)))
+ return tp->snd_una;
+ return TCP_SKB_CB(tp->highest_sack)->seq;
+}
+
/* /proc */
enum tcp_seq_states {
TCP_SEQ_STATE_LISTENING,
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 76e9c9b..85dd4b0 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1143,10 +1143,11 @@ static int tcp_check_dsack(struct tcp_sock *tp, struct sk_buff *ack_skb,
return dup_sack;
}
-static void tcp_sacktag_one(struct sk_buff *skb, struct tcp_sock *tp,
+static void tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
struct tcp_sacktag_state *state, int in_sack,
int dup_sack, int fack_count, u32 end_seq)
{
+ struct tcp_sock *tp = tcp_sk(sk);
u8 sacked = TCP_SKB_CB(skb)->sacked;
/* Account D-SACK for retransmitted packet. */
@@ -1231,9 +1232,8 @@ static void tcp_sacktag_one(struct sk_buff *skb, struct tcp_sock *tp,
if (fack_count > tp->fackets_out)
tp->fackets_out = fack_count;
- if (after(TCP_SKB_CB(skb)->seq,
- tp->highest_sack))
- tp->highest_sack = TCP_SKB_CB(skb)->seq;
+ if (after(TCP_SKB_CB(skb)->seq, tcp_highest_sack_seq(sk)))
+ tp->highest_sack = skb;
} else {
if (dup_sack && (sacked&TCPCB_RETRANS))
state->reord = min(fack_count, state->reord);
@@ -1271,7 +1271,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
if (!tp->sacked_out) {
tp->fackets_out = 0;
- tp->highest_sack = tp->snd_una;
+ tp->highest_sack = tcp_write_queue_head(sk);
}
found_dup_sack = tcp_check_dsack(tp, ack_skb, sp, num_sacks, prior_snd_una);
@@ -1424,7 +1424,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
fack_count += pcount;
- tcp_sacktag_one(skb, tp, &state, in_sack,
+ tcp_sacktag_one(skb, sk, &state, in_sack,
dup_sack, fack_count, end_seq);
}
}
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 94c8011..fd51692 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -657,13 +657,15 @@ static void tcp_set_skb_tso_segs(struct sock *sk, struct sk_buff *skb, unsigned
* tweak SACK fastpath hint too as it would overwrite all changes unless
* hint is also changed.
*/
-static void tcp_adjust_fackets_out(struct tcp_sock *tp, struct sk_buff *skb,
+static void tcp_adjust_fackets_out(struct sock *sk, struct sk_buff *skb,
int decr)
{
+ struct tcp_sock *tp = tcp_sk(sk);
+
if (!tp->sacked_out)
return;
- if (!before(tp->highest_sack, TCP_SKB_CB(skb)->seq))
+ if (!before(tcp_highest_sack_seq(sk), TCP_SKB_CB(skb)->seq))
tp->fackets_out -= decr;
/* cnt_hint is "off-by-one" compared with fackets_out (see sacktag) */
@@ -712,8 +714,8 @@ int tcp_fragment(struct sock *sk, struct sk_buff *skb, u32 len, unsigned int mss
TCP_SKB_CB(buff)->end_seq = TCP_SKB_CB(skb)->end_seq;
TCP_SKB_CB(skb)->end_seq = TCP_SKB_CB(buff)->seq;
- if (tp->sacked_out && (TCP_SKB_CB(skb)->seq == tp->highest_sack))
- tp->highest_sack = TCP_SKB_CB(buff)->seq;
+ if (tp->sacked_out && (skb == tp->highest_sack))
+ tp->highest_sack = buff;
/* PSH and FIN should only be set in the second packet. */
flags = TCP_SKB_CB(skb)->flags;
@@ -771,7 +773,7 @@ int tcp_fragment(struct sock *sk, struct sk_buff *skb, u32 len, unsigned int mss
tcp_dec_pcount_approx_int(&tp->sacked_out, diff);
tcp_verify_left_out(tp);
}
- tcp_adjust_fackets_out(tp, skb, diff);
+ tcp_adjust_fackets_out(sk, skb, diff);
}
/* Link BUFF into the send queue. */
@@ -1718,8 +1720,7 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *skb, int m
BUG_ON(tcp_skb_pcount(skb) != 1 ||
tcp_skb_pcount(next_skb) != 1);
- if (WARN_ON(tp->sacked_out &&
- (TCP_SKB_CB(next_skb)->seq == tp->highest_sack)))
+ if (WARN_ON(tp->sacked_out && (next_skb == tp->highest_sack)))
return;
/* Ok. We will be able to collapse the packet. */
@@ -1754,7 +1755,7 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *skb, int m
if (tcp_is_reno(tp) && tp->sacked_out)
tcp_dec_pcount_approx(&tp->sacked_out, next_skb);
- tcp_adjust_fackets_out(tp, skb, tcp_skb_pcount(next_skb));
+ tcp_adjust_fackets_out(sk, skb, tcp_skb_pcount(next_skb));
tp->packets_out -= tcp_skb_pcount(next_skb);
/* changed transmit queue under us so clear hints */
@@ -2031,7 +2032,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
break;
tp->forward_skb_hint = skb;
- if (after(TCP_SKB_CB(skb)->seq, tp->highest_sack))
+ if (after(TCP_SKB_CB(skb)->seq, tcp_highest_sack_seq(sk)))
break;
if (tcp_packets_in_flight(tp) >= tp->snd_cwnd)
--
1.5.0.6
* [RFC PATCH 4/5] [TCP]: Rewrite sack_recv_cache (WIP)
2007-09-24 10:28 ` [PATCH 3/5] [TCP]: Convert highest_sack to sk_buff to allow direct access Ilpo Järvinen
@ 2007-09-24 10:28 ` Ilpo Järvinen
2007-09-24 10:28 ` [DEVELOPER PATCH 5/5] [TCP]: Track sacktag Ilpo Järvinen
0 siblings, 1 reply; 9+ messages in thread
From: Ilpo Järvinen @ 2007-09-24 10:28 UTC (permalink / raw)
To: David Miller, Stephen Hemminger, SANGTAE HA, Tom Quetchenbach,
Baruch Even <bar
Cc: netdev, Ilpo Järvinen
From: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Previously a number of cases in TCP SACK processing failed to
take advantage of the costly information stored in
sack_recv_cache. Most importantly, expected events such as
cumulative ACKs, ACKs for a new hole, and the first ACK after an
RTO fell into this category. Processing such ACKs resulted in
rather long walks that build up latency (which easily gets nasty
when the window is large) and are completely unnecessary;
usually no new information was gathered except, in the
respective case, the new SACK block above the hole.
Since the inclusion of highest_sack, there's a lot of
information that is very likely redundant (the SACK fastpath
hint stuff, fackets_out, highest_sack), though there's no
ultimate guarantee that they'll remain the same the whole time
(in all unearthly scenarios). Take advantage of this too and
drop the fastpath hint.
Effectively this drops the "special cased" fastpath. This change
adds some complexity in order to introduce a "fastpath" with
better coverage. The current ACK's SACK blocks are compared
against each cached block individually, and only the ranges that
are new are then scanned by the high-constant walk. For other
parts of the write queue, even within previously known parts of
the SACK blocks, a faster skip function is used. In addition,
whenever possible, TCP fast-forwards to the highest_sack skb
that was made available earlier. In the typical case, nothing
but this fast-forward and the mandatory markings after it occur,
making the access pattern quite similar to the former fastpath.
DSACKs are a special case that must always be walked.
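To make the per-block decision concrete, a standalone sketch of
the range arithmetic (simplified: one cached block, no DSACK
handling, no highest_sack fast-forward; all names are
illustrative, not the patch's code):

#include <stdio.h>

typedef unsigned int u32;

/* Wrap-safe sequence comparisons, as in the kernel. */
static int before(u32 a, u32 b) { return (int)(a - b) < 0; }
static int after(u32 a, u32 b)  { return before(b, a); }

struct block { u32 start, end; };

/* For one incoming SACK block and one cached block, print which
 * sub-ranges would be tag-walked and which merely skipped over.
 */
static void plan(struct block sack, struct block cache)
{
        if (!after(sack.end, cache.start)) {
                printf("walk %u-%u (entirely new)\n", sack.start, sack.end);
                return;
        }
        if (before(sack.start, cache.start))
                printf("walk %u-%u (new head)\n", sack.start, cache.start);
        if (!after(sack.end, cache.end)) {
                printf("skip: rest already processed\n");
                return;
        }
        printf("skip to %u, walk %u-%u (new tail)\n",
               cache.end, cache.end, sack.end);
}

int main(void)
{
        struct block cache = { 2000, 3000 };

        plan((struct block){ 1500, 3500 }, cache);      /* head + tail walks */
        plan((struct block){ 2000, 3000 }, cache);      /* fully cached */
        return 0;
}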
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
include/linux/tcp.h | 4 +-
include/net/tcp.h | 1 -
net/ipv4/tcp_input.c | 320 ++++++++++++++++++++++++++++++------------------
net/ipv4/tcp_output.c | 12 +--
4 files changed, 202 insertions(+), 135 deletions(-)
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 1d6be2a..8d91eac 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -330,7 +330,7 @@ struct tcp_sock {
struct tcp_sack_block duplicate_sack[1]; /* D-SACK block */
struct tcp_sack_block selective_acks[4]; /* The SACKS themselves*/
- struct tcp_sack_block_wire recv_sack_cache[4];
+ struct tcp_sack_block recv_sack_cache[4];
struct sk_buff *highest_sack; /* highest skb with SACK received
* (validity guaranteed only if
@@ -343,9 +343,7 @@ struct tcp_sock {
struct sk_buff *scoreboard_skb_hint;
struct sk_buff *retransmit_skb_hint;
struct sk_buff *forward_skb_hint;
- struct sk_buff *fastpath_skb_hint;
- int fastpath_cnt_hint;
int lost_cnt_hint;
int retransmit_cnt_hint;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 8bc64b7..d5def9b 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1078,7 +1078,6 @@ static inline void tcp_clear_retrans_hints_partial(struct tcp_sock *tp)
static inline void tcp_clear_all_retrans_hints(struct tcp_sock *tp)
{
tcp_clear_retrans_hints_partial(tp);
- tp->fastpath_skb_hint = NULL;
}
/* MD5 Signature */
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 85dd4b0..9dfdd67 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1106,11 +1106,15 @@ struct tcp_sacktag_state {
unsigned int flag;
int reord;
int prior_fackets;
+ int fack_count;
u32 lost_retrans;
- int first_sack_index;
+ u32 dup_start;
+ u32 dup_end;
};
-static int tcp_check_dsack(struct tcp_sock *tp, struct sk_buff *ack_skb,
+static int tcp_check_dsack(struct tcp_sock *tp,
+ struct tcp_sacktag_state *state,
+ struct sk_buff *ack_skb,
struct tcp_sack_block_wire *sp, int num_sacks,
u32 prior_snd_una)
{
@@ -1120,6 +1124,8 @@ static int tcp_check_dsack(struct tcp_sock *tp, struct sk_buff *ack_skb,
if (before(start_seq_0, TCP_SKB_CB(ack_skb)->ack_seq)) {
dup_sack = 1;
+ state->dup_start = start_seq_0;
+ state->dup_end = end_seq_0;
tcp_dsack_seen(tp);
NET_INC_STATS_BH(LINUX_MIB_TCPDSACKRECV);
} else if (num_sacks > 1) {
@@ -1129,6 +1135,8 @@ static int tcp_check_dsack(struct tcp_sock *tp, struct sk_buff *ack_skb,
if (!after(end_seq_0, end_seq_1) &&
!before(start_seq_0, start_seq_1)) {
dup_sack = 1;
+ state->dup_start = start_seq_1;
+ state->dup_end = end_seq_1;
tcp_dsack_seen(tp);
NET_INC_STATS_BH(LINUX_MIB_TCPDSACKOFORECV);
}
@@ -1251,6 +1259,104 @@ static void tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
}
}
+static struct sk_buff *tcp_sacktag_walk(struct sk_buff *skb, struct sock *sk,
+ struct tcp_sacktag_state *state,
+ u32 start_seq, u32 end_seq,
+ int dup_sack)
+{
+ tcp_for_write_queue_from(skb, sk) {
+ int in_sack, pcount;
+
+ if (skb == tcp_send_head(sk))
+ break;
+
+ /* The retransmission queue is always in order, so
+ * we can short-circuit the walk early.
+ */
+ if (!before(TCP_SKB_CB(skb)->seq, end_seq))
+ break;
+
+ if (dup_sack)
+ state->dup_start = 0;
+
+ if (state->dup_start && !before(TCP_SKB_CB(skb)->seq, state->dup_start))
+ tcp_sacktag_walk(skb, sk, state, state->dup_start, state->dup_end, 1);
+
+ in_sack = !after(start_seq, TCP_SKB_CB(skb)->seq) &&
+ !before(end_seq, TCP_SKB_CB(skb)->end_seq);
+
+ pcount = tcp_skb_pcount(skb);
+
+ if (pcount > 1 && !in_sack &&
+ after(TCP_SKB_CB(skb)->end_seq, start_seq)) {
+ unsigned int pkt_len;
+
+ in_sack = !after(start_seq, TCP_SKB_CB(skb)->seq);
+
+ if (!in_sack)
+ pkt_len = (start_seq - TCP_SKB_CB(skb)->seq);
+ else
+ pkt_len = (end_seq - TCP_SKB_CB(skb)->seq);
+ if (tcp_fragment(sk, skb, pkt_len, skb_shinfo(skb)->gso_size))
+ break;
+ pcount = tcp_skb_pcount(skb);
+ }
+
+ state->fack_count += pcount;
+
+ tcp_sacktag_one(skb, sk, state, in_sack, dup_sack,
+ state->fack_count, end_seq);
+ }
+ return skb;
+}
+
+/* Avoid all extra work that is being done by sacktag while walking in
+ * a normal way
+ */
+static struct sk_buff *tcp_sacktag_skip(struct sk_buff *skb, struct sock *sk,
+ struct tcp_sacktag_state *state,
+ u32 skip_to_seq)
+{
+ tcp_for_write_queue_from(skb, sk) {
+ if (skb == tcp_send_head(sk))
+ break;
+
+ if (before(TCP_SKB_CB(skb)->end_seq, skip_to_seq))
+ break;
+
+ /* DSACKs must always be processed */
+ if (state->dup_start && !before(TCP_SKB_CB(skb)->seq, state->dup_start)) {
+ skb = tcp_sacktag_walk(skb, sk, state, state->dup_start,
+ state->dup_end, 1);
+ }
+ }
+ return skb;
+}
+
+/* We have better entry point available */
+static struct sk_buff *tcp_sacktag_skip_to_highsack(struct sk_buff *skb,
+ struct sock *sk,
+ struct tcp_sacktag_state *state,
+ struct tcp_sack_block *cache)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ if (state->dup_start && after(state->dup_start, cache->start_seq) &&
+ before(state->dup_start, TCP_SKB_CB(tp->highest_sack)->end_seq)) {
+ skb = tcp_sacktag_skip(skb, sk, state, state->dup_start);
+ tcp_sacktag_walk(skb, sk, state, state->dup_start, state->dup_end, 1);
+ }
+ skb = tcp_write_queue_next(sk, tp->highest_sack);
+ state->fack_count = tp->fackets_out;
+
+ return skb;
+}
+
+static int tcp_sack_cache_ok(struct tcp_sock *tp, struct tcp_sack_block *cache)
+{
+ return cache < tp->recv_sack_cache + ARRAY_SIZE(tp->recv_sack_cache);
+}
+
static int
tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_una)
{
@@ -1258,23 +1364,26 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
struct tcp_sock *tp = tcp_sk(sk);
unsigned char *ptr = (skb_transport_header(ack_skb) +
TCP_SKB_CB(ack_skb)->sacked);
- struct tcp_sack_block_wire *sp = (struct tcp_sack_block_wire *)(ptr+2);
- struct sk_buff *cached_skb;
+ struct tcp_sack_block_wire *sp_wire = (struct tcp_sack_block_wire *)(ptr+2);
+ struct tcp_sack_block sp[4];
+ struct tcp_sack_block *cache;
int num_sacks = (ptr[1] - TCPOLEN_SACK_BASE)>>3;
+ int used_sacks;
struct tcp_sacktag_state state;
int found_dup_sack;
- int cached_fack_count;
- int i;
- int force_one_sack;
+ struct sk_buff *skb;
+ int i, j;
state.flag = 0;
+ state.dup_start = 0;
+ state.dup_end = 0;
if (!tp->sacked_out) {
tp->fackets_out = 0;
tp->highest_sack = tcp_write_queue_head(sk);
}
- found_dup_sack = tcp_check_dsack(tp, ack_skb, sp, num_sacks, prior_snd_una);
+ found_dup_sack = tcp_check_dsack(tp, &state, ack_skb, sp_wire, num_sacks, prior_snd_una);
if (found_dup_sack)
state.flag |= FLAG_DSACKING_ACK;
@@ -1285,79 +1394,16 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
if (before(TCP_SKB_CB(ack_skb)->ack_seq, prior_snd_una - tp->max_window))
return 0;
- /* SACK fastpath:
- * if the only SACK change is the increase of the end_seq of
- * the first block then only apply that SACK block
- * and use retrans queue hinting otherwise slowpath */
- force_one_sack = 1;
+ used_sacks = 0;
for (i = 0; i < num_sacks; i++) {
- __be32 start_seq = sp[i].start_seq;
- __be32 end_seq = sp[i].end_seq;
+ int dup_sack = !i && found_dup_sack;
- if (i == 0) {
- if (tp->recv_sack_cache[i].start_seq != start_seq)
- force_one_sack = 0;
- } else {
- if ((tp->recv_sack_cache[i].start_seq != start_seq) ||
- (tp->recv_sack_cache[i].end_seq != end_seq))
- force_one_sack = 0;
- }
- tp->recv_sack_cache[i].start_seq = start_seq;
- tp->recv_sack_cache[i].end_seq = end_seq;
- }
- /* Clear the rest of the cache sack blocks so they won't match mistakenly. */
- for (; i < ARRAY_SIZE(tp->recv_sack_cache); i++) {
- tp->recv_sack_cache[i].start_seq = 0;
- tp->recv_sack_cache[i].end_seq = 0;
- }
+ sp[used_sacks].start_seq = ntohl(get_unaligned(&sp_wire[i].start_seq));
+ sp[used_sacks].end_seq = ntohl(get_unaligned(&sp_wire[i].end_seq));
- state.first_sack_index = 0;
- if (force_one_sack)
- num_sacks = 1;
- else {
- int j;
- tp->fastpath_skb_hint = NULL;
-
- /* order SACK blocks to allow in order walk of the retrans queue */
- for (i = num_sacks-1; i > 0; i--) {
- for (j = 0; j < i; j++){
- if (after(ntohl(sp[j].start_seq),
- ntohl(sp[j+1].start_seq))){
- struct tcp_sack_block_wire tmp;
-
- tmp = sp[j];
- sp[j] = sp[j+1];
- sp[j+1] = tmp;
-
- /* Track where the first SACK block goes to */
- if (j == state.first_sack_index)
- state.first_sack_index = j+1;
- }
-
- }
- }
- }
-
- /* Use SACK fastpath hint if valid */
- cached_skb = tp->fastpath_skb_hint;
- cached_fack_count = tp->fastpath_cnt_hint;
- if (!cached_skb) {
- cached_skb = tcp_write_queue_head(sk);
- cached_fack_count = 0;
- }
-
- state.reord = tp->packets_out;
- state.prior_fackets = tp->fackets_out;
- state.lost_retrans = 0;
-
- for (i=0; i<num_sacks; i++, sp++) {
- struct sk_buff *skb;
- __u32 start_seq = ntohl(sp->start_seq);
- __u32 end_seq = ntohl(sp->end_seq);
- int fack_count;
- int dup_sack = (found_dup_sack && (i == state.first_sack_index));
-
- if (!tcp_is_sackblock_valid(tp, dup_sack, start_seq, end_seq)) {
+ if (!tcp_is_sackblock_valid(tp, dup_sack,
+ sp[used_sacks].start_seq,
+ sp[used_sacks].end_seq)) {
if (dup_sack) {
if (!tp->undo_marker)
NET_INC_STATS_BH(LINUX_MIB_TCPDSACKIGNOREDNOUNDO);
@@ -1366,68 +1412,102 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
} else {
/* Don't count olds caused by ACK reordering */
if ((TCP_SKB_CB(ack_skb)->ack_seq != tp->snd_una) &&
- !after(end_seq, tp->snd_una))
+ !after(sp[used_sacks].end_seq, tp->snd_una))
continue;
NET_INC_STATS_BH(LINUX_MIB_TCPSACKDISCARD);
}
continue;
}
- skb = cached_skb;
- fack_count = cached_fack_count;
-
- /* Event "B" in the comment above. */
- if (after(end_seq, tp->high_seq))
- state.flag |= FLAG_DATA_LOST;
+ /* Ignore very old stuff early */
+ if (!after(sp[used_sacks].end_seq, prior_snd_una))
+ continue;
- tcp_for_write_queue_from(skb, sk) {
- int in_sack, pcount;
+ used_sacks++;
+ }
- if (skb == tcp_send_head(sk))
- break;
+ /* order SACK blocks to allow in order walk of the retrans queue */
+ for (i = used_sacks-1; i > 0; i--) {
+ for (j = 0; j < i; j++){
+ if (after(sp[j].start_seq, sp[j+1].start_seq)) {
+ struct tcp_sack_block tmp;
- cached_skb = skb;
- cached_fack_count = fack_count;
- if (i == state.first_sack_index) {
- tp->fastpath_skb_hint = skb;
- tp->fastpath_cnt_hint = fack_count;
+ tmp = sp[j];
+ sp[j] = sp[j+1];
+ sp[j+1] = tmp;
}
+ }
+ }
- /* The retransmission queue is always in order, so
- * we can short-circuit the walk early.
- */
- if (!before(TCP_SKB_CB(skb)->seq, end_seq))
- break;
+ state.reord = tp->packets_out;
+ state.prior_fackets = tp->fackets_out;
+ state.lost_retrans = 0;
+ state.fack_count = 0;
- in_sack = !after(start_seq, TCP_SKB_CB(skb)->seq) &&
- !before(end_seq, TCP_SKB_CB(skb)->end_seq);
+ skb = tcp_write_queue_head(sk);
+ i = 0;
- pcount = tcp_skb_pcount(skb);
+ if (!tp->sacked_out) {
+ /* It's already past, so skip checking against it */
+ cache = tp->recv_sack_cache + ARRAY_SIZE(tp->recv_sack_cache);
+ } else {
+ cache = tp->recv_sack_cache;
+ /* Skip empty blocks in at head of the cache */
+ while (tcp_sack_cache_ok(tp, cache) && !cache->start_seq &&
+ !cache->end_seq)
+ cache++;
+ }
- if (pcount > 1 && !in_sack &&
- after(TCP_SKB_CB(skb)->end_seq, start_seq)) {
- unsigned int pkt_len;
+ while (i < used_sacks) {
+ u32 start_seq = sp[i].start_seq;
+ u32 end_seq = sp[i].end_seq;
- in_sack = !after(start_seq,
- TCP_SKB_CB(skb)->seq);
+ /* Event "B" in the comment above. */
+ if (after(end_seq, tp->high_seq))
+ state.flag |= FLAG_DATA_LOST;
- if (!in_sack)
- pkt_len = (start_seq -
- TCP_SKB_CB(skb)->seq);
- else
- pkt_len = (end_seq -
- TCP_SKB_CB(skb)->seq);
- if (tcp_fragment(sk, skb, pkt_len, skb_shinfo(skb)->gso_size))
- break;
- pcount = tcp_skb_pcount(skb);
- }
+ /* Skip too early cached blocks */
+ while (tcp_sack_cache_ok(tp, cache) &&
+ !before(start_seq, cache->end_seq))
+ cache++;
- fack_count += pcount;
+ if (tcp_sack_cache_ok(tp, cache)) {
+ if (after(end_seq, cache->start_seq)) {
+ if (before(start_seq, cache->start_seq)) {
+ skb = tcp_sacktag_skip(skb, sk, &state, start_seq);
+ skb = tcp_sacktag_walk(skb, sk, &state, start_seq, cache->start_seq, 0);
+ }
+ /* Rest of the block already fully processed? */
+ if (!after(end_seq, cache->end_seq)) {
+ i++;
+ continue;
+ }
+ if (TCP_SKB_CB(tp->highest_sack)->end_seq != cache->end_seq) {
+ skb = tcp_sacktag_skip(skb, sk, &state, cache->end_seq);
+ cache++;
+ continue;
+ }
- tcp_sacktag_one(skb, sk, &state, in_sack,
- dup_sack, fack_count, end_seq);
+ skb = tcp_sacktag_skip_to_highsack(skb, sk, &state, cache);
+ }
+ } else if (!before(start_seq, tcp_highest_sack_seq(sk)) &&
+ before(TCP_SKB_CB(skb)->seq, tcp_highest_sack_seq(sk))) {
+ skb = tcp_write_queue_next(sk, tp->highest_sack);
+ state.fack_count = tp->fackets_out;
}
+
+ skb = tcp_sacktag_skip(skb, sk, &state, start_seq);
+ skb = tcp_sacktag_walk(skb, sk, &state, start_seq, end_seq, 0);
+ i++;
+ }
+
+ /* Clear the head of the cache sack blocks so we can skip it next time */
+ for (i = 0; i < ARRAY_SIZE(tp->recv_sack_cache) - used_sacks; i++) {
+ tp->recv_sack_cache[i].start_seq = 0;
+ tp->recv_sack_cache[i].end_seq = 0;
}
+ for (j = 0; j < used_sacks; j++)
+ tp->recv_sack_cache[i++] = sp[j];
/* Check for lost retransmit. This superb idea is
* borrowed from "ratehalving". Event "C".
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index fd51692..4cfda16 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -653,9 +653,7 @@ static void tcp_set_skb_tso_segs(struct sock *sk, struct sk_buff *skb, unsigned
}
/* When a modification to fackets out becomes necessary, we need to check
- * skb is counted to fackets_out or not. Another important thing is to
- * tweak SACK fastpath hint too as it would overwrite all changes unless
- * hint is also changed.
+ * skb is counted to fackets_out or not.
*/
static void tcp_adjust_fackets_out(struct sock *sk, struct sk_buff *skb,
int decr)
@@ -667,11 +665,6 @@ static void tcp_adjust_fackets_out(struct sock *sk, struct sk_buff *skb,
if (!before(tcp_highest_sack_seq(sk), TCP_SKB_CB(skb)->seq))
tp->fackets_out -= decr;
-
- /* cnt_hint is "off-by-one" compared with fackets_out (see sacktag) */
- if (tp->fastpath_skb_hint != NULL &&
- after(TCP_SKB_CB(tp->fastpath_skb_hint)->seq, TCP_SKB_CB(skb)->seq))
- tp->fastpath_cnt_hint -= decr;
}
/* Function to create two new TCP segments. Shrinks the given segment
@@ -1760,9 +1753,6 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *skb, int m
/* changed transmit queue under us so clear hints */
tcp_clear_retrans_hints_partial(tp);
- /* manually tune sacktag skb hint */
- if (tp->fastpath_skb_hint == next_skb)
- tp->fastpath_skb_hint = skb;
sk_stream_free_skb(sk, next_skb);
}
--
1.5.0.6
* [DEVELOPER PATCH 5/5] [TCP]: Track sacktag
2007-09-24 10:28 ` [RFC PATCH 4/5] [TCP]: Rewrite sack_recv_cache (WIP) Ilpo Järvinen
@ 2007-09-24 10:28 ` Ilpo Järvinen
0 siblings, 0 replies; 9+ messages in thread
From: Ilpo Järvinen @ 2007-09-24 10:28 UTC (permalink / raw)
To: David Miller, Stephen Hemminger, SANGTAE HA, Tom Quetchenbach,
Baruch Even <bar
Cc: netdev, Ilpo Järvinen
From: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
include/linux/snmp.h | 20 +++++++++++++++++++
net/ipv4/proc.c | 20 +++++++++++++++++++
net/ipv4/tcp_input.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++--
3 files changed, 89 insertions(+), 3 deletions(-)
diff --git a/include/linux/snmp.h b/include/linux/snmp.h
index 89f0c2b..42b8c07 100644
--- a/include/linux/snmp.h
+++ b/include/linux/snmp.h
@@ -214,6 +214,26 @@ enum
LINUX_MIB_TCPDSACKIGNOREDOLD, /* TCPSACKIgnoredOld */
LINUX_MIB_TCPDSACKIGNOREDNOUNDO, /* TCPSACKIgnoredNoUndo */
LINUX_MIB_TCPSPURIOUSRTOS, /* TCPSpuriousRTOs */
+ LINUX_MIB_TCP_SACKTAG,
+ LINUX_MIB_TCP_SACK0,
+ LINUX_MIB_TCP_SACK1,
+ LINUX_MIB_TCP_SACK2,
+ LINUX_MIB_TCP_SACK3,
+ LINUX_MIB_TCP_SACK4,
+ LINUX_MIB_TCP_WALKEDSKBS,
+ LINUX_MIB_TCP_WALKEDDSACKS,
+ LINUX_MIB_TCP_SKIPPEDSKBS,
+ LINUX_MIB_TCP_NOCACHE,
+ LINUX_MIB_TCP_FULLWALK,
+ LINUX_MIB_TCP_HEADWALK,
+ LINUX_MIB_TCP_TAILWALK,
+ LINUX_MIB_TCP_FULLSKIP,
+ LINUX_MIB_TCP_TAILSKIP,
+ LINUX_MIB_TCP_HEADSKIP,
+ LINUX_MIB_TCP_FULLSKIP_TOHIGH,
+ LINUX_MIB_TCP_TAILSKIP_TOHIGH,
+ LINUX_MIB_TCP_NEWSKIP,
+ LINUX_MIB_TCP_CACHEREMAINING,
__LINUX_MIB_MAX
};
diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index 9dee70e..4808f82 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -246,6 +246,26 @@ static const struct snmp_mib snmp4_net_list[] = {
SNMP_MIB_ITEM("TCPDSACKIgnoredOld", LINUX_MIB_TCPDSACKIGNOREDOLD),
SNMP_MIB_ITEM("TCPDSACKIgnoredNoUndo", LINUX_MIB_TCPDSACKIGNOREDNOUNDO),
SNMP_MIB_ITEM("TCPSpuriousRTOs", LINUX_MIB_TCPSPURIOUSRTOS),
+ SNMP_MIB_ITEM("TCP_SACKTAG", LINUX_MIB_TCP_SACKTAG),
+ SNMP_MIB_ITEM("TCP_SACK0", LINUX_MIB_TCP_SACK0),
+ SNMP_MIB_ITEM("TCP_SACK1", LINUX_MIB_TCP_SACK1),
+ SNMP_MIB_ITEM("TCP_SACK2", LINUX_MIB_TCP_SACK2),
+ SNMP_MIB_ITEM("TCP_SACK3", LINUX_MIB_TCP_SACK3),
+ SNMP_MIB_ITEM("TCP_SACK4", LINUX_MIB_TCP_SACK4),
+ SNMP_MIB_ITEM("TCP_WALKEDSKBS", LINUX_MIB_TCP_WALKEDSKBS),
+ SNMP_MIB_ITEM("TCP_WALKEDDSACKS", LINUX_MIB_TCP_WALKEDDSACKS),
+ SNMP_MIB_ITEM("TCP_SKIPPEDSKBS", LINUX_MIB_TCP_SKIPPEDSKBS),
+ SNMP_MIB_ITEM("TCP_NOCACHE", LINUX_MIB_TCP_NOCACHE),
+ SNMP_MIB_ITEM("TCP_FULLWALK", LINUX_MIB_TCP_FULLWALK),
+ SNMP_MIB_ITEM("TCP_HEADWALK", LINUX_MIB_TCP_HEADWALK),
+ SNMP_MIB_ITEM("TCP_TAILWALK", LINUX_MIB_TCP_TAILWALK),
+ SNMP_MIB_ITEM("TCP_FULLSKIP", LINUX_MIB_TCP_FULLSKIP),
+ SNMP_MIB_ITEM("TCP_TAILSKIP", LINUX_MIB_TCP_TAILSKIP),
+ SNMP_MIB_ITEM("TCP_HEADSKIP", LINUX_MIB_TCP_HEADSKIP),
+ SNMP_MIB_ITEM("TCP_FULLSKIP_TOHIGH", LINUX_MIB_TCP_FULLSKIP_TOHIGH),
+ SNMP_MIB_ITEM("TCP_TAILSKIP_TOHIGH", LINUX_MIB_TCP_TAILSKIP_TOHIGH),
+ SNMP_MIB_ITEM("TCP_NEWSKIP", LINUX_MIB_TCP_NEWSKIP),
+ SNMP_MIB_ITEM("TCP_CACHEREMAINING", LINUX_MIB_TCP_CACHEREMAINING),
SNMP_MIB_SENTINEL
};
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 9dfdd67..38045c8 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1306,6 +1306,10 @@ static struct sk_buff *tcp_sacktag_walk(struct sk_buff *skb, struct sock *sk,
tcp_sacktag_one(skb, sk, state, in_sack, dup_sack,
state->fack_count, end_seq);
+
+ NET_INC_STATS_BH(LINUX_MIB_TCP_WALKEDSKBS);
+ if (dup_sack)
+ NET_INC_STATS_BH(LINUX_MIB_TCP_WALKEDDSACKS);
}
return skb;
}
@@ -1329,6 +1333,7 @@ static struct sk_buff *tcp_sacktag_skip(struct sk_buff *skb, struct sock *sk,
skb = tcp_sacktag_walk(skb, sk, state, state->dup_start,
state->dup_end, 1);
}
+ NET_INC_STATS_BH(LINUX_MIB_TCP_SKIPPEDSKBS);
}
return skb;
}
@@ -1458,9 +1463,22 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
cache++;
}
+ NET_INC_STATS_BH(LINUX_MIB_TCP_SACKTAG);
+ switch (used_sacks) {
+ case 0: NET_INC_STATS_BH(LINUX_MIB_TCP_SACK0); break;
+ case 1: NET_INC_STATS_BH(LINUX_MIB_TCP_SACK1); break;
+ case 2: NET_INC_STATS_BH(LINUX_MIB_TCP_SACK2); break;
+ case 3: NET_INC_STATS_BH(LINUX_MIB_TCP_SACK3); break;
+ case 4: NET_INC_STATS_BH(LINUX_MIB_TCP_SACK4); break;
+ }
+
+ if (!tcp_sack_cache_ok(tp, cache))
+ NET_INC_STATS_BH(LINUX_MIB_TCP_NOCACHE);
+
while (i < used_sacks) {
u32 start_seq = sp[i].start_seq;
u32 end_seq = sp[i].end_seq;
+ int fullwalk = 0;
/* Event "B" in the comment above. */
if (after(end_seq, tp->high_seq))
@@ -1473,41 +1491,69 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb, u32 prior_snd_
if (tcp_sack_cache_ok(tp, cache)) {
if (after(end_seq, cache->start_seq)) {
+ int headskip = 0;
+
if (before(start_seq, cache->start_seq)) {
skb = tcp_sacktag_skip(skb, sk, &state, start_seq);
skb = tcp_sacktag_walk(skb, sk, &state, start_seq, cache->start_seq, 0);
- }
+ NET_INC_STATS_BH(LINUX_MIB_TCP_HEADWALK);
+ } else
+ headskip = 1;
+
/* Rest of the block already fully processed? */
if (!after(end_seq, cache->end_seq)) {
i++;
+ if (headskip)
+ NET_INC_STATS_BH(LINUX_MIB_TCP_FULLSKIP);
+ else
+ NET_INC_STATS_BH(LINUX_MIB_TCP_TAILSKIP);
continue;
}
+
if (TCP_SKB_CB(tp->highest_sack)->end_seq != cache->end_seq) {
skb = tcp_sacktag_skip(skb, sk, &state, cache->end_seq);
cache++;
+ if (headskip)
+ NET_INC_STATS_BH(LINUX_MIB_TCP_HEADSKIP);
continue;
}
skb = tcp_sacktag_skip_to_highsack(skb, sk, &state, cache);
- }
+ if (headskip)
+ NET_INC_STATS_BH(LINUX_MIB_TCP_FULLSKIP_TOHIGH);
+ else
+ NET_INC_STATS_BH(LINUX_MIB_TCP_TAILSKIP_TOHIGH);
+ } else
+ fullwalk = 1;
} else if (!before(start_seq, tcp_highest_sack_seq(sk)) &&
before(TCP_SKB_CB(skb)->seq, tcp_highest_sack_seq(sk))) {
skb = tcp_write_queue_next(sk, tp->highest_sack);
state.fack_count = tp->fackets_out;
+ NET_INC_STATS_BH(LINUX_MIB_TCP_NEWSKIP);
+ fullwalk = 1;
}
skb = tcp_sacktag_skip(skb, sk, &state, start_seq);
skb = tcp_sacktag_walk(skb, sk, &state, start_seq, end_seq, 0);
+ if (fullwalk)
+ NET_INC_STATS_BH(LINUX_MIB_TCP_FULLWALK);
+ else
+ NET_INC_STATS_BH(LINUX_MIB_TCP_TAILWALK);
i++;
}
+ if (tcp_sack_cache_ok(tp, cache))
+ NET_INC_STATS_BH(LINUX_MIB_TCP_CACHEREMAINING);
+
/* Clear the head of the cache sack blocks so we can skip it next time */
for (i = 0; i < ARRAY_SIZE(tp->recv_sack_cache) - used_sacks; i++) {
tp->recv_sack_cache[i].start_seq = 0;
tp->recv_sack_cache[i].end_seq = 0;
}
- for (j = 0; j < used_sacks; j++)
+ for (j = 0; j < used_sacks; j++) {
+ WARN_ON(i >= ARRAY_SIZE(tp->recv_sack_cache));
tp->recv_sack_cache[i++] = sp[j];
+ }
/* Check for lost retransmit. This superb idea is
* borrowed from "ratehalving". Event "C".
--
1.5.0.6
* Re: [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded
2007-09-24 10:28 [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded Ilpo Järvinen
2007-09-24 10:28 ` [PATCH 1/5] [TCP]: Create tcp_sacktag_state Ilpo Järvinen
@ 2007-10-10 9:58 ` David Miller
2007-10-10 10:26 ` Ilpo Järvinen
1 sibling, 1 reply; 9+ messages in thread
From: David Miller @ 2007-10-10 9:58 UTC (permalink / raw)
To: ilpo.jarvinen; +Cc: shemminger, sangtae.ha, virtualphtn, baruch, netdev
From: "Ilpo_Järvinen" <ilpo.jarvinen@helsinki.fi>
Date: Mon, 24 Sep 2007 13:28:42 +0300
> After a couple of wrong-way before()/after()s and one infinitely
> looping version, here's the current trial version of the sacktag
> cache usage recode...
>
> The first two patches come from tcp-2.6 (rebased and rotated).
> This series applies cleanly only on top of the other three patch
> series I posted earlier today. The last, debug-only patch
> provides some statistics for those interested enough.
>
> Dave, please DO NOT apply! ...Some thoughts would be nice
> though :-).
Ilpo, I have not forgotten about this patch set.
It is something I plan to look over after the madness of merging
net-2.6.24 to Linus is complete.
* Re: [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded
2007-10-10 9:58 ` [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded David Miller
@ 2007-10-10 10:26 ` Ilpo Järvinen
2007-10-10 10:45 ` David Miller
0 siblings, 1 reply; 9+ messages in thread
From: Ilpo Järvinen @ 2007-10-10 10:26 UTC (permalink / raw)
To: David Miller
Cc: Stephen Hemminger, sangtae.ha, virtualphtn, Baruch Even, Netdev
On Wed, 10 Oct 2007, David Miller wrote:
> From: "Ilpo_Järvinen" <ilpo.jarvinen@helsinki.fi>
> Date: Mon, 24 Sep 2007 13:28:42 +0300
>
> > After a couple of wrong-way before()/after()s and one infinitely
> > looping version, here's the current trial version of the sacktag
> > cache usage recode...
> >
> > The first two patches come from tcp-2.6 (rebased and rotated).
> > This series applies cleanly only on top of the other three patch
> > series I posted earlier today. The last, debug-only patch
> > provides some statistics for those interested enough.
> >
> > Dave, please DO NOT apply! ...Some thoughts would be nice
> > though :-).
>
> Ilpo, I have not forgotten about this patch set.
>
> It is something I plan to look over after the madness of merging
> net-2.6.24 to Linus is complete.
Thanks. There's probably going to be some trouble though; I'd bet it
no longer applies cleanly to net-2.6.23 HEAD because of something else
that got applied (I don't remember exactly, but I guess that
highest_sack reno fix did it).
I'll try to get them resent soon, but currently my thoughts are on
solving the DSACK-ignored bug (and doing the associated cleanups),
which will again cause those code-move conflicts to reoccur. Therefore
I'd love to postpone the rebase a bit... Hmm, the SACK code is in such
flux currently that I'd have to deal with conflicts almost daily due
to overlapping ideas...
--
i.
* Re: [RFC PATCH net-2.6.24 0/5]: TCP sacktag cache usage recoded
2007-10-10 10:26 ` Ilpo Järvinen
@ 2007-10-10 10:45 ` David Miller
0 siblings, 0 replies; 9+ messages in thread
From: David Miller @ 2007-10-10 10:45 UTC (permalink / raw)
To: ilpo.jarvinen; +Cc: shemminger, sangtae.ha, virtualphtn, baruch, netdev
From: "Ilpo_Järvinen" <ilpo.jarvinen@helsinki.fi>
Date: Wed, 10 Oct 2007 13:26:05 +0300 (EEST)
> Hmm, SACK code is under such flux currently that I'll
> have to deal conflicts almost daily due to overlapping ideas...
Welcome to my world, just scale it to 800 patches and entire
networking tree :-))))