netdev.vger.kernel.org archive mirror
* [PATCH 0/10]: tcp & more tcp
@ 2008-12-05 20:58 Ilpo Järvinen
  2008-12-05 20:58 ` [PATCH 01/10] tcp: force mss equality with the next skb too Ilpo Järvinen
  2008-12-06  6:49 ` [PATCH 0/10]: tcp & more tcp David Miller
  0 siblings, 2 replies; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev

1) Some fixes to recent work, and some to prehistoric things.

2) sacktag state struct to avoid ..., *ptr, *ptr2, *ptr3 craziness

3) Besides that, combine some code here and there.

--
 i.




* [PATCH 01/10] tcp: force mss equality with the next skb too.
  2008-12-05 20:58 [PATCH 0/10]: tcp & more tcp Ilpo Järvinen
@ 2008-12-05 20:58 ` Ilpo Järvinen
  2008-12-05 20:58   ` [PATCH 02/10] tcp: Fix thinko, making the not-shiftable check cover S|R as well Ilpo Järvinen
  2008-12-05 22:13   ` [PATCH 01/10] tcp: force mss equality with the next skb too Andi Kleen
  2008-12-06  6:49 ` [PATCH 0/10]: tcp & more tcp David Miller
  1 sibling, 2 replies; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

Also make the if-goto forest nicer looking.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_input.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index d67b6e9..63c3ef6 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1575,11 +1575,10 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 		goto out;
 	skb = tcp_write_queue_next(sk, prev);
 
-	if (!skb_can_shift(skb))
-		goto out;
-	if (skb == tcp_send_head(sk))
-		goto out;
-	if ((TCP_SKB_CB(skb)->sacked & TCPCB_TAGBITS) != TCPCB_SACKED_ACKED)
+	if (!skb_can_shift(skb) ||
+	    (skb == tcp_send_head(sk)) ||
+	    ((TCP_SKB_CB(skb)->sacked & TCPCB_TAGBITS) != TCPCB_SACKED_ACKED) ||
+	    (mss != tcp_shift_mss(skb)))
 		goto out;
 
 	len = skb->len;
-- 
1.5.2.2



* [PATCH 02/10] tcp: Fix thinko, making the not-shiftable check cover S|R as well
  2008-12-05 20:58 ` [PATCH 01/10] tcp: force mss equality with the next skb too Ilpo Järvinen
@ 2008-12-05 20:58   ` Ilpo Järvinen
  2008-12-05 20:58     ` [PATCH 03/10] tcp: make mtu probe failure not break gso'ed skbs unnecessarily Ilpo Järvinen
  2008-12-05 22:13   ` [PATCH 01/10] tcp: force mss equality with the next skb too Andi Kleen
  1 sibling, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

S|R won't result in S if just a SACK is received. DSACK is
another story (but it is already covered correctly).
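
To spell out the bit logic (a worked example using the TCPCB_* names
from the patch; write S = TCPCB_SACKED_ACKED, R = TCPCB_SACKED_RETRANS,
L = TCPCB_LOST, with TCPCB_TAGBITS = S|R|L):

	/* Old check: falls back only when the tag bits are exactly R,
	 * so an S|R skb slipped through (the thinko):
	 *   (S|R) & TCPCB_TAGBITS == S|R != R  ->  no fallback
	 */
	(sacked & TCPCB_TAGBITS) == TCPCB_SACKED_RETRANS

	/* New check: masks only L and R, so the S bit can no longer
	 * hide a retransmission:
	 *   (S|R) & (L|R) == R  ->  fallback, as intended
	 */
	(sacked & (TCPCB_LOST|TCPCB_SACKED_RETRANS)) == TCPCB_SACKED_RETRANS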

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_input.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 63c3ef6..33902f6 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1481,7 +1481,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 
 	/* Normally R but no L won't result in plain S */
 	if (!dup_sack &&
-	    (TCP_SKB_CB(skb)->sacked & TCPCB_TAGBITS) == TCPCB_SACKED_RETRANS)
+	    (TCP_SKB_CB(skb)->sacked & (TCPCB_LOST|TCPCB_SACKED_RETRANS)) == TCPCB_SACKED_RETRANS)
 		goto fallback;
 	if (!skb_can_shift(skb))
 		goto fallback;
-- 
1.5.2.2



* [PATCH 03/10] tcp: make mtu probe failure not break gso'ed skbs unnecessarily
  2008-12-05 20:58   ` [PATCH 02/10] tcp: Fix thinko, making the not-shiftable check cover S|R as well Ilpo Järvinen
@ 2008-12-05 20:58     ` Ilpo Järvinen
  2008-12-05 20:58       ` [PATCH 04/10] tcp: introduce struct tcp_sacktag_state to reduce arg pressure Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

I noticed that with GSO, skb->len has nothing to do with the actual
segment length, so we need to figure it out separately. Reuse a
helper from the recent shifting work (tcp_shift_mss) and generalize
it into tcp_skb_seglen.
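
To illustrate with made-up numbers: a GSO skb built from three
1460-byte segments has

	skb->len            == 4380   /* total payload, not seg length */
	tcp_skb_pcount(skb) == 3
	tcp_skb_mss(skb)    == 1460   /* gso_size, the on-wire seglen  */

so the new tcp_skb_seglen() returns tcp_skb_mss() here, and falls
back to skb->len only in the pcount == 1 case, where gso_size may
be zero.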

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_input.c |   19 +++++++------------
 1 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 33902f6..21c6701 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1445,14 +1445,9 @@ static int tcp_shifted_skb(struct sock *sk, struct sk_buff *prev,
 /* I wish gso_size would have a bit more sane initialization than
  * something-or-zero which complicates things
  */
-static int tcp_shift_mss(struct sk_buff *skb)
+static int tcp_skb_seglen(struct sk_buff *skb)
 {
-	int mss = tcp_skb_mss(skb);
-
-	if (!mss)
-		mss = skb->len;
-
-	return mss;
+	return tcp_skb_pcount(skb) == 1 ? skb->len : tcp_skb_mss(skb);
 }
 
 /* Shifting pages past head area doesn't work */
@@ -1503,12 +1498,12 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 	if (in_sack) {
 		len = skb->len;
 		pcount = tcp_skb_pcount(skb);
-		mss = tcp_shift_mss(skb);
+		mss = tcp_skb_seglen(skb);
 
 		/* TODO: Fix DSACKs to not fragment already SACKed and we can
 		 * drop this restriction as unnecessary
 		 */
-		if (mss != tcp_shift_mss(prev))
+		if (mss != tcp_skb_seglen(prev))
 			goto fallback;
 	} else {
 		if (!after(TCP_SKB_CB(skb)->end_seq, start_seq))
@@ -1549,7 +1544,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 		/* TODO: Fix DSACKs to not fragment already SACKed and we can
 		 * drop this restriction as unnecessary
 		 */
-		if (mss != tcp_shift_mss(prev))
+		if (mss != tcp_skb_seglen(prev))
 			goto fallback;
 
 		if (len == mss) {
@@ -1578,7 +1573,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 	if (!skb_can_shift(skb) ||
 	    (skb == tcp_send_head(sk)) ||
 	    ((TCP_SKB_CB(skb)->sacked & TCPCB_TAGBITS) != TCPCB_SACKED_ACKED) ||
-	    (mss != tcp_shift_mss(skb)))
+	    (mss != tcp_skb_seglen(skb)))
 		goto out;
 
 	len = skb->len;
@@ -2853,7 +2848,7 @@ void tcp_simple_retransmit(struct sock *sk)
 	tcp_for_write_queue(skb, sk) {
 		if (skb == tcp_send_head(sk))
 			break;
-		if (skb->len > mss &&
+		if (tcp_skb_seglen(skb) > mss &&
 		    !(TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED)) {
 			if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_RETRANS) {
 				TCP_SKB_CB(skb)->sacked &= ~TCPCB_SACKED_RETRANS;
-- 
1.5.2.2



* [PATCH 04/10] tcp: introduce struct tcp_sacktag_state to reduce arg pressure
  2008-12-05 20:58     ` [PATCH 03/10] tcp: make mtu probe failure not break gso'ed skbs unnecessarily Ilpo Järvinen
@ 2008-12-05 20:58       ` Ilpo Järvinen
  2008-12-05 20:58         ` [PATCH 05/10] tcp: no need to pass prev skb around, reduces " Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

There are just too many args to some sacktag functions. This
idea was first proposed by David S. Miller around a year ago,
and the current situation is much worse than it was back
then.

tcp_sacktag_one can be made a bit simpler by returning the new
sacked value. A single variable is enough for this; the previous
code cached sacked into a local variable, so the code is not
exactly equivalent, but the results are the same.
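
As a sketch of the pattern (with a hypothetical function name; the
struct is the one added below), instead of threading out-pointers
through every call,

	static int do_sacktag(struct sk_buff *skb, struct sock *sk,
			      int *reord, int *fack_count, int *flag);

the walk state now travels as a single pointer:

	struct tcp_sacktag_state {
		int reord;
		int fack_count;
		int flag;
	};

	static int do_sacktag(struct sk_buff *skb, struct sock *sk,
			      struct tcp_sacktag_state *state);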

codiff on x86_64
  tcp_sacktag_one         |  -15
  tcp_shifted_skb         |  -50
  tcp_match_skb_to_sack   |   -1
  tcp_sacktag_walk        |  -64
  tcp_sacktag_write_queue |  -59
  tcp_urg                 |   +1
  tcp_event_data_recv     |   -1
 7 functions changed, 1 bytes added, 190 bytes removed, diff: -189

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_input.c |  145 +++++++++++++++++++++++++------------------------
 1 files changed, 74 insertions(+), 71 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 21c6701..57134c2 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1237,6 +1237,12 @@ static int tcp_check_dsack(struct sock *sk, struct sk_buff *ack_skb,
 	return dup_sack;
 }
 
+struct tcp_sacktag_state {
+	int reord;
+	int fack_count;
+	int flag;
+};
+
 /* Check if skb is fully within the SACK block. In presence of GSO skbs,
  * the incoming SACK may not exactly match but we can find smaller MSS
  * aligned portion of it that matches. Therefore we might need to fragment
@@ -1290,25 +1296,25 @@ static int tcp_match_skb_to_sack(struct sock *sk, struct sk_buff *skb,
 	return in_sack;
 }
 
-static int tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
-			   int *reord, int dup_sack, int fack_count,
-			   u8 *sackedto, int pcount)
+static u8 tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
+			  struct tcp_sacktag_state *state,
+			  int dup_sack, int pcount)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	u8 sacked = TCP_SKB_CB(skb)->sacked;
-	int flag = 0;
+	int fack_count = state->fack_count;
 
 	/* Account D-SACK for retransmitted packet. */
 	if (dup_sack && (sacked & TCPCB_RETRANS)) {
 		if (after(TCP_SKB_CB(skb)->end_seq, tp->undo_marker))
 			tp->undo_retrans--;
 		if (sacked & TCPCB_SACKED_ACKED)
-			*reord = min(fack_count, *reord);
+			state->reord = min(fack_count, state->reord);
 	}
 
 	/* Nothing to do; acked frame is about to be dropped (was ACKed). */
 	if (!after(TCP_SKB_CB(skb)->end_seq, tp->snd_una))
-		return flag;
+		return sacked;
 
 	if (!(sacked & TCPCB_SACKED_ACKED)) {
 		if (sacked & TCPCB_SACKED_RETRANS) {
@@ -1317,7 +1323,7 @@ static int tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
 			 * that retransmission is still in flight.
 			 */
 			if (sacked & TCPCB_LOST) {
-				*sackedto &= ~(TCPCB_LOST|TCPCB_SACKED_RETRANS);
+				sacked &= ~(TCPCB_LOST|TCPCB_SACKED_RETRANS);
 				tp->lost_out -= pcount;
 				tp->retrans_out -= pcount;
 			}
@@ -1328,21 +1334,22 @@ static int tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
 				 */
 				if (before(TCP_SKB_CB(skb)->seq,
 					   tcp_highest_sack_seq(tp)))
-					*reord = min(fack_count, *reord);
+					state->reord = min(fack_count,
+							   state->reord);
 
 				/* SACK enhanced F-RTO (RFC4138; Appendix B) */
 				if (!after(TCP_SKB_CB(skb)->end_seq, tp->frto_highmark))
-					flag |= FLAG_ONLY_ORIG_SACKED;
+					state->flag |= FLAG_ONLY_ORIG_SACKED;
 			}
 
 			if (sacked & TCPCB_LOST) {
-				*sackedto &= ~TCPCB_LOST;
+				sacked &= ~TCPCB_LOST;
 				tp->lost_out -= pcount;
 			}
 		}
 
-		*sackedto |= TCPCB_SACKED_ACKED;
-		flag |= FLAG_DATA_SACKED;
+		sacked |= TCPCB_SACKED_ACKED;
+		state->flag |= FLAG_DATA_SACKED;
 		tp->sacked_out += pcount;
 
 		fack_count += pcount;
@@ -1361,21 +1368,20 @@ static int tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
 	 * frames and clear it. undo_retrans is decreased above, L|R frames
 	 * are accounted above as well.
 	 */
-	if (dup_sack && (*sackedto & TCPCB_SACKED_RETRANS)) {
-		*sackedto &= ~TCPCB_SACKED_RETRANS;
+	if (dup_sack && (sacked & TCPCB_SACKED_RETRANS)) {
+		sacked &= ~TCPCB_SACKED_RETRANS;
 		tp->retrans_out -= pcount;
 	}
 
-	return flag;
+	return sacked;
 }
 
 static int tcp_shifted_skb(struct sock *sk, struct sk_buff *prev,
-			   struct sk_buff *skb, unsigned int pcount,
-			   int shifted, int fack_count, int *reord,
-			   int *flag, int mss)
+			   struct sk_buff *skb,
+			   struct tcp_sacktag_state *state,
+			   unsigned int pcount, int shifted, int mss)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
-	u8 dummy_sacked = TCP_SKB_CB(skb)->sacked;	/* We discard results */
 
 	BUG_ON(!pcount);
 
@@ -1407,8 +1413,8 @@ static int tcp_shifted_skb(struct sock *sk, struct sk_buff *prev,
 		skb_shinfo(skb)->gso_type = 0;
 	}
 
-	*flag |= tcp_sacktag_one(skb, sk, reord, 0, fack_count, &dummy_sacked,
-				 pcount);
+	/* We discard results */
+	tcp_sacktag_one(skb, sk, state, 0, pcount);
 
 	/* Difference in this won't matter, both ACKed by the same cumul. ACK */
 	TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS);
@@ -1460,9 +1466,9 @@ static int skb_can_shift(struct sk_buff *skb)
  * skb.
  */
 static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
+					  struct tcp_sacktag_state *state,
 					  u32 start_seq, u32 end_seq,
-					  int dup_sack, int *fack_count,
-					  int *reord, int *flag)
+					  int dup_sack)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *prev;
@@ -1559,8 +1565,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 
 	if (!skb_shift(prev, skb, len))
 		goto fallback;
-	if (!tcp_shifted_skb(sk, prev, skb, pcount, len, *fack_count, reord,
-			     flag, mss))
+	if (!tcp_shifted_skb(sk, prev, skb, state, pcount, len, mss))
 		goto out;
 
 	/* Hole filled allows collapsing with the next as well, this is very
@@ -1579,12 +1584,12 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 	len = skb->len;
 	if (skb_shift(prev, skb, len)) {
 		pcount += tcp_skb_pcount(skb);
-		tcp_shifted_skb(sk, prev, skb, tcp_skb_pcount(skb), len,
-				*fack_count, reord, flag, mss);
+		tcp_shifted_skb(sk, prev, skb, state, tcp_skb_pcount(skb), len,
+				mss);
 	}
 
 out:
-	*fack_count += pcount;
+	state->fack_count += pcount;
 	return prev;
 
 noop:
@@ -1597,9 +1602,9 @@ fallback:
 
 static struct sk_buff *tcp_sacktag_walk(struct sk_buff *skb, struct sock *sk,
 					struct tcp_sack_block *next_dup,
+					struct tcp_sacktag_state *state,
 					u32 start_seq, u32 end_seq,
-					int dup_sack_in, int *fack_count,
-					int *reord, int *flag)
+					int dup_sack_in)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *tmp;
@@ -1629,9 +1634,8 @@ static struct sk_buff *tcp_sacktag_walk(struct sk_buff *skb, struct sock *sk,
 		 * so not even _safe variant of the loop is enough.
 		 */
 		if (in_sack <= 0) {
-			tmp = tcp_shift_skb_data(sk, skb, start_seq,
-						 end_seq, dup_sack,
-						 fack_count, reord, flag);
+			tmp = tcp_shift_skb_data(sk, skb, state,
+						 start_seq, end_seq, dup_sack);
 			if (tmp != NULL) {
 				if (tmp != skb) {
 					skb = tmp;
@@ -1650,17 +1654,17 @@ static struct sk_buff *tcp_sacktag_walk(struct sk_buff *skb, struct sock *sk,
 			break;
 
 		if (in_sack) {
-			*flag |= tcp_sacktag_one(skb, sk, reord, dup_sack,
-						 *fack_count,
-						 &(TCP_SKB_CB(skb)->sacked),
-						 tcp_skb_pcount(skb));
+			TCP_SKB_CB(skb)->sacked = tcp_sacktag_one(skb, sk,
+								  state,
+								  dup_sack,
+								  tcp_skb_pcount(skb));
 
 			if (!before(TCP_SKB_CB(skb)->seq,
 				    tcp_highest_sack_seq(tp)))
 				tcp_advance_highest_sack(sk, skb);
 		}
 
-		*fack_count += tcp_skb_pcount(skb);
+		state->fack_count += tcp_skb_pcount(skb);
 	}
 	return skb;
 }
@@ -1669,7 +1673,8 @@ static struct sk_buff *tcp_sacktag_walk(struct sk_buff *skb, struct sock *sk,
  * a normal way
  */
 static struct sk_buff *tcp_sacktag_skip(struct sk_buff *skb, struct sock *sk,
-					u32 skip_to_seq, int *fack_count)
+					struct tcp_sacktag_state *state,
+					u32 skip_to_seq)
 {
 	tcp_for_write_queue_from(skb, sk) {
 		if (skb == tcp_send_head(sk))
@@ -1678,7 +1683,7 @@ static struct sk_buff *tcp_sacktag_skip(struct sk_buff *skb, struct sock *sk,
 		if (after(TCP_SKB_CB(skb)->end_seq, skip_to_seq))
 			break;
 
-		*fack_count += tcp_skb_pcount(skb);
+		state->fack_count += tcp_skb_pcount(skb);
 	}
 	return skb;
 }
@@ -1686,18 +1691,17 @@ static struct sk_buff *tcp_sacktag_skip(struct sk_buff *skb, struct sock *sk,
 static struct sk_buff *tcp_maybe_skipping_dsack(struct sk_buff *skb,
 						struct sock *sk,
 						struct tcp_sack_block *next_dup,
-						u32 skip_to_seq,
-						int *fack_count, int *reord,
-						int *flag)
+						struct tcp_sacktag_state *state,
+						u32 skip_to_seq)
 {
 	if (next_dup == NULL)
 		return skb;
 
 	if (before(next_dup->start_seq, skip_to_seq)) {
-		skb = tcp_sacktag_skip(skb, sk, next_dup->start_seq, fack_count);
-		skb = tcp_sacktag_walk(skb, sk, NULL,
-				     next_dup->start_seq, next_dup->end_seq,
-				     1, fack_count, reord, flag);
+		skb = tcp_sacktag_skip(skb, sk, state, next_dup->start_seq);
+		skb = tcp_sacktag_walk(skb, sk, NULL, state,
+				       next_dup->start_seq, next_dup->end_seq,
+				       1);
 	}
 
 	return skb;
@@ -1719,15 +1723,16 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 	struct tcp_sack_block_wire *sp_wire = (struct tcp_sack_block_wire *)(ptr+2);
 	struct tcp_sack_block sp[TCP_NUM_SACKS];
 	struct tcp_sack_block *cache;
+	struct tcp_sacktag_state state;
 	struct sk_buff *skb;
 	int num_sacks = min(TCP_NUM_SACKS, (ptr[1] - TCPOLEN_SACK_BASE) >> 3);
 	int used_sacks;
-	int reord = tp->packets_out;
-	int flag = 0;
 	int found_dup_sack = 0;
-	int fack_count;
 	int i, j;
 	int first_sack_index;
+	
+	state.flag = 0;
+	state.reord = tp->packets_out;
 
 	if (!tp->sacked_out) {
 		if (WARN_ON(tp->fackets_out))
@@ -1738,7 +1743,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 	found_dup_sack = tcp_check_dsack(sk, ack_skb, sp_wire,
 					 num_sacks, prior_snd_una);
 	if (found_dup_sack)
-		flag |= FLAG_DSACKING_ACK;
+		state.flag |= FLAG_DSACKING_ACK;
 
 	/* Eliminate too old ACKs, but take into
 	 * account more or less fresh ones, they can
@@ -1807,7 +1812,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 	}
 
 	skb = tcp_write_queue_head(sk);
-	fack_count = 0;
+	state.fack_count = 0;
 	i = 0;
 
 	if (!tp->sacked_out) {
@@ -1832,7 +1837,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 
 		/* Event "B" in the comment above. */
 		if (after(end_seq, tp->high_seq))
-			flag |= FLAG_DATA_LOST;
+			state.flag |= FLAG_DATA_LOST;
 
 		/* Skip too early cached blocks */
 		while (tcp_sack_cache_ok(tp, cache) &&
@@ -1845,13 +1850,13 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 
 			/* Head todo? */
 			if (before(start_seq, cache->start_seq)) {
-				skb = tcp_sacktag_skip(skb, sk, start_seq,
-						       &fack_count);
+				skb = tcp_sacktag_skip(skb, sk, &state,
+						       start_seq);
 				skb = tcp_sacktag_walk(skb, sk, next_dup,
+						       &state,
 						       start_seq,
 						       cache->start_seq,
-						       dup_sack, &fack_count,
-						       &reord, &flag);
+						       dup_sack);
 			}
 
 			/* Rest of the block already fully processed? */
@@ -1859,9 +1864,8 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 				goto advance_sp;
 
 			skb = tcp_maybe_skipping_dsack(skb, sk, next_dup,
-						       cache->end_seq,
-						       &fack_count, &reord,
-						       &flag);
+						       &state,
+						       cache->end_seq);
 
 			/* ...tail remains todo... */
 			if (tcp_highest_sack_seq(tp) == cache->end_seq) {
@@ -1869,13 +1873,12 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 				skb = tcp_highest_sack(sk);
 				if (skb == NULL)
 					break;
-				fack_count = tp->fackets_out;
+				state.fack_count = tp->fackets_out;
 				cache++;
 				goto walk;
 			}
 
-			skb = tcp_sacktag_skip(skb, sk, cache->end_seq,
-					       &fack_count);
+			skb = tcp_sacktag_skip(skb, sk, &state, cache->end_seq);
 			/* Check overlap against next cached too (past this one already) */
 			cache++;
 			continue;
@@ -1885,20 +1888,20 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
 			skb = tcp_highest_sack(sk);
 			if (skb == NULL)
 				break;
-			fack_count = tp->fackets_out;
+			state.fack_count = tp->fackets_out;
 		}
-		skb = tcp_sacktag_skip(skb, sk, start_seq, &fack_count);
+		skb = tcp_sacktag_skip(skb, sk, &state, start_seq);
 
 walk:
-		skb = tcp_sacktag_walk(skb, sk, next_dup, start_seq, end_seq,
-				       dup_sack, &fack_count, &reord, &flag);
+		skb = tcp_sacktag_walk(skb, sk, next_dup, &state,
+				       start_seq, end_seq, dup_sack);
 
 advance_sp:
 		/* SACK enhanced FRTO (RFC4138, Appendix B): Clearing correct
 		 * due to in-order walk
 		 */
 		if (after(end_seq, tp->frto_highmark))
-			flag &= ~FLAG_ONLY_ORIG_SACKED;
+			state.flag &= ~FLAG_ONLY_ORIG_SACKED;
 
 		i++;
 	}
@@ -1915,10 +1918,10 @@ advance_sp:
 
 	tcp_verify_left_out(tp);
 
-	if ((reord < tp->fackets_out) &&
+	if ((state.reord < tp->fackets_out) &&
 	    ((icsk->icsk_ca_state != TCP_CA_Loss) || tp->undo_marker) &&
 	    (!tp->frto_highmark || after(tp->snd_una, tp->frto_highmark)))
-		tcp_update_reordering(sk, tp->fackets_out - reord, 0);
+		tcp_update_reordering(sk, tp->fackets_out - state.reord, 0);
 
 out:
 
@@ -1928,7 +1931,7 @@ out:
 	WARN_ON((int)tp->retrans_out < 0);
 	WARN_ON((int)tcp_packets_in_flight(tp) < 0);
 #endif
-	return flag;
+	return state.flag;
 }
 
 /* Limits sacked_out so that sum with lost_out isn't ever larger than
-- 
1.5.2.2



* [PATCH 05/10] tcp: no need to pass prev skb around, reduces arg pressure
  2008-12-05 20:58       ` [PATCH 04/10] tcp: introduce struct tcp_sacktag_state to reduce arg pressure Ilpo Järvinen
@ 2008-12-05 20:58         ` Ilpo Järvinen
  2008-12-05 20:58           ` [PATCH 06/10] tcp: drop tcp_bound_rto, merge its content into tcp_set_rto Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_input.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 57134c2..37b1cac 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1376,12 +1376,12 @@ static u8 tcp_sacktag_one(struct sk_buff *skb, struct sock *sk,
 	return sacked;
 }
 
-static int tcp_shifted_skb(struct sock *sk, struct sk_buff *prev,
-			   struct sk_buff *skb,
+static int tcp_shifted_skb(struct sock *sk, struct sk_buff *skb,
 			   struct tcp_sacktag_state *state,
 			   unsigned int pcount, int shifted, int mss)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
+	struct sk_buff *prev = tcp_write_queue_prev(sk, skb);
 
 	BUG_ON(!pcount);
 
@@ -1565,7 +1565,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 
 	if (!skb_shift(prev, skb, len))
 		goto fallback;
-	if (!tcp_shifted_skb(sk, prev, skb, state, pcount, len, mss))
+	if (!tcp_shifted_skb(sk, skb, state, pcount, len, mss))
 		goto out;
 
 	/* Hole filled allows collapsing with the next as well, this is very
@@ -1584,8 +1584,7 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
 	len = skb->len;
 	if (skb_shift(prev, skb, len)) {
 		pcount += tcp_skb_pcount(skb);
-		tcp_shifted_skb(sk, prev, skb, state, tcp_skb_pcount(skb), len,
-				mss);
+		tcp_shifted_skb(sk, skb, state, tcp_skb_pcount(skb), len, mss);
 	}
 
 out:
-- 
1.5.2.2



* [PATCH 06/10] tcp: drop tcp_bound_rto, merge its content into tcp_set_rto
  2008-12-05 20:58         ` [PATCH 05/10] tcp: no need to pass prev skb around, reduces " Ilpo Järvinen
@ 2008-12-05 20:58           ` Ilpo Järvinen
  2008-12-05 20:58             ` [PATCH 07/10] tcp: share code through function, not through copy-paste. :-) Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

Both are called from the same sites.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_input.c |   12 +++---------
 1 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 37b1cac..7d887ac 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -701,13 +701,10 @@ static inline void tcp_set_rto(struct sock *sk)
 	 *    all the algo is pure shit and should be replaced
 	 *    with correct one. It is exactly, which we pretend to do.
 	 */
-}
 
-/* NOTE: clamping at TCP_RTO_MIN is not required, current algo
- * guarantees that rto is higher.
- */
-static inline void tcp_bound_rto(struct sock *sk)
-{
+	/* NOTE: clamping at TCP_RTO_MIN is not required, current algo
+	 * guarantees that rto is higher.
+	 */
 	if (inet_csk(sk)->icsk_rto > TCP_RTO_MAX)
 		inet_csk(sk)->icsk_rto = TCP_RTO_MAX;
 }
@@ -928,7 +925,6 @@ static void tcp_init_metrics(struct sock *sk)
 		tp->mdev_max = tp->rttvar = max(tp->mdev, tcp_rto_min(sk));
 	}
 	tcp_set_rto(sk);
-	tcp_bound_rto(sk);
 	if (inet_csk(sk)->icsk_rto < TCP_TIMEOUT_INIT && !tp->rx_opt.saw_tstamp)
 		goto reset;
 	tp->snd_cwnd = tcp_init_cwnd(tp, dst);
@@ -3081,7 +3077,6 @@ static void tcp_ack_saw_tstamp(struct sock *sk, int flag)
 	tcp_rtt_estimator(sk, seq_rtt);
 	tcp_set_rto(sk);
 	inet_csk(sk)->icsk_backoff = 0;
-	tcp_bound_rto(sk);
 }
 
 static void tcp_ack_no_tstamp(struct sock *sk, u32 seq_rtt, int flag)
@@ -3101,7 +3096,6 @@ static void tcp_ack_no_tstamp(struct sock *sk, u32 seq_rtt, int flag)
 	tcp_rtt_estimator(sk, seq_rtt);
 	tcp_set_rto(sk);
 	inet_csk(sk)->icsk_backoff = 0;
-	tcp_bound_rto(sk);
 }
 
 static inline void tcp_ack_update_rtt(struct sock *sk, const int flag,
-- 
1.5.2.2



* [PATCH 07/10] tcp: share code through function, not through copy-paste. :-)
  2008-12-05 20:58           ` [PATCH 06/10] tcp: drop tcp_bound_rto, merge its content into tcp_set_rto Ilpo Järvinen
@ 2008-12-05 20:58             ` Ilpo Järvinen
  2008-12-05 20:58               ` [PATCH 08/10] tcp: move some parts from tcp_write_xmit Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_input.c |   17 ++++++++++-------
 1 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 7d887ac..e379af9 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3052,6 +3052,13 @@ static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked, int flag)
 	tcp_xmit_retransmit_queue(sk);
 }
 
+static void tcp_valid_rtt_meas(struct sock *sk, u32 seq_rtt)
+{
+	tcp_rtt_estimator(sk, seq_rtt);
+	tcp_set_rto(sk);
+	inet_csk(sk)->icsk_backoff = 0;
+}
+
 /* Read draft-ietf-tcplw-high-performance before mucking
  * with this code. (Supersedes RFC1323)
  */
@@ -3073,10 +3080,8 @@ static void tcp_ack_saw_tstamp(struct sock *sk, int flag)
 	 * in window is lost... Voila.	 			--ANK (010210)
 	 */
 	struct tcp_sock *tp = tcp_sk(sk);
-	const __u32 seq_rtt = tcp_time_stamp - tp->rx_opt.rcv_tsecr;
-	tcp_rtt_estimator(sk, seq_rtt);
-	tcp_set_rto(sk);
-	inet_csk(sk)->icsk_backoff = 0;
+
+	tcp_valid_rtt_meas(sk, tcp_time_stamp - tp->rx_opt.rcv_tsecr);
 }
 
 static void tcp_ack_no_tstamp(struct sock *sk, u32 seq_rtt, int flag)
@@ -3093,9 +3098,7 @@ static void tcp_ack_no_tstamp(struct sock *sk, u32 seq_rtt, int flag)
 	if (flag & FLAG_RETRANS_DATA_ACKED)
 		return;
 
-	tcp_rtt_estimator(sk, seq_rtt);
-	tcp_set_rto(sk);
-	inet_csk(sk)->icsk_backoff = 0;
+	tcp_valid_rtt_meas(sk, seq_rtt);
 }
 
 static inline void tcp_ack_update_rtt(struct sock *sk, const int flag,
-- 
1.5.2.2



* [PATCH 08/10] tcp: move some parts from tcp_write_xmit
  2008-12-05 20:58             ` [PATCH 07/10] tcp: share code through function, not through copy-paste. :-) Ilpo Järvinen
@ 2008-12-05 20:58               ` Ilpo Järvinen
  2008-12-05 20:58                 ` [PATCH 09/10] tcp: use tcp_write_xmit also in tcp_push_one Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_output.c |   23 ++++++++++++-----------
 1 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 0744b9f..59505ce 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1527,13 +1527,6 @@ static int tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle)
 	int cwnd_quota;
 	int result;
 
-	/* If we are closed, the bytes will have to remain here.
-	 * In time closedown will finish, we empty the write queue and all
-	 * will be happy.
-	 */
-	if (unlikely(sk->sk_state == TCP_CLOSE))
-		return 0;
-
 	sent_pkts = 0;
 
 	/* Do MTU probing. */
@@ -1605,10 +1598,18 @@ void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
 {
 	struct sk_buff *skb = tcp_send_head(sk);
 
-	if (skb) {
-		if (tcp_write_xmit(sk, cur_mss, nonagle))
-			tcp_check_probe_timer(sk);
-	}
+	if (!skb)
+		return;
+
+	/* If we are closed, the bytes will have to remain here.
+	 * In time closedown will finish, we empty the write queue and
+	 * all will be happy.
+	 */
+	if (unlikely(sk->sk_state == TCP_CLOSE))
+		return;
+
+	if (tcp_write_xmit(sk, cur_mss, nonagle))
+		tcp_check_probe_timer(sk);
 }
 
 /* Send _single_ skb sitting at the send head. This function requires
-- 
1.5.2.2



* [PATCH 09/10] tcp: use tcp_write_xmit also in tcp_push_one
  2008-12-05 20:58               ` [PATCH 08/10] tcp: move some parts from tcp_write_xmit Ilpo Järvinen
@ 2008-12-05 20:58                 ` Ilpo Järvinen
  2008-12-05 20:58                   ` [PATCH 10/10] tcp: fix tso_should_defer in 64bit Ilpo Järvinen
  0 siblings, 1 reply; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen

tcp_minshall_update is not a significant difference, since it only
checks for a non-full-sized skb, which is BUG'ed on the push_one
path anyway.

tcp_snd_test is tcp_nagle_test + tcp_cwnd_test + tcp_snd_wnd_test;
just the order changed slightly.
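
Roughly, as a paraphrased sketch of tcp_snd_test (not the verbatim
source), the gating that now moves into tcp_write_xmit's per-skb
loop looks like:

	static unsigned int tcp_snd_test(struct sock *sk, struct sk_buff *skb,
					 unsigned int cur_mss, int nonagle)
	{
		struct tcp_sock *tp = tcp_sk(sk);
		unsigned int cwnd_quota;

		/* Nagle first, then congestion window, then send window */
		if (!tcp_nagle_test(tp, skb, cur_mss, nonagle))
			return 0;

		cwnd_quota = tcp_cwnd_test(tp, skb);
		if (cwnd_quota && !tcp_snd_wnd_test(tp, skb, cur_mss))
			cwnd_quota = 0;

		return cwnd_quota;
	}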

net/ipv4/tcp_output.c:
  tcp_snd_test              |  -89
  tcp_mss_split_point       |  -91
  tcp_may_send_now          |  +53
  tcp_cwnd_validate         |  -98
  tso_fragment              | -239
  __tcp_push_pending_frames | -1340
  tcp_push_one              | -146
 7 functions changed, 53 bytes added, 2003 bytes removed, diff: -1950

net/ipv4/tcp_output.c:
  tcp_write_xmit | +1772
 1 function changed, 1772 bytes added, diff: +1772

tcp_output.o.new:
 8 functions changed, 1825 bytes added, 2003 bytes removed, diff: -178

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
---
 net/ipv4/tcp_output.c |   54 +++++++++++++++---------------------------------
 1 files changed, 17 insertions(+), 37 deletions(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 59505ce..e65c114 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1519,7 +1519,8 @@ static int tcp_mtu_probe(struct sock *sk)
  * Returns 1, if no segments are in flight and we have queued segments, but
  * cannot send anything now because of SWS or another problem.
  */
-static int tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle)
+static int tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
+			  int push_one, gfp_t gfp)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *skb;
@@ -1529,11 +1530,14 @@ static int tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle)
 
 	sent_pkts = 0;
 
-	/* Do MTU probing. */
-	if ((result = tcp_mtu_probe(sk)) == 0) {
-		return 0;
-	} else if (result > 0) {
-		sent_pkts = 1;
+	if (!push_one) {
+		/* Do MTU probing. */
+		result = tcp_mtu_probe(sk);
+		if (!result) {
+			return 0;
+		} else if (result > 0) {
+			sent_pkts = 1;
+		}
 	}
 
 	while ((skb = tcp_send_head(sk))) {
@@ -1555,7 +1559,7 @@ static int tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle)
 						      nonagle : TCP_NAGLE_PUSH))))
 				break;
 		} else {
-			if (tcp_tso_should_defer(sk, skb))
+			if (!push_one && tcp_tso_should_defer(sk, skb))
 				break;
 		}
 
@@ -1570,7 +1574,7 @@ static int tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle)
 
 		TCP_SKB_CB(skb)->when = tcp_time_stamp;
 
-		if (unlikely(tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC)))
+		if (unlikely(tcp_transmit_skb(sk, skb, 1, gfp)))
 			break;
 
 		/* Advance the send_head.  This one is sent out.
@@ -1580,6 +1584,9 @@ static int tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle)
 
 		tcp_minshall_update(tp, mss_now, skb);
 		sent_pkts++;
+
+		if (push_one)
+			break;
 	}
 
 	if (likely(sent_pkts)) {
@@ -1608,7 +1615,7 @@ void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
 	if (unlikely(sk->sk_state == TCP_CLOSE))
 		return;
 
-	if (tcp_write_xmit(sk, cur_mss, nonagle))
+	if (tcp_write_xmit(sk, cur_mss, nonagle, 0, GFP_ATOMIC))
 		tcp_check_probe_timer(sk);
 }
 
@@ -1617,38 +1624,11 @@ void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
  */
 void tcp_push_one(struct sock *sk, unsigned int mss_now)
 {
-	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *skb = tcp_send_head(sk);
-	unsigned int tso_segs, cwnd_quota;
 
 	BUG_ON(!skb || skb->len < mss_now);
 
-	tso_segs = tcp_init_tso_segs(sk, skb, mss_now);
-	cwnd_quota = tcp_snd_test(sk, skb, mss_now, TCP_NAGLE_PUSH);
-
-	if (likely(cwnd_quota)) {
-		unsigned int limit;
-
-		BUG_ON(!tso_segs);
-
-		limit = mss_now;
-		if (tso_segs > 1 && !tcp_urg_mode(tp))
-			limit = tcp_mss_split_point(sk, skb, mss_now,
-						    cwnd_quota);
-
-		if (skb->len > limit &&
-		    unlikely(tso_fragment(sk, skb, limit, mss_now)))
-			return;
-
-		/* Send it out now. */
-		TCP_SKB_CB(skb)->when = tcp_time_stamp;
-
-		if (likely(!tcp_transmit_skb(sk, skb, 1, sk->sk_allocation))) {
-			tcp_event_new_data_sent(sk, skb);
-			tcp_cwnd_validate(sk);
-			return;
-		}
-	}
+	tcp_write_xmit(sk, mss_now, TCP_NAGLE_PUSH, 1, sk->sk_allocation);
 }
 
 /* This function returns the amount that we can raise the
-- 
1.5.2.2



* [PATCH 10/10] tcp: fix tso_should_defer in 64bit
  2008-12-05 20:58                 ` [PATCH 09/10] tcp: use tcp_write_xmit also in tcp_push_one Ilpo Järvinen
@ 2008-12-05 20:58                   ` Ilpo Järvinen
  0 siblings, 0 replies; 13+ messages in thread
From: Ilpo Järvinen @ 2008-12-05 20:58 UTC
  To: David Miller; +Cc: netdev, Ilpo Järvinen, Bill Fink

Since jiffies is an unsigned long, the types get expanded to that
width, and after a long enough uptime the difference will therefore
always be > 1. This probably already happens near boot, since iirc
the first jiffies wrap is deliberately scheduled close after boot to
flush out wrap-related problems early.

This was originally noted by Bill Fink in Dec'07, but nobody ever
got around to fixing it.
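
A sketch of the failure mode with the types spelled out (assuming,
as the fix implies, that tso_deferred is a u32 snapshot of roughly
1 | (jiffies << 1)):

	unsigned long j = jiffies;	/* 64 bits wide on 64-bit arches */

	/* Old: (j << 1) >> 1 only clears bit 63, leaving a 63-bit
	 * value, while tso_deferred >> 1 is at most 31 bits wide.
	 * Once jiffies no longer fits in 31 bits, the subtraction is
	 * dominated by the high bits and "> 1" is always true.
	 */
	((j << 1) >> 1) - (tp->tso_deferred >> 1) > 1;

	/* Fixed: truncating to u32 first keeps both operands in the
	 * same 31-bit space, so the difference measures actual ticks.
	 */
	(((u32)j << 1) >> 1) - (tp->tso_deferred >> 1) > 1;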

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Cc: Bill Fink <billfink@mindspring.com>
---
 net/ipv4/tcp_output.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index e65c114..dda42f0 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1334,7 +1334,7 @@ static int tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb)
 
 	/* Defer for less than two clock ticks. */
 	if (tp->tso_deferred &&
-	    ((jiffies << 1) >> 1) - (tp->tso_deferred >> 1) > 1)
+	    (((u32)jiffies << 1) >> 1) - (tp->tso_deferred >> 1) > 1)
 		goto send_now;
 
 	in_flight = tcp_packets_in_flight(tp);
-- 
1.5.2.2



* Re: [PATCH 01/10] tcp: force mss equality with the next skb too.
  2008-12-05 20:58 ` [PATCH 01/10] tcp: force mss equality with the next skb too Ilpo Järvinen
  2008-12-05 20:58   ` [PATCH 02/10] tcp: Fix thinko, making the not-shiftable check cover S|R as well Ilpo Järvinen
@ 2008-12-05 22:13   ` Andi Kleen
  1 sibling, 0 replies; 13+ messages in thread
From: Andi Kleen @ 2008-12-05 22:13 UTC
  To: Ilpo Järvinen; +Cc: David Miller, netdev

"Ilpo Järvinen" <ilpo.jarvinen@helsinki.fi> writes:
> +++ b/net/ipv4/tcp_input.c
> @@ -1575,11 +1575,10 @@ static struct sk_buff *tcp_shift_skb_data(struct sock *sk, struct sk_buff *skb,
>  		goto out;
>  	skb = tcp_write_queue_next(sk, prev);
>  
> -	if (!skb_can_shift(skb))
> -		goto out;
> -	if (skb == tcp_send_head(sk))
> -		goto out;
> -	if ((TCP_SKB_CB(skb)->sacked & TCPCB_TAGBITS) != TCPCB_SACKED_ACKED)
> +	if (!skb_can_shift(skb) ||
> +	    (skb == tcp_send_head(sk)) ||
> +	    ((TCP_SKB_CB(skb)->sacked & TCPCB_TAGBITS) != TCPCB_SACKED_ACKED) ||
> +	    (mss != tcp_shift_mss(skb)))
>  		goto out;

Perhaps it's just me, but I think the code was far more readable
before your change.

-Andi

-- 
ak@linux.intel.com


* Re: [PATCH 0/10]: tcp & more tcp
  2008-12-05 20:58 [PATCH 0/10]: tcp & more tcp Ilpo Järvinen
  2008-12-05 20:58 ` [PATCH 01/10] tcp: force mss equality with the next skb too Ilpo Järvinen
@ 2008-12-06  6:49 ` David Miller
  1 sibling, 0 replies; 13+ messages in thread
From: David Miller @ 2008-12-06  6:49 UTC
  To: ilpo.jarvinen; +Cc: netdev

From: "Ilpo Järvinen" <ilpo.jarvinen@helsinki.fi>
Date: Fri,  5 Dec 2008 22:58:46 +0200

> 1) Some fixes to recent work, and some to prehistoric things.
> 
> 2) sacktag state struct to avoid ..., *ptr, *ptr2, *ptr3 craziness
> 
> 3) Besides that, combine some code here and there.

All applied, but I had to pull net-2.6 into net-next-2.6
along the way or else patch 9 wouldn't apply.

Thanks.

