netdev.vger.kernel.org archive mirror
* [PATCH][NET_SCHED] sch_sfq: fix queue limiting while enqueuing
@ 2008-04-27 14:22 Jarek Poplawski
  2008-04-27 18:29 ` Patrick McHardy
  2008-04-28  9:04 ` [PATCH][NET_SCHED] sch_sfq: prevent unnecessary reordering Jarek Poplawski
  0 siblings, 2 replies; 13+ messages in thread
From: Jarek Poplawski @ 2008-04-27 14:22 UTC (permalink / raw)
  To: David Miller; +Cc: netdev


[NET_SCHED] sch_sfq: fix queue limiting while enqueuing

Using q->limit as both a per-flow and a total queue limit doesn't make
much sense: if the total limit could just as well be set with
ifconfig's txqueuelen, there would be no reason for sfq's limit
parameter, and q->limit is too small for a total limit anyway. With
the number of flow queues near the maximum (SFQ_DEPTH), each queue
would be ~1 packet long, which means no queuing at all...

This patch also moves the total-limit check to the beginning of
enqueue (as in pfifo_fast_enqueue()). IMHO the current placement is
especially wrong during congestion: CPU- and time-costly classifying
and enqueuing is done under the queue lock, only to end with already
enqueued packets being dropped while newer ones are added. This means
reordering and usually more resending. (I think similar changes should
be done in a few more qdiscs.)
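The pattern described above can be sketched outside the kernel as a
toy model; everything here (toy_qdisc, toy_enqueue, TOY_OK, TOY_DROP)
is illustrative naming, not kernel API. It shows the two points the
patch relies on: the total limit is checked before any classification
work, and the max_t(__u32, tx_queue_len, 1) clamp means a device with
txqueuelen 0 still admits one packet.

```c
#include <assert.h>

enum { TOY_OK = 0, TOY_DROP = 1 };

struct toy_qdisc {
	unsigned int qlen;          /* total packets queued (sch->q.qlen) */
	unsigned int tx_queue_len;  /* device txqueuelen; may be 0 */
	unsigned int drops;
};

static int toy_enqueue(struct toy_qdisc *q)
{
	/* Mirror of max_t(__u32, sch->dev->tx_queue_len, 1):
	 * a txqueuelen of 0 still allows a single packet. */
	unsigned int limit = q->tx_queue_len > 1 ? q->tx_queue_len : 1;

	/* Check the total limit first, before costly classification,
	 * so an already enqueued packet is never dropped to make room
	 * for a newer one (which would mean reordering). */
	if (q->qlen >= limit) {
		q->drops++;
		return TOY_DROP;
	}

	/* ... classification and per-flow enqueue would go here ... */
	q->qlen++;
	return TOY_OK;
}
```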


Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>

---

 net/sched/sch_sfq.c |   29 +++++++++++++----------------
 1 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index a20e2ef..b4fd592 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -283,6 +283,9 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	sfq_index x;
 	int ret;
 
+	if (unlikely(sch->q.qlen >= max_t(__u32, sch->dev->tx_queue_len, 1)))
+		return qdisc_drop(skb, sch);
+
 	hash = sfq_classify(skb, sch, &ret);
 	if (hash == 0) {
 		if (ret == NET_XMIT_BYPASS)
@@ -319,14 +322,11 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 			q->tail = x;
 		}
 	}
-	if (++sch->q.qlen <= q->limit) {
-		sch->bstats.bytes += skb->len;
-		sch->bstats.packets++;
-		return 0;
-	}
 
-	sfq_drop(sch);
-	return NET_XMIT_CN;
+	sch->q.qlen++;
+	sch->bstats.bytes += skb->len;
+	sch->bstats.packets++;
+	return 0;
 }
 
 static int
@@ -337,6 +337,9 @@ sfq_requeue(struct sk_buff *skb, struct Qdisc *sch)
 	sfq_index x;
 	int ret;
 
+	if (unlikely(sch->q.qlen >= max_t(__u32, sch->dev->tx_queue_len, 1)))
+		return qdisc_drop(skb, sch);
+
 	hash = sfq_classify(skb, sch, &ret);
 	if (hash == 0) {
 		if (ret == NET_XMIT_BYPASS)
@@ -381,14 +384,8 @@ sfq_requeue(struct sk_buff *skb, struct Qdisc *sch)
 		}
 	}
 
-	if (++sch->q.qlen <= q->limit) {
-		sch->qstats.requeues++;
-		return 0;
-	}
-
-	sch->qstats.drops++;
-	sfq_drop(sch);
-	return NET_XMIT_CN;
+	sch->qstats.requeues++;
+	return 0;
 }
 
 
@@ -467,7 +464,7 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
 		q->limit = min_t(u32, ctl->limit, SFQ_DEPTH - 1);
 
 	qlen = sch->q.qlen;
-	while (sch->q.qlen > q->limit)
+	while (q->max_depth > q->limit)
 		sfq_drop(sch);
 	qdisc_tree_decrease_qlen(sch, qlen - sch->q.qlen);
 


Thread overview: 13+ messages
2008-04-27 14:22 [PATCH][NET_SCHED] sch_sfq: fix queue limiting while enqueuing Jarek Poplawski
2008-04-27 18:29 ` Patrick McHardy
2008-04-27 20:36   ` Jarek Poplawski
2008-04-28 14:02     ` Patrick McHardy
2008-04-28 14:58       ` Jarek Poplawski
2008-04-29 20:53         ` Jarek Poplawski
2008-04-30  7:04           ` Jarek Poplawski
2008-04-30  7:12             ` Patrick McHardy
2008-04-28  9:04 ` [PATCH][NET_SCHED] sch_sfq: prevent unnecessary reordering Jarek Poplawski
2008-04-28  9:03   ` David Miller
2008-04-28 14:39     ` Jarek Poplawski
2008-04-28 11:37   ` Andy Furniss
2008-04-28 15:41     ` Jarek Poplawski
