netdev.vger.kernel.org archive mirror
From: jamal <hadi@cyberus.ca>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	netdev@vger.kernel.org
Subject: Re: [PATCH] net_sched: fix dequeuer fairness
Date: Sun, 26 Jun 2011 12:03:35 -0400	[thread overview]
Message-ID: <1309104215.5134.37.camel@mojatatu> (raw)
In-Reply-To: <1309102334.5134.31.camel@mojatatu>

[-- Attachment #1: Type: text/plain, Size: 79 bytes --]


Updated version of the patch with feedback from Ben and Eric.

cheers,
jamal


[-- Attachment #2: pns-3 --]
[-- Type: text/plain, Size: 3461 bytes --]

commit e8d4d1ef0584b1a9e7e3890f298da7aad7b7d111
Author: Jamal Hadi Salim <hadi@mojatatu.com>
Date:   Sun Jun 26 11:51:04 2011 -0400

    [PATCH] net_sched: fix dequeuer fairness
    
    Results on the dummy device can be seen in my netconf 2011
    slides. The results below are for a 10GigE Intel IXGBE
    NIC, on another i5 machine with very similar specs to
    the one used for the netconf 2011 results.
    It turns out the unfairness is a lot worse here than on
    dummy, so this patch is even more beneficial for 10G.
    
    Test setup:
    ----------
    
    System under test sending packets out.
    Additional box connected directly dropping packets.
    Installed a prio qdisc on the eth device; the default
    netdev tx queue length of 1000 was used as is.
    The 3 prio bands were each set to 100 (this did not
    factor into the results).
    
    5 packet runs were made and the middle 3 picked.
    
    results
    -------
    
    The "cpu" column indicates which cpu the sample
    was taken on.
    The "Pkt runX" columns carry the number of packets a cpu
    dequeued when forced into the "dequeuer" role.
    The "avg" row for each run is the number of packets each
    cpu would have dequeued if the system were fair.
    
    3.0-rc4      (plain)
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        21853354        21598183        22199900
    cpu1          431058          473476          393159
    cpu2          481975          477529          458466
    cpu3        23261406        23412299        22894315
    avg         11506948        11490372        11486460
    
    3.0-rc4 with patch and default weight 64
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        13205312        13109359        13132333
    cpu1        10189914        10159127        10122270
    cpu2        10213871        10124367        10168722
    cpu3        13165760        13164767        13096705
    avg         11693714        11639405        11630008
    
    As you can see, the system is still not perfectly fair,
    but it is a lot better than it was before.
    
    At the moment we reuse the existing backlog weight, weight_p,
    which is 64 packets; that value seems reasonably fine.
    The system could be made more fair by reducing weight_p
    (as per my presentation), but doing so would also affect
    the shared backlog weight. Unless deemed necessary, I think
    the default value is fine; if not, we could add yet another
    knob.
    
    Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c6809..64195d0 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -190,14 +190,18 @@ static inline int qdisc_restart(struct Qdisc *q)
 void __qdisc_run(struct Qdisc *q)
 {
 	unsigned long start_time = jiffies;
+	int quota = weight_p;
+	int work = 0;
 
 	while (qdisc_restart(q)) {
+		work++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
+		 * 3. we've been doing it for too long.
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (work >= quota || need_resched()) {
 			__netif_schedule(q);
 			break;
 		}
