netdev.vger.kernel.org archive mirror
* [PATCH] net_sched: fix dequeuer fairness
@ 2011-06-26 14:07 jamal
  2011-06-26 14:17 ` jamal
  2011-06-26 15:09 ` Eric Dumazet
  0 siblings, 2 replies; 15+ messages in thread
From: jamal @ 2011-06-26 14:07 UTC (permalink / raw)
  To: David Miller; +Cc: Eric Dumazet, Herbert Xu, netdev

[-- Attachment #1: Type: text/plain, Size: 325 bytes --]


Got the 10G intel cards installed finally and repeated
the tests on both dummy and Ixgbe. The unfairness was much 
higher with 10G than with dummy. The logs contain the results.

I could send another patch with stats gathering. 
The best place seems to be in net/softnet_stat re-using
the fast route entries.

cheers,
jamal

[-- Attachment #2: pns-1 --]
[-- Type: text/plain, Size: 3234 bytes --]

commit e7fbab65da4db8d2ef1a61c915dfa8c96c2e0368
Author: Jamal Hadi Salim <jhs@mojatatu.com>
Date:   Sun Jun 26 09:19:48 2011 -0400

    [PATCH] net_sched: fix dequeuer fairness
    
    Results on a dummy device can be seen in my netconf 2011
    slides. The results below are for a 10GbE Intel IXGBE
    NIC on another i5 machine with very similar specs to
    the one used in the netconf 2011 results.
    It turns out the unfairness is a whole lot worse than on
    dummy, so this patch is even more beneficial for 10G.

    Test setup:
    ----------

    System under test sending packets out.
    An additional box connected directly, dropping packets.
    A prio qdisc was installed on the eth device; the default
    netdev queue length of 1000 was used as is.
    The 3 prio bands were each set to 100 (this didn't factor
    into the results).

    5 packet runs were made and the middle 3 picked.

    Results
    -------

    The "cpu" column indicates which cpu the sample
    was taken on.
    The "Pkt runX" columns carry the number of packets a cpu
    dequeued when forced into the "dequeuer" role.
    The "avg" row for each run is the number of packets each
    cpu would dequeue if the system were fair.
    
    3.0-rc4 (plain)
    cpu         Pkt run1        Pkt run2        Pkt run3
    ====================================================
    cpu0        21853354        21598183        22199900
    cpu1          431058          473476          393159
    cpu2          481975          477529          458466
    cpu3        23261406        23412299        22894315
    avg         11506948        11490372        11486460

    3.0-rc4 with patch and default weight 64
    cpu         Pkt run1        Pkt run2        Pkt run3
    ====================================================
    cpu0        13205312        13109359        13132333
    cpu1        10189914        10159127        10122270
    cpu2        10213871        10124367        10168722
    cpu3        13165760        13164767        13096705
    avg         11693714        11639405        11630008
    
    As you can see, the system is still not perfect, but
    it is a lot better than it was before.

    At the moment we reuse the old backlog weight, weight_p,
    which is 64 packets. The system behaves reasonably well
    with that value.
    It could be made more fair by reducing weight_p (as per
    my presentation), but that would also affect the shared
    backlog weight. Unless deemed necessary, I think the
    default value is fine; if not, we could add yet another
    knob.
    
    Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c6809..578269e 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -190,14 +190,18 @@ static inline int qdisc_restart(struct Qdisc *q)
 void __qdisc_run(struct Qdisc *q)
 {
 	unsigned long start_time = jiffies;
+	int quota = 0;
+	int work = weight_p;
 
 	while (qdisc_restart(q)) {
+		quota++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
+		 * 3. we've been doing it for too long.
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (quota >= work || need_resched() || jiffies != start_time) {
 			__netif_schedule(q);
 			break;
 		}

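To see the new break in isolation, here is a minimal userspace model of the patched loop, purely for illustration: run_dequeuer(), the backlog counter, and the rescheduled flag are all invented names standing in for qdisc_restart(q) and __netif_schedule(q); this is a sketch of the quota idea, not kernel code.

```c
#include <assert.h>
#include <stdbool.h>

static int weight_p = 64;   /* the shared backlog weight, 64 packets */

/* Dequeue until the backlog drains or the per-invocation quota hits
 * weight_p; returns the number of packets dequeued this invocation
 * and sets *rescheduled when the quota forces the dequeuer to yield
 * (the role __netif_schedule(q) plays in the patch). */
static int run_dequeuer(int *backlog, bool *rescheduled)
{
    int quota = 0;          /* packets dequeued so far */
    int work = weight_p;    /* allowance before we must yield */

    *rescheduled = false;
    while (*backlog > 0) {  /* stands in for qdisc_restart(q) */
        (*backlog)--;
        quota++;
        if (quota >= work) {
            *rescheduled = true;
            break;
        }
    }
    return quota;
}
```

With a deep backlog the dequeuer now hands the queue back after 64 packets instead of holding it for a whole jiffy, which is where the fairness gain comes from.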

* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 14:07 [PATCH] net_sched: fix dequeuer fairness jamal
@ 2011-06-26 14:17 ` jamal
  2011-06-26 14:53   ` Ben Hutchings
  2011-06-26 15:09 ` Eric Dumazet
  1 sibling, 1 reply; 15+ messages in thread
From: jamal @ 2011-06-26 14:17 UTC (permalink / raw)
  To: David Miller; +Cc: Eric Dumazet, Herbert Xu, netdev

[-- Attachment #1: Type: text/plain, Size: 478 bytes --]

Grr. A better tabulation (without tabs) of the results 
on this one.

cheers,
jamal

On Sun, 2011-06-26 at 10:07 -0400, jamal wrote:
> Got the 10G intel cards installed finally and repeated
> the tests on both dummy and Ixgbe. The unfairness was much 
> higher with 10G than with dummy. The logs contain the results.
> 
> I could send another patch with stats gathering. 
> The best place seems to be in net/softnet_stat re-using
> the fast route entries.
> 
> cheers,
> jamal


[-- Attachment #2: pns-2 --]
[-- Type: text/plain, Size: 3237 bytes --]

commit e7fbab65da4db8d2ef1a61c915dfa8c96c2e0368
Author: Jamal Hadi Salim <jhs@mojatatu.com>
Date:   Sun Jun 26 09:19:48 2011 -0400

    [PATCH] net_sched: fix dequeuer fairness
    
    Results on a dummy device can be seen in my netconf 2011
    slides. The results below are for a 10GbE Intel IXGBE
    NIC on another i5 machine with very similar specs to
    the one used in the netconf 2011 results.
    It turns out the unfairness is a whole lot worse than on
    dummy, so this patch is even more beneficial for 10G.

    Test setup:
    ----------

    System under test sending packets out.
    An additional box connected directly, dropping packets.
    A prio qdisc was installed on the eth device; the default
    netdev queue length of 1000 was used as is.
    The 3 prio bands were each set to 100 (this didn't factor
    into the results).

    5 packet runs were made and the middle 3 picked.

    Results
    -------

    The "cpu" column indicates which cpu the sample
    was taken on.
    The "Pkt runX" columns carry the number of packets a cpu
    dequeued when forced into the "dequeuer" role.
    The "avg" row for each run is the number of packets each
    cpu would dequeue if the system were fair.
    
    3.0-rc4 (plain)
    cpu         Pkt run1        Pkt run2        Pkt run3
    ====================================================
    cpu0        21853354        21598183        22199900
    cpu1          431058          473476          393159
    cpu2          481975          477529          458466
    cpu3        23261406        23412299        22894315
    avg         11506948        11490372        11486460

    3.0-rc4 with patch and default weight 64
    cpu         Pkt run1        Pkt run2        Pkt run3
    ====================================================
    cpu0        13205312        13109359        13132333
    cpu1        10189914        10159127        10122270
    cpu2        10213871        10124367        10168722
    cpu3        13165760        13164767        13096705
    avg         11693714        11639405        11630008
    
    As you can see, the system is still not perfect, but
    it is a lot better than it was before.

    At the moment we reuse the old backlog weight, weight_p,
    which is 64 packets. The system behaves reasonably well
    with that value.
    It could be made more fair by reducing weight_p (as per
    my presentation), but that would also affect the shared
    backlog weight. Unless deemed necessary, I think the
    default value is fine; if not, we could add yet another
    knob.
    
    Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c6809..578269e 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -190,14 +190,18 @@ static inline int qdisc_restart(struct Qdisc *q)
 void __qdisc_run(struct Qdisc *q)
 {
 	unsigned long start_time = jiffies;
+	int quota = 0;
+	int work = weight_p;
 
 	while (qdisc_restart(q)) {
+		quota++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
+		 * 3. we've been doing it for too long.
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (quota >= work || need_resched() || jiffies != start_time) {
 			__netif_schedule(q);
 			break;
 		}


* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 14:17 ` jamal
@ 2011-06-26 14:53   ` Ben Hutchings
  2011-06-26 15:27     ` jamal
  0 siblings, 1 reply; 15+ messages in thread
From: Ben Hutchings @ 2011-06-26 14:53 UTC (permalink / raw)
  To: jhs; +Cc: David Miller, Eric Dumazet, Herbert Xu, netdev

On Sun, 2011-06-26 at 10:17 -0400, jamal wrote:
[...]
> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> index b4c6809..578269e 100644
> --- a/net/sched/sch_generic.c
> +++ b/net/sched/sch_generic.c
> @@ -190,14 +190,18 @@ static inline int qdisc_restart(struct Qdisc *q)
>  void __qdisc_run(struct Qdisc *q)
>  {
>         unsigned long start_time = jiffies;
> +       int quota = 0;
> +       int work = weight_p;

These variable names seem to be the wrong way round, i.e. the weight is
our 'quota' and the number of packets dequeued is the 'work' we've done.

Ben.

>         while (qdisc_restart(q)) {
> +               quota++;
>                 /*
> -                * Postpone processing if
> -                * 1. another process needs the CPU;
> -                * 2. we've been doing it for too long.
> +                * Ordered by possible occurrence: Postpone processing if
> +                * 1. we've exceeded packet quota
> +                * 2. another process needs the CPU;
> +                * 3. we've been doing it for too long.
>                  */
> -               if (need_resched() || jiffies != start_time) {
> +               if (quota >= work || need_resched() || jiffies != start_time) {
>                         __netif_schedule(q);
>                         break;
>                 }
> 

-- 
Ben Hutchings, Senior Software Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.



* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 14:07 [PATCH] net_sched: fix dequeuer fairness jamal
  2011-06-26 14:17 ` jamal
@ 2011-06-26 15:09 ` Eric Dumazet
  2011-06-26 15:32   ` jamal
  1 sibling, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2011-06-26 15:09 UTC (permalink / raw)
  To: jhs; +Cc: David Miller, Herbert Xu, netdev

Le dimanche 26 juin 2011 à 10:07 -0400, jamal a écrit :
> Got the 10G intel cards installed finally and repeated
> the tests on both dummy and Ixgbe. The unfairness was much 
> higher with 10G than with dummy. The logs contain the results.
> 
> I could send another patch with stats gathering. 
> The best place seems to be in net/softnet_stat re-using
> the fast route entries.
> 

Hi Jamal

I would just remove the jiffies break, now that we have a 64-packet limit...

if (quota >= work || need_resched()) {
	...
}





* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 14:53   ` Ben Hutchings
@ 2011-06-26 15:27     ` jamal
  0 siblings, 0 replies; 15+ messages in thread
From: jamal @ 2011-06-26 15:27 UTC (permalink / raw)
  To: Ben Hutchings; +Cc: David Miller, Eric Dumazet, Herbert Xu, netdev

On Sun, 2011-06-26 at 15:53 +0100, Ben Hutchings wrote:

> These variable names seem to be the wrong way round, i.e. the weight is
> our 'quota' and the number of packets dequeued is the 'work' we've done.

Makes sense - I will make the change.

cheers,
jamal



* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 15:09 ` Eric Dumazet
@ 2011-06-26 15:32   ` jamal
  2011-06-26 15:53     ` Eric Dumazet
  2011-06-26 16:03     ` jamal
  0 siblings, 2 replies; 15+ messages in thread
From: jamal @ 2011-06-26 15:32 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, Herbert Xu, netdev

On Sun, 2011-06-26 at 17:09 +0200, Eric Dumazet wrote:


> I would just remove the jiffies break, now that we have a 64-packet limit...
> 
> if (quota >= work || need_resched()) {
> 	...
> }

Seems reasonable to do. Some stats (on two different machines,
at least with dummy) on a system with a low number of processes:
~80% of the time we exit the loop because the packet quota is hit
~1% for need_resched and the jiffy check combined
~19% simply because there were fewer than quota packets queued

Note: we do use a jiffy check in net_rx_action(), but I suspect
we never ever hit it.
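Tallies like those above could be gathered with a small exit-reason histogram; the userspace model below is illustrative only (model_run(), exit_counts, and preempt_pending are made-up names, with preempt_pending playing the role of the need_resched()/jiffy case):

```c
#include <assert.h>
#include <stdbool.h>

/* The three ways the dequeue loop can exit, as measured above. */
enum exit_reason { EXIT_QUOTA, EXIT_PREEMPT, EXIT_EMPTY };

static int exit_counts[3];   /* histogram of exit reasons */

static enum exit_reason model_run(int backlog, bool preempt_pending,
                                  int weight)
{
    int work = 0;
    enum exit_reason why = EXIT_EMPTY;  /* backlog drained */

    while (backlog-- > 0) {
        work++;
        if (work >= weight) {           /* packet quota exhausted */
            why = EXIT_QUOTA;
            goto out;
        }
        if (preempt_pending) {          /* someone else needs the cpu */
            why = EXIT_PREEMPT;
            goto out;
        }
    }
out:
    exit_counts[why]++;
    return why;
}
```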

cheers,
jamal



* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 15:32   ` jamal
@ 2011-06-26 15:53     ` Eric Dumazet
  2011-06-26 16:13       ` jamal
  2011-06-26 16:03     ` jamal
  1 sibling, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2011-06-26 15:53 UTC (permalink / raw)
  To: jhs; +Cc: David Miller, Herbert Xu, netdev

Le dimanche 26 juin 2011 à 11:32 -0400, jamal a écrit :
> On Sun, 2011-06-26 at 17:09 +0200, Eric Dumazet wrote:
> 
> 
> > I would just remove the jiffies break, now that we have a 64-packet limit...
> > 
> > if (quota >= work || need_resched()) {
> > 	...
> > }
> 
> Seems reasonable to do. Some stats (on two different machines
> at least with dummy) on a system with low # of processes:
> ~80% of the time - we exit the loop because of packet quota
> ~1% for both need_resched and jiffy
> ~19% simply because there were less than quota packets
> 
> Note: we do use a jiffy check on net_rx_action() but i suspect
> we never ever hit it.

This is because of commit 24f8b2385e03a4f.

Prior to this, we could exit very fast from this function, even after
receiving a single packet.

The jiffies break is kind of lazy, IMHO ;)





* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 15:32   ` jamal
  2011-06-26 15:53     ` Eric Dumazet
@ 2011-06-26 16:03     ` jamal
  2011-06-26 16:26       ` Eric Dumazet
  1 sibling, 1 reply; 15+ messages in thread
From: jamal @ 2011-06-26 16:03 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, Herbert Xu, netdev

[-- Attachment #1: Type: text/plain, Size: 79 bytes --]


Updated version of the patch with feedback from Ben and Eric.

cheers,
jamal


[-- Attachment #2: pns-3 --]
[-- Type: text/plain, Size: 3461 bytes --]

commit e8d4d1ef0584b1a9e7e3890f298da7aad7b7d111
Author: Jamal Hadi Salim <hadi@mojatatu.com>
Date:   Sun Jun 26 11:51:04 2011 -0400

    [PATCH] net_sched: fix dequeuer fairness
    
    Results on a dummy device can be seen in my netconf 2011
    slides. The results below are for a 10GbE Intel IXGBE
    NIC on another i5 machine with very similar specs to
    the one used in the netconf 2011 results.
    It turns out the unfairness is a whole lot worse than on
    dummy, so this patch is even more beneficial for 10G.

    Test setup:
    ----------

    System under test sending packets out.
    An additional box connected directly, dropping packets.
    A prio qdisc was installed on the eth device; the default
    netdev queue length of 1000 was used as is.
    The 3 prio bands were each set to 100 (this didn't factor
    into the results).

    5 packet runs were made and the middle 3 picked.

    Results
    -------

    The "cpu" column indicates which cpu the sample
    was taken on.
    The "Pkt runX" columns carry the number of packets a cpu
    dequeued when forced into the "dequeuer" role.
    The "avg" row for each run is the number of packets each
    cpu would dequeue if the system were fair.
    
    3.0-rc4      (plain)
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        21853354        21598183        22199900
    cpu1          431058          473476          393159
    cpu2          481975          477529          458466
    cpu3        23261406        23412299        22894315
    avg         11506948        11490372        11486460
    
    3.0-rc4 with patch and default weight 64
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        13205312        13109359        13132333
    cpu1        10189914        10159127        10122270
    cpu2        10213871        10124367        10168722
    cpu3        13165760        13164767        13096705
    avg         11693714        11639405        11630008
    
    As you can see, the system is still not perfect, but
    it is a lot better than it was before.

    At the moment we reuse the old backlog weight, weight_p,
    which is 64 packets. The system behaves reasonably well
    with that value.
    It could be made more fair by reducing weight_p (as per
    my presentation), but that would also affect the shared
    backlog weight. Unless deemed necessary, I think the
    default value is fine; if not, we could add yet another
    knob.
    
    Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c6809..64195d0 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -190,14 +190,18 @@ static inline int qdisc_restart(struct Qdisc *q)
 void __qdisc_run(struct Qdisc *q)
 {
 	unsigned long start_time = jiffies;
+	int quota = weight_p;
+	int work = 0;
 
 	while (qdisc_restart(q)) {
+		work++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
+		 * 3. we've been doing it for too long.
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (work >= quota || need_resched()) {
 			__netif_schedule(q);
 			break;
 		}


* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 15:53     ` Eric Dumazet
@ 2011-06-26 16:13       ` jamal
  0 siblings, 0 replies; 15+ messages in thread
From: jamal @ 2011-06-26 16:13 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, Herbert Xu, netdev

On Sun, 2011-06-26 at 17:53 +0200, Eric Dumazet wrote:

> 
> This is because of commit 24f8b2385e03a4f.
> 
> Prior to this, we could exit very fast from this function, even after
> receiving a single packet.
> 
> jiffies break is kind of lazy, IMHO ;)

And subject to the value of HZ.

In the case of net_rx_action() it seems we need "something"
other than the packet budget to get us out of there in the
extreme case where we keep looping and none of the netdevs
have anything to offer.
At the other extreme, it would be very unfair to yield because
of jiffies when the budget is not exhausted and the devices
still have something to offer.

One approach could be to deduct a device's napi weight from
the budget when its poll returns 0 for work done.
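A hedged sketch of that idea: in a net_rx_action()-style loop, charge an idle device its full napi weight against the budget, so a pass over all-idle devices still exhausts the budget and the loop terminates without a jiffies check. Every name below (fake_napi, fake_poll, rx_action, NAPI_WEIGHT) is invented for the sketch; none of it is kernel API.

```c
#include <assert.h>

#define NAPI_WEIGHT 64

struct fake_napi {
    int pending;              /* packets this device can deliver */
};

/* Poll one device: returns work done, at most its weight. */
static int fake_poll(struct fake_napi *n)
{
    int work = n->pending < NAPI_WEIGHT ? n->pending : NAPI_WEIGHT;
    n->pending -= work;
    return work;
}

/* Run the softirq-style loop over ndev devices; returns the number
 * of poll calls made before the budget ran out. */
static int rx_action(struct fake_napi *devs, int ndev, int budget)
{
    int polls = 0;
    int i = 0;

    while (budget > 0) {
        int work = fake_poll(&devs[i]);

        polls++;
        /* the proposed tweak: a device that did no work is still
         * charged its weight, so spinning over empty queues always
         * drains the budget and the loop cannot run unbounded */
        budget -= work ? work : NAPI_WEIGHT;
        i = (i + 1) % ndev;
    }
    return polls;
}
```

With this charging rule the loop is bounded by budget alone, which is the property the jiffies check was papering over.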

cheers,
jamal



* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 16:03     ` jamal
@ 2011-06-26 16:26       ` Eric Dumazet
  2011-06-26 16:29         ` jamal
  2011-06-26 16:38         ` jamal
  0 siblings, 2 replies; 15+ messages in thread
From: Eric Dumazet @ 2011-06-26 16:26 UTC (permalink / raw)
  To: jhs; +Cc: David Miller, Herbert Xu, netdev

Le dimanche 26 juin 2011 à 12:03 -0400, jamal a écrit :
> Updated version of the patch with feedback from Ben and Eric.
> 

Difficult to discuss your patch because you didn't inline it :-(

You should remove this line in the comment

+                * 3. we've been doing it for too long.






* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 16:26       ` Eric Dumazet
@ 2011-06-26 16:29         ` jamal
  2011-06-26 21:18           ` Ben Hutchings
  2011-06-26 16:38         ` jamal
  1 sibling, 1 reply; 15+ messages in thread
From: jamal @ 2011-06-26 16:29 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, Herbert Xu, netdev

On Sun, 2011-06-26 at 18:26 +0200, Eric Dumazet wrote:

> Difficult to discuss your patch because you didn't inline it :-(

Evolution messes up the whitespace when I do that.

> You should remove this line in the comment
> 
> +                * 3. we've been doing it for too long.

yikes, yes. 

cheers,
jamal





* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 16:26       ` Eric Dumazet
  2011-06-26 16:29         ` jamal
@ 2011-06-26 16:38         ` jamal
  2011-06-26 18:13           ` jamal
  1 sibling, 1 reply; 15+ messages in thread
From: jamal @ 2011-06-26 16:38 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, Herbert Xu, netdev

[-- Attachment #1: Type: text/plain, Size: 63 bytes --]


Updated version with latest comment from Eric.

cheers,
jamal

[-- Attachment #2: pns-4 --]
[-- Type: text/plain, Size: 3423 bytes --]

commit bd01154eff66964f516ba0914473b1ef49edcc33
Author: Jamal Hadi Salim <jhs@mojatatu.com>
Date:   Sun Jun 26 12:36:33 2011 -0400

    [PATCH] net_sched: fix dequeuer fairness
    
    Results on a dummy device can be seen in my netconf 2011
    slides. The results below are for a 10GbE Intel IXGBE
    NIC on another i5 machine with very similar specs to
    the one used in the netconf 2011 results.
    It turns out the unfairness is a whole lot worse than on
    dummy, so this patch is even more beneficial for 10G.

    Test setup:
    ----------

    System under test sending packets out.
    An additional box connected directly, dropping packets.
    A prio qdisc was installed on the eth device; the default
    netdev queue length of 1000 was used as is.
    The 3 prio bands were each set to 100 (this didn't factor
    into the results).

    5 packet runs were made and the middle 3 picked.

    Results
    -------

    The "cpu" column indicates which cpu the sample
    was taken on.
    The "Pkt runX" columns carry the number of packets a cpu
    dequeued when forced into the "dequeuer" role.
    The "avg" row for each run is the number of packets each
    cpu would dequeue if the system were fair.
    
    3.0-rc4      (plain)
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        21853354        21598183        22199900
    cpu1          431058          473476          393159
    cpu2          481975          477529          458466
    cpu3        23261406        23412299        22894315
    avg         11506948        11490372        11486460
    
    3.0-rc4 with patch and default weight 64
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        13205312        13109359        13132333
    cpu1        10189914        10159127        10122270
    cpu2        10213871        10124367        10168722
    cpu3        13165760        13164767        13096705
    avg         11693714        11639405        11630008
    
    As you can see, the system is still not perfect, but
    it is a lot better than it was before.

    At the moment we reuse the old backlog weight, weight_p,
    which is 64 packets. The system behaves reasonably well
    with that value.
    It could be made more fair by reducing weight_p (as per
    my presentation), but that would also affect the shared
    backlog weight. Unless deemed necessary, I think the
    default value is fine; if not, we could add yet another
    knob.
    
    Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c6809..1006450 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -190,14 +190,17 @@ static inline int qdisc_restart(struct Qdisc *q)
 void __qdisc_run(struct Qdisc *q)
 {
 	unsigned long start_time = jiffies;
+	int quota = weight_p;
+	int work = 0;
 
 	while (qdisc_restart(q)) {
+		work++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (work >= quota || need_resched()) {
 			__netif_schedule(q);
 			break;
 		}


* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 16:38         ` jamal
@ 2011-06-26 18:13           ` jamal
  2011-06-27  7:15             ` David Miller
  0 siblings, 1 reply; 15+ messages in thread
From: jamal @ 2011-06-26 18:13 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, Herbert Xu, netdev, adi, Joe Perches


Sigh. One more change (I should have heeded the compile warning);
thanks to adi <adi@postpi.com> for pointing it out.
Thanks to Joe Perches <joe@perches.com> for the tip on
inlining with Evolution (it seems to work when I send to myself).

----

commit a4e964941428bdf58741702e2808d2f813dac1fd
Author: Jamal Hadi Salim <jhs@mojatatu.com>
Date:   Sun Jun 26 14:06:29 2011 -0400

    [PATCH] net_sched: fix dequeuer fairness
    
    Results on a dummy device can be seen in my netconf 2011
    slides. The results below are for a 10GbE Intel IXGBE
    NIC on another i5 machine with very similar specs to
    the one used in the netconf 2011 results.
    It turns out the unfairness is a whole lot worse than on
    dummy, so this patch is even more beneficial for 10G.

    Test setup:
    ----------

    System under test sending packets out.
    An additional box connected directly, dropping packets.
    A prio qdisc was installed on the eth device; the default
    netdev queue length of 1000 was used as is.
    The 3 prio bands were each set to 100 (this didn't factor
    into the results).

    5 packet runs were made and the middle 3 picked.

    Results
    -------

    The "cpu" column indicates which cpu the sample
    was taken on.
    The "Pkt runX" columns carry the number of packets a cpu
    dequeued when forced into the "dequeuer" role.
    The "avg" row for each run is the number of packets each
    cpu would dequeue if the system were fair.
    
    3.0-rc4      (plain)
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        21853354        21598183        22199900
    cpu1          431058          473476          393159
    cpu2          481975          477529          458466
    cpu3        23261406        23412299        22894315
    avg         11506948        11490372        11486460
    
    3.0-rc4 with patch and default weight 64
    cpu         Pkt run1        Pkt run2        Pkt run3
    ================================================
    cpu0        13205312        13109359        13132333
    cpu1        10189914        10159127        10122270
    cpu2        10213871        10124367        10168722
    cpu3        13165760        13164767        13096705
    avg         11693714        11639405        11630008
    
    As you can see, the system is still not perfect, but
    it is a lot better than it was before.

    At the moment we reuse the old backlog weight, weight_p,
    which is 64 packets. The system behaves reasonably well
    with that value.
    It could be made more fair by reducing weight_p (as per
    my presentation), but that would also affect the shared
    backlog weight. Unless deemed necessary, I think the
    default value is fine; if not, we could add yet another
    knob.
    
    Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c6809..d253c16 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -189,15 +189,17 @@ static inline int qdisc_restart(struct Qdisc *q)
 
 void __qdisc_run(struct Qdisc *q)
 {
-	unsigned long start_time = jiffies;
+	int quota = weight_p;
+	int work = 0;
 
 	while (qdisc_restart(q)) {
+		work++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (work >= quota || need_resched()) {
 			__netif_schedule(q);
 			break;
 		}




* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 16:29         ` jamal
@ 2011-06-26 21:18           ` Ben Hutchings
  0 siblings, 0 replies; 15+ messages in thread
From: Ben Hutchings @ 2011-06-26 21:18 UTC (permalink / raw)
  To: jhs; +Cc: Eric Dumazet, David Miller, Herbert Xu, netdev

On Sun, 2011-06-26 at 12:29 -0400, jamal wrote:
> On Sun, 2011-06-26 at 18:26 +0200, Eric Dumazet wrote:
> 
> > Difficult to discuss your patch because you didn't inline it :-(
> 
> Evolution messes up the whitespace when I do that.
[...]

It works for me.  Compose as plain text, and select 'Preformatted'
before inserting the file.  Or use 'git imap-send' to put the message in
your drafts folder and then edit that.

Ben.

-- 
Ben Hutchings, Senior Software Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.



* Re: [PATCH] net_sched: fix dequeuer fairness
  2011-06-26 18:13           ` jamal
@ 2011-06-27  7:15             ` David Miller
  0 siblings, 0 replies; 15+ messages in thread
From: David Miller @ 2011-06-27  7:15 UTC (permalink / raw)
  To: jhs, hadi; +Cc: eric.dumazet, herbert, netdev, adi, joe

From: jamal <hadi@cyberus.ca>
Date: Sun, 26 Jun 2011 14:13:54 -0400

>     [PATCH] net_sched: fix dequeuer fairness

Applied, thanks!

