From: Ingo Molnar <mingo@elte.hu>
To: Eric Dumazet <dada1@cosmosbay.com>
Cc: Vladimir Ivashchenko <hazard@francoudi.com>, netdev@vger.kernel.org
Subject: Re: bond + tc regression ?
Date: Wed, 6 May 2009 10:03:35 +0200 [thread overview]
Message-ID: <20090506080335.GA8098@elte.hu> (raw)
In-Reply-To: <4A008A72.6030607@cosmosbay.com>
* Eric Dumazet <dada1@cosmosbay.com> wrote:
> > Vladimir Ivashchenko wrote:
> >>> On both kernels, the system is running with at least 70% idle CPU.
> >>> The network interrupts are distributed accross the cores.
> >> You should not distribute interrupts, but bind a NIC to one CPU
> >
> > > Kernels 2.6.28 and 2.6.29 do this by default, so I thought it's correct.
> > > Are the defaults wrong?
>
> Yes they are, at least for forwarding setups.
>
> >
> > I have tried with IRQs bound to one CPU per NIC. Same result.
>
> > Did you check with "grep eth /proc/interrupts" that your affinity
> > settings were indeed taken into account?
>
> You should use the same CPU for eth0 and eth2 (bond0),
>
> and another CPU for eth1 and eth3 (bond1)
>
> Check how your CPUs are laid out:
>
> egrep 'physical id|core id|processor' /proc/cpuinfo
>
> because you might experiment and find the best combination.
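Eric's advice above (bind each bond's NICs to one CPU, then verify with /proc/interrupts) can be sketched as a small shell sequence. This is a hedged illustration, not from the original mail: the IRQ number 24 and the interface names are hypothetical; on a real system you would read the actual IRQ numbers from /proc/interrupts first.

```shell
# Sketch: pin a NIC's IRQ to one CPU (assumption: eth0 is on IRQ 24;
# check /proc/interrupts for the real number on your machine).
# /proc/irq/<N>/smp_affinity takes a hexadecimal bitmask of allowed CPUs.

cpu=2                                # CPU we want the NIC's IRQ on
mask=$(printf '%x' $((1 << cpu)))    # CPU 2 -> bitmask 4 (binary 100)
echo "affinity mask for CPU $cpu is $mask"

# On a real system (as root):
#   echo "$mask" > /proc/irq/24/smp_affinity
# then verify that only that CPU's counter increases:
#   grep eth /proc/interrupts
```

After setting the mask, the per-CPU columns in /proc/interrupts are the ground truth: if the counter still climbs on other CPUs, the affinity was not taken into account (some drivers or irqbalance can override it).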
>
>
> If you use 2.6.29, apply the following patch to get better system
> accounting, to check whether your CPUs are saturated by hard/soft irqs:
>
> --- linux-2.6.29/kernel/sched.c.orig 2009-05-05 20:46:49.000000000 +0200
> +++ linux-2.6.29/kernel/sched.c 2009-05-05 20:47:19.000000000 +0200
> @@ -4290,7 +4290,7 @@
>
> if (user_tick)
> account_user_time(p, one_jiffy, one_jiffy_scaled);
> - else if (p != rq->idle)
> + else if ((p != rq->idle) || (irq_count() != HARDIRQ_OFFSET))
> account_system_time(p, HARDIRQ_OFFSET, one_jiffy,
> one_jiffy_scaled);
> else
Note, your scheduler fix is upstream now in Linus's tree, as:
f5f293a: sched: account system time properly
"git cherry-pick f5f293a" will apply it to a .29 basis.
Ingo
Thread overview: 27+ messages
2009-05-05 15:45 bond + tc regression ? Vladimir Ivashchenko
2009-05-05 16:25 ` Denys Fedoryschenko
2009-05-05 16:31 ` Eric Dumazet
2009-05-05 17:41 ` Vladimir Ivashchenko
2009-05-05 18:50 ` Eric Dumazet
2009-05-05 23:50 ` Vladimir Ivashchenko
2009-05-05 23:52 ` Stephen Hemminger
2009-05-06 3:36 ` Eric Dumazet
2009-05-06 10:28 ` Vladimir Ivashchenko
2009-05-06 10:41 ` Eric Dumazet
2009-05-06 10:49 ` Denys Fedoryschenko
2009-05-06 18:45 ` Vladimir Ivashchenko
2009-05-06 19:30 ` Denys Fedoryschenko
2009-05-06 20:47 ` Vladimir Ivashchenko
2009-05-06 21:46 ` Denys Fedoryschenko
2009-05-08 20:46 ` Vladimir Ivashchenko
2009-05-08 21:05 ` Denys Fedoryschenko
2009-05-08 22:07 ` Vladimir Ivashchenko
2009-05-08 22:42 ` Denys Fedoryschenko
2009-05-17 18:46 ` Vladimir Ivashchenko
2009-05-18 8:51 ` Jarek Poplawski
2009-05-06 8:03 ` Ingo Molnar [this message]
2009-05-06 6:10 ` Jarek Poplawski
2009-05-06 10:36 ` Vladimir Ivashchenko
2009-05-06 10:48 ` Jarek Poplawski
2009-05-06 13:11 ` Vladimir Ivashchenko
2009-05-06 13:31 ` Patrick McHardy