From: Stephen Hemminger <shemminger@vyatta.com>
To: Vladimir Ivashchenko <hazard@francoudi.com>
Cc: Eric Dumazet <dada1@cosmosbay.com>, netdev@vger.kernel.org
Subject: Re: bond + tc regression ?
Date: Tue, 5 May 2009 16:52:53 -0700
Message-ID: <20090505165253.21f9e086@nehalam>
In-Reply-To: <20090505235008.GA17690@francoudi.com>

On Wed, 6 May 2009 02:50:08 +0300
Vladimir Ivashchenko <hazard@francoudi.com> wrote:

> On Tue, May 05, 2009 at 08:50:26PM +0200, Eric Dumazet wrote:
> 
> > > I have tried with IRQs bound to one CPU per NIC. Same result.
> > 
> > Did you check with "grep eth /proc/interrupts" that your affinity setup
> > was indeed taken into account?
> > 
> > You should use the same CPU for eth0 and eth2 (bond0),
> > 
> > and another CPU for eth1 and eth3 (bond1).
> 
> Ok, the best result is when I assign all IRQs to the same CPU. Zero drops.
> 
> When I bind the slaves of the bond interfaces to the same CPU, I start to
> get some drops, but far fewer than before. I didn't play with other combinations.
> 
> My problem is that, after applying your accounting patch below, one of my
> HTB servers reports only 30-40% CPU idle on one of the cores. That won't
> last me for very long; load balancing across the cores is needed.
> 
> Is there at least a way to balance individual NICs on a per-core basis?
> 

The user-level irqbalance program is a good place to start:
  http://www.irqbalance.org/
But it doesn't yet know how to handle multi-queue devices, and it seems
not to handle NUMA systems (such as SMP Nehalem) perfectly.
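
Until it does, a minimal sketch of pinning the IRQs by hand, along the
lines Eric suggested (the IRQ numbers below are hypothetical; substitute
whatever "grep eth /proc/interrupts" reports for your NICs):

  grep eth /proc/interrupts            # confirm each NIC's IRQ number and
                                       # which CPU is actually servicing it

  # Hypothetical IRQ numbers; smp_affinity takes a hex CPU bitmask:
  # 1 = CPU0, 2 = CPU1, 4 = CPU2, 8 = CPU3, ...
  echo 1 > /proc/irq/58/smp_affinity   # eth0 (bond0 slave) -> CPU0
  echo 1 > /proc/irq/60/smp_affinity   # eth2 (bond0 slave) -> CPU0
  echo 2 > /proc/irq/59/smp_affinity   # eth1 (bond1 slave) -> CPU1
  echo 2 > /proc/irq/61/smp_affinity   # eth3 (bond1 slave) -> CPU1

Note that irqbalance, if left running, will periodically rewrite these
masks, so it has to be stopped for manual pinning to stick.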

Thread overview: 27+ messages
2009-05-05 15:45 bond + tc regression ? Vladimir Ivashchenko
2009-05-05 16:25 ` Denys Fedoryschenko
2009-05-05 16:31 ` Eric Dumazet
2009-05-05 17:41   ` Vladimir Ivashchenko
2009-05-05 18:50     ` Eric Dumazet
2009-05-05 23:50       ` Vladimir Ivashchenko
2009-05-05 23:52         ` Stephen Hemminger [this message]
2009-05-06  3:36         ` Eric Dumazet
2009-05-06 10:28           ` Vladimir Ivashchenko
2009-05-06 10:41             ` Eric Dumazet
2009-05-06 10:49               ` Denys Fedoryschenko
2009-05-06 18:45           ` Vladimir Ivashchenko
2009-05-06 19:30             ` Denys Fedoryschenko
2009-05-06 20:47               ` Vladimir Ivashchenko
2009-05-06 21:46                 ` Denys Fedoryschenko
2009-05-08 20:46                   ` Vladimir Ivashchenko
2009-05-08 21:05                     ` Denys Fedoryschenko
2009-05-08 22:07                       ` Vladimir Ivashchenko
2009-05-08 22:42                         ` Denys Fedoryschenko
2009-05-17 18:46                           ` Vladimir Ivashchenko
2009-05-18  8:51                             ` Jarek Poplawski
2009-05-06  8:03       ` Ingo Molnar
2009-05-06  6:10     ` Jarek Poplawski
2009-05-06 10:36       ` Vladimir Ivashchenko
2009-05-06 10:48         ` Jarek Poplawski
2009-05-06 13:11           ` Vladimir Ivashchenko
2009-05-06 13:31             ` Patrick McHardy
