From: Jeff Garzik <jeff@garzik.org>
To: Herbert Xu <herbert@gondor.apana.org.au>
Cc: andi@firstfloor.org, arjan@infradead.org, matthew@wil.cx,
	jens.axboe@oracle.com, linux-kernel@vger.kernel.org,
	douglas.w.styner@intel.com, chinang.ma@intel.com,
	terry.o.prickett@intel.com, matthew.r.wilcox@intel.com,
	Eric.Moore@lsi.com, DL-MPTFusionLinux@lsi.com,
	netdev@vger.kernel.org
Subject: Re: >10% performance degradation since 2.6.18
Date: Sun, 05 Jul 2009 16:44:41 -0400
Message-ID: <4A5110B9.4030904@garzik.org>
In-Reply-To: <20090705040137.GA7747@gondor.apana.org.au>

Herbert Xu wrote:
> Jeff Garzik <jeff@garzik.org> wrote:
>> What's the best setup for power usage?
>> What's the best setup for performance?
>> Are they the same?
> 
> Yes.

Is this a blind guess, or is there real-world testing across multiple 
setups behind this answer?

Consider a 2-package, quad-core system with 3 userland threads actively 
performing network communication, plus periodic, low levels of network 
activity from OS utilities (such as a nightly 'yum upgrade').

That is essentially an under-utilized 8-CPU system.  For such a case, it 
seems like a power win to idle or power down a few cores, or maybe even 
an entire package.

Efficient power usage means scaling _down_ when activity decreases.  A 
blind "distribute network activity across all CPUs" policy does not 
appear responsive to that situation.
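
To make that concrete, here is a rough sketch of scaling down by hand: 
confine a NIC's RX interrupts to CPUs 0-2 by writing a hex bitmask to 
/proc/irq/<n>/smp_affinity, leaving the remaining cores (ideally a 
whole package) free to idle.  The IRQ numbers are hypothetical 
placeholders for a 4-queue NIC, not a recommendation:

#include <stdio.h>

int main(void)
{
	/* Hypothetical RX IRQ numbers for a 4-queue NIC. */
	const int nic_irqs[] = { 40, 41, 42, 43 };
	const char *mask = "7";	/* hex CPU bitmask: CPUs 0-2 only */
	char path[64];
	FILE *f;
	unsigned int i;

	for (i = 0; i < sizeof(nic_irqs) / sizeof(nic_irqs[0]); i++) {
		snprintf(path, sizeof(path),
			 "/proc/irq/%d/smp_affinity", nic_irqs[i]);
		f = fopen(path, "w");
		if (!f) {
			perror(path);
			continue;
		}
		fprintf(f, "%s\n", mask);
		fclose(f);
	}
	return 0;
}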


>> Is it most optimal to have the interrupt for socket $X occur on the same 
>> CPU as where the app is running?
> 
> Yes.

Same question:  blind guess, or do you have numbers?

Consider two competing CPU hogs:  a kernel with tons of netfilter tables 
and rules, plus an application that uses a lot of CPU.

Can you not reach a threshold where it makes more sense to split kernel 
and userland work onto different CPUs?
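
For illustration, the userland half of such a split is a single 
sched_setaffinity(2) call; combined with IRQ affinity masks as above, 
kernel and application work end up on disjoint CPUs.  The CPU numbers 
here are again hypothetical:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;
	int cpu;

	/* Pin this process to CPUs 4-7 (the hypothetical "app" cores),
	 * leaving CPUs 0-3 for interrupt and softirq work. */
	CPU_ZERO(&set);
	for (cpu = 4; cpu <= 7; cpu++)
		CPU_SET(cpu, &set);

	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}

	/* ... CPU-hungry application work runs here ... */
	return 0;
}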


>> If yes, how to best handle when the scheduler moves app to another CPU?
>> Should we reprogram the NIC hardware flow steering mechanism at that point?
> 
> Not really.  For now the best thing to do is to pin everything
> down and not move at all, because we can't afford to move.
> 
> The only way for moving to work is if we had the ability to get
> the sockets to follow the processes.  That means, we must have
> one RX queue per socket.

That seems to presume it is impossible to reprogram the NIC RX queue 
selection rules?

If you can add a new 'flow' to a NIC hardware RX queue, surely you can 
imagine a remove + add operation for a migrated 'flow'...  and thus, at 
least on the NIC hardware level, flows can follow processes.
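
As a sketch of what that remove + add could look like from userland, 
assuming a NIC and driver that expose the ethtool RX classification 
("ntuple") interface; the device name, rule slot, addresses, and queue 
number below are made-up placeholders:

#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <arpa/inet.h>
#include <linux/sockios.h>
#include <linux/ethtool.h>

static int ethtool_ioctl(int fd, const char *dev, void *data)
{
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
	ifr.ifr_data = data;
	return ioctl(fd, SIOCETHTOOL, &ifr);
}

/* Move a TCP flow to a new RX queue: delete the old steering rule,
 * then insert an identical one targeting new_queue. */
static int move_flow(int fd, const char *dev, __u32 loc,
		     __be32 saddr, __be32 daddr,
		     __be16 sport, __be16 dport, __u64 new_queue)
{
	struct ethtool_rxnfc nfc;

	memset(&nfc, 0, sizeof(nfc));
	nfc.cmd = ETHTOOL_SRXCLSRLDEL;		/* remove ... */
	nfc.fs.location = loc;
	if (ethtool_ioctl(fd, dev, &nfc) < 0)
		return -1;

	memset(&nfc, 0, sizeof(nfc));
	nfc.cmd = ETHTOOL_SRXCLSRLINS;		/* ... then add */
	nfc.fs.flow_type = TCP_V4_FLOW;
	nfc.fs.h_u.tcp_ip4_spec.ip4src = saddr;
	nfc.fs.h_u.tcp_ip4_spec.ip4dst = daddr;
	nfc.fs.h_u.tcp_ip4_spec.psrc = sport;
	nfc.fs.h_u.tcp_ip4_spec.pdst = dport;
	nfc.fs.ring_cookie = new_queue;		/* RX queue to steer to */
	nfc.fs.location = loc;
	return ethtool_ioctl(fd, dev, &nfc);
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (move_flow(fd, "eth0", 0,
		      inet_addr("192.0.2.1"), inet_addr("192.0.2.2"),
		      htons(12345), htons(80), 3) < 0)
		perror("move_flow");
	close(fd);
	return 0;
}

Whether reprogramming on every migration is cheap enough is exactly 
the sort of thing that needs numbers, of course.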

	Jeff

