From: ebiederm@xmission.com (Eric W. Biederman)
To: Jesse Brandeburg <jesse.brandeburg@gmail.com>
Cc: David Daney <ddaney@caviumnetworks.com>,
	Chris Friesen <cfriesen@nortel.com>,
	netdev@vger.kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-mips <linux-mips@linux-mips.org>
Subject: Re: Irq architecture for multi-core network driver.
Date: Fri, 23 Oct 2009 16:22:36 -0700
Message-ID: <m17huln1ab.fsf@fess.ebiederm.org>
In-Reply-To: <4807377b0910231028g60b479cfycdbf3f4e25384c58@mail.gmail.com> (Jesse Brandeburg's message of "Fri, 23 Oct 2009 10:28:10 -0700")

Jesse Brandeburg <jesse.brandeburg@gmail.com> writes:

> On Fri, Oct 23, 2009 at 12:59 AM, Eric W. Biederman
> <ebiederm@xmission.com> wrote:
>> David Daney <ddaney@caviumnetworks.com> writes:
>>> Certainly this is one mode of operation that should be supported, but I would
>>> also like to be able to go for raw throughput and have as many cores as possible
>>> reading from a single queue (like I currently have).
>>
>> I believe TCP will detect false packet drops and ask for unnecessary
>> retransmits if you have multiple cores processing a single queue,
>> because you are processing the packets out of order.
>
> So, the way the default Linux kernel configures today's many-core
> server systems is to leave the affinity mask at 0xffffffff, and most
> current Intel hardware based on the 5000 (older Core CPUs) or 5500
> chipset (used with Core i7 processors) that I have seen will allow
> round-robin interrupts by default.  This kind of sucks for the above
> unless you run irqbalance or set smp_affinity by hand.

On x86, if you have more than 8 cores, the hardware does not support
any form of irq balancing.  You do have an interesting point, though.

How often, and how much, does irq balancing hurt us?
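
For reference, "set smp_affinity by hand" above just means writing a
hex cpu mask to /proc/irq/<N>/smp_affinity.  A minimal user-space
sketch of that (the irq number and cpu below are made up for
illustration, not taken from this thread):

/* Pin one irq to one cpu by writing a hex mask to its smp_affinity file. */
#include <stdio.h>

static int set_irq_affinity(int irq, int cpu)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	/* smp_affinity takes a hex bitmask of cpus; a single bit pins the irq */
	fprintf(f, "%x\n", 1u << cpu);
	return fclose(f);
}

int main(void)
{
	if (set_irq_affinity(50, 2))	/* hypothetical: irq 50 -> cpu 2 */
		perror("set_irq_affinity");
	return 0;
}

Writing that file needs root, and on a real system you would look the
irq number up in /proc/interrupts first.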

> Yes, I know Arjan and others will say you should always run
> irqbalance, but some people don't and some distros don't ship it
> enabled by default (or their version doesn't work for one reason or
> another).

irqbalance is actually more likely to move irqs than the hardware is.
I have heard promises that it won't move network irqs, but I have seen
the opposite behavior.

> The question is: should the kernel work better by default
> *without* irqbalance loaded, or does it not matter?

Good question.  I would aim for the kernel to work better by default.
Ideally we should have a coupling between which sockets applications have
open, which cpus those applications run on, and which core the irqs arrive
at.
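
Today that coupling has to be wired up by hand from user space.  As a
rough sketch of the idea (the rx-queue irq number and cpu below are
hypothetical): pin the thread that reads the socket and the queue's
irq to the same cpu.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Sketch only: keep packet delivery and packet consumption on one cpu. */
static int pin_self_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return sched_setaffinity(0, sizeof(set), &set);	/* 0 == this thread */
}

static int pin_irq_to_cpu(int irq, int cpu)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%x\n", 1u << cpu);
	return fclose(f);
}

int main(void)
{
	int cpu = 3, rxq_irq = 61;	/* placeholders for illustration */

	if (pin_self_to_cpu(cpu) || pin_irq_to_cpu(rxq_irq, cpu))
		perror("pin");
	/* ... open the socket and run the receive loop on this cpu ... */
	return 0;
}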

> I don't believe we should re-enable the kernel irq balancer, but
> should we consider only setting a single bit in each new interrupt's
> irq affinity?  Doing it with a random spread for the initial affinity
> would be better than setting them all to one.

Not a bad idea.  The practical problem is that we usually have the irqs
set up before we have the additional cpus.  But that isn't entirely
true; I'm thinking mostly of pre-ACPI rules.  With ACPI we do some kind
of on-demand setup of the GSI during device initialization.
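
As a toy illustration of the "single random bit per new interrupt"
idea, the initial masks could look something like this (the irq range
is made up, and a real implementation would live in the kernel and use
a proper cpumask):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	int irq;

	srand(time(NULL));
	/* assumes <= 32 cpus so a plain unsigned int can hold the mask */
	for (irq = 40; irq < 44; irq++)	/* hypothetical new irqs */
		printf("irq %d -> smp_affinity %x\n",
		       irq, 1u << (rand() % ncpus));
	return 0;
}

Compared with the 0xffffffff default, that gives each irq a single,
randomly chosen cpu instead of letting the hardware round-robin it.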

How irq threads interact also weighs in here.

Eric


Thread overview: 13+ messages
2009-10-22 21:40 Irq architecture for multi-core network driver David Daney
2009-10-22 22:05 ` Chris Friesen
2009-10-22 22:24   ` David Daney
2009-10-23  7:59     ` Eric W. Biederman
2009-10-23 17:28       ` Jesse Brandeburg
2009-10-23 23:22         ` Eric W. Biederman [this message]
2009-10-24 13:26           ` David Miller
2009-10-24  3:19         ` David Miller
2009-10-24 13:23     ` David Miller
2009-12-16 22:08     ` Chetan Loke
2009-12-16 22:30       ` David Daney
2009-12-16 23:00         ` Stephen Hemminger
2009-12-16 23:26           ` David Daney
