From: "Chris Friesen" <cfriesen@nortel.com>
To: David Daney <ddaney@caviumnetworks.com>
Cc: netdev@vger.kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-mips <linux-mips@linux-mips.org>
Subject: Re: Irq architecture for multi-core network driver.
Date: Thu, 22 Oct 2009 16:05:30 -0600
Message-ID: <4AE0D72A.4090607@nortel.com>
In-Reply-To: <4AE0D14B.1070307@caviumnetworks.com>

On 10/22/2009 03:40 PM, David Daney wrote:

> The main problem I have encountered is how to fit the interrupt
> management into the kernel framework.  Currently the interrupt source
> is connected to a single irq number.  I request_irq, and then manage
> the masking and unmasking on a per cpu basis by directly manipulating
> the interrupt controller's affinity/routing registers.  This goes
> behind the back of all the kernel's standard interrupt management
> routines.  I am looking for a better approach.
> 
> One thing that comes to mind is that I could assign a different
> interrupt number per cpu to the interrupt signal.  So instead of
> having one irq I would have 32 of them.  The driver would then do
> request_irq for all 32 irqs, and could call enable_irq and disable_irq
> to enable and disable them.  The problem with this is that there isn't
> really a single packets-ready signal, but instead 16 of them.  So if I
> go this route I would have 16 (lines) x 32 (cpus) = 512 interrupt
> numbers just for the networking hardware, which seems a bit excessive.

Does your hardware do flow-based queues?  In this model you have
multiple rx queues and the hardware hashes each incoming packet to a
single queue based on its addresses, ports, etc.  This ensures that all
the packets of a single connection always get processed in the order
they arrived at the net device.

Typically in this model you have as many interrupts as queues
(presumably 16 in your case).  Each queue is assigned an interrupt and
that interrupt is affined to a single core.
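In driver terms that usually looks roughly like the fragment below: one
request_irq() per queue, with the affinity of each vector then pinned
(commonly from userspace via /proc/irq/<n>/smp_affinity, e.g. by
irqbalance or a boot script).  This is a non-compilable sketch, not
code from any real driver; all the names (my_priv, my_rx_queue,
rxq_interrupt) are hypothetical:

```c
/* Sketch: one interrupt per rx queue, each handled on its own core.
 * Assumes the MSI/MSI-X vectors have already been allocated and
 * stored in rxq->irq. */
for (i = 0; i < priv->num_rx_queues; i++) {
	struct my_rx_queue *rxq = &priv->rxq[i];

	err = request_irq(rxq->irq, rxq_interrupt, 0, rxq->name, rxq);
	if (err)
		goto err_free_irqs;

	/* Each vector is then affined to a single CPU, e.g. by
	 * writing a one-bit mask to /proc/irq/<irq>/smp_affinity,
	 * so queue i's packets are always processed on the same core. */
}
```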

The Intel igb driver is an example of one that uses this sort of design.

Chris



Thread overview: 13+ messages
2009-10-22 21:40 Irq architecture for multi-core network driver David Daney
2009-10-22 22:05 ` Chris Friesen [this message]
2009-10-22 22:24   ` David Daney
2009-10-23  7:59     ` Eric W. Biederman
2009-10-23 17:28       ` Jesse Brandeburg
2009-10-23 23:22         ` Eric W. Biederman
2009-10-24 13:26           ` David Miller
2009-10-24  3:19         ` David Miller
2009-10-24 13:23     ` David Miller
2009-12-16 22:08     ` Chetan Loke
2009-12-16 22:30       ` David Daney
2009-12-16 23:00         ` Stephen Hemminger
2009-12-16 23:26           ` David Daney
