From: Grant Grundler <grundler@parisc-linux.org>
To: David Miller <davem@davemloft.net>
Cc: grundler@parisc-linux.org, jbarnes@virtuousgeek.org,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: Notes from LPC PCI/MSI BoF session
Date: Thu, 25 Sep 2008 09:53:43 -0600 [thread overview]
Message-ID: <20080925155343.GC2997@colo.lackof.org> (raw)
In-Reply-To: <20080923.234705.261909334.davem@davemloft.net>
On Tue, Sep 23, 2008 at 11:47:05PM -0700, David Miller wrote:
> From: Grant Grundler <grundler@parisc-linux.org>
> Date: Tue, 23 Sep 2008 23:51:16 -0600
>
> > Dave Miller (and others) have clearly stated they don't want to see
> > CPU affinity handled in the device drivers and want irqbalanced
> > to handle interrupt distribution. The problem with this is irqbalanced
> > needs to know how each device driver is binding multiple MSI to its queues.
> > Some devices could prefer several MSI go to the same processor and
> > others want each MSI bound to a different "node" (NUMA).
> >
> > Without any additional API, this means the device driver has to
> > update irqbalanced for each device it supports. We thought pci_ids.h
> > was a PITA...that would be trivial compared to maintaining this.
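The two placement policies described above can be sketched in a few lines. This is purely an illustration (the function names and data shapes are invented, not any driver or irqbalanced API): one policy packs all of a device's MSIs onto a single processor, the other spreads them round-robin across NUMA nodes.

```python
# Illustration only: hypothetical helpers, not a real kernel or
# irqbalanced interface.

def pack_on_cpu(msi_irqs, cpu):
    """Policy 1: every MSI of the device is steered to the same CPU."""
    return {irq: cpu for irq in msi_irqs}

def spread_across_nodes(msi_irqs, node_cpus):
    """Policy 2: bind each MSI to a different NUMA node, round-robin.

    node_cpus maps a node id to the list of CPUs in that node; each
    MSI is steered to the first CPU of its assigned node.
    """
    nodes = sorted(node_cpus)
    return {irq: node_cpus[nodes[i % len(nodes)]][0]
            for i, irq in enumerate(msi_irqs)}
```

For example, four MSIs on a two-node machine (node 0 = CPUs 0-1, node 1 = CPUs 2-3) alternate between CPU 0 and CPU 2 under the second policy.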
>
> We just need a consistent naming scheme for the IRQs to disseminate
> this information to irqbalanced, then there is one change to irqbalanced
> rather than one for each and every driver as you seem to suggest.
That's sort of what I proposed at the end of my email:
| A second solution I thought of later might be for the device driver to
| export (sysfs?) to irqbalanced which MSIs the driver instance owns and
| how many "domains" those MSIs can serve. irqbalanced can then write
| back into the same (sysfs?) the mapping of MSI to domains and update
| the smp_affinity mask for each of those MSI.
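The irqbalanced side of that proposal might look like the sketch below. The sysfs export is hypothetical and not shown, but /proc/irq/<N>/smp_affinity is the existing interface for steering an IRQ: it takes a hex bitmask with one bit per CPU.

```python
# Sketch of what irqbalanced would do once it has picked a CPU set
# for an MSI. Writing smp_affinity requires root on a real system.

def cpu_mask(cpus):
    """Build the hex bitmask smp_affinity expects, e.g. CPUs 0 and 2 -> '5'."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

def set_irq_affinity(irq, cpus):
    """Steer IRQ `irq` to the given CPUs via the proc interface."""
    with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
        f.write(cpu_mask(cpus))
```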
> Anything that's complicated and takes more than a paragraph or two
> to describe is not what we want.
I agree. The discussion will need more thought.
thanks,
grant
Thread overview: 7+ messages
2008-09-22 19:29 Notes from LPC PCI/MSI BoF session Jesse Barnes
2008-09-24 5:51 ` Grant Grundler
2008-09-24 6:47 ` David Miller
2008-09-25 15:53 ` Grant Grundler [this message]
2008-09-24 15:44 ` Matthew Wilcox
2008-09-25 16:15 ` Grant Grundler
2008-10-01 15:00 ` Matthew Wilcox