linux-pci.vger.kernel.org archive mirror
From: Jiang Liu <jiang.liu@linux.intel.com>
To: Thomas Gleixner <tglx@linutronix.de>,
	Keith Busch <keith.busch@intel.com>
Cc: x86@kernel.org, LKML <linux-kernel@vger.kernel.org>,
	Bryan Veal <bryan.e.veal@intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	linux-pci@vger.kernel.org
Subject: Re: [RFC PATCH 1/2] x86: PCI bus specific MSI operations
Date: Sat, 29 Aug 2015 09:46:00 +0800	[thread overview]
Message-ID: <55E10ED8.1010809@linux.intel.com> (raw)
In-Reply-To: <alpine.DEB.2.11.1508281849510.15006@nanos>

On 2015/8/29 0:54, Thomas Gleixner wrote:
> On Thu, 27 Aug 2015, Keith Busch wrote:
> 
>> This patch adds struct x86_msi_ops to x86's PCI sysdata. This gives a
>> host bridge driver the option to provide alternate MSI Data Register
>> and MSI-X Table Entry programming for devices in PCI domains that do
>> not subscribe to the usual "IOAPIC" format.
> 
> I'm not too fond of more ad hoc indirection and special casing. We
> should be able to handle this with hierarchical irq domains. Jiang
> might have an idea how to do that for your case.
Hi Thomas and Keith,
	I noticed this patch set yesterday and am still investigating a
better way to handle it. Basically I think we should build
per-domain/per-bus/per-device PCI MSI irqdomains, just as ARM has
done. That would give us a clearer picture. But I need more
information about the hardware topology to correctly build the
hierarchical irqdomain, especially the relationship between the
embedded host bridge and the IOMMU units.
	Keith, could you please provide some documentation with the
hardware details?
Thanks!
Gerry


Thread overview: 6+ messages
2015-08-27 22:39 [RFC PATCH 0/2] Driver for new PCI-e device Keith Busch
2015-08-27 22:39 ` [RFC PATCH 1/2] x86: PCI bus specific MSI operations Keith Busch
2015-08-28 16:54   ` Thomas Gleixner
2015-08-28 21:39     ` Keith Busch
2015-08-29  1:46     ` Jiang Liu [this message]
2015-08-27 22:39 ` [RFC PATCH 2/2] x86/pci: Initial commit for new VMD device driver Keith Busch

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=55E10ED8.1010809@linux.intel.com \
    --to=jiang.liu@linux.intel.com \
    --cc=bryan.e.veal@intel.com \
    --cc=dan.j.williams@intel.com \
    --cc=keith.busch@intel.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-pci@vger.kernel.org \
    --cc=tglx@linutronix.de \
    --cc=x86@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).