From: Alex Williamson <alex.williamson@redhat.com>
To: "Daniel P. Berrange" <berrange@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>,
Marcel Apfelbaum <marcel@redhat.com>,
qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
Drew Jones <drjones@redhat.com>,
mst@redhat.com, Andrea Bolognani <abologna@redhat.com>,
Gerd Hoffmann <kraxel@redhat.com>, Laine Stump <laine@redhat.com>
Subject: Re: [Qemu-devel] [PATCH RFC] docs: add PCIe devices placement guidelines
Date: Tue, 4 Oct 2016 09:45:51 -0600
Message-ID: <20161004094551.49627b65@t450s.home>
In-Reply-To: <20161004145911.GA2155@redhat.com>

On Tue, 4 Oct 2016 15:59:11 +0100
"Daniel P. Berrange" <berrange@redhat.com> wrote:
> On Mon, Sep 05, 2016 at 06:24:48PM +0200, Laszlo Ersek wrote:
> > On 09/01/16 15:22, Marcel Apfelbaum wrote:
> > > +2.3 PCI only hierarchy
> > > +======================
> > > +Legacy PCI devices can be plugged into pcie.0 as Integrated Devices
> > > +or into a DMI-PCI bridge. PCI-PCI bridges can be plugged into DMI-PCI
> > > +bridges and nested to a depth of 6-7. DMI-PCI bridges should be
> > > +plugged only into the pcie.0 bus.
> > > +
> > > +   pcie.0 bus
> > > +   ----------------------------------------------
> > > +        |                  |
> > > +   -----------    ------------------
> > > +   | PCI Dev |    | DMI-PCI BRIDGE |
> > > +   -----------    ------------------
> > > +                       |         |
> > > +                 -----------   ------------------
> > > +                 | PCI Dev |   | PCI-PCI Bridge |
> > > +                 -----------   ------------------
> > > +                                   |          |
> > > +                              -----------  -----------
> > > +                              | PCI Dev |  | PCI Dev |
> > > +                              -----------  -----------
> >
> > Works for me, but I would again elaborate a little bit on keeping the
> > hierarchy flat.
> >
> > First, in order to preserve compatibility with libvirt's current
> > behavior, let's not plug a PCI device directly into the DMI-PCI bridge,
> > even if that's possible otherwise. Let's just say
> >
> > - there should be at most one DMI-PCI bridge (if a legacy PCI hierarchy
> > is required),
>
> Why do you suggest this ? If the guest has multiple NUMA nodes
> and you're creating a PXB for each NUMA node, then it looks valid
> to want to have a DMI-PCI bridge attached to each PXB, so you can
> have legacy PCI devices on each NUMA node, instead of putting them
> all on the PCI bridge without NUMA affinity.
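Daniel's per-NUMA-node layout could be sketched like this (illustrative only; the `pxb-pcie` bus numbers, IDs, NUMA topology, and e1000 devices are assumptions for the example, not from the patch):

```shell
# Hypothetical sketch: one pxb-pcie per NUMA node, each carrying its
# own DMI-PCI bridge so legacy PCI devices keep NUMA affinity.
qemu-system-x86_64 -M q35 \
  -smp 4 -numa node,nodeid=0 -numa node,nodeid=1 \
  -device pxb-pcie,id=pxb0,bus=pcie.0,bus_nr=32,numa_node=0 \
  -device pxb-pcie,id=pxb1,bus=pcie.0,bus_nr=64,numa_node=1 \
  -device i82801b11-bridge,id=dmi0,bus=pxb0 \
  -device i82801b11-bridge,id=dmi1,bus=pxb1 \
  -device pci-bridge,id=pci0,bus=dmi0,chassis_nr=1 \
  -device pci-bridge,id=pci1,bus=dmi1,chassis_nr=2 \
  -device e1000,bus=pci0 \
  -device e1000,bus=pci1
```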
Seems like this is one of those "generic" vs "specific" device issues.
We use the DMI-to-PCI bridge as if it were a PCIe-to-PCI bridge, but
DMI is actually an Intel proprietary interface, the bridge just has the
same software interface as a PCI bridge. So while you can use it as a
generic PCIe-to-PCI bridge, it's at least going to make me cringe every
time.
> > - only PCI-PCI bridges should be plugged into the DMI-PCI bridge,
>
> What's the rationale for that, as opposed to plugging devices directly
> into the DMI-PCI bridge, which seems to work ?
IIRC, something about hotplug, but from a PCI perspective it doesn't
make any sense to me either. Same with the restriction against using
slot 0 on PCI bridges; there's no basis for that except on the root bus.
Thanks,
Alex