From: Bjorn Helgaas <helgaas@kernel.org>
To: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>,
"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Lukas Wunner <lukas@wunner.de>,
Chris Chiu <chris.chiu@canonical.com>,
linux-pci@vger.kernel.org
Subject: Re: [PATCH v3 1/2] PCI: Take other bus devices into account when distributing resources
Date: Mon, 5 Dec 2022 16:46:34 -0600 [thread overview]
Message-ID: <20221205224634.GA1257222@bhelgaas> (raw)
In-Reply-To: <Y42dniafcY76DctG@black.fi.intel.com>
On Mon, Dec 05, 2022 at 09:28:30AM +0200, Mika Westerberg wrote:
> On Fri, Dec 02, 2022 at 05:34:24PM -0600, Bjorn Helgaas wrote:
> > On Wed, Nov 30, 2022 at 01:22:20PM +0200, Mika Westerberg wrote:
> > > A PCI bridge may reside on a bus with other devices as well. The
> > > resource distribution code does not take this into account properly and
> > > therefore it expands the bridge resource windows too much, not leaving
> > > space for the other devices (or functions of a multifunction device) and
> > > [...]
> > > + * Reduce the space available for distribution by the
> > > + * amount required by the other devices on the same bus
> > > + * as this bridge.
> > > + */
> > > + list_for_each_entry(dev, &bus->devices, bus_list) {
> > > + int i;
> > > +
> > > + if (dev == bridge)
> > > + continue;
> >
> > Why do we skip "bridge"? Bridges are allowed to have two BARs
> > themselves, and it seems like they should be included here.
>
> Good point but then we would need to skip below the bridge window
> resources to avoid accounting them.
Seems like we should handle bridge BARs. There are definitely bridges
(PCIe for sure, I dunno about conventional PCI) that implement them,
and some drivers are starting to appear that use them for performance
monitoring, etc.
> > This only happens for buses with a single bridge. Shouldn't it happen
> > regardless of how many bridges there are?
>
> This branch specifically deals with the "upstream port" so it gives all
> the spare resources to that upstream port. The whole resource
> distribution is actually done to accommodate Thunderbolt/USB4
> topologies, which involve only PCIe devices, so we always have a PCIe
> upstream port and downstream ports, some of which are able to perform
> native PCIe hotplug. And for those ports we want to distribute the
> available resources so that they can expand to further topologies.
>
> I'm slightly concerned that forcing this to support the "generic" PCI
> case makes this rather complicated. This is something that never appears
> in regular PCI-based systems because we never distribute resources
> for those in the first place (->is_hotplug_bridge needs to be set).
This code is fairly complicated in any case :)
I understand why this is useful for Thunderbolt topologies, but it
should be equally useful for other hotplug topologies because at this
level we're purely talking about the address space needed by devices
and how that space is assigned and routed through bridges. Nothing
unique to Thunderbolt here.
I don't think we should make this PCIe-specific. ->is_hotplug_bridge
is set by a PCIe path (set_pcie_hotplug_bridge()), but also by
check_hotplug_bridge() in acpiphp, which could be any flavor of PCI,
and I don't think there's anything intrinsically PCIe-specific about
it.
> > I don't understand the "bridge" part; it looks like that's basically
> > to use 4K alignment for I/O windows and 1M for memory windows?
> > Using "bridge" seems like a clunky way to figure that out.
>
> Okay, but if not using "bridge", how exactly do you suggest doing the
> calculation?
I was thinking it would always be 4K or 1M, but I guess that's
actually not true. There are some Intel bridges that support 1K
alignment for I/O windows, and some powerpc hypervisor stuff that can
also influence the alignment. And it looks like we still need to
figure out which b_res to use, so we couldn't get rid of the IO/MEM
case analysis. So never mind, I guess ...
Bjorn
Thread overview: 10+ messages
2022-11-30 11:22 [PATCH v3 0/2] PCI: Distribute resources for root buses Mika Westerberg
2022-11-30 11:22 ` [PATCH v3 1/2] PCI: Take other bus devices into account when distributing resources Mika Westerberg
2022-12-02 17:45 ` Jonathan Cameron
2022-12-02 23:35 ` Bjorn Helgaas
2022-12-02 23:34 ` Bjorn Helgaas
2022-12-05 7:28 ` Mika Westerberg
2022-12-05 22:46 ` Bjorn Helgaas [this message]
2022-11-30 11:22 ` [PATCH v3 2/2] PCI: Distribute available resources for root buses too Mika Westerberg
2022-12-02 18:01 ` Jonathan Cameron
2022-12-02 17:07 ` [PATCH v3 0/2] PCI: Distribute resources for root buses Jonathan Cameron