From: Bjorn Helgaas <helgaas@kernel.org>
To: Rob Herring <robh@kernel.org>
Cc: Lizhi Hou <lizhi.hou@amd.com>,
	linux-pci@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, max.zhen@amd.com,
	sonal.santan@amd.com, stefano.stabellini@xilinx.com
Subject: Re: [PATCH V10 2/5] PCI: Create device tree node for bridge
Date: Fri, 30 Jun 2023 11:48:21 -0500	[thread overview]
Message-ID: <20230630164821.GA483874@bhelgaas> (raw)
In-Reply-To: <20230629235226.GA92592-robh@kernel.org>

On Thu, Jun 29, 2023 at 05:52:26PM -0600, Rob Herring wrote:
> On Thu, Jun 29, 2023 at 05:56:31PM -0500, Bjorn Helgaas wrote:
> > On Thu, Jun 29, 2023 at 10:19:47AM -0700, Lizhi Hou wrote:
> > > A PCI endpoint device such as the Xilinx Alveo PCI card maps the
> > > register spaces from multiple hardware peripherals to its PCI BARs.
> > > Normally, the PCI core discovers devices and BARs using the PCI
> > > enumeration process.  There is no infrastructure to discover the
> > > hardware peripherals that are present in a PCI device and can be
> > > accessed through the PCI BARs.
> > 
> > IIUC this is basically a multi-function device except that instead of
> > each device being a separate PCI Function, they all appear in a single
> > Function.  That would mean all the devices share the same config space
> > so a single PCI Command register controls all of them, they all share
> > the same IRQs (either INTx or MSI/MSI-X), any MMIO registers are likely
> > in a shared BAR, etc., right?
> 
> Could be multiple BARs, but yes.

Where does the PCI glue live?  E.g., who ioremaps the BARs?  Who sets
up PCI interrupts?  Who enables bus mastering?  The platform driver
that claims the DT node wouldn't know that this is part of a PCI
device, so I guess the PCI driver must do all that stuff?  I don't see
it in the xmgmt-drv.c from
https://lore.kernel.org/all/20220305052304.726050-4-lizhi.hou@xilinx.com/
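
If the PCI driver has to do all of that itself, I'd expect its probe to
look roughly like this (just a sketch to check my understanding; the
function name and the BAR/IRQ choices are made up, not taken from the
series):

  static int xmgmt_probe(struct pci_dev *pdev,
                         const struct pci_device_id *id)
  {
          void __iomem *base;
          int ret;

          /* Only the PCI driver knows these peripherals sit behind
           * a PCI Function, so it owns the PCI-side setup. */
          ret = pcim_enable_device(pdev);
          if (ret)
                  return ret;
          pci_set_master(pdev);

          /* Map the shared BAR carrying the peripherals' registers */
          base = pcim_iomap(pdev, 0, 0);
          if (!base)
                  return -ENOMEM;

          /* One shared interrupt (INTx or MSI/MSI-X) for everything */
          ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
          if (ret < 0)
                  return ret;

          /* ... then create platform devices for the sub-devices */
          return 0;
  }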

> > Obviously PCI enumeration only sees the single Function and binds a
> > single driver to it.  But IIUC, you want to use existing drivers for
> > each of these sub-devices, so this series adds a DT node for the
> > single Function (using the quirks that call of_pci_make_dev_node()).
> > And I assume that when the PCI driver claims the single Function, it
> > will use that DT node to add platform devices, and those existing
> > drivers can claim those?
> 
> Yes. It will call some variant of of_platform_populate().
> 
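OK.  Presumably something like this once the quirk has attached the DT
node to the Function (a sketch; I'm assuming the node ends up in
pdev->dev.of_node):

  /* Create platform devices for the peripherals described under the
   * Function's DT node, so existing drivers can bind to them. */
  ret = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);
  if (ret)
          return ret;
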
> > I don't see the PCI driver for the single Function in this series.  Is
> > that coming?  Is this series useful without it?
> 
> https://lore.kernel.org/all/20220305052304.726050-4-lizhi.hou@xilinx.com/
> 
> I asked for things to be split up because the original series did a
> lot of new things at once.  This series only works with the QEMU PCI
> test device, which the DT unittest will use.
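
Thanks.  For anyone else following along, I take it the quirk side is
roughly this shape (sketch only; the device ID below is a placeholder,
not necessarily what the Alveo quirk in patch 3 uses):

  /* Ask the PCI core to synthesize a DT node for this Function */
  DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5020,
                          of_pci_make_dev_node);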
> 
> > > The device tree framework requires a device tree node for the PCI
> > > device so that it can generate the device tree nodes for the
> > > hardware peripherals underneath it.  Because PCI is a
> > > self-discoverable bus, there might not be a device tree node
> > > created for PCI devices.  Furthermore, if the PCI device is
> > > hot-pluggable, the device tree nodes for its parent bridges are
> > > required when it is plugged in.  Add support for generating device
> > > tree nodes for PCI bridges.
> > 
> > Can you remind me why hot-adding a PCI device requires DT nodes for
> > parent bridges?
> 
> Because the PCI device needs a DT node and we can't just put PCI devices 
> in the DT root. We have to create the bus hierarchy.
> 
> > I don't think we have those today, so maybe the DT
> > node for the PCI device requires a DT parent?  How far up does that
> > go?
> 
> All the way.
> 
> > From this patch, I guess a Root Port would be the top DT node on
> > a PCIe system, since that's the top PCI-to-PCI bridge?
> 
> Yes. Plus, above the host bridge there could be a hierarchy of nodes.

I'm missing something if it goes "all the way up," i.e., to a single
system root, but a Root Port is the top DT node.  If a Root Port is
the top, there would be several roots.
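
For concreteness, the picture I have is a walk up the bridge chain,
something like this (a sketch; create_dt_node_for_bridge() is a
hypothetical stand-in for whatever this series actually adds):

  /* Every bridge between the endpoint and its Root Port needs a DT
   * node so the endpoint's node has a complete parent chain.
   * pci_upstream_bridge() returns NULL above a Root Port, so the
   * Root Port ends up as the topmost PCI node. */
  struct pci_dev *bridge = pci_upstream_bridge(pdev);

  while (bridge) {
          create_dt_node_for_bridge(bridge);      /* hypothetical */
          bridge = pci_upstream_bridge(bridge);
  }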

> > This patch adds a DT node for *every* PCI bridge in the system.  We
> > only actually need that node for these unusual devices.  Is there some
> > way the driver for the single PCI Function could add that node when it
> > is needed?  Sorry if you've answered this in the past; maybe the
> > answer could be in the commit log or a code comment in case somebody
> > else wonders.
> 
> This was discussed early on. I don't think it would work to create the
> nodes at the time we discover we have a device that wants a DT node.
> The issue is that decisions are made in the code based on whether
> there's a DT node for a PCI device or not. It might work, but I think
> it's fragile to have nodes attached to devices at different points in
> time.

Ah.  So I guess the problem is that when we enumerate a PCI bridge, we
might do something based on the fact that it doesn't have a DT node, and
then add a DT node for it later.
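
E.g., anything keyed off dev->of_node at device-add time, roughly
(simplified; the real code paths are more involved):

  /* Decisions like DMA configuration are made when the device is
   * added, based on whether it has an OF node at that moment. */
  if (dev->of_node)
          ret = of_dma_configure(dev, dev->of_node, true);

  /* A DT node attached after this point is too late: the non-DT
   * decision has already been baked in. */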

Bjorn
