From: Philipp Stanner <phasta@mailbox.org>
To: Shawn Lin <shawn.lin@rock-chips.com>, phasta@kernel.org
Cc: Nirmal Patel <nirmal.patel@linux.intel.com>,
Jonathan Derrick <jonathan.derrick@linux.dev>,
Kurt Schwemmer <kurt.schwemmer@microsemi.com>,
Logan Gunthorpe <logang@deltatee.com>,
linux-pci@vger.kernel.org, Bjorn Helgaas <bhelgaas@google.com>
Subject: Re: [PATCH v2 1/3] PCI/MSI: Add Devres managed IRQ vectors allocation
Date: Mon, 20 Apr 2026 11:28:59 +0200 [thread overview]
Message-ID: <cffc558c0110ba06e26912ea10df30ea1fb288cb.camel@mailbox.org> (raw)
In-Reply-To: <c2f0d971-2150-7771-f9c6-cd02f05a5441@rock-chips.com>
On Fri, 2026-04-17 at 17:33 +0800, Shawn Lin wrote:
> Hi Philipp
>
> On Fri, 2026/04/17 16:44, Philipp Stanner wrote:
> > Hello Shawn,
> >
> > On Fri, 2026-04-17 at 10:26 +0800, Shawn Lin wrote:
> > > pcim_alloc_irq_vectors() and pcim_alloc_irq_vectors_affinity() are created for
> > > PCI device drivers which rely on the devres machinery to help clean up the IRQ
> > > vectors.
> >
> > The commit message doesn't really detail *why* you add this. The rule
> > of thumb is to first describe the problem and then describe roughly
> > what the patch does about the problem. That also makes it easier for
> > reviewers to quickly match code to intent.
> >
>
> Sure, will do.
[…]
> > > +int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> > > + unsigned int max_vecs, unsigned int flags,
> > > + struct irq_affinity *affd)
> > > +{
> > > + dev->is_msi_managed = true;
> >
> > Do you have to set this to 'false' again? If so, who does that? A small
> > comment here could describe that.
> >
> >
> > Note that for your cleanup, in the long run, once all users are ported,
> > these helper bools should not be necessary anymore because devres
> > already stores all the relevant state.
> >
> > Likewise, I would like to remove the "is_managed" and "is_pinned"
> > flags, but that can only be done once all API users are ported.
> >
> > If this is the case with 'is_msi_managed', too, there should be a
> > cleanup TODO comment in my opinion. "This can be removed once…"
> >
>
> My initial plan was to make it into 3 steps:
> (1) Set is_msi_managed true via pcim_alloc_irq_vectors*();
> (2) All pcim_enable_device() + pci_alloc_irq_vectors*() users are ported
> (3) remove is_msi_managed assignment from pcim_enable_device()
>
> As you can see, in steps 1 & 2, pcim_enable_device() +
> pci_alloc_irq_vectors*() users unconditionally set is_msi_managed to true.
pcim_enable_device() does not set that flag (if you meant that; I'm not
sure).
The flag is already set by pcim_setup_msi_release(). Why do you need a
second place to set it?
> So they don't have to set it to false again. At least it keeps the previous
> behaviour. For potential new pci_enable_device() +
> pcim_alloc_irq_vectors*() users before all the conversion is done, it's
> indeed a problem here.
The thing is that I believe that this should be orthogonal.
1. You'd keep the old functions exactly as they are now.
2. Then, you add new pcim_irq_vector() et al. functions. They won't
need any flag, because the managed state is stored by the devres
backend.
3. You port a few users of the pcim_enable_device() + alloc_irq()
function combination.
4. Once you have ported all of them, you can completely remove the
is_managed flag and related devres functionality from the old
interfaces.
I don't see why new code should interact with that flag mechanism. That
seems incorrect to me. Can you explain to me why you think it's
necessary?
>
> Does something below make sense to you?
>
> int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
>                                     unsigned int max_vecs, unsigned int flags,
>                                     struct irq_affinity *affd)
> {
>         int nvecs;
>         bool already_set_msi_managed = dev->is_msi_managed;
>
>         if (!already_set_msi_managed)
>                 dev->is_msi_managed = true;
>
>         nvecs = pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
>                                                flags, affd);
>         if (nvecs < 0 && !already_set_msi_managed)
>                 dev->is_msi_managed = false;
Hmm, not sure. See above.
P.
>
>         return nvecs;
> }
>
>
> Thanks.
>
> > (But I'm not super deep in PCI devres since many months; but I think I
> > remember that not properly dealing with that flag state even caused us
> > a bug or two in the past)
> >
> >
> > Regards,
> > Philipp
> >
> > > + return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
> > > + flags, affd);
> > > +}
> > > +EXPORT_SYMBOL(pcim_alloc_irq_vectors_affinity);
> > > +
> > > +/**
> > > * pci_irq_vector() - Get Linux IRQ number of a device interrupt vector
> > > * @dev: the PCI device to operate on
> > > * @nr: device-relative interrupt vector index (0-based); has different
> > > diff --git a/include/linux/pci.h b/include/linux/pci.h
> > > index 2c44545..3716c67 100644
> > > --- a/include/linux/pci.h
> > > +++ b/include/linux/pci.h
> > > @@ -1770,6 +1770,12 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> > > unsigned int max_vecs, unsigned int flags,
> > > struct irq_affinity *affd);
> > >
> > > +int pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
> > > + unsigned int max_vecs, unsigned int flags);
> > > +int pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> > > + unsigned int max_vecs, unsigned int flags,
> > > + struct irq_affinity *affd);
> > > +
> > > bool pci_msix_can_alloc_dyn(struct pci_dev *dev);
> > > struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,
> > > const struct irq_affinity_desc *affdesc);
> > > @@ -1812,6 +1818,22 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
> > > flags, NULL);
> > > }
> > >
> > > +static inline int
> > > +pcim_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
> > > + unsigned int max_vecs, unsigned int flags,
> > > + struct irq_affinity *aff_desc)
> > > +{
> > > + return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
> > > + flags, aff_desc);
> > > +}
> > > +static inline int
> > > +pcim_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
> > > + unsigned int max_vecs, unsigned int flags)
> > > +{
> > > + return pcim_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
> > > + flags, NULL);
> > > +}
> > > +
> > > static inline bool pci_msix_can_alloc_dyn(struct pci_dev *dev)
> > > { return false; }
> > > static inline struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index,
> >
> >
Thread overview: 11+ messages
2026-04-17 2:26 [PATCH v2 0/3] Add Devres managed IRQ vectors allocation Shawn Lin
2026-04-17 2:26 ` [PATCH v2 1/3] PCI/MSI: " Shawn Lin
2026-04-17 6:10 ` Frank Li
2026-04-17 6:32 ` Shawn Lin
2026-04-17 8:44 ` Philipp Stanner
2026-04-17 9:33 ` Shawn Lin
2026-04-20 9:28 ` Philipp Stanner [this message]
2026-04-20 9:52 ` Shawn Lin
2026-04-17 2:26 ` [PATCH v2 2/3] PCI: switchtec: Replace pci_alloc_irq_vectors() with pcim_alloc_irq_vectors() Shawn Lin
2026-04-17 15:16 ` Logan Gunthorpe
2026-04-17 2:26 ` [PATCH v2 3/3] PCI: vmd: " Shawn Lin