From: Marc Zyngier <maz@kernel.org>
To: linux-pci@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Cc: "Toan Le" <toan@os.amperecomputing.com>,
"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
"Krzysztof Wilczyński" <kwilczynski@kernel.org>,
"Manivannan Sadhasivam" <mani@kernel.org>,
"Rob Herring" <robh@kernel.org>,
"Bjorn Helgaas" <bhelgaas@google.com>,
"Thomas Gleixner" <tglx@linutronix.de>
Subject: [PATCH v2 06/13] PCI: xgene-msi: Drop superfluous fields from xgene_msi structure
Date: Tue, 8 Jul 2025 18:33:57 +0100
Message-ID: <20250708173404.1278635-7-maz@kernel.org>
In-Reply-To: <20250708173404.1278635-1-maz@kernel.org>
The xgene_msi structure caches both the of_node of the device and the
number of CPUs, and neither piece of state is actually needed: the
of_node is only used at probe time and can simply be passed to
xgene_allocate_domains(), while the CPU count is always available from
num_possible_cpus(). Drop both fields.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
drivers/pci/controller/pci-xgene-msi.c | 23 ++++++++++-------------
1 file changed, 10 insertions(+), 13 deletions(-)
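For reference, a minimal sketch of the resulting hwirq-to-CPU helper
(illustration only, mirroring the hunk below): num_possible_cpus() is
just the weight of cpu_possible_mask and is constant once the possible
CPUs have been enumerated at boot, so there is nothing worth caching.

    /*
     * Illustration only -- mirrors the hwirq_to_cpu() hunk below.
     * num_possible_cpus() (from <linux/cpumask.h>) expands to
     * cpumask_weight(cpu_possible_mask), which does not change after
     * boot-time CPU enumeration.
     */
    static int hwirq_to_cpu(unsigned long hwirq)
    {
            /* Same value the cached msi->num_cpus field used to hold */
            return hwirq % num_possible_cpus();
    }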
diff --git a/drivers/pci/controller/pci-xgene-msi.c b/drivers/pci/controller/pci-xgene-msi.c
index 5b69286689177..50a817920cfd9 100644
--- a/drivers/pci/controller/pci-xgene-msi.c
+++ b/drivers/pci/controller/pci-xgene-msi.c
@@ -31,14 +31,12 @@ struct xgene_msi_group {
};
struct xgene_msi {
- struct device_node *node;
struct irq_domain *inner_domain;
u64 msi_addr;
void __iomem *msi_regs;
unsigned long *bitmap;
struct mutex bitmap_lock;
struct xgene_msi_group *msi_groups;
- int num_cpus;
};
/* Global data */
@@ -147,7 +145,7 @@ static void xgene_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
*/
static int hwirq_to_cpu(unsigned long hwirq)
{
- return (hwirq % xgene_msi_ctrl.num_cpus);
+ return (hwirq % num_possible_cpus());
}
static unsigned long hwirq_to_canonical_hwirq(unsigned long hwirq)
@@ -186,9 +184,9 @@ static int xgene_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
mutex_lock(&msi->bitmap_lock);
msi_irq = bitmap_find_next_zero_area(msi->bitmap, NR_MSI_VEC, 0,
- msi->num_cpus, 0);
+ num_possible_cpus(), 0);
if (msi_irq < NR_MSI_VEC)
- bitmap_set(msi->bitmap, msi_irq, msi->num_cpus);
+ bitmap_set(msi->bitmap, msi_irq, num_possible_cpus());
else
msi_irq = -ENOSPC;
@@ -214,7 +212,7 @@ static void xgene_irq_domain_free(struct irq_domain *domain,
mutex_lock(&msi->bitmap_lock);
hwirq = hwirq_to_canonical_hwirq(d->hwirq);
- bitmap_clear(msi->bitmap, hwirq, msi->num_cpus);
+ bitmap_clear(msi->bitmap, hwirq, num_possible_cpus());
mutex_unlock(&msi->bitmap_lock);
@@ -235,10 +233,11 @@ static const struct msi_parent_ops xgene_msi_parent_ops = {
.init_dev_msi_info = msi_lib_init_dev_msi_info,
};
-static int xgene_allocate_domains(struct xgene_msi *msi)
+static int xgene_allocate_domains(struct device_node *node,
+ struct xgene_msi *msi)
{
struct irq_domain_info info = {
- .fwnode = of_fwnode_handle(msi->node),
+ .fwnode = of_fwnode_handle(node),
.ops = &xgene_msi_domain_ops,
.size = NR_MSI_VEC,
.host_data = msi,
@@ -358,7 +357,7 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
int i;
int err;
- for (i = cpu; i < NR_HW_IRQS; i += msi->num_cpus) {
+ for (i = cpu; i < NR_HW_IRQS; i += num_possible_cpus()) {
msi_group = &msi->msi_groups[i];
/*
@@ -386,7 +385,7 @@ static int xgene_msi_hwirq_free(unsigned int cpu)
struct xgene_msi_group *msi_group;
int i;
- for (i = cpu; i < NR_HW_IRQS; i += msi->num_cpus) {
+ for (i = cpu; i < NR_HW_IRQS; i += num_possible_cpus()) {
msi_group = &msi->msi_groups[i];
irq_set_chained_handler_and_data(msi_group->gic_irq, NULL,
NULL);
@@ -417,8 +416,6 @@ static int xgene_msi_probe(struct platform_device *pdev)
goto error;
}
xgene_msi->msi_addr = res->start;
- xgene_msi->node = pdev->dev.of_node;
- xgene_msi->num_cpus = num_possible_cpus();
rc = xgene_msi_init_allocator(xgene_msi);
if (rc) {
@@ -426,7 +423,7 @@ static int xgene_msi_probe(struct platform_device *pdev)
goto error;
}
- rc = xgene_allocate_domains(xgene_msi);
+ rc = xgene_allocate_domains(dev_of_node(&pdev->dev), xgene_msi);
if (rc) {
dev_err(&pdev->dev, "Failed to allocate MSI domain\n");
goto error;
--
2.39.2