linux-pci.vger.kernel.org archive mirror
* [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
@ 2022-05-24 12:28 Pali Rohár
  2022-06-23 13:10 ` Pali Rohár
                   ` (3 more replies)
  0 siblings, 4 replies; 28+ messages in thread
From: Pali Rohár @ 2022-05-24 12:28 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel

Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
chained IRQ handler"") for the pci-aardvark driver, use devm_request_irq()
instead of a chained IRQ handler in the pci-mvebu.c driver.

This change fixes affinity support and allows pinning interrupts from
different PCIe controllers to different CPU cores.

Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
Signed-off-by: Pali Rohár <pali@kernel.org>
---
Hello Bjorn! This is basically the same issue as for pci-aardvark.c:
https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t

I tested this patch with pci=nomsi on the kernel command line (to force the
kernel to use legacy INTx instead of MSI) on an A385 board and verified that
I can set a different CPU affinity via the /proc/irq/XX/smp_affinity file
for every mvebu PCIe controller, and that legacy interrupts from different
cards/controllers were then handled by different CPUs.
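
For reference, a minimal user-space sketch of the pinning step described
above; the IRQ number 54 is purely illustrative and has to be looked up in
/proc/interrupts on the actual board:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *path = "/proc/irq/54/smp_affinity";	/* hypothetical IRQ */
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	/* Bitmask of allowed CPUs: 0x2 pins the interrupt to CPU 1. */
	fputs("2", f);
	return fclose(f) ? EXIT_FAILURE : EXIT_SUCCESS;
}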

I think that this is important on Armada XP platforms which have many
independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
---
 drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
index 8f76d4bda356..de67ea39fea5 100644
--- a/drivers/pci/controller/pci-mvebu.c
+++ b/drivers/pci/controller/pci-mvebu.c
@@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
 	return 0;
 }
 
-static void mvebu_pcie_irq_handler(struct irq_desc *desc)
+static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
 {
-	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
-	struct irq_chip *chip = irq_desc_get_chip(desc);
+	struct mvebu_pcie_port *port = arg;
 	struct device *dev = &port->pcie->pdev->dev;
 	u32 cause, unmask, status;
 	int i;
 
-	chained_irq_enter(chip, desc);
-
 	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
 	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
 	status = cause & unmask;
@@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
 			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
 	}
 
-	chained_irq_exit(chip, desc);
+	return status ? IRQ_HANDLED : IRQ_NONE;
 }
 
 static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
@@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
 				mvebu_pcie_powerdown(port);
 				continue;
 			}
-			irq_set_chained_handler_and_data(irq,
-							 mvebu_pcie_irq_handler,
-							 port);
+
+			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
+					       IRQF_SHARED | IRQF_NO_THREAD,
+					       port->name, port);
+			if (ret) {
+				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
+					port->name, ret);
+				irq_domain_remove(port->intx_irq_domain);
+				pci_bridge_emul_cleanup(&port->bridge);
+				devm_iounmap(dev, port->base);
+				port->base = NULL;
+				mvebu_pcie_powerdown(port);
+				continue;
+			}
 		}
 
 		/*
@@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
 
 	for (i = 0; i < pcie->nports; i++) {
 		struct mvebu_pcie_port *port = &pcie->ports[i];
-		int irq = port->intx_irq;
 
 		if (!port->base)
 			continue;
@@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
 		/* Clear all interrupt causes. */
 		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
 
-		if (irq > 0)
-			irq_set_chained_handler_and_data(irq, NULL, NULL);
-
 		/* Remove IRQ domains. */
 		if (port->intx_irq_domain)
 			irq_domain_remove(port->intx_irq_domain);
-- 
2.20.1



* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-05-24 12:28 [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler Pali Rohár
@ 2022-06-23 13:10 ` Pali Rohár
  2022-06-23 16:27 ` Bjorn Helgaas
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 28+ messages in thread
From: Pali Rohár @ 2022-06-23 13:10 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel

On Tuesday 24 May 2022 14:28:17 Pali Rohár wrote:
> Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> instead of chained IRQ handler in pci-mvebu.c driver.
> 
> This change fixes affinity support and allows to pin interrupts from
> different PCIe controllers to different CPU cores.
> 
> Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> Signed-off-by: Pali Rohár <pali@kernel.org>
> ---

PING?

> Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> 
> I tested this patch with pci=nomsi in cmdline (to force kernel to use
> legacy intx instead of MSI) on A385 and checked that I can set affinity
> via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> different CPU and legacy interrupts from different cards/controllers
> were handled by different CPUs.
> 
> I think that this is important on Armada XP platforms which have many
> independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> ---
>  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
>  1 file changed, 17 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> index 8f76d4bda356..de67ea39fea5 100644
> --- a/drivers/pci/controller/pci-mvebu.c
> +++ b/drivers/pci/controller/pci-mvebu.c
> @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
>  	return 0;
>  }
>  
> -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
>  {
> -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> -	struct irq_chip *chip = irq_desc_get_chip(desc);
> +	struct mvebu_pcie_port *port = arg;
>  	struct device *dev = &port->pcie->pdev->dev;
>  	u32 cause, unmask, status;
>  	int i;
>  
> -	chained_irq_enter(chip, desc);
> -
>  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
>  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
>  	status = cause & unmask;
> @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
>  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
>  	}
>  
> -	chained_irq_exit(chip, desc);
> +	return status ? IRQ_HANDLED : IRQ_NONE;
>  }
>  
>  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
>  				mvebu_pcie_powerdown(port);
>  				continue;
>  			}
> -			irq_set_chained_handler_and_data(irq,
> -							 mvebu_pcie_irq_handler,
> -							 port);
> +
> +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> +					       IRQF_SHARED | IRQF_NO_THREAD,
> +					       port->name, port);
> +			if (ret) {
> +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> +					port->name, ret);
> +				irq_domain_remove(port->intx_irq_domain);
> +				pci_bridge_emul_cleanup(&port->bridge);
> +				devm_iounmap(dev, port->base);
> +				port->base = NULL;
> +				mvebu_pcie_powerdown(port);
> +				continue;
> +			}
>  		}
>  
>  		/*
> @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  
>  	for (i = 0; i < pcie->nports; i++) {
>  		struct mvebu_pcie_port *port = &pcie->ports[i];
> -		int irq = port->intx_irq;
>  
>  		if (!port->base)
>  			continue;
> @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  		/* Clear all interrupt causes. */
>  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
>  
> -		if (irq > 0)
> -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> -
>  		/* Remove IRQ domains. */
>  		if (port->intx_irq_domain)
>  			irq_domain_remove(port->intx_irq_domain);
> -- 
> 2.20.1
> 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-05-24 12:28 [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler Pali Rohár
  2022-06-23 13:10 ` Pali Rohár
@ 2022-06-23 16:27 ` Bjorn Helgaas
  2022-06-23 16:32   ` Pali Rohár
  2022-07-01 14:29   ` Pali Rohár
  2022-11-06 23:25 ` Pali Rohár
  2025-02-25 16:28 ` Bjorn Helgaas
  3 siblings, 2 replies; 28+ messages in thread
From: Bjorn Helgaas @ 2022-06-23 16:27 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel

On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> instead of chained IRQ handler in pci-mvebu.c driver.
>
> This change fixes affinity support and allows to pin interrupts from
> different PCIe controllers to different CPU cores.

Several other drivers use irq_set_chained_handler_and_data().  Do any
of them need similar changes?  The commit log suggests that using
chained IRQ handlers breaks affinity support.  But perhaps that's not
the case and the real culprit is some other difference between mvebu
and the other drivers.

> Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> Signed-off-by: Pali Rohár <pali@kernel.org>
> ---
> Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> 
> I tested this patch with pci=nomsi in cmdline (to force kernel to use
> legacy intx instead of MSI) on A385 and checked that I can set affinity
> via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> different CPU and legacy interrupts from different cards/controllers
> were handled by different CPUs.
> 
> I think that this is important on Armada XP platforms which have many
> independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> ---
>  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
>  1 file changed, 17 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> index 8f76d4bda356..de67ea39fea5 100644
> --- a/drivers/pci/controller/pci-mvebu.c
> +++ b/drivers/pci/controller/pci-mvebu.c
> @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
>  	return 0;
>  }
>  
> -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
>  {
> -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> -	struct irq_chip *chip = irq_desc_get_chip(desc);
> +	struct mvebu_pcie_port *port = arg;
>  	struct device *dev = &port->pcie->pdev->dev;
>  	u32 cause, unmask, status;
>  	int i;
>  
> -	chained_irq_enter(chip, desc);
> -
>  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
>  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
>  	status = cause & unmask;
> @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
>  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
>  	}
>  
> -	chained_irq_exit(chip, desc);
> +	return status ? IRQ_HANDLED : IRQ_NONE;
>  }
>  
>  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
>  				mvebu_pcie_powerdown(port);
>  				continue;
>  			}
> -			irq_set_chained_handler_and_data(irq,
> -							 mvebu_pcie_irq_handler,
> -							 port);
> +
> +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> +					       IRQF_SHARED | IRQF_NO_THREAD,
> +					       port->name, port);
> +			if (ret) {
> +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> +					port->name, ret);
> +				irq_domain_remove(port->intx_irq_domain);
> +				pci_bridge_emul_cleanup(&port->bridge);
> +				devm_iounmap(dev, port->base);
> +				port->base = NULL;
> +				mvebu_pcie_powerdown(port);
> +				continue;
> +			}
>  		}
>  
>  		/*
> @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  
>  	for (i = 0; i < pcie->nports; i++) {
>  		struct mvebu_pcie_port *port = &pcie->ports[i];
> -		int irq = port->intx_irq;
>  
>  		if (!port->base)
>  			continue;
> @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  		/* Clear all interrupt causes. */
>  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
>  
> -		if (irq > 0)
> -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> -
>  		/* Remove IRQ domains. */
>  		if (port->intx_irq_domain)
>  			irq_domain_remove(port->intx_irq_domain);
> -- 
> 2.20.1
> 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-06-23 16:27 ` Bjorn Helgaas
@ 2022-06-23 16:32   ` Pali Rohár
  2022-06-23 16:49     ` Bjorn Helgaas
  2022-07-01 14:29   ` Pali Rohár
  1 sibling, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-06-23 16:32 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel

On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > instead of chained IRQ handler in pci-mvebu.c driver.
> >
> > This change fixes affinity support and allows to pin interrupts from
> > different PCIe controllers to different CPU cores.
> 
> Several other drivers use irq_set_chained_handler_and_data().  Do any
> of them need similar changes?

I do not know. This needs testing on HW which uses those other drivers.

> The commit log suggests that using
> chained IRQ handlers breaks affinity support.  But perhaps that's not
> the case and the real culprit is some other difference between mvebu
> and the other drivers.

It is possible. But a similar patch (a revert; linked below) was required
for aardvark. I tested the same approach on mvebu and it fixed affinity
support.

> > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > Signed-off-by: Pali Rohár <pali@kernel.org>
> > ---
> > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > 
> > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > different CPU and legacy interrupts from different cards/controllers
> > were handled by different CPUs.
> > 
> > I think that this is important on Armada XP platforms which have many
> > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > ---
> >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> >  1 file changed, 17 insertions(+), 13 deletions(-)
> > 
> > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > index 8f76d4bda356..de67ea39fea5 100644
> > --- a/drivers/pci/controller/pci-mvebu.c
> > +++ b/drivers/pci/controller/pci-mvebu.c
> > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> >  	return 0;
> >  }
> >  
> > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> >  {
> > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > +	struct mvebu_pcie_port *port = arg;
> >  	struct device *dev = &port->pcie->pdev->dev;
> >  	u32 cause, unmask, status;
> >  	int i;
> >  
> > -	chained_irq_enter(chip, desc);
> > -
> >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> >  	status = cause & unmask;
> > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> >  	}
> >  
> > -	chained_irq_exit(chip, desc);
> > +	return status ? IRQ_HANDLED : IRQ_NONE;
> >  }
> >  
> >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> >  				mvebu_pcie_powerdown(port);
> >  				continue;
> >  			}
> > -			irq_set_chained_handler_and_data(irq,
> > -							 mvebu_pcie_irq_handler,
> > -							 port);
> > +
> > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > +					       port->name, port);
> > +			if (ret) {
> > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > +					port->name, ret);
> > +				irq_domain_remove(port->intx_irq_domain);
> > +				pci_bridge_emul_cleanup(&port->bridge);
> > +				devm_iounmap(dev, port->base);
> > +				port->base = NULL;
> > +				mvebu_pcie_powerdown(port);
> > +				continue;
> > +			}
> >  		}
> >  
> >  		/*
> > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> >  
> >  	for (i = 0; i < pcie->nports; i++) {
> >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > -		int irq = port->intx_irq;
> >  
> >  		if (!port->base)
> >  			continue;
> > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> >  		/* Clear all interrupt causes. */
> >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> >  
> > -		if (irq > 0)
> > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > -
> >  		/* Remove IRQ domains. */
> >  		if (port->intx_irq_domain)
> >  			irq_domain_remove(port->intx_irq_domain);
> > -- 
> > 2.20.1
> > 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-06-23 16:32   ` Pali Rohár
@ 2022-06-23 16:49     ` Bjorn Helgaas
  2022-06-23 20:31       ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Bjorn Helgaas @ 2022-06-23 16:49 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel, Marc Zyngier

[+cc Marc, IRQ affinity vs chained IRQ handlers]

On Thu, Jun 23, 2022 at 06:32:40PM +0200, Pali Rohár wrote:
> On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite
> > > IRQ code to chained IRQ handler"") for pci-aardvark driver, use
> > > devm_request_irq() instead of chained IRQ handler in pci-mvebu.c
> > > driver.
> > >
> > > This change fixes affinity support and allows to pin interrupts
> > > from different PCIe controllers to different CPU cores.
> > 
> > Several other drivers use irq_set_chained_handler_and_data().  Do
> > any of them need similar changes?
> 
> I do not know. This needs testing on HW which use those other
> drivers.
> 
> > The commit log suggests that using chained IRQ handlers breaks
> > affinity support.  But perhaps that's not the case and the real
> > culprit is some other difference between mvebu and the other
> > drivers.
> 
> It is possible. But similar patch (revert; linked below) was
> required for aardvark. I tested same approach on mvebu and it fixed
> affinity support.

This feels like something we should understand better.  If
irq_set_chained_handler_and_data() is a problem for affinity, we
should fix it across the board in all the drivers at once.

If the real problem is something different, we should figure that out
and document it in the commit log.

I cc'd Marc in case he has time to educate us.

> > > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > > Signed-off-by: Pali Rohár <pali@kernel.org>
> > > ---
> > > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > 
> > > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > > different CPU and legacy interrupts from different cards/controllers
> > > were handled by different CPUs.
> > > 
> > > I think that this is important on Armada XP platforms which have many
> > > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > > ---
> > >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> > >  1 file changed, 17 insertions(+), 13 deletions(-)
> > > 
> > > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > > index 8f76d4bda356..de67ea39fea5 100644
> > > --- a/drivers/pci/controller/pci-mvebu.c
> > > +++ b/drivers/pci/controller/pci-mvebu.c
> > > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> > >  	return 0;
> > >  }
> > >  
> > > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> > >  {
> > > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > > +	struct mvebu_pcie_port *port = arg;
> > >  	struct device *dev = &port->pcie->pdev->dev;
> > >  	u32 cause, unmask, status;
> > >  	int i;
> > >  
> > > -	chained_irq_enter(chip, desc);
> > > -
> > >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> > >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> > >  	status = cause & unmask;
> > > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> > >  	}
> > >  
> > > -	chained_irq_exit(chip, desc);
> > > +	return status ? IRQ_HANDLED : IRQ_NONE;
> > >  }
> > >  
> > >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > >  				mvebu_pcie_powerdown(port);
> > >  				continue;
> > >  			}
> > > -			irq_set_chained_handler_and_data(irq,
> > > -							 mvebu_pcie_irq_handler,
> > > -							 port);
> > > +
> > > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > > +					       port->name, port);
> > > +			if (ret) {
> > > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > > +					port->name, ret);
> > > +				irq_domain_remove(port->intx_irq_domain);
> > > +				pci_bridge_emul_cleanup(&port->bridge);
> > > +				devm_iounmap(dev, port->base);
> > > +				port->base = NULL;
> > > +				mvebu_pcie_powerdown(port);
> > > +				continue;
> > > +			}
> > >  		}
> > >  
> > >  		/*
> > > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > >  
> > >  	for (i = 0; i < pcie->nports; i++) {
> > >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > > -		int irq = port->intx_irq;
> > >  
> > >  		if (!port->base)
> > >  			continue;
> > > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > >  		/* Clear all interrupt causes. */
> > >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> > >  
> > > -		if (irq > 0)
> > > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > -
> > >  		/* Remove IRQ domains. */
> > >  		if (port->intx_irq_domain)
> > >  			irq_domain_remove(port->intx_irq_domain);
> > > -- 
> > > 2.20.1
> > > 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-06-23 16:49     ` Bjorn Helgaas
@ 2022-06-23 20:31       ` Marc Zyngier
  2025-02-26 21:50         ` Bjorn Helgaas
  0 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-06-23 20:31 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

Hi Bjorn,

On Thu, 23 Jun 2022 17:49:42 +0100,
Bjorn Helgaas <helgaas@kernel.org> wrote:
> 
> [+cc Marc, IRQ affinity vs chained IRQ handlers]
> 
> On Thu, Jun 23, 2022 at 06:32:40PM +0200, Pali Rohár wrote:
> > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite
> > > > IRQ code to chained IRQ handler"") for pci-aardvark driver, use
> > > > devm_request_irq() instead of chained IRQ handler in pci-mvebu.c
> > > > driver.
> > > >
> > > > This change fixes affinity support and allows to pin interrupts
> > > > from different PCIe controllers to different CPU cores.
> > > 
> > > Several other drivers use irq_set_chained_handler_and_data().  Do
> > > any of them need similar changes?
> > 
> > I do not know. This needs testing on HW which use those other
> > drivers.
> > 
> > > The commit log suggests that using chained IRQ handlers breaks
> > > affinity support.  But perhaps that's not the case and the real
> > > culprit is some other difference between mvebu and the other
> > > drivers.
> > 
> > It is possible. But similar patch (revert; linked below) was
> > required for aardvark. I tested same approach on mvebu and it fixed
> > affinity support.
> 
> This feels like something we should understand better.  If
> irq_set_chained_handler_and_data() is a problem for affinity, we
> should fix it across the board in all the drivers at once.
> 
> If the real problem is something different, we should figure that out
> and document it in the commit log.
> 
> I cc'd Marc in case he has time to educate us.

Thanks for roping me in.

The problem of changing affinities for chained (or multiplexing)
interrupts is, to make it short, that it breaks the existing userspace
ABI that a change in affinity affects only the interrupt userspace
acts upon, and no other. Which is why we don't expose any affinity
setting for such an interrupt, as by definition changing its affinity
affects all the interrupts that are muxed onto it.

By turning a chained interrupt into a normal handler, people work
around the above rule and break the contract the kernel has with
userspace.

Do I think this is acceptable? No. Can something be done about this?
Maybe.

Marek asked this exact question last month[1], and I gave a detailed
explanation of what could be done to improve matters, allowing this to
happen as long as userspace is made aware of the effects, which means
creating a new userspace ABI.

I would rather see people work on a proper solution than add bad hacks
that only work in environments where the userspace ABI can be safely
ignored, such as on a closed, embedded device.
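
To make the distinction concrete, a rough sketch of the two registration
shapes (handler names here are hypothetical, not from any in-tree driver):

	/* Chained/multiplexed: the parent line never gets an irqaction, so
	 * no affinity file is exposed for it; userspace only sees the
	 * demuxed INTA..INTD children. */
	irq_set_chained_handler_and_data(parent_irq, demux_flow_handler, port);

	/* Normal handler: the parent line shows up in /proc/interrupts and
	 * its smp_affinity becomes writable; changing it implicitly moves
	 * every child muxed behind it, which is the ABI concern described
	 * above. */
	ret = devm_request_irq(dev, parent_irq, demux_irq_handler,
			       IRQF_NO_THREAD, "demo-pcie-intx", port);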

HTH,

	M.

[1] https://lore.kernel.org/all/20220502102137.764606ee@thinkpad/

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-06-23 16:27 ` Bjorn Helgaas
  2022-06-23 16:32   ` Pali Rohár
@ 2022-07-01 14:29   ` Pali Rohár
  2022-07-09 14:31     ` Pali Rohár
  1 sibling, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-07-01 14:29 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel

On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > instead of chained IRQ handler in pci-mvebu.c driver.
> >
> > This change fixes affinity support and allows to pin interrupts from
> > different PCIe controllers to different CPU cores.
> 
> Several other drivers use irq_set_chained_handler_and_data().  Do any
> of them need similar changes?  The commit log suggests that using
> chained IRQ handlers breaks affinity support.  But perhaps that's not
> the case and the real culprit is some other difference between mvebu
> and the other drivers.

And there is another reason not to use irq_set_chained_handler_and_data()
and to use devm_request_irq() instead. Armada XP has some shared interrupts,
and it looks like the irq_set_chained_handler_and_data() API does not handle
shared interrupt sources either.

I can update the commit message to mention this fact as well.
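
For illustration, here is a minimal sketch (not part of this patch) of what
sharing one line between two ports looks like with devm_request_irq(); it
reuses the existing mvebu_readl()/PCIE_INT_*_OFF helpers from pci-mvebu.c.
Each handler checks its own cause register and returns IRQ_NONE when the
line was raised by another sharer, so the IRQ core dispatches to the next
registered handler:

static irqreturn_t mvebu_pcie_port_handler(int irq, void *arg)
{
	struct mvebu_pcie_port *port = arg;
	u32 status;

	status = mvebu_readl(port, PCIE_INT_CAUSE_OFF) &
		 mvebu_readl(port, PCIE_INT_UNMASK_OFF);
	if (!status)
		return IRQ_NONE;	/* raised by another port on the shared line */

	/* ... handle the pending INTA..INTD bits as in the patch ... */
	return IRQ_HANDLED;
}

/* In each port's probe path; IRQF_SHARED lets several ports request the
 * same hwirq: */
ret = devm_request_irq(dev, irq, mvebu_pcie_port_handler,
		       IRQF_SHARED | IRQF_NO_THREAD, port->name, port);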

> > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > Signed-off-by: Pali Rohár <pali@kernel.org>
> > ---
> > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > 
> > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > different CPU and legacy interrupts from different cards/controllers
> > were handled by different CPUs.
> > 
> > I think that this is important on Armada XP platforms which have many
> > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > ---
> >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> >  1 file changed, 17 insertions(+), 13 deletions(-)
> > 
> > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > index 8f76d4bda356..de67ea39fea5 100644
> > --- a/drivers/pci/controller/pci-mvebu.c
> > +++ b/drivers/pci/controller/pci-mvebu.c
> > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> >  	return 0;
> >  }
> >  
> > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> >  {
> > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > +	struct mvebu_pcie_port *port = arg;
> >  	struct device *dev = &port->pcie->pdev->dev;
> >  	u32 cause, unmask, status;
> >  	int i;
> >  
> > -	chained_irq_enter(chip, desc);
> > -
> >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> >  	status = cause & unmask;
> > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> >  	}
> >  
> > -	chained_irq_exit(chip, desc);
> > +	return status ? IRQ_HANDLED : IRQ_NONE;
> >  }
> >  
> >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> >  				mvebu_pcie_powerdown(port);
> >  				continue;
> >  			}
> > -			irq_set_chained_handler_and_data(irq,
> > -							 mvebu_pcie_irq_handler,
> > -							 port);
> > +
> > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > +					       port->name, port);
> > +			if (ret) {
> > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > +					port->name, ret);
> > +				irq_domain_remove(port->intx_irq_domain);
> > +				pci_bridge_emul_cleanup(&port->bridge);
> > +				devm_iounmap(dev, port->base);
> > +				port->base = NULL;
> > +				mvebu_pcie_powerdown(port);
> > +				continue;
> > +			}
> >  		}
> >  
> >  		/*
> > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> >  
> >  	for (i = 0; i < pcie->nports; i++) {
> >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > -		int irq = port->intx_irq;
> >  
> >  		if (!port->base)
> >  			continue;
> > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> >  		/* Clear all interrupt causes. */
> >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> >  
> > -		if (irq > 0)
> > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > -
> >  		/* Remove IRQ domains. */
> >  		if (port->intx_irq_domain)
> >  			irq_domain_remove(port->intx_irq_domain);
> > -- 
> > 2.20.1
> > 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-07-01 14:29   ` Pali Rohár
@ 2022-07-09 14:31     ` Pali Rohár
  2022-07-09 23:44       ` Bjorn Helgaas
  0 siblings, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-07-09 14:31 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel

On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > instead of chained IRQ handler in pci-mvebu.c driver.
> > >
> > > This change fixes affinity support and allows to pin interrupts from
> > > different PCIe controllers to different CPU cores.
> > 
> > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > of them need similar changes?  The commit log suggests that using
> > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > the case and the real culprit is some other difference between mvebu
> > and the other drivers.
> 
> And there is another reason to not use irq_set_chained_handler_and_data
> and instead use devm_request_irq(). Armada XP has some interrupts
> shared and it looks like that irq_set_chained_handler_and_data() API
> does not handle shared interrupt sources too.
> 
> I can update commit message to mention also this fact.

Is there anything needed from me to improve this fix?

> > > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > > Signed-off-by: Pali Rohár <pali@kernel.org>
> > > ---
> > > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > 
> > > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > > different CPU and legacy interrupts from different cards/controllers
> > > were handled by different CPUs.
> > > 
> > > I think that this is important on Armada XP platforms which have many
> > > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > > ---
> > >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> > >  1 file changed, 17 insertions(+), 13 deletions(-)
> > > 
> > > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > > index 8f76d4bda356..de67ea39fea5 100644
> > > --- a/drivers/pci/controller/pci-mvebu.c
> > > +++ b/drivers/pci/controller/pci-mvebu.c
> > > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> > >  	return 0;
> > >  }
> > >  
> > > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> > >  {
> > > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > > +	struct mvebu_pcie_port *port = arg;
> > >  	struct device *dev = &port->pcie->pdev->dev;
> > >  	u32 cause, unmask, status;
> > >  	int i;
> > >  
> > > -	chained_irq_enter(chip, desc);
> > > -
> > >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> > >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> > >  	status = cause & unmask;
> > > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> > >  	}
> > >  
> > > -	chained_irq_exit(chip, desc);
> > > +	return status ? IRQ_HANDLED : IRQ_NONE;
> > >  }
> > >  
> > >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > >  				mvebu_pcie_powerdown(port);
> > >  				continue;
> > >  			}
> > > -			irq_set_chained_handler_and_data(irq,
> > > -							 mvebu_pcie_irq_handler,
> > > -							 port);
> > > +
> > > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > > +					       port->name, port);
> > > +			if (ret) {
> > > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > > +					port->name, ret);
> > > +				irq_domain_remove(port->intx_irq_domain);
> > > +				pci_bridge_emul_cleanup(&port->bridge);
> > > +				devm_iounmap(dev, port->base);
> > > +				port->base = NULL;
> > > +				mvebu_pcie_powerdown(port);
> > > +				continue;
> > > +			}
> > >  		}
> > >  
> > >  		/*
> > > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > >  
> > >  	for (i = 0; i < pcie->nports; i++) {
> > >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > > -		int irq = port->intx_irq;
> > >  
> > >  		if (!port->base)
> > >  			continue;
> > > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > >  		/* Clear all interrupt causes. */
> > >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> > >  
> > > -		if (irq > 0)
> > > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > -
> > >  		/* Remove IRQ domains. */
> > >  		if (port->intx_irq_domain)
> > >  			irq_domain_remove(port->intx_irq_domain);
> > > -- 
> > > 2.20.1
> > > 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-07-09 14:31     ` Pali Rohár
@ 2022-07-09 23:44       ` Bjorn Helgaas
  2022-07-10  0:06         ` Pali Rohár
  0 siblings, 1 reply; 28+ messages in thread
From: Bjorn Helgaas @ 2022-07-09 23:44 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel, Marc Zyngier

[+cc Marc, since he commented on this]

On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > >
> > > > This change fixes affinity support and allows to pin interrupts from
> > > > different PCIe controllers to different CPU cores.
> > > 
> > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > of them need similar changes?  The commit log suggests that using
> > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > the case and the real culprit is some other difference between mvebu
> > > and the other drivers.
> > 
> > And there is another reason to not use irq_set_chained_handler_and_data
> > and instead use devm_request_irq(). Armada XP has some interrupts
> > shared and it looks like that irq_set_chained_handler_and_data() API
> > does not handle shared interrupt sources too.
> > 
> > I can update commit message to mention also this fact.
> 
> Anything needed from me to improve this fix?

My impression from Marc's response [1] was that this patch would
"break the contract the kernel has with userspace" and he didn't think
this was acceptable.  But maybe I'm not understanding it correctly.

In any event, I'm waiting for you to continue that discussion.  Maybe
there's an argument for doing this even though it breaks some
userspace expectations.  If so, that should be acknowledged and
explained.  Or maybe there's an alternative implementation.  Marc
gave a link to some suggestions [2], which I haven't looked into, but
maybe you could.

[1] https://lore.kernel.org/r/874k0bf7f7.wl-maz@kernel.org
[2] https://lore.kernel.org/all/20220502102137.764606ee@thinkpad/

> > > > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > > > Signed-off-by: Pali Rohár <pali@kernel.org>
> > > > ---
> > > > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > 
> > > > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > > > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > > > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > > > different CPU and legacy interrupts from different cards/controllers
> > > > were handled by different CPUs.
> > > > 
> > > > I think that this is important on Armada XP platforms which have many
> > > > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > > > ---
> > > >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> > > >  1 file changed, 17 insertions(+), 13 deletions(-)
> > > > 
> > > > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > > > index 8f76d4bda356..de67ea39fea5 100644
> > > > --- a/drivers/pci/controller/pci-mvebu.c
> > > > +++ b/drivers/pci/controller/pci-mvebu.c
> > > > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> > > >  	return 0;
> > > >  }
> > > >  
> > > > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> > > >  {
> > > > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > > > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > > > +	struct mvebu_pcie_port *port = arg;
> > > >  	struct device *dev = &port->pcie->pdev->dev;
> > > >  	u32 cause, unmask, status;
> > > >  	int i;
> > > >  
> > > > -	chained_irq_enter(chip, desc);
> > > > -
> > > >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> > > >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> > > >  	status = cause & unmask;
> > > > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> > > >  	}
> > > >  
> > > > -	chained_irq_exit(chip, desc);
> > > > +	return status ? IRQ_HANDLED : IRQ_NONE;
> > > >  }
> > > >  
> > > >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > > > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > > >  				mvebu_pcie_powerdown(port);
> > > >  				continue;
> > > >  			}
> > > > -			irq_set_chained_handler_and_data(irq,
> > > > -							 mvebu_pcie_irq_handler,
> > > > -							 port);
> > > > +
> > > > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > > > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > > > +					       port->name, port);
> > > > +			if (ret) {
> > > > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > > > +					port->name, ret);
> > > > +				irq_domain_remove(port->intx_irq_domain);
> > > > +				pci_bridge_emul_cleanup(&port->bridge);
> > > > +				devm_iounmap(dev, port->base);
> > > > +				port->base = NULL;
> > > > +				mvebu_pcie_powerdown(port);
> > > > +				continue;
> > > > +			}
> > > >  		}
> > > >  
> > > >  		/*
> > > > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > >  
> > > >  	for (i = 0; i < pcie->nports; i++) {
> > > >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > > > -		int irq = port->intx_irq;
> > > >  
> > > >  		if (!port->base)
> > > >  			continue;
> > > > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > >  		/* Clear all interrupt causes. */
> > > >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> > > >  
> > > > -		if (irq > 0)
> > > > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > > -
> > > >  		/* Remove IRQ domains. */
> > > >  		if (port->intx_irq_domain)
> > > >  			irq_domain_remove(port->intx_irq_domain);
> > > > -- 
> > > > 2.20.1
> > > > 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-07-09 23:44       ` Bjorn Helgaas
@ 2022-07-10  0:06         ` Pali Rohár
  2022-08-29 16:51           ` Pali Rohár
  0 siblings, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-07-10  0:06 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel, Marc Zyngier

On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> [+cc Marc, since he commented on this]
> 
> On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > >
> > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > different PCIe controllers to different CPU cores.
> > > > 
> > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > of them need similar changes?  The commit log suggests that using
> > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > the case and the real culprit is some other difference between mvebu
> > > > and the other drivers.
> > > 
> > > And there is another reason to not use irq_set_chained_handler_and_data
> > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > does not handle shared interrupt sources too.
> > > 
> > > I can update commit message to mention also this fact.
> > 
> > Anything needed from me to improve this fix?
> 
> My impression from Marc's response [1] was that this patch would
> "break the contract the kernel has with userspace" and he didn't think
> this was acceptable.  But maybe I'm not understanding it correctly.

This is the argument which Marc uses when he does not have any argument.

Support for dedicated INTx interrupts in pci-mvebu.c was introduced only
recently, and I used irq_set_chained_handler_and_data() just because I
thought it was a good idea and did not know about all those issues with it.
So this patch cannot break anything.

I already converted the other driver, pci-aardvark.c, to use the
irq_set_chained_handler_and_data() API because he wanted it... But in the
end _that conversion_ broke affinity support, and so it had to be reverted:
https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t

Based on his past decisions, the above suggestions which caused _real_
breakage, and his statements that mvebu should be put into the trash, I'm
not going to listen to him anymore. The only breaking is done by him.


There are two arguments for not using irq_set_chained_handler_and_data():

1) It does not support affinity and therefore has a negative performance
   impact on Armada platforms with more CPUs and more PCIe ports.

2) It does not support shared interrupts and therefore it will break on
   hardware on which interrupt lines are shared (mostly Armada XP).

So these issues have to be fixed, and currently the only option I see is to
switch from irq_set_chained_handler_and_data() to devm_request_irq(), which
I did in this fixup patch.

> In any event, I'm waiting for you to continue that discussion.  Maybe
> there's an argument for doing this even though it breaks some
> userspace expectations.  If so, that should be acknowledged and
> explained.  Or maybe there's an alternative implementation.  Marc
> gave a link to some suggestions [2], which I haven't looked into, but
> maybe you could.

Once Marc fixes/implements that alternative in his codebase, then we can
continue discussing this direction. Until that happens, I think there is no
other way; at least I do not see one.

And I'm not going to work on any patch again for him and his codebase, as he
explicitly expressed that he is against any improvements in the mvebu drivers
and is rejecting (my) patches. This is just a waste of my time. So sorry.

> [1] https://lore.kernel.org/r/874k0bf7f7.wl-maz@kernel.org
> [2] https://lore.kernel.org/all/20220502102137.764606ee@thinkpad/
> 
> > > > > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > > > > Signed-off-by: Pali Rohár <pali@kernel.org>
> > > > > ---
> > > > > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > > 
> > > > > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > > > > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > > > > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > > > > different CPU and legacy interrupts from different cards/controllers
> > > > > were handled by different CPUs.
> > > > > 
> > > > > I think that this is important on Armada XP platforms which have many
> > > > > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > > > > ---
> > > > >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> > > > >  1 file changed, 17 insertions(+), 13 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > > > > index 8f76d4bda356..de67ea39fea5 100644
> > > > > --- a/drivers/pci/controller/pci-mvebu.c
> > > > > +++ b/drivers/pci/controller/pci-mvebu.c
> > > > > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> > > > >  	return 0;
> > > > >  }
> > > > >  
> > > > > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > > > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> > > > >  {
> > > > > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > > > > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > > > > +	struct mvebu_pcie_port *port = arg;
> > > > >  	struct device *dev = &port->pcie->pdev->dev;
> > > > >  	u32 cause, unmask, status;
> > > > >  	int i;
> > > > >  
> > > > > -	chained_irq_enter(chip, desc);
> > > > > -
> > > > >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> > > > >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> > > > >  	status = cause & unmask;
> > > > > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > > >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> > > > >  	}
> > > > >  
> > > > > -	chained_irq_exit(chip, desc);
> > > > > +	return status ? IRQ_HANDLED : IRQ_NONE;
> > > > >  }
> > > > >  
> > > > >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > > > > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > > > >  				mvebu_pcie_powerdown(port);
> > > > >  				continue;
> > > > >  			}
> > > > > -			irq_set_chained_handler_and_data(irq,
> > > > > -							 mvebu_pcie_irq_handler,
> > > > > -							 port);
> > > > > +
> > > > > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > > > > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > > > > +					       port->name, port);
> > > > > +			if (ret) {
> > > > > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > > > > +					port->name, ret);
> > > > > +				irq_domain_remove(port->intx_irq_domain);
> > > > > +				pci_bridge_emul_cleanup(&port->bridge);
> > > > > +				devm_iounmap(dev, port->base);
> > > > > +				port->base = NULL;
> > > > > +				mvebu_pcie_powerdown(port);
> > > > > +				continue;
> > > > > +			}
> > > > >  		}
> > > > >  
> > > > >  		/*
> > > > > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > > >  
> > > > >  	for (i = 0; i < pcie->nports; i++) {
> > > > >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > > > > -		int irq = port->intx_irq;
> > > > >  
> > > > >  		if (!port->base)
> > > > >  			continue;
> > > > > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > > >  		/* Clear all interrupt causes. */
> > > > >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> > > > >  
> > > > > -		if (irq > 0)
> > > > > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > > > -
> > > > >  		/* Remove IRQ domains. */
> > > > >  		if (port->intx_irq_domain)
> > > > >  			irq_domain_remove(port->intx_irq_domain);
> > > > > -- 
> > > > > 2.20.1
> > > > > 


* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-07-10  0:06         ` Pali Rohár
@ 2022-08-29 16:51           ` Pali Rohár
  2022-09-11 15:45             ` Pali Rohár
  0 siblings, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-08-29 16:51 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel, Marc Zyngier

On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > [+cc Marc, since he commented on this]
> > 
> > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > >
> > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > different PCIe controllers to different CPU cores.
> > > > > 
> > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > of them need similar changes?  The commit log suggests that using
> > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > the case and the real culprit is some other difference between mvebu
> > > > > and the other drivers.
> > > > 
> > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > does not handle shared interrupt sources too.
> > > > 
> > > > I can update commit message to mention also this fact.
> > > 
> > > Anything needed from me to improve this fix?
> > 
> > My impression from Marc's response [1] was that this patch would
> > "break the contract the kernel has with userspace" and he didn't think
> > this was acceptable.  But maybe I'm not understanding it correctly.
> 
> This is argument which Marc use when he does not have any argument.
> 
> Support for dedicated INTx into pci-mvebu.c was introduced just recently
> and I used irq_set_chained_handler_and_data() just because I thought it
> is a good idea and did not know about all those issues with it. So there
> cannot be any breakage by this patch.
> 
> I already converted other pci-aardvark.c driver to use
> irq_set_chained_handler_and_data() API because wanted it... But at the
> end _that conversion_ caused breakage of afinity support and so this
> conversion had to be reverted:
> https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> 
> Based on his past decisions, above suggestions which cause _real_
> breakage and his expressions like mvebu should be put into the trash,
> I'm not going to listen him anymore. The only breaking is done by him.
> 
> 
> There are two arguments why to not use irq_set_chained_handler_and_data:
> 
> 1) It does not support afinity and therefore has negative performance
>    impact on Armada platforms with more CPUs and more PCIe ports.
> 
> 2) It does not support shared interrupts and therefore it will break
>    hardware on which interrupt lines are shares (mostly Armada XP).
> 
> So these issues have to be fixed and currently I see only option to
> switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> did in this fixup patch.

Any progress here? This patch has been waiting since the end of May,
and if anything is broken here it is this practice of ignoring the
reported issues and the proposed patch. Do you have a better solution
for fixing commit ec075262648f?

> > In any event, I'm waiting for you to continue that discussion.  Maybe
> > there's an argument for doing this even though it breaks some
> > userspace expectations.  If so, that should be acknowledged and
> > explained.  Or maybe there's an alternative implementation.  Marc
> > gave a link to some suggestions [2], which I haven't looked into, but
> > maybe you could.
> 
> Once Marc fix/implement that alternative implementation in his codebase
> then we can continue discuss this direction. Until that happens I think
> there is no other way, at least I do not see them.
> 
> And I'm not going to work again any patch for him and his codebase as he
> explicitly expressed that is against any improvements in mvebu drivers
> and is rejecting (my) patches. This is just waste of my time. So sorry.
> 
> > [1] https://lore.kernel.org/r/874k0bf7f7.wl-maz@kernel.org
> > [2] https://lore.kernel.org/all/20220502102137.764606ee@thinkpad/
> > 
> > > > > > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > > > > > Signed-off-by: Pali Rohár <pali@kernel.org>
> > > > > > ---
> > > > > > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > > > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > > > 
> > > > > > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > > > > > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > > > > > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > > > > > different CPU and legacy interrupts from different cards/controllers
> > > > > > were handled by different CPUs.
> > > > > > 
> > > > > > I think that this is important on Armada XP platforms which have many
> > > > > > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > > > > > ---
> > > > > >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> > > > > >  1 file changed, 17 insertions(+), 13 deletions(-)
> > > > > > 
> > > > > > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > > > > > index 8f76d4bda356..de67ea39fea5 100644
> > > > > > --- a/drivers/pci/controller/pci-mvebu.c
> > > > > > +++ b/drivers/pci/controller/pci-mvebu.c
> > > > > > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> > > > > >  	return 0;
> > > > > >  }
> > > > > >  
> > > > > > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > > > > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> > > > > >  {
> > > > > > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > > > > > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > > > > > +	struct mvebu_pcie_port *port = arg;
> > > > > >  	struct device *dev = &port->pcie->pdev->dev;
> > > > > >  	u32 cause, unmask, status;
> > > > > >  	int i;
> > > > > >  
> > > > > > -	chained_irq_enter(chip, desc);
> > > > > > -
> > > > > >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> > > > > >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> > > > > >  	status = cause & unmask;
> > > > > > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > > > >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> > > > > >  	}
> > > > > >  
> > > > > > -	chained_irq_exit(chip, desc);
> > > > > > +	return status ? IRQ_HANDLED : IRQ_NONE;
> > > > > >  }
> > > > > >  
> > > > > >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > > > > > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > > > > >  				mvebu_pcie_powerdown(port);
> > > > > >  				continue;
> > > > > >  			}
> > > > > > -			irq_set_chained_handler_and_data(irq,
> > > > > > -							 mvebu_pcie_irq_handler,
> > > > > > -							 port);
> > > > > > +
> > > > > > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > > > > > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > > > > > +					       port->name, port);
> > > > > > +			if (ret) {
> > > > > > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > > > > > +					port->name, ret);
> > > > > > +				irq_domain_remove(port->intx_irq_domain);
> > > > > > +				pci_bridge_emul_cleanup(&port->bridge);
> > > > > > +				devm_iounmap(dev, port->base);
> > > > > > +				port->base = NULL;
> > > > > > +				mvebu_pcie_powerdown(port);
> > > > > > +				continue;
> > > > > > +			}
> > > > > >  		}
> > > > > >  
> > > > > >  		/*
> > > > > > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > > > >  
> > > > > >  	for (i = 0; i < pcie->nports; i++) {
> > > > > >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > > > > > -		int irq = port->intx_irq;
> > > > > >  
> > > > > >  		if (!port->base)
> > > > > >  			continue;
> > > > > > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > > > >  		/* Clear all interrupt causes. */
> > > > > >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> > > > > >  
> > > > > > -		if (irq > 0)
> > > > > > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > > > > -
> > > > > >  		/* Remove IRQ domains. */
> > > > > >  		if (port->intx_irq_domain)
> > > > > >  			irq_domain_remove(port->intx_irq_domain);
> > > > > > -- 
> > > > > > 2.20.1
> > > > > > 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-08-29 16:51           ` Pali Rohár
@ 2022-09-11 15:45             ` Pali Rohár
  2022-09-12  8:01               ` Lorenzo Pieralisi
  0 siblings, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-09-11 15:45 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel, Marc Zyngier

On Monday 29 August 2022 18:51:09 Pali Rohár wrote:
> On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> > On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > > [+cc Marc, since he commented on this]
> > > 
> > > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > > >
> > > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > > different PCIe controllers to different CPU cores.
> > > > > > 
> > > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > > of them need similar changes?  The commit log suggests that using
> > > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > > the case and the real culprit is some other difference between mvebu
> > > > > > and the other drivers.
> > > > > 
> > > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > > does not handle shared interrupt sources too.
> > > > > 
> > > > > I can update commit message to mention also this fact.
> > > > 
> > > > Anything needed from me to improve this fix?
> > > 
> > > My impression from Marc's response [1] was that this patch would
> > > "break the contract the kernel has with userspace" and he didn't think
> > > this was acceptable.  But maybe I'm not understanding it correctly.
> > 
> > This is argument which Marc use when he does not have any argument.
> > 
> > Support for dedicated INTx into pci-mvebu.c was introduced just recently
> > and I used irq_set_chained_handler_and_data() just because I thought it
> > is a good idea and did not know about all those issues with it. So there
> > cannot be any breakage by this patch.
> > 
> > I already converted other pci-aardvark.c driver to use
> > irq_set_chained_handler_and_data() API because wanted it... But at the
> > end _that conversion_ caused breakage of afinity support and so this
> > conversion had to be reverted:
> > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > 
> > Based on his past decisions, above suggestions which cause _real_
> > breakage and his expressions like mvebu should be put into the trash,
> > I'm not going to listen him anymore. The only breaking is done by him.
> > 
> > 
> > There are two arguments why to not use irq_set_chained_handler_and_data:
> > 
> > 1) It does not support afinity and therefore has negative performance
> >    impact on Armada platforms with more CPUs and more PCIe ports.
> > 
> > 2) It does not support shared interrupts and therefore it will break
> >    hardware on which interrupt lines are shares (mostly Armada XP).
> > 
> > So these issues have to be fixed and currently I see only option to
> > switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> > did in this fixup patch.
> 
> Any progress here? This patch is waiting here since end of May and if
> something is going to be broken then it is this fact of ignoring reported
> issues and proposed patch. Do you better solution how to fix commit
> ec075262648f?

After two weeks I am sending a reminder about this fix patch again...

> > > In any event, I'm waiting for you to continue that discussion.  Maybe
> > > there's an argument for doing this even though it breaks some
> > > userspace expectations.  If so, that should be acknowledged and
> > > explained.  Or maybe there's an alternative implementation.  Marc
> > > gave a link to some suggestions [2], which I haven't looked into, but
> > > maybe you could.
> > 
> > Once Marc fix/implement that alternative implementation in his codebase
> > then we can continue discuss this direction. Until that happens I think
> > there is no other way, at least I do not see them.
> > 
> > And I'm not going to work again any patch for him and his codebase as he
> > explicitly expressed that is against any improvements in mvebu drivers
> > and is rejecting (my) patches. This is just waste of my time. So sorry.
> > 
> > > [1] https://lore.kernel.org/r/874k0bf7f7.wl-maz@kernel.org
> > > [2] https://lore.kernel.org/all/20220502102137.764606ee@thinkpad/
> > > 
> > > > > > > Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> > > > > > > Signed-off-by: Pali Rohár <pali@kernel.org>
> > > > > > > ---
> > > > > > > Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> > > > > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > > > > 
> > > > > > > I tested this patch with pci=nomsi in cmdline (to force kernel to use
> > > > > > > legacy intx instead of MSI) on A385 and checked that I can set affinity
> > > > > > > via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> > > > > > > different CPU and legacy interrupts from different cards/controllers
> > > > > > > were handled by different CPUs.
> > > > > > > 
> > > > > > > I think that this is important on Armada XP platforms which have many
> > > > > > > independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> > > > > > > ---
> > > > > > >  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
> > > > > > >  1 file changed, 17 insertions(+), 13 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> > > > > > > index 8f76d4bda356..de67ea39fea5 100644
> > > > > > > --- a/drivers/pci/controller/pci-mvebu.c
> > > > > > > +++ b/drivers/pci/controller/pci-mvebu.c
> > > > > > > @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
> > > > > > >  	return 0;
> > > > > > >  }
> > > > > > >  
> > > > > > > -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > > > > > +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
> > > > > > >  {
> > > > > > > -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> > > > > > > -	struct irq_chip *chip = irq_desc_get_chip(desc);
> > > > > > > +	struct mvebu_pcie_port *port = arg;
> > > > > > >  	struct device *dev = &port->pcie->pdev->dev;
> > > > > > >  	u32 cause, unmask, status;
> > > > > > >  	int i;
> > > > > > >  
> > > > > > > -	chained_irq_enter(chip, desc);
> > > > > > > -
> > > > > > >  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
> > > > > > >  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
> > > > > > >  	status = cause & unmask;
> > > > > > > @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> > > > > > >  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
> > > > > > >  	}
> > > > > > >  
> > > > > > > -	chained_irq_exit(chip, desc);
> > > > > > > +	return status ? IRQ_HANDLED : IRQ_NONE;
> > > > > > >  }
> > > > > > >  
> > > > > > >  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> > > > > > > @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > > > > > >  				mvebu_pcie_powerdown(port);
> > > > > > >  				continue;
> > > > > > >  			}
> > > > > > > -			irq_set_chained_handler_and_data(irq,
> > > > > > > -							 mvebu_pcie_irq_handler,
> > > > > > > -							 port);
> > > > > > > +
> > > > > > > +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> > > > > > > +					       IRQF_SHARED | IRQF_NO_THREAD,
> > > > > > > +					       port->name, port);
> > > > > > > +			if (ret) {
> > > > > > > +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> > > > > > > +					port->name, ret);
> > > > > > > +				irq_domain_remove(port->intx_irq_domain);
> > > > > > > +				pci_bridge_emul_cleanup(&port->bridge);
> > > > > > > +				devm_iounmap(dev, port->base);
> > > > > > > +				port->base = NULL;
> > > > > > > +				mvebu_pcie_powerdown(port);
> > > > > > > +				continue;
> > > > > > > +			}
> > > > > > >  		}
> > > > > > >  
> > > > > > >  		/*
> > > > > > > @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > > > > >  
> > > > > > >  	for (i = 0; i < pcie->nports; i++) {
> > > > > > >  		struct mvebu_pcie_port *port = &pcie->ports[i];
> > > > > > > -		int irq = port->intx_irq;
> > > > > > >  
> > > > > > >  		if (!port->base)
> > > > > > >  			continue;
> > > > > > > @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > > > > >  		/* Clear all interrupt causes. */
> > > > > > >  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> > > > > > >  
> > > > > > > -		if (irq > 0)
> > > > > > > -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > > > > > -
> > > > > > >  		/* Remove IRQ domains. */
> > > > > > >  		if (port->intx_irq_domain)
> > > > > > >  			irq_domain_remove(port->intx_irq_domain);
> > > > > > > -- 
> > > > > > > 2.20.1
> > > > > > > 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-09-11 15:45             ` Pali Rohár
@ 2022-09-12  8:01               ` Lorenzo Pieralisi
  2022-09-12  8:48                 ` Pali Rohár
  0 siblings, 1 reply; 28+ messages in thread
From: Lorenzo Pieralisi @ 2022-09-12  8:01 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, linux-pci, linux-kernel,
	Marc Zyngier

On Sun, Sep 11, 2022 at 05:45:16PM +0200, Pali Rohár wrote:
> On Monday 29 August 2022 18:51:09 Pali Rohár wrote:
> > On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> > > On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > > > [+cc Marc, since he commented on this]
> > > > 
> > > > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > > > >
> > > > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > > > different PCIe controllers to different CPU cores.
> > > > > > > 
> > > > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > > > of them need similar changes?  The commit log suggests that using
> > > > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > > > the case and the real culprit is some other difference between mvebu
> > > > > > > and the other drivers.
> > > > > > 
> > > > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > > > does not handle shared interrupt sources too.
> > > > > > 
> > > > > > I can update commit message to mention also this fact.
> > > > > 
> > > > > Anything needed from me to improve this fix?
> > > > 
> > > > My impression from Marc's response [1] was that this patch would
> > > > "break the contract the kernel has with userspace" and he didn't think
> > > > this was acceptable.  But maybe I'm not understanding it correctly.
> > > 
> > > This is argument which Marc use when he does not have any argument.
> > > 
> > > Support for dedicated INTx into pci-mvebu.c was introduced just recently
> > > and I used irq_set_chained_handler_and_data() just because I thought it
> > > is a good idea and did not know about all those issues with it. So there
> > > cannot be any breakage by this patch.
> > > 
> > > I already converted other pci-aardvark.c driver to use
> > > irq_set_chained_handler_and_data() API because wanted it... But at the
> > > end _that conversion_ caused breakage of afinity support and so this
> > > conversion had to be reverted:
> > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > 
> > > Based on his past decisions, above suggestions which cause _real_
> > > breakage and his expressions like mvebu should be put into the trash,
> > > I'm not going to listen him anymore. The only breaking is done by him.
> > > 
> > > 
> > > There are two arguments why to not use irq_set_chained_handler_and_data:
> > > 
> > > 1) It does not support afinity and therefore has negative performance
> > >    impact on Armada platforms with more CPUs and more PCIe ports.
> > > 
> > > 2) It does not support shared interrupts and therefore it will break
> > >    hardware on which interrupt lines are shares (mostly Armada XP).
> > > 
> > > So these issues have to be fixed and currently I see only option to
> > > switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> > > did in this fixup patch.
> > 
> > Any progress here? This patch is waiting here since end of May and if
> > something is going to be broken then it is this fact of ignoring reported
> > issues and proposed patch. Do you better solution how to fix commit
> > ec075262648f?
> 
> After two weeks I'm reminding this fix patch again...

There is no point complaining about something you were asked
to change, really - there is not.

You were given feedback; feel free to ignore it, but that won't help
get this patch upstream - it is as simple as that, sorry.

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-09-12  8:01               ` Lorenzo Pieralisi
@ 2022-09-12  8:48                 ` Pali Rohár
  2022-09-12  8:55                   ` Lorenzo Pieralisi
  0 siblings, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-09-12  8:48 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Bjorn Helgaas, Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, linux-pci, linux-kernel,
	Marc Zyngier

On Monday 12 September 2022 10:01:32 Lorenzo Pieralisi wrote:
> On Sun, Sep 11, 2022 at 05:45:16PM +0200, Pali Rohár wrote:
> > On Monday 29 August 2022 18:51:09 Pali Rohár wrote:
> > > On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> > > > On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > > > > [+cc Marc, since he commented on this]
> > > > > 
> > > > > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > > > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > > > > >
> > > > > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > > > > different PCIe controllers to different CPU cores.
> > > > > > > > 
> > > > > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > > > > of them need similar changes?  The commit log suggests that using
> > > > > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > > > > the case and the real culprit is some other difference between mvebu
> > > > > > > > and the other drivers.
> > > > > > > 
> > > > > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > > > > does not handle shared interrupt sources too.
> > > > > > > 
> > > > > > > I can update commit message to mention also this fact.
> > > > > > 
> > > > > > Anything needed from me to improve this fix?
> > > > > 
> > > > > My impression from Marc's response [1] was that this patch would
> > > > > "break the contract the kernel has with userspace" and he didn't think
> > > > > this was acceptable.  But maybe I'm not understanding it correctly.
> > > > 
> > > > This is argument which Marc use when he does not have any argument.
> > > > 
> > > > Support for dedicated INTx into pci-mvebu.c was introduced just recently
> > > > and I used irq_set_chained_handler_and_data() just because I thought it
> > > > is a good idea and did not know about all those issues with it. So there
> > > > cannot be any breakage by this patch.
> > > > 
> > > > I already converted other pci-aardvark.c driver to use
> > > > irq_set_chained_handler_and_data() API because wanted it... But at the
> > > > end _that conversion_ caused breakage of afinity support and so this
> > > > conversion had to be reverted:
> > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > 
> > > > Based on his past decisions, above suggestions which cause _real_
> > > > breakage and his expressions like mvebu should be put into the trash,
> > > > I'm not going to listen him anymore. The only breaking is done by him.
> > > > 
> > > > 
> > > > There are two arguments why to not use irq_set_chained_handler_and_data:
> > > > 
> > > > 1) It does not support afinity and therefore has negative performance
> > > >    impact on Armada platforms with more CPUs and more PCIe ports.
> > > > 
> > > > 2) It does not support shared interrupts and therefore it will break
> > > >    hardware on which interrupt lines are shares (mostly Armada XP).
> > > > 
> > > > So these issues have to be fixed and currently I see only option to
> > > > switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> > > > did in this fixup patch.
> > > 
> > > Any progress here? This patch is waiting here since end of May and if
> > > something is going to be broken then it is this fact of ignoring reported
> > > issues and proposed patch. Do you better solution how to fix commit
> > > ec075262648f?
> > 
> > After two weeks I'm reminding this fix patch again...
> 
> There is no point complaining about something you were asked
> to change, really - there is not.
> 
> You were given feedback, feel free to ignore it, it won't help
> getting this patch upstream - it is as simple as that, sorry.
> 
> Thanks,
> Lorenzo

I'm not sure I understand you. Do you mean that all patches which
depend on this one are now automatically rejected, or what?

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-09-12  8:48                 ` Pali Rohár
@ 2022-09-12  8:55                   ` Lorenzo Pieralisi
  2022-09-12  9:03                     ` Pali Rohár
  0 siblings, 1 reply; 28+ messages in thread
From: Lorenzo Pieralisi @ 2022-09-12  8:55 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, linux-pci, linux-kernel,
	Marc Zyngier

On Mon, Sep 12, 2022 at 10:48:08AM +0200, Pali Rohár wrote:
> On Monday 12 September 2022 10:01:32 Lorenzo Pieralisi wrote:
> > On Sun, Sep 11, 2022 at 05:45:16PM +0200, Pali Rohár wrote:
> > > On Monday 29 August 2022 18:51:09 Pali Rohár wrote:
> > > > On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> > > > > On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > > > > > [+cc Marc, since he commented on this]
> > > > > > 
> > > > > > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > > > > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > > > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > > > > > >
> > > > > > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > > > > > different PCIe controllers to different CPU cores.
> > > > > > > > > 
> > > > > > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > > > > > of them need similar changes?  The commit log suggests that using
> > > > > > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > > > > > the case and the real culprit is some other difference between mvebu
> > > > > > > > > and the other drivers.
> > > > > > > > 
> > > > > > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > > > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > > > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > > > > > does not handle shared interrupt sources too.
> > > > > > > > 
> > > > > > > > I can update commit message to mention also this fact.
> > > > > > > 
> > > > > > > Anything needed from me to improve this fix?
> > > > > > 
> > > > > > My impression from Marc's response [1] was that this patch would
> > > > > > "break the contract the kernel has with userspace" and he didn't think
> > > > > > this was acceptable.  But maybe I'm not understanding it correctly.
> > > > > 
> > > > > This is argument which Marc use when he does not have any argument.
> > > > > 
> > > > > Support for dedicated INTx into pci-mvebu.c was introduced just recently
> > > > > and I used irq_set_chained_handler_and_data() just because I thought it
> > > > > is a good idea and did not know about all those issues with it. So there
> > > > > cannot be any breakage by this patch.
> > > > > 
> > > > > I already converted other pci-aardvark.c driver to use
> > > > > irq_set_chained_handler_and_data() API because wanted it... But at the
> > > > > end _that conversion_ caused breakage of afinity support and so this
> > > > > conversion had to be reverted:
> > > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > > 
> > > > > Based on his past decisions, above suggestions which cause _real_
> > > > > breakage and his expressions like mvebu should be put into the trash,
> > > > > I'm not going to listen him anymore. The only breaking is done by him.
> > > > > 
> > > > > 
> > > > > There are two arguments why to not use irq_set_chained_handler_and_data:
> > > > > 
> > > > > 1) It does not support afinity and therefore has negative performance
> > > > >    impact on Armada platforms with more CPUs and more PCIe ports.
> > > > > 
> > > > > 2) It does not support shared interrupts and therefore it will break
> > > > >    hardware on which interrupt lines are shares (mostly Armada XP).
> > > > > 
> > > > > So these issues have to be fixed and currently I see only option to
> > > > > switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> > > > > did in this fixup patch.
> > > > 
> > > > Any progress here? This patch is waiting here since end of May and if
> > > > something is going to be broken then it is this fact of ignoring reported
> > > > issues and proposed patch. Do you better solution how to fix commit
> > > > ec075262648f?
> > > 
> > > After two weeks I'm reminding this fix patch again...
> > 
> > There is no point complaining about something you were asked
> > to change, really - there is not.
> > 
> > You were given feedback, feel free to ignore it, it won't help
> > getting this patch upstream - it is as simple as that, sorry.
> > 
> > Thanks,
> > Lorenzo
> 
> I'm not sure if I understand you, what do you mean that all patches
> which depends on this are now automatically rejected or what?

I am not merging this code unless it is acked by an IRQ maintainer.

Is that clear enough?

Thanks,
Lorenzo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-09-12  8:55                   ` Lorenzo Pieralisi
@ 2022-09-12  9:03                     ` Pali Rohár
  2022-11-11 12:56                       ` Lorenzo Pieralisi
  0 siblings, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-09-12  9:03 UTC (permalink / raw)
  To: Lorenzo Pieralisi
  Cc: Bjorn Helgaas, Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, linux-pci, linux-kernel,
	Marc Zyngier

On Monday 12 September 2022 10:55:47 Lorenzo Pieralisi wrote:
> On Mon, Sep 12, 2022 at 10:48:08AM +0200, Pali Rohár wrote:
> > On Monday 12 September 2022 10:01:32 Lorenzo Pieralisi wrote:
> > > On Sun, Sep 11, 2022 at 05:45:16PM +0200, Pali Rohár wrote:
> > > > On Monday 29 August 2022 18:51:09 Pali Rohár wrote:
> > > > > On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> > > > > > On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > > > > > > [+cc Marc, since he commented on this]
> > > > > > > 
> > > > > > > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > > > > > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > > > > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > > > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > > > > > > >
> > > > > > > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > > > > > > different PCIe controllers to different CPU cores.
> > > > > > > > > > 
> > > > > > > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > > > > > > of them need similar changes?  The commit log suggests that using
> > > > > > > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > > > > > > the case and the real culprit is some other difference between mvebu
> > > > > > > > > > and the other drivers.
> > > > > > > > > 
> > > > > > > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > > > > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > > > > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > > > > > > does not handle shared interrupt sources too.
> > > > > > > > > 
> > > > > > > > > I can update commit message to mention also this fact.
> > > > > > > > 
> > > > > > > > Anything needed from me to improve this fix?
> > > > > > > 
> > > > > > > My impression from Marc's response [1] was that this patch would
> > > > > > > "break the contract the kernel has with userspace" and he didn't think
> > > > > > > this was acceptable.  But maybe I'm not understanding it correctly.
> > > > > > 
> > > > > > This is argument which Marc use when he does not have any argument.
> > > > > > 
> > > > > > Support for dedicated INTx into pci-mvebu.c was introduced just recently
> > > > > > and I used irq_set_chained_handler_and_data() just because I thought it
> > > > > > is a good idea and did not know about all those issues with it. So there
> > > > > > cannot be any breakage by this patch.
> > > > > > 
> > > > > > I already converted other pci-aardvark.c driver to use
> > > > > > irq_set_chained_handler_and_data() API because wanted it... But at the
> > > > > > end _that conversion_ caused breakage of afinity support and so this
> > > > > > conversion had to be reverted:
> > > > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > > > 
> > > > > > Based on his past decisions, above suggestions which cause _real_
> > > > > > breakage and his expressions like mvebu should be put into the trash,
> > > > > > I'm not going to listen him anymore. The only breaking is done by him.
> > > > > > 
> > > > > > 
> > > > > > There are two arguments why to not use irq_set_chained_handler_and_data:
> > > > > > 
> > > > > > 1) It does not support afinity and therefore has negative performance
> > > > > >    impact on Armada platforms with more CPUs and more PCIe ports.
> > > > > > 
> > > > > > 2) It does not support shared interrupts and therefore it will break
> > > > > >    hardware on which interrupt lines are shares (mostly Armada XP).
> > > > > > 
> > > > > > So these issues have to be fixed and currently I see only option to
> > > > > > switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> > > > > > did in this fixup patch.
> > > > > 
> > > > > Any progress here? This patch is waiting here since end of May and if
> > > > > something is going to be broken then it is this fact of ignoring reported
> > > > > issues and proposed patch. Do you better solution how to fix commit
> > > > > ec075262648f?
> > > > 
> > > > After two weeks I'm reminding this fix patch again...
> > > 
> > > There is no point complaining about something you were asked
> > > to change, really - there is not.
> > > 
> > > You were given feedback, feel free to ignore it, it won't help
> > > getting this patch upstream - it is as simple as that, sorry.
> > > 
> > > Thanks,
> > > Lorenzo
> > 
> > I'm not sure if I understand you, what do you mean that all patches
> > which depends on this are now automatically rejected or what?
> 
> I am not merging this code unless it is acked by an IRQ maintainer.
> 
> Is it clear enough ?
> 
> Thanks,
> Lorenzo

So could you then propose a solution for fixing this issue which allows
one interrupt to be shared by multiple devices, as is needed on some
Armada platforms? I do not see a way to do it without IRQF_SHARED, and
without shared interrupts it is not possible to implement any of the
other pending features.
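
To make it concrete, below is the kind of sharing I mean - a minimal
sketch, not the actual pci-mvebu code (the my_port type and the
port_*_pending()/port_*_handle() helpers are made-up names):

static irqreturn_t port_a_isr(int irq, void *arg)
{
        struct my_port *p = arg;

        /*
         * On a shared line each handler must claim only its own events
         * and return IRQ_NONE otherwise, so that the core also calls
         * the other handlers registered on this irq.
         */
        if (!port_a_pending(p))
                return IRQ_NONE;

        port_a_handle(p);
        return IRQ_HANDLED;
}

/* Two independent consumers can attach to the very same irq number: */
ret = devm_request_irq(dev, irq, port_a_isr,
                       IRQF_SHARED | IRQF_NO_THREAD, "port-a", port_a);
if (ret)
        return ret;

ret = devm_request_irq(dev, irq, port_b_isr,
                       IRQF_SHARED | IRQF_NO_THREAD, "port-b", port_b);
if (ret)
        return ret;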

My feeling is that since May the only conclusion has been that all
development on mvebu must stop, because several people here do not like
this hardware and are trying to remove it, or at least orphan it.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-05-24 12:28 [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler Pali Rohár
  2022-06-23 13:10 ` Pali Rohár
  2022-06-23 16:27 ` Bjorn Helgaas
@ 2022-11-06 23:25 ` Pali Rohár
  2025-02-25 16:28 ` Bjorn Helgaas
  3 siblings, 0 replies; 28+ messages in thread
From: Pali Rohár @ 2022-11-06 23:25 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel

On Tuesday 24 May 2022 14:28:17 Pali Rohár wrote:
> Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> instead of chained IRQ handler in pci-mvebu.c driver.
> 
> This change fixes affinity support and allows to pin interrupts from
> different PCIe controllers to different CPU cores.
> 
> Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> Signed-off-by: Pali Rohár <pali@kernel.org>
> ---
> Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t

Hello! Half a year has passed since the commit in the Fixes line was
introduced, the issue is still there and for pci-mvebu.c it has not been
fixed yet. For pci-aardvark.c it has been fixed.

I do not see any progress on an alternative solution for this issue,
and therefore this patch is still relevant.

Moreover, the switch to devm_request_irq() is also needed for other
pci-mvebu.c patches which are waiting on the list, due to the shared
nature of the interrupt line. For example:
https://lore.kernel.org/linux-pci/20220830123639.4zpvvvlrsaqs2rls@pali/
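
For the record, the cooperation on a shared line relies on the return
value of the handler in the patch quoted below: when cause & unmask is
zero, the interrupt came from some other device wired to the same line
and the handler has to report that. In short (same logic as in the
patch, comments added for illustration):

        status = cause & unmask;
        if (!status)
                return IRQ_NONE;  /* not ours, let other handlers on the line run */

        /* dispatch the pending INTA..INTD bits to the INTx irq domain */
        return IRQ_HANDLED;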

> I tested this patch with pci=nomsi in cmdline (to force kernel to use
> legacy intx instead of MSI) on A385 and checked that I can set affinity
> via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> different CPU and legacy interrupts from different cards/controllers
> were handled by different CPUs.
> 
> I think that this is important on Armada XP platforms which have many
> independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> ---
>  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
>  1 file changed, 17 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> index 8f76d4bda356..de67ea39fea5 100644
> --- a/drivers/pci/controller/pci-mvebu.c
> +++ b/drivers/pci/controller/pci-mvebu.c
> @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
>  	return 0;
>  }
>  
> -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
>  {
> -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> -	struct irq_chip *chip = irq_desc_get_chip(desc);
> +	struct mvebu_pcie_port *port = arg;
>  	struct device *dev = &port->pcie->pdev->dev;
>  	u32 cause, unmask, status;
>  	int i;
>  
> -	chained_irq_enter(chip, desc);
> -
>  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
>  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
>  	status = cause & unmask;
> @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
>  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
>  	}
>  
> -	chained_irq_exit(chip, desc);
> +	return status ? IRQ_HANDLED : IRQ_NONE;
>  }
>  
>  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
>  				mvebu_pcie_powerdown(port);
>  				continue;
>  			}
> -			irq_set_chained_handler_and_data(irq,
> -							 mvebu_pcie_irq_handler,
> -							 port);
> +
> +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> +					       IRQF_SHARED | IRQF_NO_THREAD,
> +					       port->name, port);
> +			if (ret) {
> +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> +					port->name, ret);
> +				irq_domain_remove(port->intx_irq_domain);
> +				pci_bridge_emul_cleanup(&port->bridge);
> +				devm_iounmap(dev, port->base);
> +				port->base = NULL;
> +				mvebu_pcie_powerdown(port);
> +				continue;
> +			}
>  		}
>  
>  		/*
> @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  
>  	for (i = 0; i < pcie->nports; i++) {
>  		struct mvebu_pcie_port *port = &pcie->ports[i];
> -		int irq = port->intx_irq;
>  
>  		if (!port->base)
>  			continue;
> @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  		/* Clear all interrupt causes. */
>  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
>  
> -		if (irq > 0)
> -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> -
>  		/* Remove IRQ domains. */
>  		if (port->intx_irq_domain)
>  			irq_domain_remove(port->intx_irq_domain);
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-09-12  9:03                     ` Pali Rohár
@ 2022-11-11 12:56                       ` Lorenzo Pieralisi
  2022-11-11 17:15                         ` Pali Rohár
  0 siblings, 1 reply; 28+ messages in thread
From: Lorenzo Pieralisi @ 2022-11-11 12:56 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi,
	Rob Herring, Krzysztof Wilczyński, linux-pci, linux-kernel,
	Marc Zyngier

On Mon, Sep 12, 2022 at 11:03:06AM +0200, Pali Rohár wrote:
> On Monday 12 September 2022 10:55:47 Lorenzo Pieralisi wrote:
> > On Mon, Sep 12, 2022 at 10:48:08AM +0200, Pali Rohár wrote:
> > > On Monday 12 September 2022 10:01:32 Lorenzo Pieralisi wrote:
> > > > On Sun, Sep 11, 2022 at 05:45:16PM +0200, Pali Rohár wrote:
> > > > > On Monday 29 August 2022 18:51:09 Pali Rohár wrote:
> > > > > > On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> > > > > > > On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > > > > > > > [+cc Marc, since he commented on this]
> > > > > > > > 
> > > > > > > > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > > > > > > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > > > > > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > > > > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > > > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > > > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > > > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > > > > > > > >
> > > > > > > > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > > > > > > > different PCIe controllers to different CPU cores.
> > > > > > > > > > > 
> > > > > > > > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > > > > > > > of them need similar changes?  The commit log suggests that using
> > > > > > > > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > > > > > > > the case and the real culprit is some other difference between mvebu
> > > > > > > > > > > and the other drivers.
> > > > > > > > > > 
> > > > > > > > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > > > > > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > > > > > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > > > > > > > does not handle shared interrupt sources too.
> > > > > > > > > > 
> > > > > > > > > > I can update commit message to mention also this fact.
> > > > > > > > > 
> > > > > > > > > Anything needed from me to improve this fix?
> > > > > > > > 
> > > > > > > > My impression from Marc's response [1] was that this patch would
> > > > > > > > "break the contract the kernel has with userspace" and he didn't think
> > > > > > > > this was acceptable.  But maybe I'm not understanding it correctly.
> > > > > > > 
> > > > > > > This is argument which Marc use when he does not have any argument.
> > > > > > > 
> > > > > > > Support for dedicated INTx into pci-mvebu.c was introduced just recently
> > > > > > > and I used irq_set_chained_handler_and_data() just because I thought it
> > > > > > > is a good idea and did not know about all those issues with it. So there
> > > > > > > cannot be any breakage by this patch.
> > > > > > > 
> > > > > > > I already converted other pci-aardvark.c driver to use
> > > > > > > irq_set_chained_handler_and_data() API because wanted it... But at the
> > > > > > > end _that conversion_ caused breakage of afinity support and so this
> > > > > > > conversion had to be reverted:
> > > > > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > > > > 
> > > > > > > Based on his past decisions, above suggestions which cause _real_
> > > > > > > breakage and his expressions like mvebu should be put into the trash,
> > > > > > > I'm not going to listen him anymore. The only breaking is done by him.
> > > > > > > 
> > > > > > > 
> > > > > > > There are two arguments why to not use irq_set_chained_handler_and_data:
> > > > > > > 
> > > > > > > 1) It does not support afinity and therefore has negative performance
> > > > > > >    impact on Armada platforms with more CPUs and more PCIe ports.
> > > > > > > 
> > > > > > > 2) It does not support shared interrupts and therefore it will break
> > > > > > >    hardware on which interrupt lines are shares (mostly Armada XP).
> > > > > > > 
> > > > > > > So these issues have to be fixed and currently I see only option to
> > > > > > > switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> > > > > > > did in this fixup patch.
> > > > > > 
> > > > > > Any progress here? This patch is waiting here since end of May and if
> > > > > > something is going to be broken then it is this fact of ignoring reported
> > > > > > issues and proposed patch. Do you better solution how to fix commit
> > > > > > ec075262648f?
> > > > > 
> > > > > After two weeks I'm reminding this fix patch again...
> > > > 
> > > > There is no point complaining about something you were asked
> > > > to change, really - there is not.
> > > > 
> > > > You were given feedback, feel free to ignore it, it won't help
> > > > getting this patch upstream - it is as simple as that, sorry.
> > > > 
> > > > Thanks,
> > > > Lorenzo
> > > 
> > > I'm not sure if I understand you, what do you mean that all patches
> > > which depends on this are now automatically rejected or what?
> > 
> > I am not merging this code unless it is acked by an IRQ maintainer.
> > 
> > Is it clear enough ?
> > 
> > Thanks,
> > Lorenzo
> 
> So, could you then propose a solution how to fix this issue to allow one
> interrupt to be shared with more devices, like it is needed for some
> Armada platforms? Because I do not see a way how to do it without
> IRQF_SHARED and without shared interrupt it is not possible to implement
> any other pending features.
> 
> I'm feeling that since May there is just conclusion that all development
> on the mvebu must be stopped as more people here do not like this
> hardware and trying to do remove it or at least make it orphan.

Marc gave you a solution - we are going round in circles and it is
getting boring, honestly.

https://lore.kernel.org/linux-pci/874k0bf7f7.wl-maz@kernel.org

Either you implement what he asks for or I drop this patch and all
dependent ones from the PCI queue - apologies.

Lorenzo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-11-11 12:56                       ` Lorenzo Pieralisi
@ 2022-11-11 17:15                         ` Pali Rohár
  2022-11-14  9:33                           ` Lorenzo Pieralisi
  0 siblings, 1 reply; 28+ messages in thread
From: Pali Rohár @ 2022-11-11 17:15 UTC (permalink / raw)
  To: Lorenzo Pieralisi, Helgaas, Thomas Petazzoni,
	Krzysztof Wilczyński
  Cc: linux-pci, linux-kernel

On Friday 11 November 2022 13:56:36 Lorenzo Pieralisi wrote:
> On Mon, Sep 12, 2022 at 11:03:06AM +0200, Pali Rohár wrote:
> > On Monday 12 September 2022 10:55:47 Lorenzo Pieralisi wrote:
> > > On Mon, Sep 12, 2022 at 10:48:08AM +0200, Pali Rohár wrote:
> > > > On Monday 12 September 2022 10:01:32 Lorenzo Pieralisi wrote:
> > > > > On Sun, Sep 11, 2022 at 05:45:16PM +0200, Pali Rohár wrote:
> > > > > > On Monday 29 August 2022 18:51:09 Pali Rohár wrote:
> > > > > > > On Sunday 10 July 2022 02:06:59 Pali Rohár wrote:
> > > > > > > > On Saturday 09 July 2022 18:44:30 Bjorn Helgaas wrote:
> > > > > > > > > [+cc Marc, since he commented on this]
> > > > > > > > > 
> > > > > > > > > On Sat, Jul 09, 2022 at 04:31:51PM +0200, Pali Rohár wrote:
> > > > > > > > > > On Friday 01 July 2022 16:29:41 Pali Rohár wrote:
> > > > > > > > > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > > > > > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > > > > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> > > > > > > > > > > > > chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> > > > > > > > > > > > > instead of chained IRQ handler in pci-mvebu.c driver.
> > > > > > > > > > > > >
> > > > > > > > > > > > > This change fixes affinity support and allows to pin interrupts from
> > > > > > > > > > > > > different PCIe controllers to different CPU cores.
> > > > > > > > > > > > 
> > > > > > > > > > > > Several other drivers use irq_set_chained_handler_and_data().  Do any
> > > > > > > > > > > > of them need similar changes?  The commit log suggests that using
> > > > > > > > > > > > chained IRQ handlers breaks affinity support.  But perhaps that's not
> > > > > > > > > > > > the case and the real culprit is some other difference between mvebu
> > > > > > > > > > > > and the other drivers.
> > > > > > > > > > > 
> > > > > > > > > > > And there is another reason to not use irq_set_chained_handler_and_data
> > > > > > > > > > > and instead use devm_request_irq(). Armada XP has some interrupts
> > > > > > > > > > > shared and it looks like that irq_set_chained_handler_and_data() API
> > > > > > > > > > > does not handle shared interrupt sources too.
> > > > > > > > > > > 
> > > > > > > > > > > I can update commit message to mention also this fact.
> > > > > > > > > > 
> > > > > > > > > > Anything needed from me to improve this fix?
> > > > > > > > > 
> > > > > > > > > My impression from Marc's response [1] was that this patch would
> > > > > > > > > "break the contract the kernel has with userspace" and he didn't think
> > > > > > > > > this was acceptable.  But maybe I'm not understanding it correctly.
> > > > > > > > 
> > > > > > > > This is argument which Marc use when he does not have any argument.
> > > > > > > > 
> > > > > > > > Support for dedicated INTx into pci-mvebu.c was introduced just recently
> > > > > > > > and I used irq_set_chained_handler_and_data() just because I thought it
> > > > > > > > is a good idea and did not know about all those issues with it. So there
> > > > > > > > cannot be any breakage by this patch.
> > > > > > > > 
> > > > > > > > I already converted other pci-aardvark.c driver to use
> > > > > > > > irq_set_chained_handler_and_data() API because wanted it... But at the
> > > > > > > > end _that conversion_ caused breakage of afinity support and so this
> > > > > > > > conversion had to be reverted:
> > > > > > > > https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> > > > > > > > 
> > > > > > > > Based on his past decisions, above suggestions which cause _real_
> > > > > > > > breakage and his expressions like mvebu should be put into the trash,
> > > > > > > > I'm not going to listen him anymore. The only breaking is done by him.
> > > > > > > > 
> > > > > > > > 
> > > > > > > > There are two arguments why to not use irq_set_chained_handler_and_data:
> > > > > > > > 
> > > > > > > > 1) It does not support afinity and therefore has negative performance
> > > > > > > >    impact on Armada platforms with more CPUs and more PCIe ports.
> > > > > > > > 
> > > > > > > > 2) It does not support shared interrupts and therefore it will break
> > > > > > > >    hardware on which interrupt lines are shares (mostly Armada XP).
> > > > > > > > 
> > > > > > > > So these issues have to be fixed and currently I see only option to
> > > > > > > > switch irq_set_chained_handler_and_data() to devm_request_irq() which I
> > > > > > > > did in this fixup patch.
> > > > > > > 
> > > > > > > Any progress here? This patch is waiting here since end of May and if
> > > > > > > something is going to be broken then it is this fact of ignoring reported
> > > > > > > issues and proposed patch. Do you better solution how to fix commit
> > > > > > > ec075262648f?
> > > > > > 
> > > > > > After two weeks I'm reminding this fix patch again...
> > > > > 
> > > > > There is no point complaining about something you were asked
> > > > > to change, really - there is not.
> > > > > 
> > > > > You were given feedback, feel free to ignore it, it won't help
> > > > > getting this patch upstream - it is as simple as that, sorry.
> > > > > 
> > > > > Thanks,
> > > > > Lorenzo
> > > > 
> > > > I'm not sure if I understand you, what do you mean that all patches
> > > > which depends on this are now automatically rejected or what?
> > > 
> > > I am not merging this code unless it is acked by an IRQ maintainer.
> > > 
> > > Is it clear enough ?
> > > 
> > > Thanks,
> > > Lorenzo
> > 
> > So, could you then propose a solution how to fix this issue to allow one
> > interrupt to be shared with more devices, like it is needed for some
> > Armada platforms? Because I do not see a way how to do it without
> > IRQF_SHARED and without shared interrupt it is not possible to implement
> > any other pending features.
> > 
> > I'm feeling that since May there is just conclusion that all development
> > on the mvebu must be stopped as more people here do not like this
> > hardware and trying to do remove it or at least make it orphan.
> 
> Marc gave you a solution - we are going round in circles, it is getting
> boring, honestly.
> 
> https://lore.kernel.org/linux-pci/874k0bf7f7.wl-maz@kernel.org

This is not the solution, because it does not allow one interrupt to be
shared by multiple devices / drivers the way devm_request_irq() with
the IRQF_SHARED flag does. irq_set_chained_handler_and_data() can be
used by only one device. This has been pointed out several times in
other threads, but it seems it was ignored.
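
The difference is visible already in the two registration styles - a
rough side-by-side sketch, with the my_* names made up for
illustration:

/*
 * Flow handler: installed exclusively on the irq descriptor, has no
 * return value and no IRQF_SHARED; a second caller would simply
 * replace the first one.
 */
static void my_chained_handler(struct irq_desc *desc);

irq_set_chained_handler_and_data(irq, my_chained_handler, port);

/*
 * irqaction handler: added to the descriptor's action list, can
 * coexist with other IRQF_SHARED handlers on the same line and tells
 * the core via its return value whether the event was its own.
 */
static irqreturn_t my_isr(int irq, void *dev_id);

ret = devm_request_irq(dev, irq, my_isr,
                       IRQF_SHARED | IRQF_NO_THREAD, port->name, port);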

In the past I have already sent patches for the Armada interrupt
drivers and they were rejected by Marc, who did not want to talk about
them. He has also already said that he is not interested in any further
development for Armada HW, as he thinks it is broken hardware and his
devel board stopped working.

I'm really not going to implement anything new for people who have
already said they would reject my contribution. Why would I do it?

Anyway, here we are dealing with a REGRESSION in the mainline kernel, not
a new feature. The mainline kernel is currently broken and this patch of
mine fixes it. Exactly the same fix was already applied to another PCI
controller driver.

Why is it disallowed to fix a regression in the kernel with exactly the
same approach and the same patch that was allowed for other drivers? It
really looks as if somebody is trying to ensure that the Linux kernel
stops working on existing hardware.

> Either you implement what he asks for or I drop this patch and all
> dependent ones from the PCI queue - apologies.
> 
> Lorenzo

So please, if you are going to drop the patch series and you do not like
my fix for the mentioned issue, could you please fix the driver to work
in the mainline kernel?

Do not take me wrong, but it is a real problem if the mainline kernel is
broken. Developers like Greg publicly say that vendors, distributors and
others should update their kernels to new versions and not stay on old
ones; and then other developers reject fixes for regressions in new
versions, basically saying that people and vendors should not upgrade to
new kernel versions because they would stop working on some hardware.

And I have not even mentioned that a lot of other development and cleanup
happened in this area; all of it is now stopped and may, after 2 years,
be discarded, as some developers want because they do not like some kinds
of hardware.

I'm really disappointed that I have to discuss such a thing.


If fixing regressions and bugs is not a priority for the kernel, and
rewriting other parts of the code and implementing a couple of new
features comes first, then I think it is a good idea to stop doing
development.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-11-11 17:15                         ` Pali Rohár
@ 2022-11-14  9:33                           ` Lorenzo Pieralisi
  0 siblings, 0 replies; 28+ messages in thread
From: Lorenzo Pieralisi @ 2022-11-14  9:33 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Helgaas, Thomas Petazzoni, Krzysztof Wilczyński, linux-pci,
	linux-kernel

On Fri, Nov 11, 2022 at 06:15:14PM +0100, Pali Rohár wrote:

[...]

> > > So could you then propose a solution for how to fix this issue so that
> > > one interrupt can be shared by multiple devices, as some Armada
> > > platforms need? I do not see a way to do it without IRQF_SHARED, and
> > > without shared interrupts it is not possible to implement any of the
> > > other pending features.
> > > 
> > > My feeling is that since May the only conclusion has been that all
> > > development on mvebu must be stopped, as several people here do not
> > > like this hardware and are trying to remove it or at least make it
> > > orphan.
> > 
> > Marc gave you a solution - we are going round in circles, it is getting
> > boring, honestly.
> > 
> > https://lore.kernel.org/linux-pci/874k0bf7f7.wl-maz@kernel.org
> 
> This is not a solution, as it does not allow one interrupt to be shared
> by multiple devices/drivers the way devm_request_irq() with the
> IRQF_SHARED flag does. irq_set_chained_handler_and_data() can be used by
> only one device. This has been pointed out several times in other
> threads, but it seems it was ignored.
> 
> In the past I have already sent patches for the Armada interrupt drivers
> and they were rejected by Marc, who did not want to discuss them. He has
> also already said that he is not interested in any further development
> for Armada HW, as he thinks it is broken hardware and his devel board
> stopped working.
> 
> I'm really not going to implement anything new for people who have
> already said they would reject my contribution. Why would I do it?

Because that's the only way you can get this done, that's how it works,
I can't merge code that the respective maintainer finds unacceptable.

> Anyway, here we are dealing with a REGRESSION in the mainline kernel, not
> a new feature. The mainline kernel is currently broken and this patch of
> mine fixes it. Exactly the same fix was already applied to another PCI
> controller driver.
> 
> Why is it disallowed to fix a regression in the kernel with exactly the
> same approach and the same patch that was allowed for other drivers? It
> really looks as if somebody is trying to ensure that the Linux kernel
> stops working on existing hardware.

Your impression is wrong - please reconsider it. The goal is to remove
the unacceptable code from the kernel, not to add to it and create even
more broken legacy. The pushback is not against this patch or the HW, it
is against the code itself; there are no strings attached.

> > Either you implement what he asks for or I drop this patch and all
> > dependent ones from the PCI queue - apologies.
> > 
> > Lorenzo
> 
> So please, if you are going to drop the patch series and you do not like
> my fix for the mentioned issue, could you please fix the driver to work
> in the mainline kernel?

It is not that I don't like it; I explained above why I can't merge it.

I can't fix it for you.

> Do not take me wrong, but it is a real problem if the mainline kernel is
> broken. Developers like Greg publicly say that vendors, distributors and
> others should update their kernels to new versions and not stay on old
> ones; and then other developers reject fixes for regressions in new
> versions, basically saying that people and vendors should not upgrade to
> new kernel versions because they would stop working on some hardware.
> 
> And I have not even mentioned that a lot of other development and cleanup
> happened in this area; all of it is now stopped and may, after 2 years,
> be discarded, as some developers want because they do not like some kinds
> of hardware.

See above, let's close this argument.

> I'm really disappointed that I have to discuss such a thing.

So am I :)

> If fixing regressions and bugs is not a priority for the kernel, and
> rewriting other parts of the code and implementing a couple of new
> features comes first, then I think it is a good idea to stop doing
> development.

Don't twist it that way, I think we have been clear on the matter.

Lorenzo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-05-24 12:28 [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler Pali Rohár
                   ` (2 preceding siblings ...)
  2022-11-06 23:25 ` Pali Rohár
@ 2025-02-25 16:28 ` Bjorn Helgaas
  3 siblings, 0 replies; 28+ messages in thread
From: Bjorn Helgaas @ 2025-02-25 16:28 UTC (permalink / raw)
  To: Pali Rohár
  Cc: Bjorn Helgaas, Thomas Petazzoni, Lorenzo Pieralisi, Rob Herring,
	Krzysztof Wilczyński, linux-pci, linux-kernel, Marc Zyngier

On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark: Rewrite IRQ code to
> chained IRQ handler"") for pci-aardvark driver, use devm_request_irq()
> instead of chained IRQ handler in pci-mvebu.c driver.
> 
> This change fixes affinity support and allows to pin interrupts from
> different PCIe controllers to different CPU cores.

Hi Pali,

Coming back for another pass to try to understand this; sorry it's so
much later.

I'm trying to understand the affinity connection here.  I'm assuming
this involves the irq_set_affinity() path, which looks like it
requires some irq_chip that has a .irq_set_affinity() implementation.

But the only irq_chip mentioned in pci-mvebu.c is intx_irq_chip, which
doesn't implement .irq_set_affinity(), so I can't connect the dots
between this patch and a change in affinity support.

Apparently it has something to do with the chained vs non-chained IRQ
handler?  I guess mvebu_pcie_irq_handler() calls
generic_handle_domain_irq(); does that make mvebu_pcie_irq_handler() a
chained IRQ handler?

I guess the loop over all INTx interrupts there means that INTA, INTB,
INTC, and INTD must share the same affinity, because if we enter
mvebu_pcie_irq_handler() on CPU 0 for INTA, we may end up also
handling INTB, INTC, and INTD on CPU 0?  And maybe that means that no
INTx handler with this kind of loop can support independent affinity
for the four INTx interrupts?

How does this work from a user perspective?  Do you write a mask to
/proc/irq/XX/smp_affinity?

And I guess that XX must be something like "INTx", not "INTA", right?
I.e., /proc/interrupts could only have a single line for INTx?
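
For concreteness, here is roughly what I imagine the userspace side looks
like - just a sketch, where the IRQ number 42 and the CPU mask are made up:

  #include <stdio.h>

  int main(void)
  {
          /* Pin the (made-up) IRQ 42 to CPU 1 by writing a hex CPU mask. */
          FILE *f = fopen("/proc/irq/42/smp_affinity", "w");

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          fprintf(f, "%x\n", 0x2);        /* bit 1 set -> CPU 1 */
          fclose(f);
          return 0;
  }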

Sorry for all these ignorant questions, IRQs are pretty much a mystery
to me.

> Fixes: ec075262648f ("PCI: mvebu: Implement support for legacy INTx interrupts")
> Signed-off-by: Pali Rohár <pali@kernel.org>
> ---
> Hello Bjorn! This is basically same issue as for pci-aardvark.c:
> https://lore.kernel.org/linux-pci/20220515125815.30157-1-pali@kernel.org/#t
> 
> I tested this patch with pci=nomsi in cmdline (to force kernel to use
> legacy intx instead of MSI) on A385 and checked that I can set affinity
> via /proc/irq/XX/smp_affinity file for every mvebu pcie controller to
> different CPU and legacy interrupts from different cards/controllers
> were handled by different CPUs.
> 
> I think that this is important on Armada XP platforms which have many
> independent PCIe controllers (IIRC up to 10) and many cores (up to 4).
> ---
>  drivers/pci/controller/pci-mvebu.c | 30 +++++++++++++++++-------------
>  1 file changed, 17 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
> index 8f76d4bda356..de67ea39fea5 100644
> --- a/drivers/pci/controller/pci-mvebu.c
> +++ b/drivers/pci/controller/pci-mvebu.c
> @@ -1017,16 +1017,13 @@ static int mvebu_pcie_init_irq_domain(struct mvebu_pcie_port *port)
>  	return 0;
>  }
>  
> -static void mvebu_pcie_irq_handler(struct irq_desc *desc)
> +static irqreturn_t mvebu_pcie_irq_handler(int irq, void *arg)
>  {
> -	struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
> -	struct irq_chip *chip = irq_desc_get_chip(desc);
> +	struct mvebu_pcie_port *port = arg;
>  	struct device *dev = &port->pcie->pdev->dev;
>  	u32 cause, unmask, status;
>  	int i;
>  
> -	chained_irq_enter(chip, desc);
> -
>  	cause = mvebu_readl(port, PCIE_INT_CAUSE_OFF);
>  	unmask = mvebu_readl(port, PCIE_INT_UNMASK_OFF);
>  	status = cause & unmask;
> @@ -1040,7 +1037,7 @@ static void mvebu_pcie_irq_handler(struct irq_desc *desc)
>  			dev_err_ratelimited(dev, "unexpected INT%c IRQ\n", (char)i+'A');
>  	}
>  
> -	chained_irq_exit(chip, desc);
> +	return status ? IRQ_HANDLED : IRQ_NONE;
>  }
>  
>  static int mvebu_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
> @@ -1490,9 +1487,20 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
>  				mvebu_pcie_powerdown(port);
>  				continue;
>  			}
> -			irq_set_chained_handler_and_data(irq,
> -							 mvebu_pcie_irq_handler,
> -							 port);
> +
> +			ret = devm_request_irq(dev, irq, mvebu_pcie_irq_handler,
> +					       IRQF_SHARED | IRQF_NO_THREAD,
> +					       port->name, port);
> +			if (ret) {
> +				dev_err(dev, "%s: cannot register interrupt handler: %d\n",
> +					port->name, ret);
> +				irq_domain_remove(port->intx_irq_domain);
> +				pci_bridge_emul_cleanup(&port->bridge);
> +				devm_iounmap(dev, port->base);
> +				port->base = NULL;
> +				mvebu_pcie_powerdown(port);
> +				continue;
> +			}
>  		}
>  
>  		/*
> @@ -1599,7 +1607,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  
>  	for (i = 0; i < pcie->nports; i++) {
>  		struct mvebu_pcie_port *port = &pcie->ports[i];
> -		int irq = port->intx_irq;
>  
>  		if (!port->base)
>  			continue;
> @@ -1615,9 +1622,6 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
>  		/* Clear all interrupt causes. */
>  		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
>  
> -		if (irq > 0)
> -			irq_set_chained_handler_and_data(irq, NULL, NULL);
> -
>  		/* Remove IRQ domains. */
>  		if (port->intx_irq_domain)
>  			irq_domain_remove(port->intx_irq_domain);
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2022-06-23 20:31       ` Marc Zyngier
@ 2025-02-26 21:50         ` Bjorn Helgaas
  2025-03-01 19:49           ` Luís Mendes
  0 siblings, 1 reply; 28+ messages in thread
From: Bjorn Helgaas @ 2025-02-26 21:50 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

On Thu, Jun 23, 2022 at 09:31:40PM +0100, Marc Zyngier wrote:
> On Thu, 23 Jun 2022 17:49:42 +0100,
> Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Thu, Jun 23, 2022 at 06:32:40PM +0200, Pali Rohár wrote:
> > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark:
> > > > > Rewrite IRQ code to chained IRQ handler"") for pci-aardvark
> > > > > driver, use devm_request_irq() instead of chained IRQ
> > > > > handler in pci-mvebu.c driver.
> > > > >
> > > > > This change fixes affinity support and allows to pin
> > > > > interrupts from different PCIe controllers to different CPU
> > > > > cores.
> > > > 
> > > > Several other drivers use irq_set_chained_handler_and_data().
> > > > Do any of them need similar changes?
> > > 
> > > I do not know. This needs testing on HW which use those other
> > > drivers.
> > > 
> > > > The commit log suggests that using chained IRQ handlers breaks
> > > > affinity support.  But perhaps that's not the case and the
> > > > real culprit is some other difference between mvebu and the
> > > > other drivers.
> > > 
> > > It is possible. But similar patch (revert; linked below) was
> > > required for aardvark. I tested same approach on mvebu and it
> > > fixed affinity support.
> > 
> > This feels like something we should understand better.  If
> > irq_set_chained_handler_and_data() is a problem for affinity, we
> > should fix it across the board in all the drivers at once.
> > 
> > If the real problem is something different, we should figure that
> > out and document it in the commit log.
> > 
> > I cc'd Marc in case he has time to educate us.
> 
> Thanks for roping me in.
> 
> The problem of changing affinities for chained (or multiplexing)
> interrupts is, to make it short, that it breaks the existing
> userspace ABI that a change in affinity affects only the interrupt
> userspace acts upon, and no other. Which is why we don't expose any
> affinity setting for such an interrupt, as by definition changing
> its affinity affects all the interrupts that are muxed onto it.
> 
> By turning a chained interrupt into a normal handler, people work
> around the above rule and break the contract the kernel has with
> userspace.
> 
> Do I think this is acceptable? No. Can something be done about this?
> Maybe.
> 
> Marek asked this exact question last month[1], and I gave a detailed
> explanation of what could be done to improve matters, allowing this
> to happen as long as userspace is made aware of the effects, which
> means creating a new userspace ABI.
> 
> I would rather see people work on a proper solution than add bad
> hacks that only work in environments where the userspace ABI can be
> safely ignored, such as on a closed, embedded device.
> 
> [1] https://lore.kernel.org/all/20220502102137.764606ee@thinkpad/

OK, this patch [2] has languished forever, and I don't know how to
move forward.

The patch basically changes mvebu_pcie_irq_handler() from a chained
IRQ handler to a handler registered with devm_request_irq() so it can
be pinned to a CPU.

How are we supposed to decide whether this is safe?  What should we
look at in the patch?

IIUC on mvebu, there's a single IRQ (port->intx_irq, described by a DT
"intx" in interrupt-names) that invokes mvebu_pcie_irq_handler(),
which loops through and handles INTA, INTB, INTC, and INTD.

I think if port->intx_irq is pinned to CPU X, that means INTA, INTB,
INTC, and INTD are all pinned to that same CPU.

I assume changing to devm_request_irq() means port->intx_irq will
appear in /proc/interrupts and can be pinned to a CPU.  Is it a
problem if INTA, INTB, INTC, and INTD for that controller all
effectively share intx_irq and are pinned to the same CPU?

AFAICT we currently have three PCI host controllers with INTx handlers
that are registered with devm_request_irq(), which is what [2]
proposes to do:

  advk_pcie_irq_handler()
  xilinx_pl_dma_pcie_intx_flow()
  xilinx_pcie_intr_handler()

Do we assume that these are mistakes that shouldn't be emulated?

[2] https://lore.kernel.org/r/20220524122817.7199-1-pali@kernel.org

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2025-02-26 21:50         ` Bjorn Helgaas
@ 2025-03-01 19:49           ` Luís Mendes
  2025-03-03 20:21             ` Bjorn Helgaas
  0 siblings, 1 reply; 28+ messages in thread
From: Luís Mendes @ 2025-03-01 19:49 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Marc Zyngier, Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

Hi everyone,

First of all, I have to admire your work maintaining so many systems and
architectures, and in particular the mvebu platform, and I would like to
offer my help for testing the patches with your guidance.

I have essentially five working Armada A388 systems: one from SolidRun,
and the other four are my own creation, a mini-DTX board that I designed
and built for my home use. At some point I planned to sell those boards,
but it did not work out because my PhD consumed too much time. I am now
planning to build some NAS units with it and possibly sell them. Anyway,
long story short: I've read in some threads about mvebu that PCIe is
broken, and I say that it is not; the secret to having PCIe work is to
use a good u-boot bootloader. Many recent u-boot versions cause PCIe not
to work on boot and I haven't figured out why yet. In fact my A388
systems are working with the amdgpu driver and an AMD Polaris, an RX 550
with 4GB VRAM. I have been successfully using PCIe with this old u-boot
version: https://github.com/SolidRun/u-boot-armada38x repo and branch
u-boot-2013.01-15t1-clearfog. The system has 2GB of RAM and I am able to
use the Ubuntu desktop and even browse the web with Firefox, or watch 4K
H265 videos with Kodi. The amdgpu driver works stably for several weeks
in a row of power-up time, although since a year ago there has been a
regression in drm_buddy that freezes 32-bit systems on boot; I have
reverted that patch locally and have it working with vanilla kernel
6.9.6. Yes, I know I should have reported it to the amdgpu team, but I
have been very busy during this period. I hope to do it soon and would
like to debug it better, even with OpenOCD and a remote GDB for the
kernel code.

Please let me know if I can be of help.

Thanks & best regards,
Luis Mendes

On Wed, Feb 26, 2025 at 9:50 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Thu, Jun 23, 2022 at 09:31:40PM +0100, Marc Zyngier wrote:
> > On Thu, 23 Jun 2022 17:49:42 +0100,
> > Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Thu, Jun 23, 2022 at 06:32:40PM +0200, Pali Rohár wrote:
> > > > On Thursday 23 June 2022 11:27:47 Bjorn Helgaas wrote:
> > > > > On Tue, May 24, 2022 at 02:28:17PM +0200, Pali Rohár wrote:
> > > > > > Same as in commit a3b69dd0ad62 ("Revert "PCI: aardvark:
> > > > > > Rewrite IRQ code to chained IRQ handler"") for pci-aardvark
> > > > > > driver, use devm_request_irq() instead of chained IRQ
> > > > > > handler in pci-mvebu.c driver.
> > > > > >
> > > > > > This change fixes affinity support and allows to pin
> > > > > > interrupts from different PCIe controllers to different CPU
> > > > > > cores.
> > > > >
> > > > > Several other drivers use irq_set_chained_handler_and_data().
> > > > > Do any of them need similar changes?
> > > >
> > > > I do not know. This needs testing on HW which use those other
> > > > drivers.
> > > >
> > > > > The commit log suggests that using chained IRQ handlers breaks
> > > > > affinity support.  But perhaps that's not the case and the
> > > > > real culprit is some other difference between mvebu and the
> > > > > other drivers.
> > > >
> > > > It is possible. But similar patch (revert; linked below) was
> > > > required for aardvark. I tested same approach on mvebu and it
> > > > fixed affinity support.
> > >
> > > This feels like something we should understand better.  If
> > > irq_set_chained_handler_and_data() is a problem for affinity, we
> > > should fix it across the board in all the drivers at once.
> > >
> > > If the real problem is something different, we should figure that
> > > out and document it in the commit log.
> > >
> > > I cc'd Marc in case he has time to educate us.
> >
> > Thanks for roping me in.
> >
> > The problem of changing affinities for chained (or multiplexing)
> > interrupts is, to make it short, that it breaks the existing
> > userspace ABI that a change in affinity affects only the interrupt
> > userspace acts upon, and no other. Which is why we don't expose any
> > affinity setting for such an interrupt, as by definition changing
> > its affinity affects all the interrupts that are muxed onto it.
> >
> > By turning a chained interrupt into a normal handler, people work
> > around the above rule and break the contract the kernel has with
> > userspace.
> >
> > Do I think this is acceptable? No. Can something be done about this?
> > Maybe.
> >
> > Marek asked this exact question last month[1], and I gave a detailed
> > explanation of what could be done to improve matters, allowing this
> > to happen as long as userspace is made aware of the effects, which
> > means creating a new userspace ABI.
> >
> > I would rather see people work on a proper solution than add bad
> > hacks that only work in environments where the userspace ABI can be
> > safely ignored, such as on a closed, embedded device.
> >
> > [1] https://lore.kernel.org/all/20220502102137.764606ee@thinkpad/
>
> OK, this patch [2] has languished forever, and I don't know how to
> move forward.
>
> The patch basically changes mvebu_pcie_irq_handler() from a chained
> IRQ handler to a handler registered with devm_request_irq() so it can
> be pinned to a CPU.
>
> How are we supposed to decide whether this is safe?  What should we
> look at in the patch?
>
> IIUC on mvebu, there's a single IRQ (port->intx_irq, described by a DT
> "intx" in interrupt-names) that invokes mvebu_pcie_irq_handler(),
> which loops through and handles INTA, INTB, INTC, and INTD.
>
> I think if port->intx_irq is pinned to CPU X, that means INTA, INTB,
> INTC, and INTD are all pinned to that same CPU.
>
> I assume changing to devm_request_irq() means port->intx_irq will
> appear in /proc/interrupts and can be pinned to a CPU.  Is it a
> problem if INTA, INTB, INTC, and INTD for that controller all
> effectively share intx_irq and are pinned to the same CPU?
>
> AFAICT we currently have three PCI host controllers with INTx handlers
> that are registered with devm_request_irq(), which is what [2]
> proposes to do:
>
>   advk_pcie_irq_handler()
>   xilinx_pl_dma_pcie_intx_flow()
>   xilinx_pcie_intr_handler()
>
> Do we assume that these are mistakes that shouldn't be emulated?
>
> [2] https://lore.kernel.org/r/20220524122817.7199-1-pali@kernel.org
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2025-03-01 19:49           ` Luís Mendes
@ 2025-03-03 20:21             ` Bjorn Helgaas
  2025-03-09 23:10               ` Luís Mendes
  0 siblings, 1 reply; 28+ messages in thread
From: Bjorn Helgaas @ 2025-03-03 20:21 UTC (permalink / raw)
  To: Luís Mendes
  Cc: Marc Zyngier, Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

On Sat, Mar 01, 2025 at 07:49:23PM +0000, Luís Mendes wrote:
> ... I would like to offer my help for testing the patches with your
> guidance.
> 
> ... Anyway, long story short: I've read in some threads about
> mvebu that PCIe is broken, and I say that it is not; the secret to
> having PCIe work is to use a good u-boot bootloader. Many recent
> u-boot versions cause PCIe not to work on boot and I haven't
> figured out why yet. In fact my A388 systems are working with the
> amdgpu driver and an AMD Polaris, an RX 550 with 4GB VRAM. I have
> been successfully using PCIe with this old u-boot version:
> https://github.com/SolidRun/u-boot-armada38x repo and branch
> u-boot-2013.01-15t1-clearfog.

Glad to hear that.  Making this work with more versions of u-boot
would be great, and maybe we can make progress on that eventually.

For right now, I want to try to make progress on Pali's patches.  I'm
pretty much in the dark as far as mvebu and IRQ details, but it might
help enlighten me if you could collect these:

  - complete dmesg log, booted with "pci=nomsi"
  - complete output of "sudo lspci -vv"
  - contents of /proc/interrupts
  - output of "grep -r . /proc/irq/"

For now just pick one of your systems, probably the solidrun one since
that sounds like one that other folks might have.

Bjorn

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2025-03-03 20:21             ` Bjorn Helgaas
@ 2025-03-09 23:10               ` Luís Mendes
  2025-03-11 16:24                 ` Bjorn Helgaas
  0 siblings, 1 reply; 28+ messages in thread
From: Luís Mendes @ 2025-03-09 23:10 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Marc Zyngier, Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

Sorry for the delay, but last week was a busy work week and I struggled
to package Linux kernel 6.14-rc5 because it fails to cross-compile... it
tries to use target compiler flags with the host compiler. I ended up
compiling the kernel on an mvebu armhf machine. I also noticed that some
hash algos are failing.

All logs presented were obtained from a SolidRun A388 ClearFog Base,
if more detailed PCI logs are needed I have the other machine, the
Armada Duo that has 2 PCIe slots and handles an AMD RX 550 GPU. Just
let me know.

- Complete dmesg log, booted with "pci=nomsi"  is available here:
https://pastebin.com/wDj0NGFN
- Complete output of "sudo lspci -vv" is available here:
https://pastebin.com/f4yHRhLr
- Contents of /proc/interrupts is available here: https://pastebin.com/ejDUuhbJ
- Output of "grep -r . /proc/irq/" is available here:
https://pastebin.com/4jvFBBhy

Let me know if more information is needed.

On Mon, Mar 3, 2025 at 8:21 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Sat, Mar 01, 2025 at 07:49:23PM +0000, Luís Mendes wrote:
> > ... I would like to offer my help for testing the patches with your
> > guidance.
> >
> > ... Anyway, long story short: I've read in some threads about
> > mvebu that PCIe is broken, and I say that it is not; the secret to
> > having PCIe work is to use a good u-boot bootloader. Many recent
> > u-boot versions cause PCIe not to work on boot and I haven't
> > figured out why yet. In fact my A388 systems are working with the
> > amdgpu driver and an AMD Polaris, an RX 550 with 4GB VRAM. I have
> > been successfully using PCIe with this old u-boot version:
> > https://github.com/SolidRun/u-boot-armada38x repo and branch
> > u-boot-2013.01-15t1-clearfog.
>
> Glad to hear that.  Making this work with more versions of u-boot
> would be great, and maybe we can make progress on that eventually.
>
> For right now, I want to try to make progress on Pali's patches.  I'm
> pretty much in the dark as far as mvebu and IRQ details, but it might
> help enlighten me if you could collect these:
>
>   - complete dmesg log, booted with "pci=nomsi"
>   - complete output of "sudo lspci -vv"
>   - contents of /proc/interrupts
>   - output of "grep -r . /proc/irq/"
>
> For now just pick one of your systems, probably the solidrun one since
> that sounds like one that other folks might have.
>
> Bjorn

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2025-03-09 23:10               ` Luís Mendes
@ 2025-03-11 16:24                 ` Bjorn Helgaas
  2025-03-12 16:25                   ` Luís Mendes
  0 siblings, 1 reply; 28+ messages in thread
From: Bjorn Helgaas @ 2025-03-11 16:24 UTC (permalink / raw)
  To: Luís Mendes
  Cc: Marc Zyngier, Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

On Sun, Mar 09, 2025 at 11:10:58PM +0000, Luís Mendes wrote:
> All logs presented were obtained from a SolidRun A388 ClearFog Base,
> if more detailed PCI logs are needed I have the other machine, the
> Armada Duo that has 2 PCIe slots and handles an AMD RX 550 GPU. Just
> let me know.
> 
> - Complete dmesg log, booted with "pci=nomsi"  is available here:
> https://pastebin.com/wDj0NGFN
> - Complete output of "sudo lspci -vv" is available here:
> https://pastebin.com/f4yHRhLr
> - Contents of /proc/interrupts is available here: https://pastebin.com/ejDUuhbJ
> - Output of "grep -r . /proc/irq/" is available here:
> https://pastebin.com/4jvFBBhy

Thank you very much for these.

It looks like the only PCI device is 01:00.0: [1ac1:089a], a Coral
Edge TPU, and I don't see any evidence of a driver for it or any IRQ
usage.  Do you have any other PCI device you could try there?
Something with a driver that uses interrupts?

Not critical right now, but I'm puzzled by this part of the dmesg log:

  mvebu-pcie soc:pcie: host bridge /soc/pcie ranges:
  mvebu-pcie soc:pcie:      MEM 0x00f1080000..0x00f1081fff -> 0x0000080000
  mvebu-pcie soc:pcie:      MEM 0x00f1040000..0x00f1041fff -> 0x0000040000
  mvebu-pcie soc:pcie:      MEM 0x00f1044000..0x00f1045fff -> 0x0000044000
  mvebu-pcie soc:pcie:      MEM 0x00f1048000..0x00f1049fff -> 0x0000048000
  pci_bus 0000:00: root bus resource [mem 0xf1080000-0xf1081fff] (bus address [0x00080000-0x00081fff])
  pci_bus 0000:00: root bus resource [mem 0xf1040000-0xf1041fff] (bus address [0x00040000-0x00041fff])
  pci_bus 0000:00: root bus resource [mem 0xf1044000-0xf1045fff] (bus address [0x00044000-0x00045fff])
  pci_bus 0000:00: root bus resource [mem 0xf1048000-0xf1049fff] (bus address [0x00048000-0x00049fff])
  pci_bus 0000:00: root bus resource [mem 0xe0000000-0xe7ffffff]

The first four mvebu-pcie lines make good sense and match the
first four pci_bus lines.  But I don't know where the
[mem 0xe0000000-0xe7ffffff] aperture came from.  It should be
described in the devicetree, but I don't see it mentioned in the
/soc/pcie ranges.

Can you include the devicetree as well?

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2025-03-11 16:24                 ` Bjorn Helgaas
@ 2025-03-12 16:25                   ` Luís Mendes
  2025-07-06 18:48                     ` Luís Mendes
  0 siblings, 1 reply; 28+ messages in thread
From: Luís Mendes @ 2025-03-12 16:25 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Marc Zyngier, Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

Hi Bjorn,

Yes, I had trouble finding PCIe cards to test in the SolidRun Clearfog
Base mPCIe slot, and indeed the Coral Edge TPU card does not have an
open-source driver, and the proprietary one that exists does not work on
32-bit architectures. So yes, it had no driver.

I have now connected an mPCIe-to-PCIe extender and installed a Marvell
4-port SATA controller card. I will also include logs from my Armada Duo
system, which is based on a SolidRun A388 SoM anyway, in addition to the
Clearfog Base logs. I hope this information further helps with the
analysis.

So for Solidrun Clearfog Base with Kernel 6.14-rc5 we have:
dmesg output: https://pastebin.com/aw0X9Fb5
lspci -vv: https://pastebin.com/5mDHJZ2C
cat /proc/interrupts: https://pastebin.com/ASHN8cx7
grep -r . /proc/irq: https://pastebin.com/mskASwYL
decompiled armada-388-clearfog-base.dtb: https://pastebin.com/KuNFDmYP

------------------------------------------------------
For PowerInno ArmadaDuo (2-slots PCIe) with Kernel 6.8.9 we have:
dmesg output: https://pastebin.com/54HHrPVP
lspci -vv: https://pastebin.com/KpE6Hc0r
cat /proc/interrupts: https://pastebin.com/6L64ztse
grep -r . /proc/irq: https://pastebin.com/raPkRBVk

Best Regards,
Luís

On Tue, Mar 11, 2025 at 4:24 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Sun, Mar 09, 2025 at 11:10:58PM +0000, Luís Mendes wrote:
> > All logs presented were obtained from a SolidRun A388 ClearFog Base,
> > if more detailed PCI logs are needed I have the other machine, the
> > Armada Duo that has 2 PCIe slots and handles an AMD RX 550 GPU. Just
> > let me know.
> >
> > - Complete dmesg log, booted with "pci=nomsi"  is available here:
> > https://pastebin.com/wDj0NGFN
> > - Complete output of "sudo lspci -vv" is available here:
> > https://pastebin.com/f4yHRhLr
> > - Contents of /proc/interrupts is available here: https://pastebin.com/ejDUuhbJ
> > - Output of "grep -r . /proc/irq/" is available here:
> > https://pastebin.com/4jvFBBhy
>
> Thank you very much for these.
>
> It looks like the only PCI device is 01:00.0: [1ac1:089a], a Coral
> Edge TPU, and I don't see any evidence of a driver for it or any IRQ
> usage.  Do you have any other PCI device you could try there?
> Something with a driver that uses interrupts?
>
> Not critical right now, but I'm puzzled by this part of the dmesg log:
>
>   mvebu-pcie soc:pcie: host bridge /soc/pcie ranges:
>   mvebu-pcie soc:pcie:      MEM 0x00f1080000..0x00f1081fff -> 0x0000080000
>   mvebu-pcie soc:pcie:      MEM 0x00f1040000..0x00f1041fff -> 0x0000040000
>   mvebu-pcie soc:pcie:      MEM 0x00f1044000..0x00f1045fff -> 0x0000044000
>   mvebu-pcie soc:pcie:      MEM 0x00f1048000..0x00f1049fff -> 0x0000048000
>   pci_bus 0000:00: root bus resource [mem 0xf1080000-0xf1081fff] (bus address [0x00080000-0x00081fff])
>   pci_bus 0000:00: root bus resource [mem 0xf1040000-0xf1041fff] (bus address [0x00040000-0x00041fff])
>   pci_bus 0000:00: root bus resource [mem 0xf1044000-0xf1045fff] (bus address [0x00044000-0x00045fff])
>   pci_bus 0000:00: root bus resource [mem 0xf1048000-0xf1049fff] (bus address [0x00048000-0x00049fff])
>   pci_bus 0000:00: root bus resource [mem 0xe0000000-0xe7ffffff]
>
> The first four mvebu-pcie lines make good sense and match the
> first four pci_bus lines.  But I don't know where the
> [mem 0xe0000000-0xe7ffffff] aperture came from.  It should be
> described in the devicetree, but I don't see it mentioned in the
> /soc/pcie ranges.
>
> Can you include the devicetree as well?

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler
  2025-03-12 16:25                   ` Luís Mendes
@ 2025-07-06 18:48                     ` Luís Mendes
  0 siblings, 0 replies; 28+ messages in thread
From: Luís Mendes @ 2025-07-06 18:48 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Marc Zyngier, Pali Rohár, Bjorn Helgaas, Thomas Petazzoni,
	Lorenzo Pieralisi, Rob Herring, Krzysztof Wilczyński,
	linux-pci, linux-kernel

Hi,

Is the data that I sent enough? Do you need more info?

Best regards,
Luis

On Wed, Mar 12, 2025 at 4:25 PM Luís Mendes <luis.p.mendes@gmail.com> wrote:
>
> Hi Bjorn,
>
> Yes, I had trouble finding PCIe cards to test in the SolidRun Clearfog
> Base mPCIe slot, and indeed the Coral Edge TPU card does not have an
> open-source driver, and the proprietary one that exists does not work
> on 32-bit architectures. So yes, it had no driver.
>
> I have now connected an mPCIe-to-PCIe extender and installed a Marvell
> 4-port SATA controller card. I will also include logs from my Armada
> Duo system, which is based on a SolidRun A388 SoM anyway, in addition
> to the Clearfog Base logs. I hope this information further helps with
> the analysis.
>
> So for Solidrun Clearfog Base with Kernel 6.14-rc5 we have:
> dmesg output: https://pastebin.com/aw0X9Fb5
> lspci -vv: https://pastebin.com/5mDHJZ2C
> cat /proc/interrupts: https://pastebin.com/ASHN8cx7
> grep -r . /proc/irq: https://pastebin.com/mskASwYL
> decompiled armada-388-clearfog-base.dtb: https://pastebin.com/KuNFDmYP
>
> ------------------------------------------------------
> For PowerInno ArmadaDuo (2-slots PCIe) with Kernel 6.8.9 we have:
> dmesg output: https://pastebin.com/54HHrPVP
> lspci -vv: https://pastebin.com/KpE6Hc0r
> cat /proc/interrupts: https://pastebin.com/6L64ztse
> grep -r . /proc/irq: https://pastebin.com/raPkRBVk
>
> Best Regards,
> Luís
>
> On Tue, Mar 11, 2025 at 4:24 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> >
> > On Sun, Mar 09, 2025 at 11:10:58PM +0000, Luís Mendes wrote:
> > > All logs presented were obtained from a SolidRun A388 ClearFog Base,
> > > if more detailed PCI logs are needed I have the other machine, the
> > > Armada Duo that has 2 PCIe slots and handles an AMD RX 550 GPU. Just
> > > let me know.
> > >
> > > - Complete dmesg log, booted with "pci=nomsi"  is available here:
> > > https://pastebin.com/wDj0NGFN
> > > - Complete output of "sudo lspci -vv" is available here:
> > > https://pastebin.com/f4yHRhLr
> > > - Contents of /proc/interrupts is available here: https://pastebin.com/ejDUuhbJ
> > > - Output of "grep -r . /proc/irq/" is available here:
> > > https://pastebin.com/4jvFBBhy
> >
> > Thank you very much for these.
> >
> > It looks like the only PCI device is 01:00.0: [1ac1:089a], a Coral
> > Edge TPU, and I don't see any evidence of a driver for it or any IRQ
> > usage.  Do you have any other PCI device you could try there?
> > Something with a driver that uses interrupts?
> >
> > Not critical right now, but I'm puzzled by this part of the dmesg log:
> >
> >   mvebu-pcie soc:pcie: host bridge /soc/pcie ranges:
> >   mvebu-pcie soc:pcie:      MEM 0x00f1080000..0x00f1081fff -> 0x0000080000
> >   mvebu-pcie soc:pcie:      MEM 0x00f1040000..0x00f1041fff -> 0x0000040000
> >   mvebu-pcie soc:pcie:      MEM 0x00f1044000..0x00f1045fff -> 0x0000044000
> >   mvebu-pcie soc:pcie:      MEM 0x00f1048000..0x00f1049fff -> 0x0000048000
> >   pci_bus 0000:00: root bus resource [mem 0xf1080000-0xf1081fff] (bus address [0x00080000-0x00081fff])
> >   pci_bus 0000:00: root bus resource [mem 0xf1040000-0xf1041fff] (bus address [0x00040000-0x00041fff])
> >   pci_bus 0000:00: root bus resource [mem 0xf1044000-0xf1045fff] (bus address [0x00044000-0x00045fff])
> >   pci_bus 0000:00: root bus resource [mem 0xf1048000-0xf1049fff] (bus address [0x00048000-0x00049fff])
> >   pci_bus 0000:00: root bus resource [mem 0xe0000000-0xe7ffffff]
> >
> > The first four mvebu-pcie lines make good sense and match the
> > first four pci_bus lines.  But I don't know where the
> > [mem 0xe0000000-0xe7ffffff] aperture came from.  It should be
> > described in the devicetree, but I don't see it mentioned in the
> > /soc/pcie ranges.
> >
> > Can you include the devicetree as well?

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2025-07-06 18:48 UTC | newest]

Thread overview: 28+ messages -- links below jump to the message on this page --
2022-05-24 12:28 [PATCH] PCI: mvebu: Use devm_request_irq() for registering interrupt handler Pali Rohár
2022-06-23 13:10 ` Pali Rohár
2022-06-23 16:27 ` Bjorn Helgaas
2022-06-23 16:32   ` Pali Rohár
2022-06-23 16:49     ` Bjorn Helgaas
2022-06-23 20:31       ` Marc Zyngier
2025-02-26 21:50         ` Bjorn Helgaas
2025-03-01 19:49           ` Luís Mendes
2025-03-03 20:21             ` Bjorn Helgaas
2025-03-09 23:10               ` Luís Mendes
2025-03-11 16:24                 ` Bjorn Helgaas
2025-03-12 16:25                   ` Luís Mendes
2025-07-06 18:48                     ` Luís Mendes
2022-07-01 14:29   ` Pali Rohár
2022-07-09 14:31     ` Pali Rohár
2022-07-09 23:44       ` Bjorn Helgaas
2022-07-10  0:06         ` Pali Rohár
2022-08-29 16:51           ` Pali Rohár
2022-09-11 15:45             ` Pali Rohár
2022-09-12  8:01               ` Lorenzo Pieralisi
2022-09-12  8:48                 ` Pali Rohár
2022-09-12  8:55                   ` Lorenzo Pieralisi
2022-09-12  9:03                     ` Pali Rohár
2022-11-11 12:56                       ` Lorenzo Pieralisi
2022-11-11 17:15                         ` Pali Rohár
2022-11-14  9:33                           ` Lorenzo Pieralisi
2022-11-06 23:25 ` Pali Rohár
2025-02-25 16:28 ` Bjorn Helgaas
