From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
To: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Andy Gross <agross@kernel.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Rob Herring <robh+dt@kernel.org>,
	Krzysztof Kozlowski <krzk@kernel.org>,
	Jingoo Han <jingoohan1@gmail.com>,
	Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Stanimir Varbanov <svarbanov@mm-sol.com>,
	Vinod Koul <vkoul@kernel.org>,
	linux-arm-msm@vger.kernel.org, linux-pci@vger.kernel.org,
	devicetree@vger.kernel.org
Subject: Re: [PATCH v3 1/5] PCI: dwc: Convert msi_irq to the array
Date: Thu, 28 Apr 2022 13:17:58 +0530
Message-ID: <20220428074758.GB81644@thinkpad>
In-Reply-To: <80132575-1abc-9a2e-dc73-3df72035a7d0@linaro.org>

On Thu, Apr 28, 2022 at 10:43:54AM +0300, Dmitry Baryshkov wrote:
> On 28/04/2022 09:06, Manivannan Sadhasivam wrote:
> > On Wed, Apr 27, 2022 at 07:59:57PM +0300, Dmitry Baryshkov wrote:
> > > On 27/04/2022 17:13, Manivannan Sadhasivam wrote:
> > > > On Wed, Apr 27, 2022 at 03:16:49PM +0300, Dmitry Baryshkov wrote:
> > > > > The Qualcomm version of the DWC PCIe controller supports more than
> > > > > 32 MSI interrupts, but they are routed to separate host IRQs in
> > > > > groups of 32 vectors. To support such a configuration, change the
> > > > > msi_irq field into an array and let the DWC core handle all
> > > > > interrupts that were set in this array.
> > > > > 
> > > > 
> > > > Instead of defining it as an array, can we allocate it dynamically in the
> > > > controller drivers? This has two benefits:
> > > > 
> > > > 1. There is no need for a dedicated flag.
> > > > 2. Controller drivers that don't support MSIs can pass NULL, and in the core
> > > > we can use platform_get_irq_byname_optional() to get the supported number of
> > > > MSIs from the devicetree.
> > > 
> > > I think using dynamic allocation would make the code worse. It would add
> > > additional checks here and there.
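> > > 
> > > For instance, with a dynamically allocated array every user would need an
> > > extra NULL check before touching it (just an illustration):
> > > 
> > > 	if (pp->msi_irq && pp->msi_irq[ctrl] > 0)
> > > 		irq_set_chained_handler_and_data(pp->msi_irq[ctrl],
> > > 						 dw_chained_msi_isr,
> > > 						 pp);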
> > > 
> > 
> > I take back my suggestion of allocating the memory for msi_irq in the
> > controller drivers. It should be done in designware-host instead.
> > 
> > We already know how many MSIs the platform supports via num_vectors.
> > So we should just allocate the msi_irq array based on num_vectors in
> > dw_pcie_host_init() and populate it using platform_get_irq_byname_optional().
> > 
> > I don't think this would make the code worse.
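> > 
> > A minimal sketch of that allocation in dw_pcie_host_init(), with msi_irq
> > turned into a plain pointer (untested; assumes num_vectors is already
> > validated and one host IRQ serves each group of MAX_MSI_IRQS_PER_CTRL
> > vectors):
> > 
> > 	pp->msi_irq = devm_kcalloc(dev,
> > 				   DIV_ROUND_UP(pp->num_vectors,
> > 						MAX_MSI_IRQS_PER_CTRL),
> > 				   sizeof(*pp->msi_irq), GFP_KERNEL);
> > 	if (!pp->msi_irq)
> > 		return -ENOMEM;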
> > 
> > > If you don't like this design, I have an alternative suggestion: export
> > > dw_chained_msi_isr() and move the allocation of all MSI IRQs to the
> > > pcie-qcom code. Would that be better? I'm not sure whether this
> > > multi-host-IRQ design is used on other DWC platforms or not.
> > > 
> > 
> > No, I think the allocation should belong to designware-host.
> 
> Given that at least Tegra and i.MX tie all MSI vectors to a single host
> interrupt, I think it's impractical to assume that other hosts would benefit
> from such an allocation. Let's move support for split IRQs into
> pcie-qcom.c. We can make this generic later if any other host uses a
> separate per-"endpoint" (per-group) IRQ.
> 

Okay, I'm fine with moving just the split IRQ handling to the Qcom driver.
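
Roughly, something like the below in pcie-qcom.c could populate the extra
entries before dw_pcie_host_init() attaches dw_chained_msi_isr() to every
populated pp->msi_irq[] entry (a rough sketch only; the "msi2".."msi8"
interrupt names and the helper name are hypothetical):

	static void qcom_pcie_init_split_msi(struct qcom_pcie *pcie)
	{
		struct platform_device *pdev = to_platform_device(pcie->pci->dev);
		struct pcie_port *pp = &pcie->pci->pp;
		char name[8];
		int irq, i;

		/* msi_irq[0] ("msi") is still picked up by dw_pcie_host_init() */
		for (i = 1; i < MAX_MSI_CTRLS; i++) {
			snprintf(name, sizeof(name), "msi%d", i + 1);
			irq = platform_get_irq_byname_optional(pdev, name);
			if (irq < 0)
				break;

			pp->msi_irq[i] = irq;
		}
	}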

Thanks,
Mani

> > 
> > Thanks,
> > Mani
> > 
> > > > 
> > > > Thanks,
> > > > Mani
> > > > 
> > > > > Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
> > > > > ---
> > > > >    drivers/pci/controller/dwc/pci-dra7xx.c       |  2 +-
> > > > >    drivers/pci/controller/dwc/pci-exynos.c       |  2 +-
> > > > >    .../pci/controller/dwc/pcie-designware-host.c | 30 +++++++++++--------
> > > > >    drivers/pci/controller/dwc/pcie-designware.h  |  2 +-
> > > > >    drivers/pci/controller/dwc/pcie-keembay.c     |  2 +-
> > > > >    drivers/pci/controller/dwc/pcie-spear13xx.c   |  2 +-
> > > > >    drivers/pci/controller/dwc/pcie-tegra194.c    |  2 +-
> > > > >    7 files changed, 24 insertions(+), 18 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
> > > > > index dfcdeb432dc8..0919c96dcdbd 100644
> > > > > --- a/drivers/pci/controller/dwc/pci-dra7xx.c
> > > > > +++ b/drivers/pci/controller/dwc/pci-dra7xx.c
> > > > > @@ -483,7 +483,7 @@ static int dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
> > > > >    		return pp->irq;
> > > > >    	/* MSI IRQ is muxed */
> > > > > -	pp->msi_irq = -ENODEV;
> > > > > +	pp->msi_irq[0] = -ENODEV;
> > > > >    	ret = dra7xx_pcie_init_irq_domain(pp);
> > > > >    	if (ret < 0)
> > > > > diff --git a/drivers/pci/controller/dwc/pci-exynos.c b/drivers/pci/controller/dwc/pci-exynos.c
> > > > > index 467c8d1cd7e4..4f2010bd9cd7 100644
> > > > > --- a/drivers/pci/controller/dwc/pci-exynos.c
> > > > > +++ b/drivers/pci/controller/dwc/pci-exynos.c
> > > > > @@ -292,7 +292,7 @@ static int exynos_add_pcie_port(struct exynos_pcie *ep,
> > > > >    	}
> > > > >    	pp->ops = &exynos_pcie_host_ops;
> > > > > -	pp->msi_irq = -ENODEV;
> > > > > +	pp->msi_irq[0] = -ENODEV;
> > > > >    	ret = dw_pcie_host_init(pp);
> > > > >    	if (ret) {
> > > > > diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> > > > > index 2fa86f32d964..5d90009a0f73 100644
> > > > > --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> > > > > +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> > > > > @@ -257,8 +257,11 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
> > > > >    static void dw_pcie_free_msi(struct pcie_port *pp)
> > > > >    {
> > > > > -	if (pp->msi_irq)
> > > > > -		irq_set_chained_handler_and_data(pp->msi_irq, NULL, NULL);
> > > > > +	u32 ctrl;
> > > > > +
> > > > > +	for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
> > > > > +		if (pp->msi_irq[ctrl])
> > > > > +			irq_set_chained_handler_and_data(pp->msi_irq[ctrl], NULL, NULL);
> > > > >    	irq_domain_remove(pp->msi_domain);
> > > > >    	irq_domain_remove(pp->irq_domain);
> > > > > @@ -368,13 +371,15 @@ int dw_pcie_host_init(struct pcie_port *pp)
> > > > >    			for (ctrl = 0; ctrl < num_ctrls; ctrl++)
> > > > >    				pp->irq_mask[ctrl] = ~0;
> > > > > -			if (!pp->msi_irq) {
> > > > > -				pp->msi_irq = platform_get_irq_byname_optional(pdev, "msi");
> > > > > -				if (pp->msi_irq < 0) {
> > > > > -					pp->msi_irq = platform_get_irq(pdev, 0);
> > > > > -					if (pp->msi_irq < 0)
> > > > > -						return pp->msi_irq;
> > > > > +			if (!pp->msi_irq[0]) {
> > > > > +				int irq = platform_get_irq_byname_optional(pdev, "msi");
> > > > > +
> > > > > +				if (irq < 0) {
> > > > > +					irq = platform_get_irq(pdev, 0);
> > > > > +					if (irq < 0)
> > > > > +						return irq;
> > > > >    				}
> > > > > +				pp->msi_irq[0] = irq;
> > > > >    			}
> > > > >    			pp->msi_irq_chip = &dw_pci_msi_bottom_irq_chip;
> > > > > @@ -383,10 +388,11 @@ int dw_pcie_host_init(struct pcie_port *pp)
> > > > >    			if (ret)
> > > > >    				return ret;
> > > > > -			if (pp->msi_irq > 0)
> > > > > -				irq_set_chained_handler_and_data(pp->msi_irq,
> > > > > -							    dw_chained_msi_isr,
> > > > > -							    pp);
> > > > > +			for (ctrl = 0; ctrl < num_ctrls; ctrl++)
> > > > > +				if (pp->msi_irq[ctrl] > 0)
> > > > > +					irq_set_chained_handler_and_data(pp->msi_irq[ctrl],
> > > > > +									 dw_chained_msi_isr,
> > > > > +									 pp);
> > > > >    			ret = dma_set_mask(pci->dev, DMA_BIT_MASK(32));
> > > > >    			if (ret)
> > > > > diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
> > > > > index 7d6e9b7576be..9c1a38b0a6b3 100644
> > > > > --- a/drivers/pci/controller/dwc/pcie-designware.h
> > > > > +++ b/drivers/pci/controller/dwc/pcie-designware.h
> > > > > @@ -187,7 +187,7 @@ struct pcie_port {
> > > > >    	u32			io_size;
> > > > >    	int			irq;
> > > > >    	const struct dw_pcie_host_ops *ops;
> > > > > -	int			msi_irq;
> > > > > +	int			msi_irq[MAX_MSI_CTRLS];
> > > > >    	struct irq_domain	*irq_domain;
> > > > >    	struct irq_domain	*msi_domain;
> > > > >    	u16			msi_msg;
> > > > > diff --git a/drivers/pci/controller/dwc/pcie-keembay.c b/drivers/pci/controller/dwc/pcie-keembay.c
> > > > > index 1ac29a6eef22..297e6e926c00 100644
> > > > > --- a/drivers/pci/controller/dwc/pcie-keembay.c
> > > > > +++ b/drivers/pci/controller/dwc/pcie-keembay.c
> > > > > @@ -338,7 +338,7 @@ static int keembay_pcie_add_pcie_port(struct keembay_pcie *pcie,
> > > > >    	int ret;
> > > > >    	pp->ops = &keembay_pcie_host_ops;
> > > > > -	pp->msi_irq = -ENODEV;
> > > > > +	pp->msi_irq[0] = -ENODEV;
> > > > >    	ret = keembay_pcie_setup_msi_irq(pcie);
> > > > >    	if (ret)
> > > > > diff --git a/drivers/pci/controller/dwc/pcie-spear13xx.c b/drivers/pci/controller/dwc/pcie-spear13xx.c
> > > > > index 1569e82b5568..cc7776833810 100644
> > > > > --- a/drivers/pci/controller/dwc/pcie-spear13xx.c
> > > > > +++ b/drivers/pci/controller/dwc/pcie-spear13xx.c
> > > > > @@ -172,7 +172,7 @@ static int spear13xx_add_pcie_port(struct spear13xx_pcie *spear13xx_pcie,
> > > > >    	}
> > > > >    	pp->ops = &spear13xx_pcie_host_ops;
> > > > > -	pp->msi_irq = -ENODEV;
> > > > > +	pp->msi_irq[0] = -ENODEV;
> > > > >    	ret = dw_pcie_host_init(pp);
> > > > >    	if (ret) {
> > > > > diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
> > > > > index b1b5f836a806..e75712db85b0 100644
> > > > > --- a/drivers/pci/controller/dwc/pcie-tegra194.c
> > > > > +++ b/drivers/pci/controller/dwc/pcie-tegra194.c
> > > > > @@ -2271,7 +2271,7 @@ static void tegra194_pcie_shutdown(struct platform_device *pdev)
> > > > >    	disable_irq(pcie->pci.pp.irq);
> > > > >    	if (IS_ENABLED(CONFIG_PCI_MSI))
> > > > > -		disable_irq(pcie->pci.pp.msi_irq);
> > > > > +		disable_irq(pcie->pci.pp.msi_irq[0]);
> > > > >    	tegra194_pcie_pme_turnoff(pcie);
> > > > >    	tegra_pcie_unconfig_controller(pcie);
> > > > > -- 
> > > > > 2.35.1
> > > > > 
> > > 
> > > 
> > > -- 
> > > With best wishes
> > > Dmitry
> 
> 
> -- 
> With best wishes
> Dmitry

Thread overview: 12+ messages
2022-04-27 12:16 [PATCH v3 0/5] PCI: qcom: Fix higher MSI vectors handling Dmitry Baryshkov
2022-04-27 12:16 ` [PATCH v3 1/5] PCI: dwc: Convert msi_irq to the array Dmitry Baryshkov
2022-04-27 14:13   ` Manivannan Sadhasivam
2022-04-27 16:59     ` Dmitry Baryshkov
2022-04-28  6:06       ` Manivannan Sadhasivam
2022-04-28  7:43         ` Dmitry Baryshkov
2022-04-28  7:47           ` Manivannan Sadhasivam [this message]
2022-04-27 12:16 ` [PATCH v3 2/5] PCI: dwc: Teach dwc core to parse additional MSI interrupts Dmitry Baryshkov
2022-04-27 12:16 ` [PATCH v3 3/5] PCI: qcom: Handle MSI IRQs properly Dmitry Baryshkov
2022-04-27 12:16 ` [PATCH v3 4/5] dt-bindings: pci/qcom,pcie: support additional MSI interrupts Dmitry Baryshkov
2022-04-28 12:05   ` Krzysztof Kozlowski
2022-04-27 12:16 ` [PATCH v3 5/5] arm64: dts: qcom: sm8250: provide " Dmitry Baryshkov
