From: Bjorn Helgaas <helgaas@kernel.org>
To: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Andy Gross <agross@kernel.org>,
Bjorn Andersson <bjorn.andersson@linaro.org>,
Rob Herring <robh+dt@kernel.org>,
Krzysztof Kozlowski <krzk@kernel.org>,
Jingoo Han <jingoohan1@gmail.com>,
Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
Bjorn Helgaas <bhelgaas@google.com>,
Stanimir Varbanov <svarbanov@mm-sol.com>,
Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>,
Vinod Koul <vkoul@kernel.org>,
linux-arm-msm@vger.kernel.org, linux-pci@vger.kernel.org,
devicetree@vger.kernel.org
Subject: Re: [PATCH v5 5/7] PCI: qcom: Handle MSI IRQs properly
Date: Fri, 29 Apr 2022 17:47:32 -0500
Message-ID: <20220429224732.GA111265@bhelgaas>
In-Reply-To: <20220429214250.3728510-6-dmitry.baryshkov@linaro.org>

In the subject, "Handle MSI IRQs properly" really doesn't tell us anything
useful. The existing MSI support handles some MSI IRQs "properly," so
we should say something specific about the improvement here, like
"Handle multiple MSI groups," "Handle MSIs routed to multiple GIC
interrupts," "Handle split MSI IRQs," or similar.
On Sat, Apr 30, 2022 at 12:42:48AM +0300, Dmitry Baryshkov wrote:
> On some of Qualcomm platforms each group of 32 MSI vectors is routed to the
> separate GIC interrupt. Thus to receive higher MSI vectors properly,
> add separate msi_host_init()/msi_host_deinit() handling additional host
> IRQs.
> +static void qcom_chained_msi_isr(struct irq_desc *desc)
> +{
> +	struct irq_chip *chip = irq_desc_get_chip(desc);
> +	int irq = irq_desc_get_irq(desc);
> +	struct pcie_port *pp;
> +	int idx, pos;
> +	unsigned long val;
> +	u32 status, num_ctrls;
> +	struct dw_pcie *pci;
> +	struct qcom_pcie *pcie;
> +
> +	chained_irq_enter(chip, desc);
> +
> +	pp = irq_desc_get_handler_data(desc);
> +	pci = to_dw_pcie_from_pp(pp);
> +	pcie = to_qcom_pcie(pci);
> +
> +	/*
> +	 * Unlike generic dw_handle_msi_irq we can determine, which group of
> +	 * MSIs triggered the IRQ, so process just single group.
Parens and punctuation touch-up:
Unlike generic dw_handle_msi_irq(), we can determine which group of
MSIs triggered the IRQ, so process just that group.
> +	 */
> +	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
> +
> +	for (idx = 0; idx < num_ctrls; idx++) {
> +		if (pcie->msi_irqs[idx] == irq)
> +			break;
> +	}
Since this is basically an enhanced clone of dw_handle_msi_irq(), it
would be nice to use the same variable names ("i" instead of "idx")
so it's not gratuitously different.
Actually, I wonder if you could enhance dw_handle_msi_irq() slightly
so you could use it directly, e.g.,
  struct dw_pcie_host_ops {
	...
	void	(*msi_host_deinit)(struct pcie_port *pp);
+	bool	(*msi_in_block)(struct pcie_port *pp, int irq, int i);
  };

  dw_handle_msi_irq(...)
  {
	...
	for (i = 0; i < num_ctrls; i++) {
+		if (pp->ops->msi_in_block && !pp->ops->msi_in_block(pp, irq, i))
+			continue;
		status = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS ...);
	...

+ bool qcom_pcie_msi_in_block(struct pcie_port *pp, int irq, int i)
+ {
+	...
+	pci = to_dw_pcie_from_pp(pp);
+	pcie = to_qcom_pcie(pci);
+
+	if (pcie->msi_irqs[i] == irq)
+		return true;
+
+	return false;
+ }
Maybe that's more complicated than it's worth.
> +
> +	if (WARN_ON_ONCE(unlikely(idx == num_ctrls)))
> +		goto out;
> +
> +	status = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
> +				   (idx * MSI_REG_CTRL_BLOCK_SIZE));
> +	if (!status)
> +		goto out;
> +
> +	val = status;
> +	pos = 0;
> +	while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
> +				    pos)) != MAX_MSI_IRQS_PER_CTRL) {
> +		generic_handle_domain_irq(pp->irq_domain,
> +					  (idx * MAX_MSI_IRQS_PER_CTRL) +
> +					  pos);
> +		pos++;
> +	}
> +
> +out:
> +	chained_irq_exit(chip, desc);
> +}