From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
To: Krishna chaitanya chundru <quic_krichai@quicinc.com>
Cc: "Bjorn Andersson" <andersson@kernel.org>,
"Konrad Dybcio" <konrad.dybcio@linaro.org>,
"Rob Herring" <robh@kernel.org>,
"Krzysztof Kozlowski" <krzysztof.kozlowski+dt@linaro.org>,
"Conor Dooley" <conor+dt@kernel.org>,
"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
"Krzysztof Wilczyński" <kw@linux.com>,
"Bjorn Helgaas" <bhelgaas@google.com>,
johan+linaro@kernel.org, bmasney@redhat.com, djakov@kernel.org,
linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,
vireshk@kernel.org, quic_vbadigan@quicinc.com,
quic_skananth@quicinc.com, quic_nitegupt@quicinc.com,
quic_parass@quicinc.com, krzysztof.kozlowski@linaro.org
Subject: Re: [PATCH v9 6/6] PCI: qcom: Add OPP support to scale performance state of power domain
Date: Sun, 7 Apr 2024 20:30:48 +0530 [thread overview]
Message-ID: <20240407150048.GE2679@thinkpad> (raw)
In-Reply-To: <20240407-opp_support-v9-6-496184dc45d7@quicinc.com>
On Sun, Apr 07, 2024 at 10:07:39AM +0530, Krishna chaitanya chundru wrote:
> QCOM Resource Power Manager-hardened (RPMh) is a hardware block which
> maintains the hardware state of a regulator by performing max aggregation
> of the requests made by all of its clients.
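> For example, if one client requests the SVS corner and another requests
> NOM, RPMh keeps the power domain at NOM, the higher of the two.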
>
> The PCIe controller can operate at different RPMh performance states of
> the power domain based on the speed of the link. This performance state
> varies from target to target; e.g., some controllers support GEN3 at the
> NOM (Nominal) voltage corner, while others support GEN3 at the low SVS
> (Static Voltage Scaling) corner.
>
> The SoC can be more power efficient if we scale the performance state
> based on the aggregate PCIe link bandwidth.
>
> Add Operating Performance Points (OPP) support to vote for RPMh state based
> on the aggregate link bandwidth.
>
> OPP can handle ICC bandwidth voting as well, so route the ICC bandwidth
> votes through the OPP framework if OPP entries are present.
>
> Different link configurations may share the same aggregate bandwidth,
> e.g., a 2.5 GT/s x2 link and a 5.0 GT/s x1 link have the same bandwidth
> and share the same OPP entry.
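> (At 2.5 GT/s, 8b/10b encoding yields 250 MB/s per lane, so a x2 link
> carries 500 MB/s; a single lane at 5.0 GT/s also carries 500 MB/s.)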
>
This info should be part of the dts change.
> As ICC voting is now handled through OPP, don't initialize ICC if OPP
> is supported.
>
> Before the PCIe link is initialized, vote for the highest OPP in the
> OPP table, so that the maximum voltage corner is requested and the link
> can come up at the maximum supported speed.
>
> Signed-off-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> ---
> drivers/pci/controller/dwc/pcie-qcom.c | 72 +++++++++++++++++++++++++++-------
> 1 file changed, 58 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> index b4893214b2d3..4ad5ef3bf8fc 100644
> --- a/drivers/pci/controller/dwc/pcie-qcom.c
> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> @@ -22,6 +22,7 @@
> #include <linux/of.h>
> #include <linux/of_gpio.h>
> #include <linux/pci.h>
> +#include <linux/pm_opp.h>
> #include <linux/pm_runtime.h>
> #include <linux/platform_device.h>
> #include <linux/phy/pcie.h>
> @@ -1442,15 +1443,13 @@ static int qcom_pcie_icc_init(struct qcom_pcie *pcie)
> return 0;
> }
>
> -static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
> +static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie)
> {
> struct dw_pcie *pci = pcie->pci;
> - u32 offset, status;
> + u32 offset, status, freq;
> + struct dev_pm_opp *opp;
> int speed, width;
> - int ret;
> -
> - if (!pcie->icc_mem)
> - return;
> + int ret, mbps;
>
> offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
> status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);
> @@ -1462,10 +1461,26 @@ static void qcom_pcie_icc_update(struct qcom_pcie *pcie)
> speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status);
> width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status);
>
> - ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
> - if (ret) {
> - dev_err(pci->dev, "failed to set interconnect bandwidth for PCIe-MEM: %d\n",
> - ret);
> + if (pcie->icc_mem) {
> + ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed));
> + if (ret) {
> + dev_err(pci->dev, "failed to set interconnect bandwidth for PCIe-MEM: %d\n",
s/failed/Failed
> + ret);
> + }
> + } else {
> + mbps = pcie_link_speed_to_mbps(pcie_link_speed[speed]);
> + if (mbps < 0)
> + return;
> +
> + freq = mbps * 1000;
> + opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
As per the API documentation, dev_pm_opp_put() should be called in both the
success and failure cases.
> + if (!IS_ERR(opp)) {
So what is the action if OPP is not found for the freq?
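Something like this would address both points (a rough sketch; the message
wording just follows the suggestion below):

	opp = dev_pm_opp_find_freq_exact(pci->dev, freq * width, true);
	if (IS_ERR(opp)) {
		dev_err(pci->dev, "Failed to find OPP for freq %u\n",
			freq * width);
		return;
	}

	ret = dev_pm_opp_set_opp(pci->dev, opp);
	if (ret)
		dev_err(pci->dev, "Failed to set OPP for freq (%ld): %d\n",
			dev_pm_opp_get_freq(opp), ret);

	/* Drop the reference taken by dev_pm_opp_find_freq_exact() */
	dev_pm_opp_put(opp);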
> + ret = dev_pm_opp_set_opp(pci->dev, opp);
> + if (ret)
> + dev_err(pci->dev, "Failed to set opp: freq %ld ret %d\n",
'Failed to set OPP for freq (%ld): %d'
> + dev_pm_opp_get_freq(opp), ret);
> + dev_pm_opp_put(opp);
> + }
> }
> }
>
> @@ -1509,8 +1524,10 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
> static int qcom_pcie_probe(struct platform_device *pdev)
> {
> const struct qcom_pcie_cfg *pcie_cfg;
> + unsigned long max_freq = INT_MAX;
> struct device *dev = &pdev->dev;
> struct qcom_pcie *pcie;
> + struct dev_pm_opp *opp;
> struct dw_pcie_rp *pp;
> struct resource *res;
> struct dw_pcie *pci;
> @@ -1577,9 +1594,33 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> goto err_pm_runtime_put;
> }
>
> - ret = qcom_pcie_icc_init(pcie);
> - if (ret)
> + /* OPP table is optional */
> + ret = devm_pm_opp_of_add_table(dev);
> + if (ret && ret != -ENODEV) {
> + dev_err_probe(dev, ret, "Failed to add OPP table\n");
> goto err_pm_runtime_put;
> + }
> +
> + /*
> + * Use highest OPP here if the OPP table is present. At the end of
I believe I asked you to add the information justifying why the highest OPP
should be used.
> + * the probe(), OPP will be updated using qcom_pcie_icc_opp_update().
> + */
> + if (!ret) {
> + opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
Same comment as dev_pm_opp_find_freq_exact().
> + if (!IS_ERR(opp)) {
> + ret = dev_pm_opp_set_opp(dev, opp);
> + if (ret)
> + dev_err_probe(pci->dev, ret,
> + "Failed to set OPP: freq %ld\n",
Same comment as above.
> + dev_pm_opp_get_freq(opp));
> + dev_pm_opp_put(opp);
So you want to continue even in the case of failure?
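If a dev_pm_opp_set_opp() failure should be fatal here, one possible shape
(a rough sketch, reusing the message wording suggested above):

	opp = dev_pm_opp_find_freq_floor(dev, &max_freq);
	if (IS_ERR(opp)) {
		ret = PTR_ERR(opp);
		goto err_pm_runtime_put;
	}

	ret = dev_pm_opp_set_opp(dev, opp);
	if (ret)
		dev_err_probe(dev, ret, "Failed to set OPP for freq (%ld)\n",
			      dev_pm_opp_get_freq(opp));

	/* Drop the reference taken by dev_pm_opp_find_freq_floor() */
	dev_pm_opp_put(opp);
	if (ret)
		goto err_pm_runtime_put;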
- Mani
> + }
> + } else {
> + /* Skip ICC init if OPP is supported as it is handled by OPP */
> + ret = qcom_pcie_icc_init(pcie);
> + if (ret)
> + goto err_pm_runtime_put;
> + }
>
> ret = pcie->cfg->ops->get_resources(pcie);
> if (ret)
> @@ -1599,7 +1640,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> goto err_phy_exit;
> }
>
> - qcom_pcie_icc_update(pcie);
> + qcom_pcie_icc_opp_update(pcie);
>
> if (pcie->mhi)
> qcom_pcie_init_debugfs(pcie);
> @@ -1658,6 +1699,9 @@ static int qcom_pcie_suspend_noirq(struct device *dev)
> if (ret)
> dev_err(dev, "failed to disable icc path of CPU-PCIe: %d\n", ret);
>
> + if (!pcie->icc_mem)
> + dev_pm_opp_set_opp(pcie->pci->dev, NULL);
> +
> return ret;
> }
>
> @@ -1680,7 +1724,7 @@ static int qcom_pcie_resume_noirq(struct device *dev)
> pcie->suspended = false;
> }
>
> - qcom_pcie_icc_update(pcie);
> + qcom_pcie_icc_opp_update(pcie);
>
> return 0;
> }
>
> --
> 2.42.0
>
--
மணிவண்ணன் சதாசிவம்