From: Pradeep P V K <ppvk@codeaurora.org>
To: bjorn.andersson@linaro.org, adrian.hunter@intel.com,
robh+dt@kernel.org, ulf.hansson@linaro.org,
vbadigan@codeaurora.org, sboyd@kernel.org,
georgi.djakov@linaro.org, mka@chromium.org
Cc: linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
linux-mmc-owner@vger.kernel.org, rnayak@codeaurora.org,
sibis@codeaurora.org, matthias@chromium.org,
Pradeep P V K <ppvk@codeaurora.org>
Subject: [PATCH V4 1/2] mmc: sdhci-msm: Add interconnect bandwidth scaling support
Date: Tue, 9 Jun 2020 14:07:25 +0530
Message-ID: <1591691846-7578-2-git-send-email-ppvk@codeaurora.org>
In-Reply-To: <1591691846-7578-1-git-send-email-ppvk@codeaurora.org>
Interconnect bandwidth scaling support is now provided as part
of the OPP framework, so make sure the interconnect driver is
ready before handling any interconnect scaling.
Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
Reviewed-by: Sibi Sankar <sibis@codeaurora.org>
---
This change is based on
[1] [Patch v8] Introduce OPP bandwidth bindings
(https://lkml.org/lkml/2020/5/12/493)
[2] [Patch v3] mmc: sdhci-msm: Fix error handling
for dev_pm_opp_of_add_table()
(https://lkml.org/lkml/2020/5/5/491)
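For context, below is a minimal sketch (illustrative only, not this
driver's exact code; example_probe() and its scaffolding are made up)
of the probe-ordering pattern this patch relies on. With a NULL
opp_table argument, dev_pm_opp_of_find_icc_paths() only checks that
the optional "interconnects" paths described in DT can be obtained,
and returns -EPROBE_DEFER while the interconnect providers are still
registering, so that probe is retried once they are ready:

#include <linux/platform_device.h>
#include <linux/pm_opp.h>

static int example_probe(struct platform_device *pdev)
{
	int ret;

	/*
	 * NULL opp_table: only verify that the optional interconnect
	 * paths are available. -EPROBE_DEFER here makes the driver
	 * core retry probe after the providers have registered.
	 */
	ret = dev_pm_opp_of_find_icc_paths(&pdev->dev, NULL);
	if (ret)
		return ret;

	/* ... the rest of the resource setup would follow ... */
	return 0;
}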
drivers/mmc/host/sdhci-msm.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index b277dd7..15c42b0 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -14,6 +14,7 @@
#include <linux/slab.h>
#include <linux/iopoll.h>
#include <linux/regulator/consumer.h>
+#include <linux/interconnect.h>
#include "sdhci-pltfm.h"
#include "cqhci.h"
@@ -2070,6 +2071,11 @@ static int sdhci_msm_probe(struct platform_device *pdev)
}
msm_host->bulk_clks[0].clk = clk;
+ /* Check for optional interconnect paths */
+ ret = dev_pm_opp_of_find_icc_paths(&pdev->dev, NULL);
+ if (ret)
+ goto bus_clk_disable;
+
msm_host->opp_table = dev_pm_opp_set_clkname(&pdev->dev, "core");
if (IS_ERR(msm_host->opp_table)) {
ret = PTR_ERR(msm_host->opp_table);
--
1.9.1
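Usage note (a sketch under the assumption that the board's OPP table
carries opp-peak-kBps/opp-avg-kBps entries, as the dt-bindings in
patch 2/2 describe): once the OPP core has associated the interconnect
paths with the device's OPP table, a single OPP rate change scales the
bandwidth votes along with the core clock. The frequency below is
arbitrary, not one taken from this driver:

	/* Scales the "core" clock and the interconnect bandwidth together */
	ret = dev_pm_opp_set_rate(&pdev->dev, 100 * 1000 * 1000);
	if (ret)
		dev_err(&pdev->dev, "failed to set OPP rate: %d\n", ret);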