From: Konrad Dybcio <konrad.dybcio@linaro.org>
To: Andy Gross <agross@kernel.org>,
Bjorn Andersson <andersson@kernel.org>,
Michael Turquette <mturquette@baylibre.com>,
Stephen Boyd <sboyd@kernel.org>,
Georgi Djakov <djakov@kernel.org>, Leo Yan <leo.yan@linaro.org>,
Evan Green <evgreen@chromium.org>
Cc: Marijn Suijten <marijn.suijten@somainline.org>,
linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-clk@vger.kernel.org, linux-pm@vger.kernel.org,
Stephan Gerhold <stephan@gerhold.net>
Subject: Re: [PATCH 20/20] interconnect: qcom: Divide clk rate by src node bus width
Date: Tue, 30 May 2023 18:32:04 +0200 [thread overview]
Message-ID: <5a26e456-fe45-6def-27f9-26ec00c333e6@linaro.org> (raw)
In-Reply-To: <20230526-topic-smd_icc-v1-20-1bf8e6663c4e@linaro.org>
On 30.05.2023 12:20, Konrad Dybcio wrote:
> Ever since the introduction of SMD RPM ICC, we've been dividing the
> clock rate by the wrong bus width. This has resulted in:
>
> - setting wrong (mostly too low) rates, affecting performance
> - most often /2 or /4
> - things like DDR never hit their full potential
> - the rates were only correct if src bus width == dst bus width
> for all src, dst pairs on a given bus
>
> - Qualcomm using the same wrong logic in their BSP driver in msm-5.x
> that ships in production devices today
>
> - me losing my sanity trying to find this
>
> Resolve it by using dst_qn, if it exists.
>
> Fixes: 5e4e6c4d3ae0 ("interconnect: qcom: Add QCS404 interconnect provider driver")
> Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
> ---
The problem runs deeper.
Chatting with Stephan (+CC), we identified a few more issues (which I will
send fixes for in v2):
1. qcom_icc_rpm_set() should take the per-node bandwidth (src_qn->sum_avg,
dst_qn->sum_avg) and NOT the aggregated bw (unless you want ALL of the nodes
on a given provider to "go very fast")
2. the aggregate bw / clk rate calculation should use each node's own bus
width, not just the bus width of the src/dst node, otherwise the average bw
values will be utterly meaningless
3. thanks to (1) and (2), qcom_icc_bus_aggregate() can be remodeled to
instead calculate the clock rates for the two RPM contexts, which we can
then max() and pass on to the rate-setting call
----8<---- Cutting off Stephan's seal of approval, this is my thinking ----
4. I *think* Qualcomm really made a mistake in their msm-5.4 driver, where
   they took most of the logic from the current -next state: they should
   have been setting the rate based on the *DST* provider, or at least
   that's my understanding from reading the "known good" msm-4.19 driver
   (which remembers msm-3.0, lol).. Or maybe we should keep src but ensure
   a final (dst, dst) vote is also cast:
provider->inter_set = false // current state upstream
setting apps_proc<->slv_bimc_snoc
setting mas_bimc_snoc<->slv_snoc_cnoc
setting mas_snoc_cnoc<->qhs_sdc2
provider->inter_set = true // I don't think there's effectively a difference?
setting apps_proc<->slv_bimc_snoc
setting slv_bimc_snoc<->mas_bimc_snoc
setting mas_bimc_snoc<->slv_snoc_cnoc
setting slv_snoc_cnoc<->mas_snoc_cnoc
setting mas_snoc_cnoc<->qhs_sdc2
All the (mas|slv)_bus1_bus2 nodes are very wide, whereas the target nodes
are usually 4-, 8- or 16-byte wide, which without this patch (or something
equivalent) decimates (or actually 2^n-ates) the calculated rates..
Konrad
> drivers/interconnect/qcom/icc-rpm.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
> index 59be704364bb..58e2a8b1b7c3 100644
> --- a/drivers/interconnect/qcom/icc-rpm.c
> +++ b/drivers/interconnect/qcom/icc-rpm.c
> @@ -340,7 +340,7 @@ static void qcom_icc_bus_aggregate(struct icc_provider *provider,
> static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> {
> struct qcom_icc_provider *qp;
> - struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
> + struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL, *qn = NULL;
> struct icc_provider *provider;
> u64 active_rate, sleep_rate;
> u64 agg_avg[QCOM_SMD_RPM_STATE_NUM], agg_peak[QCOM_SMD_RPM_STATE_NUM];
> @@ -353,6 +353,8 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> provider = src->provider;
> qp = to_qcom_provider(provider);
>
> + qn = dst_qn ? dst_qn : src_qn;
> +
> qcom_icc_bus_aggregate(provider, agg_avg, agg_peak, &max_agg_avg);
>
> ret = qcom_icc_rpm_set(src_qn, agg_avg);
> @@ -372,11 +374,11 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> /* Intentionally keep the rates in kHz as that's what RPM accepts */
> active_rate = max(agg_avg[QCOM_SMD_RPM_ACTIVE_STATE],
> agg_peak[QCOM_SMD_RPM_ACTIVE_STATE]);
> - do_div(active_rate, src_qn->buswidth);
> + do_div(active_rate, qn->buswidth);
>
> sleep_rate = max(agg_avg[QCOM_SMD_RPM_SLEEP_STATE],
> agg_peak[QCOM_SMD_RPM_SLEEP_STATE]);
> - do_div(sleep_rate, src_qn->buswidth);
> + do_div(sleep_rate, qn->buswidth);
>
> /*
> * Downstream checks whether the requested rate is zero, but it makes little sense
>