From: Georgi Djakov <djakov@kernel.org>
To: Leo Yan <leo.yan@linaro.org>, Andy Gross <agross@kernel.org>,
Bjorn Andersson <bjorn.andersson@linaro.org>,
Rob Herring <robh+dt@kernel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
linux-arm-msm@vger.kernel.org, linux-pm@vger.kernel.org,
devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 5/5] interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values
Date: Thu, 7 Jul 2022 17:33:58 +0300
Message-ID: <28bf991f-7b4c-0af1-2780-842500b01a0f@kernel.org>
In-Reply-To: <20220705072336.742703-6-leo.yan@linaro.org>
On 5.07.22 10:23, Leo Yan wrote:
> This commit uses buckets to support bandwidth and clock rates. It
> introduces a new function, qcom_icc_bus_aggregate(), to calculate the
> aggregated average and peak bandwidths for every bucket, and it also
> calculates the maximum aggregated values across all buckets.
>
> The maximum aggregated values are used to calculate the final bandwidth
> requests. Since we can set the clock rate per bucket, we use the SLEEP
> bucket as the default bucket if a platform doesn't enable the
> interconnect path tags in the DT binding; otherwise, we use the WAKE
> bucket to set the active clock and the SLEEP bucket for the other
> clocks. So far we don't use the AMC bucket.
>
> Signed-off-by: Leo Yan <leo.yan@linaro.org>
> ---
> drivers/interconnect/qcom/icc-rpm.c | 80 ++++++++++++++++++++++++-----
> 1 file changed, 67 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
> index b025fc6b97c9..4b932eb807c7 100644
> --- a/drivers/interconnect/qcom/icc-rpm.c
> +++ b/drivers/interconnect/qcom/icc-rpm.c
> @@ -302,18 +302,62 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
> return 0;
> }
>
> +/**
> + * qcom_icc_bus_aggregate - aggregate bandwidth by traversing all nodes
> + * @provider: generic interconnect provider
> + * @agg_avg: an array for aggregated average bandwidth of buckets
> + * @agg_peak: an array for aggregated peak bandwidth of buckets
> + * @max_agg_avg: pointer to max value of aggregated average bandwidth
> + * @max_agg_peak: pointer to max value of aggregated peak bandwidth
> + */
> +static void qcom_icc_bus_aggregate(struct icc_provider *provider,
> + u64 *agg_avg, u64 *agg_peak,
> + u64 *max_agg_avg, u64 *max_agg_peak)
> +{
> + struct icc_node *node;
> + struct qcom_icc_node *qn;
> + int i;
> +
> + /* Initialise aggregate values */
> + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> + agg_avg[i] = 0;
> + agg_peak[i] = 0;
> + }
> +
> + *max_agg_avg = 0;
> + *max_agg_peak = 0;
> +
> + /*
> + * Iterate nodes on the interconnect and aggregate bandwidth
> + * requests for every bucket.
> + */
> + list_for_each_entry(node, &provider->nodes, node_list) {
> + qn = node->data;
> + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> + agg_avg[i] += qn->sum_avg[i];
> + agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
> + }
> + }
> +
> + /* Find maximum values across all buckets */
> + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> + *max_agg_avg = max_t(u64, *max_agg_avg, agg_avg[i]);
> + *max_agg_peak = max_t(u64, *max_agg_peak, agg_peak[i]);
> + }
> +}
> +
> static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> {
> struct qcom_icc_provider *qp;
> struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
> struct icc_provider *provider;
> - struct icc_node *n;
> u64 sum_bw;
> u64 max_peak_bw;
> u64 rate;
> - u32 agg_avg = 0;
> - u32 agg_peak = 0;
> + u64 agg_avg[QCOM_ICC_NUM_BUCKETS], agg_peak[QCOM_ICC_NUM_BUCKETS];
> + u64 max_agg_avg, max_agg_peak;
> int ret, i;
> + int bucket;
>
> src_qn = src->data;
> if (dst)
> @@ -321,12 +365,11 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> provider = src->provider;
> qp = to_qcom_provider(provider);
>
> - list_for_each_entry(n, &provider->nodes, node_list)
> - provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
> - &agg_avg, &agg_peak);
> + qcom_icc_bus_aggregate(provider, agg_avg, agg_peak, &max_agg_avg,
> + &max_agg_peak);
>
> - sum_bw = icc_units_to_bps(agg_avg);
> - max_peak_bw = icc_units_to_bps(agg_peak);
> + sum_bw = icc_units_to_bps(max_agg_avg);
> + max_peak_bw = icc_units_to_bps(max_agg_peak);
>
> ret = __qcom_icc_set(src, src_qn, sum_bw);
> if (ret)
> @@ -337,12 +380,23 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> return ret;
> }
>
> - rate = max(sum_bw, max_peak_bw);
Looks like max_peak_bw is unused now?
> - do_div(rate, src_qn->buswidth);
> - rate = min_t(u64, rate, LONG_MAX);
> -
> for (i = 0; i < qp->num_clks; i++) {
> +		/*
> +		 * Use the WAKE bucket for the active clock and the SLEEP
> +		 * bucket for the other clocks. If a platform doesn't set
> +		 * interconnect path tags, the SLEEP bucket is used for
> +		 * all clocks by default.
> +		 *
> +		 * Note: the AMC bucket is not supported yet.
> +		 */
> + if (!strcmp(qp->bus_clks[i].id, "bus_a"))
> + bucket = QCOM_ICC_BUCKET_WAKE;
> + else
> + bucket = QCOM_ICC_BUCKET_SLEEP;
> +
> + rate = icc_units_to_bps(max(agg_avg[bucket], agg_peak[bucket]));
> + do_div(rate, src_qn->buswidth);
> + rate = min_t(u64, rate, LONG_MAX);
> +
> if (qp->bus_clk_rate[i] == rate)
> continue;
Thanks,
Georgi