From: Sibi Sankar <sibis@codeaurora.org>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Ansuel Smith <ansuelsmth@gmail.com>,
	vincent.guittot@linaro.org, saravanak@google.com,
	Sudeep Holla <sudeep.holla@arm.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Rob Herring <robh+dt@kernel.org>,
	linux-pm@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v3 0/2] Add Krait Cache Scaling support
Date: Mon, 31 Aug 2020 11:15:50 +0530
Message-ID: <b339e01f9d1e955137120daa06d26228@codeaurora.org>
In-Reply-To: <20200824104053.kpjpwzl2iw3lpg2m@vireshk-i7>

On 2020-08-24 16:10, Viresh Kumar wrote:
> +Vincent/Saravana/Sibi
> 
> On 21-08-20, 16:00, Ansuel Smith wrote:
>> This adds Krait Cache scaling support using the cpufreq notifier.
>> I have some doubts about where this should actually be placed
>> (clk or cpufreq)?
>> Also, the original idea was to create a dedicated cpufreq driver
>> (as is done in the codeaurora qcom repo) by copying the
>> cpufreq-dt driver and adding the cache scaling logic, but I
>> still don't know which is better: a very similar driver, or a
>> dedicated driver only for the cache that uses the cpufreq
>> notifier and scales on every freq transition.
>> Thanks to everyone who will review or answer these questions.
> 
> Saravana was doing something with devfreq to solve such issues,
> if I'm not mistaken.
> 
> Sibi ?
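
(For context, the notifier-based approach described above boils
down to something like the sketch below. This is a loose
illustration only; l2_clk and l2_rate_for() are stand-ins for the
real clock handle and the DT-derived frequency thresholds, not
code from the actual patch.)

#include <linux/clk.h>
#include <linux/cpufreq.h>
#include <linux/notifier.h>

static struct clk *l2_clk;	/* placeholder L2 clock handle */

/* Placeholder lookup: map the new CPU freq (kHz) to an L2 rate in
 * Hz; a real driver would take these thresholds from the DT. */
static unsigned long l2_rate_for(unsigned int cpu_khz)
{
	return cpu_khz >= 1150000 ? 1150000000UL : 300000000UL;
}

static int krait_l2_notify(struct notifier_block *nb,
			   unsigned long action, void *data)
{
	struct cpufreq_freqs *freqs = data;
	unsigned long target = l2_rate_for(freqs->new);

	/* Raise the L2 clock before the CPU speeds up ... */
	if (action == CPUFREQ_PRECHANGE && freqs->new > freqs->old)
		clk_set_rate(l2_clk, target);

	/* ... and lower it only after the CPU has slowed down. */
	if (action == CPUFREQ_POSTCHANGE && freqs->new < freqs->old)
		clk_set_rate(l2_clk, target);

	return NOTIFY_OK;
}

static struct notifier_block krait_l2_nb = {
	.notifier_call = krait_l2_notify,
};

/* Registered once at probe time with:
 * cpufreq_register_notifier(&krait_l2_nb, CPUFREQ_TRANSITION_NOTIFIER);
 */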

IIRC the final plan was to create a devfreq device
and a devfreq-cpufreq based governor to scale them; that
way one can switch to a different governor if required.
(I'm not sure that approach applies as well to L2,
though.) In the interim, until such a solution is acked
on the list, we just scale the resources directly from
the cpufreq driver. On SDM845/SC7180 SoCs, L3 is modeled
as an interconnect provider and is scaled directly from
the cpufreq-hw driver.
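
For the record, a minimal sketch of what the devfreq side could
look like, assuming the passive governor can follow a cpufreq
parent (devfreq later grew exactly this, as parent_type =
CPUFREQ_PARENT_DEV); all driver names below are illustrative,
not a real driver:

#include <linux/devfreq.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>

/* Pick the nearest OPP for the requested rate and let the OPP
 * core scale the cache clock (and any votes tied to the OPP). */
static int cache_target(struct device *dev, unsigned long *freq,
			u32 flags)
{
	struct dev_pm_opp *opp = devfreq_recommended_opp(dev, freq, flags);

	if (IS_ERR(opp))
		return PTR_ERR(opp);
	dev_pm_opp_put(opp);

	return dev_pm_opp_set_rate(dev, *freq);
}

static struct devfreq_dev_profile cache_profile = {
	.target = cache_target,
	.polling_ms = 0,	/* no polling: follow the parent */
};

static struct devfreq_passive_data cache_passive = {
	.parent_type = CPUFREQ_PARENT_DEV,
};

static int cache_devfreq_probe(struct platform_device *pdev)
{
	struct devfreq *df;
	int ret;

	/* OPP table (cache rates vs. CPU rates) comes from DT. */
	ret = dev_pm_opp_of_add_table(&pdev->dev);
	if (ret)
		return ret;

	df = devm_devfreq_add_device(&pdev->dev, &cache_profile,
				     DEVFREQ_GOV_PASSIVE, &cache_passive);
	return PTR_ERR_OR_ZERO(df);
}

The appeal of this shape is that the scaling policy lives in a
reusable governor instead of being hard-wired into each cpufreq
driver, which is the switchability mentioned above.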

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.


Thread overview: 10+ messages
2020-08-21 14:00 [RFC PATCH v3 0/2] Add Krait Cache Scaling support Ansuel Smith
2020-08-21 14:00 ` [RFC PATCH v3 1/2] cpufreq: qcom: " Ansuel Smith
2020-08-21 14:00 ` [RFC PATCH v3 2/2] dt-bindings: cpufreq: Document Krait CPU Cache scaling Ansuel Smith
2020-08-24 17:28   ` Rob Herring
2020-08-24 10:40 ` [RFC PATCH v3 0/2] Add Krait Cache Scaling support Viresh Kumar
2020-08-31  5:45   ` Sibi Sankar [this message]
2020-08-31  7:41     ` R: " ansuelsmth
2020-09-03  6:53       ` Viresh Kumar
2020-09-03  7:13         ` Sibi Sankar
2020-09-03 11:00           ` R: " ansuelsmth
