From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
To: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
Cc: Richard Acayan <mailingradian@gmail.com>,
"Rafael J. Wysocki" <rafael@kernel.org>,
Daniel Lezcano <daniel.lezcano@kernel.org>,
Zhang Rui <rui.zhang@intel.com>,
Lukasz Luba <lukasz.luba@arm.com>, Rob Herring <robh@kernel.org>,
Krzysztof Kozlowski <krzk+dt@kernel.org>,
Conor Dooley <conor+dt@kernel.org>,
Amit Kucheria <amitk@kernel.org>,
Thara Gopinath <thara.gopinath@gmail.com>,
Bjorn Andersson <andersson@kernel.org>,
Konrad Dybcio <konradybcio@kernel.org>,
linux-arm-msm@vger.kernel.org, linux-pm@vger.kernel.org,
devicetree@vger.kernel.org
Subject: Re: [PATCH v4 3/4] thermal/qcom/lmh: support SDM670 and its CPU clusters
Date: Mon, 30 Mar 2026 15:50:12 +0200 [thread overview]
Message-ID: <1fcecede-16f0-4ce1-b76c-32f569cb5e41@oss.qualcomm.com> (raw)
In-Reply-To: <lnumerwlyvmbdkwum64js46tbnvpxjrdrouhq3vybuwto4st3g@7xzr52e3samd>
On 3/30/26 12:59 PM, Dmitry Baryshkov wrote:
> On Mon, Mar 30, 2026 at 12:32:29PM +0200, Konrad Dybcio wrote:
>> On 3/29/26 12:44 PM, Dmitry Baryshkov wrote:
>>> On Fri, Mar 27, 2026 at 09:40:40PM -0400, Richard Acayan wrote:
>>>> The LMh driver was made for Qualcomm SoCs with clusters of 4 CPUs, but
>>>> some SoCs divide the CPUs into different sizes of clusters. In SDM670,
>>>> the first 6 CPUs are in the little cluster and the next 2 are in the big
>>>> cluster. Define the clusters in the match data and define the different
>>>> cluster configuration for SDM670.
>>>>
>>>> Currently, this only supports 8 CPUs and tolerates linking to any CPU in
>>>> the cluster.
>>>>
>>>> Signed-off-by: Richard Acayan <mailingradian@gmail.com>
>>>> ---
>>>> drivers/thermal/qcom/lmh.c | 69 +++++++++++++++++++++++++++++++-------
>>>> 1 file changed, 56 insertions(+), 13 deletions(-)
>>>>
>>>> +static const struct lmh_soc_data sdm670_lmh_data = {
>>>> + .enable_algos = true,
>>>> + .node_ids = {
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + },
>>>> +};
>>>> +
>>>> +static const struct lmh_soc_data sdm845_lmh_data = {
>>>> + .enable_algos = true,
>>>> + .node_ids = {
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + },
>>>> +};
>>>
>>> These tables made me wonder, can we determine this information from the
>>> DT? For example, by reading the qcom,freq-domain property. But...
>>>
>>>> +
>>>> +static const struct lmh_soc_data sm8150_lmh_data = {
>>>> + .enable_algos = false,
>>>> + .node_ids = {
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER0_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + LMH_CLUSTER1_NODE_ID,
>>>> + },
>>>> +};
>>>
>>> ... this might be problematic, unless this entry is broken. On SM8150 we
>>> have three freq domains, but up to now we were only programming two
>>> cluster nodes. Of course, it is possible to define that node_id is 0 for
>>> freq domain 0 and 1 for domains 1 and 2.
>>
>> The third-cluster situation on 8150 is not great - e.g. we only have a
>> single LMh irq that's shared between the big and prime cores. That was
>> fixed on later SoCs (which is why it's not wired up in the DT today)
>
> Thanks!
>
> Anyway, from your point of view, would it be better to define mappings
> in the driver (like it's done with this patch) or parse the DT?
Well, we can spend a lot of time trying to be smart about it and handle
the odd edge case, or add a simple comparison!
Konrad
Thread overview: 13+ messages
2026-03-28 1:40 [PATCH v4 0/4] SDM670 Basic SoC thermal zones Richard Acayan
2026-03-28 1:40 ` [PATCH v4 1/4] dt-bindings: thermal: tsens: add SDM670 compatible Richard Acayan
2026-03-28 1:40 ` [PATCH v4 2/4] dt-bindings: thermal: lmh: Add " Richard Acayan
2026-03-28 12:20 ` Krzysztof Kozlowski
2026-03-28 15:16 ` Richard Acayan
2026-03-28 1:40 ` [PATCH v4 3/4] thermal/qcom/lmh: support SDM670 and its CPU clusters Richard Acayan
2026-03-29 10:44 ` Dmitry Baryshkov
2026-03-30 10:32 ` Konrad Dybcio
2026-03-30 10:59 ` Dmitry Baryshkov
2026-03-30 13:50 ` Konrad Dybcio [this message]
2026-03-30 14:04 ` Dmitry Baryshkov
2026-03-30 10:04 ` Konrad Dybcio
2026-03-28 1:40 ` [PATCH v4 4/4] arm64: dts: qcom: sdm670: add thermal zones and thermal devices Richard Acayan