linux-arm-msm.vger.kernel.org archive mirror
From: Manivannan Sadhasivam <mani@kernel.org>
To: andersson@kernel.org, konradybcio@kernel.org, robh@kernel.org,
	krzk+dt@kernel.org, conor+dt@kernel.org
Cc: linux-arm-msm@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>,
	Manivannan Sadhasivam <mani@kernel.org>
Subject: [PATCH] arm64: dts: qcom: x1e80100: Add '#cooling-cells' for CPU nodes
Date: Wed, 15 Oct 2025 12:27:03 +0530
Message-ID: <20251015065703.9422-1-mani@kernel.org>

From: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>

Enable passive cooling for the CPUs in the X1E80100 SoC by adding the
'#cooling-cells' property to each CPU node. This allows the OS to
mitigate CPU power dissipation with the help of SCMI DVFS.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
Signed-off-by: Manivannan Sadhasivam <mani@kernel.org>
---
 arch/arm64/boot/dts/qcom/x1e80100.dtsi | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
index 51576d9c935d..001cf9cbb0c5 100644
--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
@@ -76,6 +76,7 @@ cpu0: cpu@0 {
 			power-domains = <&cpu_pd0>, <&scmi_dvfs 0>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 
 			l2_0: l2-cache {
 				compatible = "cache";
@@ -93,6 +94,7 @@ cpu1: cpu@100 {
 			power-domains = <&cpu_pd1>, <&scmi_dvfs 0>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu2: cpu@200 {
@@ -104,6 +106,7 @@ cpu2: cpu@200 {
 			power-domains = <&cpu_pd2>, <&scmi_dvfs 0>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu3: cpu@300 {
@@ -115,6 +118,7 @@ cpu3: cpu@300 {
 			power-domains = <&cpu_pd3>, <&scmi_dvfs 0>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu4: cpu@10000 {
@@ -126,6 +130,7 @@ cpu4: cpu@10000 {
 			power-domains = <&cpu_pd4>, <&scmi_dvfs 1>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 
 			l2_1: l2-cache {
 				compatible = "cache";
@@ -143,6 +148,7 @@ cpu5: cpu@10100 {
 			power-domains = <&cpu_pd5>, <&scmi_dvfs 1>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu6: cpu@10200 {
@@ -154,6 +160,7 @@ cpu6: cpu@10200 {
 			power-domains = <&cpu_pd6>, <&scmi_dvfs 1>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu7: cpu@10300 {
@@ -165,6 +172,7 @@ cpu7: cpu@10300 {
 			power-domains = <&cpu_pd7>, <&scmi_dvfs 1>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu8: cpu@20000 {
@@ -176,6 +184,7 @@ cpu8: cpu@20000 {
 			power-domains = <&cpu_pd8>, <&scmi_dvfs 2>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 
 			l2_2: l2-cache {
 				compatible = "cache";
@@ -193,6 +202,7 @@ cpu9: cpu@20100 {
 			power-domains = <&cpu_pd9>, <&scmi_dvfs 2>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu10: cpu@20200 {
@@ -204,6 +214,7 @@ cpu10: cpu@20200 {
 			power-domains = <&cpu_pd10>, <&scmi_dvfs 2>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu11: cpu@20300 {
@@ -215,6 +226,7 @@ cpu11: cpu@20300 {
 			power-domains = <&cpu_pd11>, <&scmi_dvfs 2>;
 			power-domain-names = "psci", "perf";
 			cpu-idle-states = <&cluster_c4>;
+			#cooling-cells = <2>;
 		};
 
 		cpu-map {
-- 
2.48.1

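For reference, '#cooling-cells = <2>' means that each cooling-device reference to one of these CPUs carries two cells: the minimum and maximum cooling states. A thermal zone's cooling map can then bind a CPU as a passive cooling device, as in the illustrative sketch below (the trip label 'cpu0_alert' is hypothetical and not taken from x1e80100.dtsi):

```dts
#include <dt-bindings/thermal/thermal.h>

thermal-zones {
	cpu0-thermal {
		/* sensor and trip points elided */
		cooling-maps {
			map0 {
				trip = <&cpu0_alert>;
				/* Two cells after the phandle: min and max
				 * cooling state. THERMAL_NO_LIMIT lets the
				 * governor throttle across the full range
				 * exposed by SCMI DVFS. */
				cooling-device = <&cpu0 THERMAL_NO_LIMIT
							THERMAL_NO_LIMIT>;
			};
		};
	};
};
```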


Thread overview: 3+ messages
2025-10-15  6:57 Manivannan Sadhasivam [this message]
2025-10-19 16:01 ` [PATCH] arm64: dts: qcom: x1e80100: Add '#cooling-cells' for CPU nodes Dmitry Baryshkov
2025-12-08  5:13   ` Manivannan Sadhasivam
