* [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194
@ 2025-09-09  6:21 Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth Aaron Kling via B4 Relay
                   ` (8 more replies)
  0 siblings, 9 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

This series borrows the concept used on Tegra234 of scaling the EMC
based on CPU frequency and applies it to Tegra186 and Tegra194. The
difference is that the BPMP firmware on those SoCs does not support
the bandwidth manager, so the scaling itself is handled similarly to
how Tegra124 currently works.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
Changes in v2:
- Use opp scoped free in patch 3
- Cleanup as requested in patch 3
- Move patch 3 to the start of the series to keep subsystems grouped
- Link to v1: https://lore.kernel.org/r/20250831-tegra186-icc-v1-0-607ddc53b507@gmail.com

---
Aaron Kling (8):
      cpufreq: tegra186: add OPP support and set bandwidth
      dt-bindings: memory: tegra186-mc: Add dummy client IDs for Tegra186
      dt-bindings: memory: tegra194-mc: Add dummy client IDs for Tegra194
      memory: tegra186-emc: Support non-bpmp icc scaling
      memory: tegra186: Support icc scaling
      memory: tegra194: Support icc scaling
      arm64: tegra: Add CPU OPP tables for Tegra186
      arm64: tegra: Add CPU OPP tables for Tegra194

 arch/arm64/boot/dts/nvidia/tegra186.dtsi | 317 +++++++++++++++
 arch/arm64/boot/dts/nvidia/tegra194.dtsi | 636 +++++++++++++++++++++++++++++++
 drivers/cpufreq/tegra186-cpufreq.c       | 152 +++++++-
 drivers/memory/tegra/tegra186-emc.c      | 132 ++++++-
 drivers/memory/tegra/tegra186.c          |  48 +++
 drivers/memory/tegra/tegra194.c          |  59 ++-
 include/dt-bindings/memory/tegra186-mc.h |   4 +
 include/dt-bindings/memory/tegra194-mc.h |   6 +
 8 files changed, 1344 insertions(+), 10 deletions(-)
---
base-commit: 1b237f190eb3d36f52dffe07a40b5eb210280e00
change-id: 20250823-tegra186-icc-7299110cd774
prerequisite-change-id: 20250826-tegra186-cpufreq-fixes-7fbff81c68a2:v3
prerequisite-patch-id: 74a2633b412b641f9808306cff9b0a697851d6c8
prerequisite-patch-id: 9c52827317f7abfb93885febb1894b40967bd64c

Best regards,
-- 
Aaron Kling <webgeek1234@gmail.com>



^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-09-30 10:30   ` Viresh Kumar
  2025-09-09  6:21 ` [PATCH v2 2/8] dt-bindings: memory: tegra186-mc: Add dummy client IDs for Tegra186 Aaron Kling via B4 Relay
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add support for using the OPP table from DT in the Tegra186 cpufreq
driver. Tegra SoCs receive the frequency lookup table (LUT) from
BPMP-FW. Cross-check the OPPs present in DT against the LUT from
BPMP-FW and enable only those DT OPPs that are also present in the
LUT.

The OPP table in DT maps each CPU frequency to a bandwidth value,
where the bandwidth value is per MC channel. DRAM bandwidth depends
on the number of MC channels, which can vary with the boot
configuration. This per-channel bandwidth from the OPP table is later
converted by the MC driver into the final bandwidth value, by
multiplying it with the number of channels, before being handled in
the EMC driver.
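
For illustration, a minimal user-space sketch of that conversion chain
(the channel count and OPP value here are hypothetical examples; the
real multiplier comes from mc->num_channels, and the result is treated
by the EMC driver as a minimum clock-rate request):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t per_channel_kBps = 3200000;	/* opp-peak-kBps from a CPU OPP entry */
	uint64_t num_channels = 4;		/* hypothetical; the MC driver reads mc->num_channels */
	uint64_t ddr = 2;			/* data is sampled on both EMC clock edges */

	/* MC driver: scale the per-channel request by the channel count */
	uint64_t total_kBps = per_channel_kBps * num_channels;

	/* EMC driver: convert ICC units (kBps) to bytes/s, then to a clock rate */
	uint64_t bytes_per_sec = total_kBps * 1000;
	uint64_t emc_min_rate_hz = bytes_per_sec / ddr;

	printf("bandwidth: %llu kBps, EMC min-rate request: %llu Hz\n",
	       (unsigned long long)total_kBps,
	       (unsigned long long)emc_min_rate_hz);
	return 0;
}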

If the OPP table is not present in DT, use the LUT from BPMP-FW
directly as the CPU frequency table and do not do the DRAM frequency
scaling, which matches the current behavior.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 drivers/cpufreq/tegra186-cpufreq.c | 152 +++++++++++++++++++++++++++++++++++--
 1 file changed, 145 insertions(+), 7 deletions(-)

diff --git a/drivers/cpufreq/tegra186-cpufreq.c b/drivers/cpufreq/tegra186-cpufreq.c
index bd94beebc4cc2fe6870e13ca55343cedb9729e99..cb7a033e8ae6e81b18bbf3bc63632c631e99129b 100644
--- a/drivers/cpufreq/tegra186-cpufreq.c
+++ b/drivers/cpufreq/tegra186-cpufreq.c
@@ -8,6 +8,7 @@
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
+#include <linux/units.h>
 
 #include <soc/tegra/bpmp.h>
 #include <soc/tegra/bpmp-abi.h>
@@ -58,7 +59,7 @@ static const struct tegra186_cpufreq_cpu tegra186_cpus[] = {
 };
 
 struct tegra186_cpufreq_cluster {
-	struct cpufreq_frequency_table *table;
+	struct cpufreq_frequency_table *bpmp_lut;
 	u32 ref_clk_khz;
 	u32 div;
 };
@@ -66,16 +67,121 @@ struct tegra186_cpufreq_cluster {
 struct tegra186_cpufreq_data {
 	void __iomem *regs;
 	const struct tegra186_cpufreq_cpu *cpus;
+	bool icc_dram_bw_scaling;
 	struct tegra186_cpufreq_cluster clusters[];
 };
 
+static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
+{
+	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
+	struct dev_pm_opp *opp __free(put_opp);
+	struct device *dev;
+	int ret;
+
+	dev = get_cpu_device(policy->cpu);
+	if (!dev)
+		return -ENODEV;
+
+	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * HZ_PER_KHZ, true);
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);
+
+	ret = dev_pm_opp_set_opp(dev, opp);
+	if (ret)
+		data->icc_dram_bw_scaling = false;
+
+	return ret;
+}
+
+static int tegra_cpufreq_init_cpufreq_table(struct cpufreq_policy *policy,
+					    struct cpufreq_frequency_table *bpmp_lut,
+					    struct cpufreq_frequency_table **opp_table)
+{
+	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
+	struct cpufreq_frequency_table *freq_table = NULL;
+	struct cpufreq_frequency_table *pos;
+	struct device *cpu_dev;
+	unsigned long rate;
+	int ret, max_opps;
+	int j = 0;
+
+	cpu_dev = get_cpu_device(policy->cpu);
+	if (!cpu_dev) {
+		pr_err("%s: failed to get cpu%d device\n", __func__, policy->cpu);
+		return -ENODEV;
+	}
+
+	/* Initialize OPP table mentioned in operating-points-v2 property in DT */
+	ret = dev_pm_opp_of_add_table_indexed(cpu_dev, 0);
+	if (ret) {
+		dev_err(cpu_dev, "Invalid or empty opp table in device tree\n");
+		data->icc_dram_bw_scaling = false;
+		return ret;
+	}
+
+	max_opps = dev_pm_opp_get_opp_count(cpu_dev);
+	if (max_opps <= 0) {
+		dev_err(cpu_dev, "Failed to add OPPs\n");
+		return max_opps;
+	}
+
+	/* Disable all opps and cross-validate against LUT later */
+	for (rate = 0; ; rate++) {
+		struct dev_pm_opp *opp __free(put_opp);
+
+		opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
+		if (IS_ERR(opp))
+			break;
+
+		dev_pm_opp_disable(cpu_dev, rate);
+	}
+
+	freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_KERNEL);
+	if (!freq_table)
+		return -ENOMEM;
+
+	/*
+	 * Cross-check the frequencies from the BPMP-FW LUT against the OPPs present in DT.
+	 * Enable only those DT OPPs that are also present in the LUT.
+	 */
+	cpufreq_for_each_valid_entry(pos, bpmp_lut) {
+		struct dev_pm_opp *opp __free(put_opp);
+
+		opp = dev_pm_opp_find_freq_exact(cpu_dev, pos->frequency * HZ_PER_KHZ, false);
+		if (IS_ERR(opp))
+			continue;
+
+		ret = dev_pm_opp_enable(cpu_dev, pos->frequency * HZ_PER_KHZ);
+		if (ret < 0)
+			return ret;
+
+		freq_table[j].driver_data = pos->driver_data;
+		freq_table[j].frequency = pos->frequency;
+		j++;
+	}
+
+	freq_table[j].driver_data = pos->driver_data;
+	freq_table[j].frequency = CPUFREQ_TABLE_END;
+
+	*opp_table = &freq_table[0];
+
+	dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
+
+	/* Prime interconnect data */
+	tegra_cpufreq_set_bw(policy, freq_table[j - 1].frequency);
+
+	return ret;
+}
+
 static int tegra186_cpufreq_init(struct cpufreq_policy *policy)
 {
 	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
 	unsigned int cluster = data->cpus[policy->cpu].bpmp_cluster_id;
+	struct cpufreq_frequency_table *freq_table;
+	struct cpufreq_frequency_table *bpmp_lut;
 	u32 cpu;
+	int ret;
 
-	policy->freq_table = data->clusters[cluster].table;
 	policy->cpuinfo.transition_latency = 300 * 1000;
 	policy->driver_data = NULL;
 
@@ -85,6 +191,20 @@ static int tegra186_cpufreq_init(struct cpufreq_policy *policy)
 			cpumask_set_cpu(cpu, policy->cpus);
 	}
 
+	bpmp_lut = data->clusters[cluster].bpmp_lut;
+
+	if (data->icc_dram_bw_scaling) {
+		ret = tegra_cpufreq_init_cpufreq_table(policy, bpmp_lut, &freq_table);
+		if (!ret) {
+			policy->freq_table = freq_table;
+			return 0;
+		}
+	}
+
+	data->icc_dram_bw_scaling = false;
+	policy->freq_table = bpmp_lut;
+	pr_info("OPP tables missing from DT, EMC frequency scaling disabled\n");
+
 	return 0;
 }
 
@@ -102,6 +222,10 @@ static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
 		writel(edvd_val, data->regs + edvd_offset);
 	}
 
+	if (data->icc_dram_bw_scaling)
+		tegra_cpufreq_set_bw(policy, tbl->frequency);
+
+
 	return 0;
 }
 
@@ -136,7 +260,7 @@ static struct cpufreq_driver tegra186_cpufreq_driver = {
 	.init = tegra186_cpufreq_init,
 };
 
-static struct cpufreq_frequency_table *init_vhint_table(
+static struct cpufreq_frequency_table *tegra_cpufreq_bpmp_read_lut(
 	struct platform_device *pdev, struct tegra_bpmp *bpmp,
 	struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id,
 	int *num_rates)
@@ -231,6 +355,7 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
 {
 	struct tegra186_cpufreq_data *data;
 	struct tegra_bpmp *bpmp;
+	struct device *cpu_dev;
 	unsigned int i = 0, err, edvd_offset;
 	int num_rates = 0;
 	u32 edvd_val, cpu;
@@ -256,9 +381,9 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
 	for (i = 0; i < TEGRA186_NUM_CLUSTERS; i++) {
 		struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
 
-		cluster->table = init_vhint_table(pdev, bpmp, cluster, i, &num_rates);
-		if (IS_ERR(cluster->table)) {
-			err = PTR_ERR(cluster->table);
+		cluster->bpmp_lut = tegra_cpufreq_bpmp_read_lut(pdev, bpmp, cluster, i, &num_rates);
+		if (IS_ERR(cluster->bpmp_lut)) {
+			err = PTR_ERR(cluster->bpmp_lut);
 			goto put_bpmp;
 		} else if (!num_rates) {
 			err = -EINVAL;
@@ -267,7 +392,7 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
 
 		for (cpu = 0; cpu < ARRAY_SIZE(tegra186_cpus); cpu++) {
 			if (data->cpus[cpu].bpmp_cluster_id == i) {
-				edvd_val = cluster->table[num_rates - 1].driver_data;
+				edvd_val = cluster->bpmp_lut[num_rates - 1].driver_data;
 				edvd_offset = data->cpus[cpu].edvd_offset;
 				writel(edvd_val, data->regs + edvd_offset);
 			}
@@ -276,6 +401,19 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
 
 	tegra186_cpufreq_driver.driver_data = data;
 
+	/* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */
+	cpu_dev = get_cpu_device(0);
+	if (!cpu_dev) {
+		err = -EPROBE_DEFER;
+		goto put_bpmp;
+	}
+
+	if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) {
+		err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
+		if (!err)
+			data->icc_dram_bw_scaling = true;
+	}
+
 	err = cpufreq_register_driver(&tegra186_cpufreq_driver);
 
 put_bpmp:

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 2/8] dt-bindings: memory: tegra186-mc: Add dummy client IDs for Tegra186
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 3/8] dt-bindings: memory: tegra194-mc: Add dummy client IDs for Tegra194 Aaron Kling via B4 Relay
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add ICC IDs for dummy software clients representing CCPLEX clusters.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 include/dt-bindings/memory/tegra186-mc.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/dt-bindings/memory/tegra186-mc.h b/include/dt-bindings/memory/tegra186-mc.h
index 82a1e27f73576212bc227c74adff28c5f33c6bb1..8abbc26f3123aad2dffaec6be21f99f8de1ccf89 100644
--- a/include/dt-bindings/memory/tegra186-mc.h
+++ b/include/dt-bindings/memory/tegra186-mc.h
@@ -247,4 +247,8 @@
 #define TEGRA186_MEMORY_CLIENT_VICSRD1 0xa2
 #define TEGRA186_MEMORY_CLIENT_NVDECSRD1 0xa3
 
+/* ICC IDs for dummy MC clients used to represent CPU clusters */
+#define TEGRA_ICC_MC_CPU_CLUSTER0       1003
+#define TEGRA_ICC_MC_CPU_CLUSTER1       1004
+
 #endif

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 3/8] dt-bindings: memory: tegra194-mc: Add dummy client IDs for Tegra194
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 2/8] dt-bindings: memory: tegra186-mc: Add dummy client IDs for Tegra186 Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 4/8] memory: tegra186-emc: Support non-bpmp icc scaling Aaron Kling via B4 Relay
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add ICC IDs for dummy software clients representing CCPLEX clusters.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 include/dt-bindings/memory/tegra194-mc.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/dt-bindings/memory/tegra194-mc.h b/include/dt-bindings/memory/tegra194-mc.h
index eed48b746bc94072a6bd0af7f344dbb6f6618859..a7d97a1a470cd3cfb18c7ef45c421426ea3c7abf 100644
--- a/include/dt-bindings/memory/tegra194-mc.h
+++ b/include/dt-bindings/memory/tegra194-mc.h
@@ -407,4 +407,10 @@
 /* MSS internal memqual MIU6 write clients */
 #define TEGRA194_MEMORY_CLIENT_MIU6W 0xff
 
+/* ICC IDs for dummy MC clients used to represent CPU clusters */
+#define TEGRA_ICC_MC_CPU_CLUSTER0       1003
+#define TEGRA_ICC_MC_CPU_CLUSTER1       1004
+#define TEGRA_ICC_MC_CPU_CLUSTER2       1005
+#define TEGRA_ICC_MC_CPU_CLUSTER3       1006
+
 #endif

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 4/8] memory: tegra186-emc: Support non-bpmp icc scaling
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
                   ` (2 preceding siblings ...)
  2025-09-09  6:21 ` [PATCH v2 3/8] dt-bindings: memory: tegra194-mc: Add dummy client IDs for Tegra194 Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 5/8] memory: tegra186: Support " Aaron Kling via B4 Relay
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add support for dynamic frequency scaling of the external memory on
devices whose BPMP firmware does not support the bandwidth manager
(bwmgr).

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 drivers/memory/tegra/tegra186-emc.c | 132 +++++++++++++++++++++++++++++++++++-
 1 file changed, 130 insertions(+), 2 deletions(-)

diff --git a/drivers/memory/tegra/tegra186-emc.c b/drivers/memory/tegra/tegra186-emc.c
index d6cd90c7ad5380a9ff9052a60f62c9bcc4fdac5f..1711f2e85ad07692feb8f6f14c8c2b10ea42fde5 100644
--- a/drivers/memory/tegra/tegra186-emc.c
+++ b/drivers/memory/tegra/tegra186-emc.c
@@ -18,6 +18,17 @@ struct tegra186_emc_dvfs {
 	unsigned long rate;
 };
 
+enum emc_rate_request_type {
+	EMC_RATE_DEBUG,
+	EMC_RATE_ICC,
+	EMC_RATE_TYPE_MAX,
+};
+
+struct emc_rate_request {
+	unsigned long min_rate;
+	unsigned long max_rate;
+};
+
 struct tegra186_emc {
 	struct tegra_bpmp *bpmp;
 	struct device *dev;
@@ -33,8 +44,90 @@ struct tegra186_emc {
 	} debugfs;
 
 	struct icc_provider provider;
+
+	/*
+	 * There are multiple sources in the EMC driver which could request
+	 * a min/max clock rate, these rates are contained in this array.
+	 */
+	struct emc_rate_request requested_rate[EMC_RATE_TYPE_MAX];
+
+	/* protect shared rate-change code path */
+	struct mutex rate_lock;
 };
 
+static void tegra_emc_rate_requests_init(struct tegra186_emc *emc)
+{
+	unsigned int i;
+
+	for (i = 0; i < EMC_RATE_TYPE_MAX; i++) {
+		emc->requested_rate[i].min_rate = 0;
+		emc->requested_rate[i].max_rate = ULONG_MAX;
+	}
+}
+
+static int emc_request_rate(struct tegra186_emc *emc,
+			    unsigned long new_min_rate,
+			    unsigned long new_max_rate,
+			    enum emc_rate_request_type type)
+{
+	struct emc_rate_request *req = emc->requested_rate;
+	unsigned long min_rate = 0, max_rate = ULONG_MAX;
+	unsigned int i;
+	int err;
+
+	/* select minimum and maximum rates among the requested rates */
+	for (i = 0; i < EMC_RATE_TYPE_MAX; i++, req++) {
+		if (i == type) {
+			min_rate = max(new_min_rate, min_rate);
+			max_rate = min(new_max_rate, max_rate);
+		} else {
+			min_rate = max(req->min_rate, min_rate);
+			max_rate = min(req->max_rate, max_rate);
+		}
+	}
+
+	if (min_rate > max_rate) {
+		dev_err_ratelimited(emc->dev, "%s: type %u: out of range: %lu %lu\n",
+				    __func__, type, min_rate, max_rate);
+		return -ERANGE;
+	}
+
+	err = clk_set_rate(emc->clk, min_rate);
+	if (err)
+		return err;
+
+	emc->requested_rate[type].min_rate = new_min_rate;
+	emc->requested_rate[type].max_rate = new_max_rate;
+
+	return 0;
+}
+
+static int emc_set_min_rate(struct tegra186_emc *emc, unsigned long rate,
+			    enum emc_rate_request_type type)
+{
+	struct emc_rate_request *req = &emc->requested_rate[type];
+	int ret;
+
+	mutex_lock(&emc->rate_lock);
+	ret = emc_request_rate(emc, rate, req->max_rate, type);
+	mutex_unlock(&emc->rate_lock);
+
+	return ret;
+}
+
+static int emc_set_max_rate(struct tegra186_emc *emc, unsigned long rate,
+			    enum emc_rate_request_type type)
+{
+	struct emc_rate_request *req = &emc->requested_rate[type];
+	int ret;
+
+	mutex_lock(&emc->rate_lock);
+	ret = emc_request_rate(emc, req->min_rate, rate, type);
+	mutex_unlock(&emc->rate_lock);
+
+	return ret;
+}
+
 /*
  * debugfs interface
  *
@@ -107,7 +200,7 @@ static int tegra186_emc_debug_min_rate_set(void *data, u64 rate)
 	if (!tegra186_emc_validate_rate(emc, rate))
 		return -EINVAL;
 
-	err = clk_set_min_rate(emc->clk, rate);
+	err = emc_set_min_rate(emc, rate, EMC_RATE_DEBUG);
 	if (err < 0)
 		return err;
 
@@ -137,7 +230,7 @@ static int tegra186_emc_debug_max_rate_set(void *data, u64 rate)
 	if (!tegra186_emc_validate_rate(emc, rate))
 		return -EINVAL;
 
-	err = clk_set_max_rate(emc->clk, rate);
+	err = emc_set_max_rate(emc, rate, EMC_RATE_DEBUG);
 	if (err < 0)
 		return err;
 
@@ -217,6 +310,12 @@ static int tegra186_emc_get_emc_dvfs_latency(struct tegra186_emc *emc)
 	return 0;
 }
 
+static inline struct tegra186_emc *
+to_tegra186_emc_provider(struct icc_provider *provider)
+{
+	return container_of(provider, struct tegra186_emc, provider);
+}
+
 /*
  * tegra_emc_icc_set_bw() - Set BW api for EMC provider
  * @src: ICC node for External Memory Controller (EMC)
@@ -227,6 +326,33 @@ static int tegra186_emc_get_emc_dvfs_latency(struct tegra186_emc *emc)
  */
 static int tegra_emc_icc_set_bw(struct icc_node *src, struct icc_node *dst)
 {
+	struct tegra186_emc *emc = to_tegra186_emc_provider(dst->provider);
+	struct tegra_mc *mc = dev_get_drvdata(emc->dev->parent);
+	unsigned long long peak_bw = icc_units_to_bps(dst->peak_bw);
+	unsigned long long avg_bw = icc_units_to_bps(dst->avg_bw);
+	unsigned long long rate = max(avg_bw, peak_bw);
+	const unsigned int ddr = 2;
+	int err;
+
+	/*
+	 * Do nothing here if bwmgr is supported in BPMP-FW. BPMP-FW sets the final
+	 * Freq based on the passed values.
+	 */
+	if (mc->bwmgr_mrq_supported)
+		return 0;
+
+	/*
+	 * The Tegra186 EMC runs at the SDRAM bus clock rate, i.e. the EMC
+	 * clock rate is half the peak data rate because data is sampled on
+	 * both EMC clock edges.
+	 */
+	do_div(rate, ddr);
+	rate = min_t(u64, rate, U32_MAX);
+
+	err = emc_set_min_rate(emc, rate, EMC_RATE_ICC);
+	if (err)
+		return err;
+
 	return 0;
 }
 
@@ -334,6 +460,8 @@ static int tegra186_emc_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, emc);
 	emc->dev = &pdev->dev;
 
+	tegra_emc_rate_requests_init(emc);
+
 	if (tegra_bpmp_mrq_is_supported(emc->bpmp, MRQ_EMC_DVFS_LATENCY)) {
 		err = tegra186_emc_get_emc_dvfs_latency(emc);
 		if (err)

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 5/8] memory: tegra186: Support icc scaling
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
                   ` (3 preceding siblings ...)
  2025-09-09  6:21 ` [PATCH v2 4/8] memory: tegra186-emc: Support non-bpmp icc scaling Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 6/8] memory: tegra194: " Aaron Kling via B4 Relay
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add interconnect framework support so that DRAM bandwidth can be set
dynamically by different clients. The MC driver is added as an ICC
provider; the EMC driver is already one.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 drivers/memory/tegra/tegra186.c | 48 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/memory/tegra/tegra186.c b/drivers/memory/tegra/tegra186.c
index aee11457bf8e032637d1772affb87da0cac68494..1384164f624af5d4aaccedc84443d203ba3db2c6 100644
--- a/drivers/memory/tegra/tegra186.c
+++ b/drivers/memory/tegra/tegra186.c
@@ -899,9 +899,56 @@ static const struct tegra_mc_client tegra186_mc_clients[] = {
 				.security = 0x51c,
 			},
 		},
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER0,
+		.name = "sw_cluster0",
+		.type = TEGRA_ICC_NISO,
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER1,
+		.name = "sw_cluster1",
+		.type = TEGRA_ICC_NISO,
 	},
 };
 
+static int tegra186_mc_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	/* TODO: program PTSA */
+	return 0;
+}
+
+static int tegra186_mc_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+				     u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	struct icc_provider *p = node->provider;
+	struct tegra_mc *mc = icc_provider_to_tegra_mc(p);
+
+	if (node->id == TEGRA_ICC_MC_CPU_CLUSTER0 ||
+	    node->id == TEGRA_ICC_MC_CPU_CLUSTER1) {
+		if (mc)
+			peak_bw = peak_bw * mc->num_channels;
+	}
+
+	*agg_avg += avg_bw;
+	*agg_peak = max(*agg_peak, peak_bw);
+
+	return 0;
+}
+
+static int tegra186_mc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
+{
+	*avg = 0;
+	*peak = 0;
+
+	return 0;
+}
+
+static const struct tegra_mc_icc_ops tegra186_mc_icc_ops = {
+	.xlate = tegra_mc_icc_xlate,
+	.aggregate = tegra186_mc_icc_aggregate,
+	.get_bw = tegra186_mc_icc_get_init_bw,
+	.set = tegra186_mc_icc_set,
+};
+
 const struct tegra_mc_soc tegra186_mc_soc = {
 	.num_clients = ARRAY_SIZE(tegra186_mc_clients),
 	.clients = tegra186_mc_clients,
@@ -912,6 +959,7 @@ const struct tegra_mc_soc tegra186_mc_soc = {
 		   MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
 		   MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
 	.ops = &tegra186_mc_ops,
+	.icc_ops = &tegra186_mc_icc_ops,
 	.ch_intmask = 0x0000000f,
 	.global_intstatus_channel_shift = 0,
 };

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 6/8] memory: tegra194: Support icc scaling
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
                   ` (4 preceding siblings ...)
  2025-09-09  6:21 ` [PATCH v2 5/8] memory: tegra186: Support " Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 7/8] arm64: tegra: Add CPU OPP tables for Tegra186 Aaron Kling via B4 Relay
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add interconnect framework support so that DRAM bandwidth can be set
dynamically by different clients. The MC driver is added as an ICC
provider; the EMC driver is already one.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 drivers/memory/tegra/tegra194.c | 59 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 58 insertions(+), 1 deletion(-)

diff --git a/drivers/memory/tegra/tegra194.c b/drivers/memory/tegra/tegra194.c
index 26035ac3a1eb51a3d8ce3830427b4412b48baf3c..e478587586e7f01afd41ff74d26a9a3f1d881347 100644
--- a/drivers/memory/tegra/tegra194.c
+++ b/drivers/memory/tegra/tegra194.c
@@ -1340,9 +1340,66 @@ static const struct tegra_mc_client tegra194_mc_clients[] = {
 				.security = 0x7fc,
 			},
 		},
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER0,
+		.name = "sw_cluster0",
+		.type = TEGRA_ICC_NISO,
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER1,
+		.name = "sw_cluster1",
+		.type = TEGRA_ICC_NISO,
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER2,
+		.name = "sw_cluster2",
+		.type = TEGRA_ICC_NISO,
+	}, {
+		.id = TEGRA_ICC_MC_CPU_CLUSTER3,
+		.name = "sw_cluster3",
+		.type = TEGRA_ICC_NISO,
 	},
 };
 
+static int tegra194_mc_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	/* TODO: program PTSA */
+	return 0;
+}
+
+static int tegra194_mc_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+				     u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	struct icc_provider *p = node->provider;
+	struct tegra_mc *mc = icc_provider_to_tegra_mc(p);
+
+	if (node->id == TEGRA_ICC_MC_CPU_CLUSTER0 ||
+	    node->id == TEGRA_ICC_MC_CPU_CLUSTER1 ||
+	    node->id == TEGRA_ICC_MC_CPU_CLUSTER2 ||
+	    node->id == TEGRA_ICC_MC_CPU_CLUSTER3) {
+		if (mc)
+			peak_bw = peak_bw * mc->num_channels;
+	}
+
+	*agg_avg += avg_bw;
+	*agg_peak = max(*agg_peak, peak_bw);
+
+	return 0;
+}
+
+static int tegra194_mc_icc_get_init_bw(struct icc_node *node, u32 *avg, u32 *peak)
+{
+	*avg = 0;
+	*peak = 0;
+
+	return 0;
+}
+
+static const struct tegra_mc_icc_ops tegra194_mc_icc_ops = {
+	.xlate = tegra_mc_icc_xlate,
+	.aggregate = tegra194_mc_icc_aggregate,
+	.get_bw = tegra194_mc_icc_get_init_bw,
+	.set = tegra194_mc_icc_set,
+};
+
 const struct tegra_mc_soc tegra194_mc_soc = {
 	.num_clients = ARRAY_SIZE(tegra194_mc_clients),
 	.clients = tegra194_mc_clients,
@@ -1355,7 +1412,7 @@ const struct tegra_mc_soc tegra194_mc_soc = {
 		   MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
 	.has_addr_hi_reg = true,
 	.ops = &tegra186_mc_ops,
-	.icc_ops = &tegra_mc_icc_ops,
+	.icc_ops = &tegra194_mc_icc_ops,
 	.ch_intmask = 0x00000f00,
 	.global_intstatus_channel_shift = 8,
 };

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 7/8] arm64: tegra: Add CPU OPP tables for Tegra186
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
                   ` (5 preceding siblings ...)
  2025-09-09  6:21 ` [PATCH v2 6/8] memory: tegra194: " Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-09-09  6:21 ` [PATCH v2 8/8] arm64: tegra: Add CPU OPP tables for Tegra194 Aaron Kling via B4 Relay
  2025-10-09  0:05 ` [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Krzysztof Kozlowski
  8 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add OPP tables and interconnects properties to scale the DDR frequency
with the CPU frequency for better performance. Each operating point
entry of the OPP table maps a CPU frequency to a per-MC-channel
bandwidth. One table is added for each cluster because the different
CPU types have different scaling curves.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 arch/arm64/boot/dts/nvidia/tegra186.dtsi | 317 +++++++++++++++++++++++++++++++
 1 file changed, 317 insertions(+)

diff --git a/arch/arm64/boot/dts/nvidia/tegra186.dtsi b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
index 5778c93af3e6e72f5f14a9fcee1e7abf80d2d2c5..d3f6a938a9b019a043ce2de7ec17bd00155b3eb2 100644
--- a/arch/arm64/boot/dts/nvidia/tegra186.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra186.dtsi
@@ -1943,6 +1943,8 @@ cpus {
 		denver_0: cpu@0 {
 			compatible = "nvidia,tegra186-denver";
 			device_type = "cpu";
+			operating-points-v2 = <&dnv_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
 			i-cache-size = <0x20000>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -1956,6 +1958,8 @@ denver_0: cpu@0 {
 		denver_1: cpu@1 {
 			compatible = "nvidia,tegra186-denver";
 			device_type = "cpu";
+			operating-points-v2 = <&dnv_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
 			i-cache-size = <0x20000>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -1969,6 +1973,8 @@ denver_1: cpu@1 {
 		ca57_0: cpu@2 {
 			compatible = "arm,cortex-a57";
 			device_type = "cpu";
+			operating-points-v2 = <&a57_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
 			i-cache-size = <0xC000>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -1982,6 +1988,8 @@ ca57_0: cpu@2 {
 		ca57_1: cpu@3 {
 			compatible = "arm,cortex-a57";
 			device_type = "cpu";
+			operating-points-v2 = <&a57_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
 			i-cache-size = <0xC000>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -1995,6 +2003,8 @@ ca57_1: cpu@3 {
 		ca57_2: cpu@4 {
 			compatible = "arm,cortex-a57";
 			device_type = "cpu";
+			operating-points-v2 = <&a57_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
 			i-cache-size = <0xC000>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2008,6 +2018,8 @@ ca57_2: cpu@4 {
 		ca57_3: cpu@5 {
 			compatible = "arm,cortex-a57";
 			device_type = "cpu";
+			operating-points-v2 = <&a57_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
 			i-cache-size = <0xC000>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <256>;
@@ -2182,4 +2194,309 @@ timer {
 		interrupt-parent = <&gic>;
 		always-on;
 	};
+
+	dnv_opp_tbl: opp-table-cluster0 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-998400000 {
+			  opp-hz = /bits/ 64 <998400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1036800000 {
+			  opp-hz = /bits/ 64 <1036800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1075200000 {
+			  opp-hz = /bits/ 64 <1075200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1113600000 {
+			  opp-hz = /bits/ 64 <1113600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1152000000 {
+			  opp-hz = /bits/ 64 <1152000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1190400000 {
+			  opp-hz = /bits/ 64 <1190400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1228800000 {
+			  opp-hz = /bits/ 64 <1228800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1267200000 {
+			  opp-hz = /bits/ 64 <1267200000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1305600000 {
+			  opp-hz = /bits/ 64 <1305600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1344000000 {
+			  opp-hz = /bits/ 64 <1344000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1382400000 {
+			  opp-hz = /bits/ 64 <1382400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1420800000 {
+			  opp-hz = /bits/ 64 <1420800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1459200000 {
+			  opp-hz = /bits/ 64 <1459200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1497600000 {
+			  opp-hz = /bits/ 64 <1497600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1536000000 {
+			  opp-hz = /bits/ 64 <1536000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1574400000 {
+			  opp-hz = /bits/ 64 <1574400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1612800000 {
+			  opp-hz = /bits/ 64 <1612800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1651200000 {
+			  opp-hz = /bits/ 64 <1651200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1689600000 {
+			  opp-hz = /bits/ 64 <1689600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1728000000 {
+			  opp-hz = /bits/ 64 <1728000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1766400000 {
+			  opp-hz = /bits/ 64 <1766400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1804800000 {
+			  opp-hz = /bits/ 64 <1804800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1843200000 {
+			  opp-hz = /bits/ 64 <1843200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1881600000 {
+			  opp-hz = /bits/ 64 <1881600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1920000000 {
+			  opp-hz = /bits/ 64 <1920000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1958400000 {
+			  opp-hz = /bits/ 64 <1958400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1996800000 {
+			  opp-hz = /bits/ 64 <1996800000>;
+			  opp-peak-kBps = <3732000>;
+		};
+
+		opp-2035200000 {
+			  opp-hz = /bits/ 64 <2035200000>;
+			  opp-peak-kBps = <3732000>;
+		};
+	};
+
+	a57_opp_tbl: opp-table-cluster1 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-883200000 {
+			  opp-hz = /bits/ 64 <883200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-921600000 {
+			  opp-hz = /bits/ 64 <921600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-960000000 {
+			  opp-hz = /bits/ 64 <960000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-998400000 {
+			  opp-hz = /bits/ 64 <998400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1036800000 {
+			  opp-hz = /bits/ 64 <1036800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1075200000 {
+			  opp-hz = /bits/ 64 <1075200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1113600000 {
+			  opp-hz = /bits/ 64 <1113600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1152000000 {
+			  opp-hz = /bits/ 64 <1152000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1190400000 {
+			  opp-hz = /bits/ 64 <1190400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1228800000 {
+			  opp-hz = /bits/ 64 <1228800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1267200000 {
+			  opp-hz = /bits/ 64 <1267200000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1305600000 {
+			  opp-hz = /bits/ 64 <1305600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1344000000 {
+			  opp-hz = /bits/ 64 <1344000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1382400000 {
+			  opp-hz = /bits/ 64 <1382400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1420800000 {
+			  opp-hz = /bits/ 64 <1420800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1459200000 {
+			  opp-hz = /bits/ 64 <1459200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1497600000 {
+			  opp-hz = /bits/ 64 <1497600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1536000000 {
+			  opp-hz = /bits/ 64 <1536000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1574400000 {
+			  opp-hz = /bits/ 64 <1574400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1612800000 {
+			  opp-hz = /bits/ 64 <1612800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1651200000 {
+			  opp-hz = /bits/ 64 <1651200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1689600000 {
+			  opp-hz = /bits/ 64 <1689600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1728000000 {
+			  opp-hz = /bits/ 64 <1728000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1766400000 {
+			  opp-hz = /bits/ 64 <1766400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1804800000 {
+			  opp-hz = /bits/ 64 <1804800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1843200000 {
+			  opp-hz = /bits/ 64 <1843200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1881600000 {
+			  opp-hz = /bits/ 64 <1881600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1920000000 {
+			  opp-hz = /bits/ 64 <1920000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1958400000 {
+			  opp-hz = /bits/ 64 <1958400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1996800000 {
+			  opp-hz = /bits/ 64 <1996800000>;
+			  opp-peak-kBps = <3732000>;
+		};
+
+		opp-2035200000 {
+			  opp-hz = /bits/ 64 <2035200000>;
+			  opp-peak-kBps = <3732000>;
+		};
+	};
 };

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 8/8] arm64: tegra: Add CPU OPP tables for Tegra194
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
                   ` (6 preceding siblings ...)
  2025-09-09  6:21 ` [PATCH v2 7/8] arm64: tegra: Add CPU OPP tables for Tegra186 Aaron Kling via B4 Relay
@ 2025-09-09  6:21 ` Aaron Kling via B4 Relay
  2025-10-09  0:05 ` [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Krzysztof Kozlowski
  8 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling via B4 Relay @ 2025-09-09  6:21 UTC (permalink / raw)
  To: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm, Aaron Kling

From: Aaron Kling <webgeek1234@gmail.com>

Add OPP tables and interconnects properties to scale the DDR frequency
with the CPU frequency for better performance. Each operating point
entry of the OPP table maps a CPU frequency to a per-MC-channel
bandwidth.
One table is added for each cluster even though the table data is the
same, because the bandwidth request is per cluster. If a single table
marked 'opp-shared' were shared among all clusters, the OPP framework
would create a single icc path and hence a single bandwidth request.
Here the OPP table data is identical, but the MC client ID argument in
the interconnects property differs per cluster, so per-cluster tables
result in separate icc paths and allow per-cluster bandwidth requests.

Signed-off-by: Aaron Kling <webgeek1234@gmail.com>
---
 arch/arm64/boot/dts/nvidia/tegra194.dtsi | 636 +++++++++++++++++++++++++++++++
 1 file changed, 636 insertions(+)

diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
index 1399342f23e1c4f73b278adc66dfb948fc30d326..a6c4c6c73707354f62f778bbea5afaec3fdbe22d 100644
--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
@@ -2890,6 +2890,8 @@ cpu0_0: cpu@0 {
 			device_type = "cpu";
 			reg = <0x000>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl0_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -2904,6 +2906,8 @@ cpu0_1: cpu@1 {
 			device_type = "cpu";
 			reg = <0x001>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl0_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -2918,6 +2922,8 @@ cpu1_0: cpu@100 {
 			device_type = "cpu";
 			reg = <0x100>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl1_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -2932,6 +2938,8 @@ cpu1_1: cpu@101 {
 			device_type = "cpu";
 			reg = <0x101>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl1_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER1 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -2946,6 +2954,8 @@ cpu2_0: cpu@200 {
 			device_type = "cpu";
 			reg = <0x200>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl2_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER2 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -2960,6 +2970,8 @@ cpu2_1: cpu@201 {
 			device_type = "cpu";
 			reg = <0x201>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl2_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER2 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -2974,6 +2986,8 @@ cpu3_0: cpu@300 {
 			device_type = "cpu";
 			reg = <0x300>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl3_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER3 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -2988,6 +3002,8 @@ cpu3_1: cpu@301 {
 			device_type = "cpu";
 			reg = <0x301>;
 			enable-method = "psci";
+			operating-points-v2 = <&cl3_opp_tbl>;
+			interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER3 &emc>;
 			i-cache-size = <131072>;
 			i-cache-line-size = <64>;
 			i-cache-sets = <512>;
@@ -3181,4 +3197,624 @@ timer {
 		interrupt-parent = <&gic>;
 		always-on;
 	};
+
+	cl0_opp_tbl: opp-table-cluster0 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-115200000 {
+			  opp-hz = /bits/ 64 <115200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-192000000 {
+			  opp-hz = /bits/ 64 <192000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-268800000 {
+			  opp-hz = /bits/ 64 <268800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-345600000 {
+			  opp-hz = /bits/ 64 <345600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-422400000 {
+			  opp-hz = /bits/ 64 <422400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-499200000 {
+			  opp-hz = /bits/ 64 <499200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-576000000 {
+			  opp-hz = /bits/ 64 <576000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-652800000 {
+			  opp-hz = /bits/ 64 <652800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-729600000 {
+			  opp-hz = /bits/ 64 <729600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-806400000 {
+			  opp-hz = /bits/ 64 <806400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-883200000 {
+			  opp-hz = /bits/ 64 <883200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-960000000 {
+			  opp-hz = /bits/ 64 <960000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1036800000 {
+			  opp-hz = /bits/ 64 <1036800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1113600000 {
+			  opp-hz = /bits/ 64 <1113600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1190400000 {
+			  opp-hz = /bits/ 64 <1190400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1267200000 {
+			  opp-hz = /bits/ 64 <1267200000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1344000000 {
+			  opp-hz = /bits/ 64 <1344000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1420800000 {
+			  opp-hz = /bits/ 64 <1420800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1497600000 {
+			  opp-hz = /bits/ 64 <1497600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1574400000 {
+			  opp-hz = /bits/ 64 <1574400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1651200000 {
+			  opp-hz = /bits/ 64 <1651200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1728000000 {
+			  opp-hz = /bits/ 64 <1728000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1804800000 {
+			  opp-hz = /bits/ 64 <1804800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1881600000 {
+			  opp-hz = /bits/ 64 <1881600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1907200000 {
+			  opp-hz = /bits/ 64 <1907200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1958400000 {
+			  opp-hz = /bits/ 64 <1958400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2035200000 {
+			  opp-hz = /bits/ 64 <2035200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2112000000 {
+			  opp-hz = /bits/ 64 <2112000000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2188800000 {
+			  opp-hz = /bits/ 64 <2188800000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2265600000 {
+			  opp-hz = /bits/ 64 <2265600000>;
+			  opp-peak-kBps = <4266000>;
+		};
+	};
+
+	cl1_opp_tbl: opp-table-cluster1 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-115200000 {
+			  opp-hz = /bits/ 64 <115200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-192000000 {
+			  opp-hz = /bits/ 64 <192000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-268800000 {
+			  opp-hz = /bits/ 64 <268800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-345600000 {
+			  opp-hz = /bits/ 64 <345600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-422400000 {
+			  opp-hz = /bits/ 64 <422400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-499200000 {
+			  opp-hz = /bits/ 64 <499200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-576000000 {
+			  opp-hz = /bits/ 64 <576000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-652800000 {
+			  opp-hz = /bits/ 64 <652800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-729600000 {
+			  opp-hz = /bits/ 64 <729600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-806400000 {
+			  opp-hz = /bits/ 64 <806400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-883200000 {
+			  opp-hz = /bits/ 64 <883200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-960000000 {
+			  opp-hz = /bits/ 64 <960000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1036800000 {
+			  opp-hz = /bits/ 64 <1036800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1113600000 {
+			  opp-hz = /bits/ 64 <1113600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1190400000 {
+			  opp-hz = /bits/ 64 <1190400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1267200000 {
+			  opp-hz = /bits/ 64 <1267200000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1344000000 {
+			  opp-hz = /bits/ 64 <1344000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1420800000 {
+			  opp-hz = /bits/ 64 <1420800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1497600000 {
+			  opp-hz = /bits/ 64 <1497600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1574400000 {
+			  opp-hz = /bits/ 64 <1574400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1651200000 {
+			  opp-hz = /bits/ 64 <1651200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1728000000 {
+			  opp-hz = /bits/ 64 <1728000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1804800000 {
+			  opp-hz = /bits/ 64 <1804800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1881600000 {
+			  opp-hz = /bits/ 64 <1881600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1907200000 {
+			  opp-hz = /bits/ 64 <1907200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1958400000 {
+			  opp-hz = /bits/ 64 <1958400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2035200000 {
+			  opp-hz = /bits/ 64 <2035200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2112000000 {
+			  opp-hz = /bits/ 64 <2112000000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2188800000 {
+			  opp-hz = /bits/ 64 <2188800000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2265600000 {
+			  opp-hz = /bits/ 64 <2265600000>;
+			  opp-peak-kBps = <4266000>;
+		};
+	};
+
+	cl2_opp_tbl: opp-table-cluster2 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-115200000 {
+			  opp-hz = /bits/ 64 <115200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-192000000 {
+			  opp-hz = /bits/ 64 <192000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-268800000 {
+			  opp-hz = /bits/ 64 <268800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-345600000 {
+			  opp-hz = /bits/ 64 <345600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-422400000 {
+			  opp-hz = /bits/ 64 <422400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-499200000 {
+			  opp-hz = /bits/ 64 <499200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-576000000 {
+			  opp-hz = /bits/ 64 <576000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-652800000 {
+			  opp-hz = /bits/ 64 <652800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-729600000 {
+			  opp-hz = /bits/ 64 <729600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-806400000 {
+			  opp-hz = /bits/ 64 <806400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-883200000 {
+			  opp-hz = /bits/ 64 <883200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-960000000 {
+			  opp-hz = /bits/ 64 <960000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1036800000 {
+			  opp-hz = /bits/ 64 <1036800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1113600000 {
+			  opp-hz = /bits/ 64 <1113600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1190400000 {
+			  opp-hz = /bits/ 64 <1190400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1267200000 {
+			  opp-hz = /bits/ 64 <1267200000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1344000000 {
+			  opp-hz = /bits/ 64 <1344000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1420800000 {
+			  opp-hz = /bits/ 64 <1420800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1497600000 {
+			  opp-hz = /bits/ 64 <1497600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1574400000 {
+			  opp-hz = /bits/ 64 <1574400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1651200000 {
+			  opp-hz = /bits/ 64 <1651200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1728000000 {
+			  opp-hz = /bits/ 64 <1728000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1804800000 {
+			  opp-hz = /bits/ 64 <1804800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1881600000 {
+			  opp-hz = /bits/ 64 <1881600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1907200000 {
+			  opp-hz = /bits/ 64 <1907200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1958400000 {
+			  opp-hz = /bits/ 64 <1958400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2035200000 {
+			  opp-hz = /bits/ 64 <2035200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2112000000 {
+			  opp-hz = /bits/ 64 <2112000000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2188800000 {
+			  opp-hz = /bits/ 64 <2188800000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2265600000 {
+			  opp-hz = /bits/ 64 <2265600000>;
+			  opp-peak-kBps = <4266000>;
+		};
+	};
+
+	cl3_opp_tbl: opp-table-cluster3 {
+		compatible = "operating-points-v2";
+		opp-shared;
+
+		opp-115200000 {
+			  opp-hz = /bits/ 64 <115200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-192000000 {
+			  opp-hz = /bits/ 64 <192000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-268800000 {
+			  opp-hz = /bits/ 64 <268800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-345600000 {
+			  opp-hz = /bits/ 64 <345600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-422400000 {
+			  opp-hz = /bits/ 64 <422400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-499200000 {
+			  opp-hz = /bits/ 64 <499200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-576000000 {
+			  opp-hz = /bits/ 64 <576000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-652800000 {
+			  opp-hz = /bits/ 64 <652800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-729600000 {
+			  opp-hz = /bits/ 64 <729600000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-806400000 {
+			  opp-hz = /bits/ 64 <806400000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-883200000 {
+			  opp-hz = /bits/ 64 <883200000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-960000000 {
+			  opp-hz = /bits/ 64 <960000000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1036800000 {
+			  opp-hz = /bits/ 64 <1036800000>;
+			  opp-peak-kBps = <816000>;
+		};
+
+		opp-1113600000 {
+			  opp-hz = /bits/ 64 <1113600000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1190400000 {
+			  opp-hz = /bits/ 64 <1190400000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1267200000 {
+			  opp-hz = /bits/ 64 <1267200000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1344000000 {
+			  opp-hz = /bits/ 64 <1344000000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1420800000 {
+			  opp-hz = /bits/ 64 <1420800000>;
+			  opp-peak-kBps = <1600000>;
+		};
+
+		opp-1497600000 {
+			  opp-hz = /bits/ 64 <1497600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1574400000 {
+			  opp-hz = /bits/ 64 <1574400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1651200000 {
+			  opp-hz = /bits/ 64 <1651200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1728000000 {
+			  opp-hz = /bits/ 64 <1728000000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1804800000 {
+			  opp-hz = /bits/ 64 <1804800000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1881600000 {
+			  opp-hz = /bits/ 64 <1881600000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1907200000 {
+			  opp-hz = /bits/ 64 <1907200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-1958400000 {
+			  opp-hz = /bits/ 64 <1958400000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2035200000 {
+			  opp-hz = /bits/ 64 <2035200000>;
+			  opp-peak-kBps = <3200000>;
+		};
+
+		opp-2112000000 {
+			  opp-hz = /bits/ 64 <2112000000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2188800000 {
+			  opp-hz = /bits/ 64 <2188800000>;
+			  opp-peak-kBps = <4266000>;
+		};
+
+		opp-2265600000 {
+			  opp-hz = /bits/ 64 <2265600000>;
+			  opp-peak-kBps = <4266000>;
+		};
+	};
 };

-- 
2.50.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth
  2025-09-09  6:21 ` [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth Aaron Kling via B4 Relay
@ 2025-09-30 10:30   ` Viresh Kumar
  2025-10-13  2:32     ` Aaron Kling
  0 siblings, 1 reply; 19+ messages in thread
From: Viresh Kumar @ 2025-09-30 10:30 UTC (permalink / raw)
  To: webgeek1234
  Cc: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On 09-09-25, 01:21, Aaron Kling via B4 Relay wrote:
> +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
> +{
> +	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
> +	struct dev_pm_opp *opp __free(put_opp);

The usage here looks incorrect..

> +	struct device *dev;
> +	int ret;
> +
> +	dev = get_cpu_device(policy->cpu);
> +	if (!dev)
> +		return -ENODEV;

On failure, we would return from here with a garbage `opp` pointer, which the
OPP core may then try to free?

Moving the variable definition here would fix that.

> +
> +	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * HZ_PER_KHZ, true);
> +	if (IS_ERR(opp))
> +		return PTR_ERR(opp);
> +
> +	ret = dev_pm_opp_set_opp(dev, opp);
> +	if (ret)
> +		data->icc_dram_bw_scaling = false;
> +
> +	return ret;
> +}
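
Something like the below (completely untested, only to illustrate
moving the declaration after the early return, as suggested above) is
what I have in mind:

static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
{
	struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
	struct device *dev;
	int ret;

	dev = get_cpu_device(policy->cpu);
	if (!dev)
		return -ENODEV;

	/* Declared and initialized only after the early return above. */
	struct dev_pm_opp *opp __free(put_opp) =
		dev_pm_opp_find_freq_exact(dev, freq_khz * HZ_PER_KHZ, true);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	ret = dev_pm_opp_set_opp(dev, opp);
	if (ret)
		data->icc_dram_bw_scaling = false;

	return ret;
}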

-- 
viresh

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194
  2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
                   ` (7 preceding siblings ...)
  2025-09-09  6:21 ` [PATCH v2 8/8] arm64: tegra: Add CPU OPP tables for Tegra194 Aaron Kling via B4 Relay
@ 2025-10-09  0:05 ` Krzysztof Kozlowski
  2025-10-13  2:18   ` Aaron Kling
  8 siblings, 1 reply; 19+ messages in thread
From: Krzysztof Kozlowski @ 2025-10-09  0:05 UTC (permalink / raw)
  To: webgeek1234, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Viresh Kumar,
	Krzysztof Kozlowski
  Cc: linux-kernel, devicetree, linux-tegra, linux-pm

On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
> This series borrows the concept used on Tegra234 to scale EMC based on
> CPU frequency and applies it to Tegra186 and Tegra194. Except that the
> bpmp on those archs does not support bandwidth manager, so the scaling
> iteself is handled similar to how Tegra124 currently works.
> 

Nothing improved:
https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/

Best regards,
Krzysztof

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194
  2025-10-09  0:05 ` [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Krzysztof Kozlowski
@ 2025-10-13  2:18   ` Aaron Kling
  2025-10-13  2:25     ` Krzysztof Kozlowski
  0 siblings, 1 reply; 19+ messages in thread
From: Aaron Kling @ 2025-10-13  2:18 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Rob Herring, Conor Dooley, Thierry Reding, Jonathan Hunter,
	Rafael J. Wysocki, Viresh Kumar, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On Wed, Oct 8, 2025 at 7:05 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
> > This series borrows the concept used on Tegra234 to scale EMC based on
> > CPU frequency and applies it to Tegra186 and Tegra194. Except that the
> > bpmp on those archs does not support bandwidth manager, so the scaling
> > iteself is handled similar to how Tegra124 currently works.
> >
>
> Nothing improved:
> https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/

The dt changes should go last. The cpufreq and memory pieces can go in
either order because the new code won't be used unless the dt pieces
activate them.

Aaron

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194
  2025-10-13  2:18   ` Aaron Kling
@ 2025-10-13  2:25     ` Krzysztof Kozlowski
  2025-10-13  2:31       ` Aaron Kling
  0 siblings, 1 reply; 19+ messages in thread
From: Krzysztof Kozlowski @ 2025-10-13  2:25 UTC (permalink / raw)
  To: Aaron Kling
  Cc: Rob Herring, Conor Dooley, Thierry Reding, Jonathan Hunter,
	Rafael J. Wysocki, Viresh Kumar, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On 13/10/2025 04:18, Aaron Kling wrote:
> On Wed, Oct 8, 2025 at 7:05 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>>
>> On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
>>> This series borrows the concept used on Tegra234 to scale EMC based on
>>> CPU frequency and applies it to Tegra186 and Tegra194. Except that the
>>> bpmp on those archs does not support bandwidth manager, so the scaling
>>> iteself is handled similar to how Tegra124 currently works.
>>>
>>
>> Nothing improved:
>> https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/
> 
> The dt changes should go last. The cpufreq and memory pieces can go in
> either order because the new code won't be used unless the dt pieces
> activate them.


Then cpufreq and memory should never have been part of the same patchset.
Instead of a simple command to apply it, maintainers need multiple steps.
Really, when you send patches, think about how this should be handled and
how much effort it takes on the maintainer's side.

Best regards,
Krzysztof

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194
  2025-10-13  2:25     ` Krzysztof Kozlowski
@ 2025-10-13  2:31       ` Aaron Kling
  2025-10-20 20:14         ` Aaron Kling
  0 siblings, 1 reply; 19+ messages in thread
From: Aaron Kling @ 2025-10-13  2:31 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Rob Herring, Conor Dooley, Thierry Reding, Jonathan Hunter,
	Rafael J. Wysocki, Viresh Kumar, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On Sun, Oct 12, 2025 at 9:25 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>
> On 13/10/2025 04:18, Aaron Kling wrote:
> > On Wed, Oct 8, 2025 at 7:05 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
> >>
> >> On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
> >>> This series borrows the concept used on Tegra234 to scale EMC based on
> >>> CPU frequency and applies it to Tegra186 and Tegra194. Except that the
> >>> bpmp on those archs does not support bandwidth manager, so the scaling
> >>> iteself is handled similar to how Tegra124 currently works.
> >>>
> >>
> >> Nothing improved:
> >> https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/
> >
> > The dt changes should go last. The cpufreq and memory pieces can go in
> > either order because the new code won't be used unless the dt pieces
> > activate them.
>
>
> Then cpufreq and memory should never have been part of the same patchset.
> Instead of a simple command to apply it, maintainers need multiple steps.
> Really, when you send patches, think about how this should be handled and
> how much effort it takes on the maintainer's side.

To be honest, I was expecting all of these to go through the tegra
tree, since all the drivers I touch are owned by the tegra
maintainers. But getting stuff moved through that tree has been like
pulling teeth recently. So Krzysztof, what's the alternative you're
suggesting here?

Aaron

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth
  2025-09-30 10:30   ` Viresh Kumar
@ 2025-10-13  2:32     ` Aaron Kling
  2025-10-13  5:08       ` Viresh Kumar
  0 siblings, 1 reply; 19+ messages in thread
From: Aaron Kling @ 2025-10-13  2:32 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On Tue, Sep 30, 2025 at 5:30 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:
>
> On 09-09-25, 01:21, Aaron Kling via B4 Relay wrote:
> > +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
> > +{
> > +     struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
> > +     struct dev_pm_opp *opp __free(put_opp);
>
> The usage here looks incorrect..
>
> > +     struct device *dev;
> > +     int ret;
> > +
> > +     dev = get_cpu_device(policy->cpu);
> > +     if (!dev)
> > +             return -ENODEV;
>
> On failure, we would return from here with a garbage `opp` pointer, which the
> OPP core may try to free ?
>
> Moving the variable definition here would fix that.

If the var was NULL initialized, would the free handle that correctly?
Keeping the declarations at the start of the function reads better
imo.

Aaron

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth
  2025-10-13  2:32     ` Aaron Kling
@ 2025-10-13  5:08       ` Viresh Kumar
  2025-10-21 17:58         ` Aaron Kling
  0 siblings, 1 reply; 19+ messages in thread
From: Viresh Kumar @ 2025-10-13  5:08 UTC (permalink / raw)
  To: Aaron Kling
  Cc: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On 12-10-25, 21:32, Aaron Kling wrote:
> On Tue, Sep 30, 2025 at 5:30 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:
> >
> > On 09-09-25, 01:21, Aaron Kling via B4 Relay wrote:
> > > +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
> > > +{
> > > +     struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
> > > +     struct dev_pm_opp *opp __free(put_opp);
> >
> > The usage here looks incorrect..
> >
> > > +     struct device *dev;
> > > +     int ret;
> > > +
> > > +     dev = get_cpu_device(policy->cpu);
> > > +     if (!dev)
> > > +             return -ENODEV;
> >
> > On failure, we would return from here with a garbage `opp` pointer, which the
> > OPP core may try to free ?
> >
> > Moving the variable definition here would fix that.
> 
> If the var was NULL initialized, would the free handle that correctly?
> Keeping the declarations at the start of the function reads better
> imo.

include/linux/cleanup.h has some recommendations around that.
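
(For readers following the thread: the style those comments appear to
recommend is tying the __free() declaration to its initializer instead of
NULL-initializing it at the top of the function — roughly, with the names
from the quoted hunk:)

	/* Pattern cleanup.h seems to favour: declaration and initializer
	 * together, after the early-return checks.
	 */
	struct dev_pm_opp *opp __free(put_opp) =
		dev_pm_opp_find_freq_exact(dev, freq_khz * HZ_PER_KHZ, true);

	/* "struct dev_pm_opp *opp __free(put_opp) = NULL;" at the top would
	 * also avoid the garbage pointer, but only because the put_opp cleanup
	 * appears to tolerate NULL/ERR values; it is not the form the header's
	 * comments recommend.
	 */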

-- 
viresh

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194
  2025-10-13  2:31       ` Aaron Kling
@ 2025-10-20 20:14         ` Aaron Kling
  2025-10-20 20:37           ` Krzysztof Kozlowski
  0 siblings, 1 reply; 19+ messages in thread
From: Aaron Kling @ 2025-10-20 20:14 UTC (permalink / raw)
  To: Krzysztof Kozlowski
  Cc: Rob Herring, Conor Dooley, Thierry Reding, Jonathan Hunter,
	Rafael J. Wysocki, Viresh Kumar, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On Sun, Oct 12, 2025 at 9:31 PM Aaron Kling <webgeek1234@gmail.com> wrote:
>
> On Sun, Oct 12, 2025 at 9:25 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
> >
> > On 13/10/2025 04:18, Aaron Kling wrote:
> > > On Wed, Oct 8, 2025 at 7:05 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
> > >>
> > >> On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
> > >>> This series borrows the concept used on Tegra234 to scale EMC based on
> > >>> CPU frequency and applies it to Tegra186 and Tegra194. Except that the
> > >>> bpmp on those archs does not support bandwidth manager, so the scaling
> > >>> iteself is handled similar to how Tegra124 currently works.
> > >>>
> > >>
> > >> Nothing improved:
> > >> https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/
> > >
> > > The dt changes should go last. The cpufreq and memory pieces can go in
> > > either order because the new code won't be used unless the dt pieces
> > > activate them.
> >
> >
> > Then cpufreq and memory should never have been part of the same patchset.
> > Instead of a simple command to apply it, maintainers need multiple steps.
> > Really, when you send patches, think about how this should be handled and
> > how much effort it takes on the maintainer's side.
>
> To be honest, I was expecting all of these to go through the tegra
> tree, since all the drivers I touch are owned by the tegra
> maintainers. But getting stuff moved through that tree has been like
> pulling teeth recently. So Krzysztof, what's the alternative you're
> suggesting here?

What is the expectation for the series here, and related, the tegra210
actmon series? Everything put together here accomplishes the single
logical task of enabling dynamic frequency scaling for emc on tegra186
and tegra194. The driver subsystems do not have hard dependencies in
that the new driver code has fallbacks to not fail to probe if the
complementary driver changes are missing. But if I was to split them
up, how would it work? I send the cpufreq patch by itself, the memory
changes in a group, then the dt changes in a group with b4 deps lines
for the two driver sets? That seems crazy complicated for something
that's a single logical concept. Especially when, as far as I know, this
can all go together through the tegra tree.
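
(For concreteness, b4 dependency lines in a cover letter are roughly of this
shape; the identifiers below are placeholders, not references to real series:)

	---
	prerequisite-change-id: <change-id-of-the-memory-series>:v1
	prerequisite-patch-id: <patch-id-of-the-cpufreq-patch>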

Aaron

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194
  2025-10-20 20:14         ` Aaron Kling
@ 2025-10-20 20:37           ` Krzysztof Kozlowski
  0 siblings, 0 replies; 19+ messages in thread
From: Krzysztof Kozlowski @ 2025-10-20 20:37 UTC (permalink / raw)
  To: Aaron Kling
  Cc: Rob Herring, Conor Dooley, Thierry Reding, Jonathan Hunter,
	Rafael J. Wysocki, Viresh Kumar, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On 20/10/2025 22:14, Aaron Kling wrote:
> On Sun, Oct 12, 2025 at 9:31 PM Aaron Kling <webgeek1234@gmail.com> wrote:
>>
>> On Sun, Oct 12, 2025 at 9:25 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>>>
>>> On 13/10/2025 04:18, Aaron Kling wrote:
>>>> On Wed, Oct 8, 2025 at 7:05 PM Krzysztof Kozlowski <krzk@kernel.org> wrote:
>>>>>
>>>>> On 09/09/2025 15:21, Aaron Kling via B4 Relay wrote:
>>>>>> This series borrows the concept used on Tegra234 to scale EMC based on
>>>>>> CPU frequency and applies it to Tegra186 and Tegra194. Except that the
>>>>>> bpmp on those archs does not support bandwidth manager, so the scaling
>>>>>> iteself is handled similar to how Tegra124 currently works.
>>>>>>
>>>>>
>>>>> Nothing improved:
>>>>> https://lore.kernel.org/all/20250902-glittering-toucan-of-feminism-95fd9f@kuoka/
>>>>
>>>> The dt changes should go last. The cpufreq and memory pieces can go in
>>>> either order because the new code won't be used unless the dt pieces
>>>> activate them.
>>>
>>>
>>> Then cpufreq and memory should never have been part of same patchset.
>>> Instead of simple command to apply it, maintainers need multiple steps.
>>> Really, when you send patches, think how this should be handled and how
>>> much effort this needs on maintainer side.
>>
>> To be honest, I was expecting all of these to go through the tegra
>> tree, since all the drivers I touch are owned by the tegra
>> maintainers. But getting stuff moved through that tree has been like
>> pulling teeth recently. So Krzysztof, what's the alternative you're
>> suggesting here?
> 
> What is the expectation for the series here, and related, the tegra210
> actmon series? Everything put together here accomplishes the single
> logical task of enabling dynamic frequency scaling for emc on tegra186
> and tegra194. The driver subsystems do not have hard dependencies in

There are comments from Viresh, so I dropped the patchset from my queue.


> that the new driver code has fallbacks to not fail to probe if the
> complementary driver changes are missing. But if I was to split them
> up, how would it work? I send the cpufreq patch by itself, the memory

Please open the MAINTAINERS file or read the output of get_maintainers.pl.
It will tell you which subsystems are involved here. Currently you have
mixed three subsystems, which has only drawbacks. There is no benefit to
that approach unless you have dependencies (REAL dependencies), but you
said you don't have any. If you do have dependencies, that must be the
FIRST, most important thing you mention in the cover letter. Many
maintainers appreciate it being mentioned in the patch changelogs as well,
because they (me included) do not read cover letters.

So if you open the MAINTAINERS file you will find these subsystems:
cpufreq, Tegra SoC, and memory controllers (where the DT bindings belong).

You split your patchset per subsystem, with the difference (explained in
the DT submitting-patches documentation) that DT bindings for drivers
belong to the driver subsystem.

The DTS patches using newly introduced bindings should carry lore links
to the patchsets with the bindings, so the SoC maintainer can apply them
once the bindings hit linux-next.
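
As an illustration only (the URL is a placeholder, not the real binding
series), such a DTS patch changelog could carry something like:

	Depends on the dt-bindings series adding the dummy client IDs:
	Link: https://lore.kernel.org/r/<binding-series-message-id>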

I also described the entire process before:
https://lore.kernel.org/linux-samsung-soc/CADrjBPq_0nUYRABKpskRF_dhHu+4K=duPVZX==0pr+cjSL_caQ@mail.gmail.com/T/#m2d9130a1342ab201ab49670fa6c858ee3724c83c

so now I have repeated it a second time. This is the last time I will
repeat the basics of organizing patchsets.

> changes in a group, then the dt changes in a group with b4 deps lines
> for the two driver sets? That seems crazy complicated for something

That's pretty standard, nothing complicated. You should have seen a
complicated posting here:
https://lore.kernel.org/all/20231121-topic-sm8650-upstream-dt-v3-0-db9d0507ffd3@linaro.org/

We all send multiple patchsets, with or without dependencies.

Best regards,
Krzysztof

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth
  2025-10-13  5:08       ` Viresh Kumar
@ 2025-10-21 17:58         ` Aaron Kling
  0 siblings, 0 replies; 19+ messages in thread
From: Aaron Kling @ 2025-10-21 17:58 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: Krzysztof Kozlowski, Rob Herring, Conor Dooley, Thierry Reding,
	Jonathan Hunter, Rafael J. Wysocki, Krzysztof Kozlowski,
	linux-kernel, devicetree, linux-tegra, linux-pm

On Mon, Oct 13, 2025 at 12:08 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:
>
> On 12-10-25, 21:32, Aaron Kling wrote:
> > On Tue, Sep 30, 2025 at 5:30 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:
> > >
> > > On 09-09-25, 01:21, Aaron Kling via B4 Relay wrote:
> > > > +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
> > > > +{
> > > > +     struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
> > > > +     struct dev_pm_opp *opp __free(put_opp);
> > >
> > > The usage here looks incorrect..
> > >
> > > > +     struct device *dev;
> > > > +     int ret;
> > > > +
> > > > +     dev = get_cpu_device(policy->cpu);
> > > > +     if (!dev)
> > > > +             return -ENODEV;
> > >
> > > On failure, we would return from here with a garbage `opp` pointer, which the
> > > OPP core may try to free ?
> > >
> > > Moving the variable definition here would fix that.
> >
> > If the var was NULL initialized, would the free handle that correctly?
> > Keeping the declarations at the start of the function reads better
> > imo.
>
> include/linux/cleanup.h has some recommendations around that.

There was a request to split this series into separate series per
subsystem, so I will fix this in a new patch, but it won't be tracked as
a new revision of this one.

Aaron

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2025-10-21 17:58 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-09  6:21 [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Aaron Kling via B4 Relay
2025-09-09  6:21 ` [PATCH v2 1/8] cpufreq: tegra186: add OPP support and set bandwidth Aaron Kling via B4 Relay
2025-09-30 10:30   ` Viresh Kumar
2025-10-13  2:32     ` Aaron Kling
2025-10-13  5:08       ` Viresh Kumar
2025-10-21 17:58         ` Aaron Kling
2025-09-09  6:21 ` [PATCH v2 2/8] dt-bindings: memory: tegra186-mc: Add dummy client IDs for Tegra186 Aaron Kling via B4 Relay
2025-09-09  6:21 ` [PATCH v2 3/8] dt-bindings: memory: tegra194-mc: Add dummy client IDs for Tegra194 Aaron Kling via B4 Relay
2025-09-09  6:21 ` [PATCH v2 4/8] memory: tegra186-emc: Support non-bpmp icc scaling Aaron Kling via B4 Relay
2025-09-09  6:21 ` [PATCH v2 5/8] memory: tegra186: Support " Aaron Kling via B4 Relay
2025-09-09  6:21 ` [PATCH v2 6/8] memory: tegra194: " Aaron Kling via B4 Relay
2025-09-09  6:21 ` [PATCH v2 7/8] arm64: tegra: Add CPU OPP tables for Tegra186 Aaron Kling via B4 Relay
2025-09-09  6:21 ` [PATCH v2 8/8] arm64: tegra: Add CPU OPP tables for Tegra194 Aaron Kling via B4 Relay
2025-10-09  0:05 ` [PATCH v2 0/8] Support dynamic EMC frequency scaling on Tegra186/Tegra194 Krzysztof Kozlowski
2025-10-13  2:18   ` Aaron Kling
2025-10-13  2:25     ` Krzysztof Kozlowski
2025-10-13  2:31       ` Aaron Kling
2025-10-20 20:14         ` Aaron Kling
2025-10-20 20:37           ` Krzysztof Kozlowski

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).