public inbox for linux-kernel@vger.kernel.org
From: "Rafael J. Wysocki" <rjw@rjwysocki.net>
To: Linux PM <linux-pm@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Lukasz Luba <lukasz.luba@arm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Ricardo Neri <ricardo.neri-calderon@linux.intel.com>,
	Pierre Gondois <pierre.gondois@arm.com>
Subject: [RFC][PATCH v021 8/9] cpufreq: intel_pstate: Introduce hybrid domains
Date: Fri, 29 Nov 2024 17:21:40 +0100	[thread overview]
Message-ID: <2030654.usQuhbGJ8B@rjwysocki.net> (raw)
In-Reply-To: <5861970.DvuYhMxLoT@rjwysocki.net>

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Hybrid platforms contain different types of CPUs.  They may differ
by micro-architecture, by cache topology, by manufacturing process,
by interconnect access design, etc.  Of course, this means that power-
performance curves for CPUs of different types are generally different.

Because of these differences, CPUs of different types need to be handled
differently in certain situations, so it is convenient to operate on
groups of CPUs that each contain CPUs of the same type.  In intel_pstate,
each such group will be represented by a struct hybrid_domain object and
referred to as a hybrid domain.

A key problem is how to identify the type of a CPU so as to know which
hybrid domain it belongs to.  In principle, there are a few ways to do
it, but none of them is perfectly reliable.

From the computational perspective, an important property of a CPU is
how many instructions (on average) it can execute per cycle when running
at a given frequency, often expressed as the IPC (instructions per
cycle) ratio of that CPU relative to the least-capable CPU in the
system.  In intel_pstate, this ratio is reflected by the
performance-to-frequency scaling factor that is used to obtain a
frequency in kHz from a given HWP performance level of the given CPU.
Since HWP performance levels are in the same units for all CPUs in a
hybrid system, the smaller the scaling factor, the larger the IPC ratio
of the given CPU.

Of course, the performance-to-frequency scaling factor must be the
same for all CPUs of the same type.  While it may be the same for CPUs
of different types, there is only one case in which that actually
happens (Meteor Lake platforms with two types of E-cores) and it is not
expected to happen again in the future.  Moreover, when it happens,
there is no straightforward way to distinguish CPUs of different types
with the same scaling factor in general.

For this reason, the scaling factor is as good as it gets for CPU
type identification and so it is used for building hybrid domains in
intel_pstate.

On hybrid systems, every CPU is added to a hybrid domain at
initialization time.  If a hybrid domain with a matching scaling
factor is already present at that point, the CPU will be added to it.
Otherwise, a new hybrid domain will be created, the CPU will be put
into it, and the domain's scaling factor will be set to the one of
the new CPU.

So far, the new code doesn't do much beyond printing debug messages,
but subsequently the EAS support for intel_pstate will be based on it.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/intel_pstate.c |   57 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -943,6 +943,62 @@ static struct cpudata *hybrid_max_perf_c
  */
 static DEFINE_MUTEX(hybrid_capacity_lock);
 
+#ifdef CONFIG_ENERGY_MODEL
+/*
+ * A hybrid domain is a collection of CPUs with the same perf-to-frequency
+ * scaling factor.
+ */
+struct hybrid_domain {
+	struct hybrid_domain *next;
+	cpumask_t cpumask;
+	int scaling;
+};
+
+static struct hybrid_domain *hybrid_domains;
+
+static void hybrid_add_to_domain(struct cpudata *cpudata)
+{
+	int scaling = cpudata->pstate.scaling;
+	int cpu = cpudata->cpu;
+	struct hybrid_domain *hd;
+
+	/* Do this only on hybrid platforms. */
+	if (!cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))
+		return;
+
+	guard(mutex)(&hybrid_capacity_lock);
+
+	/* Look for an existing hybrid domain matching this CPU. */
+	for (hd = hybrid_domains; hd; hd = hd->next) {
+		if (hd->scaling == scaling) {
+			if (cpumask_test_cpu(cpu, &hd->cpumask))
+				return;
+
+			cpumask_set_cpu(cpu, &hd->cpumask);
+
+			pr_debug("CPU %d added to hybrid domain %*pbl\n", cpu,
+				 cpumask_pr_args(&hd->cpumask));
+			return;
+		}
+	}
+
+	/* No match.  Add a new one. */
+	hd = kzalloc(sizeof(*hd), GFP_KERNEL);
+	if (!hd)
+		return;
+
+	cpumask_set_cpu(cpu, &hd->cpumask);
+	hd->scaling = scaling;
+	hd->next = hybrid_domains;
+	hybrid_domains = hd;
+
+	pr_debug("New hybrid domain %*pbl: scaling = %d\n",
+		 cpumask_pr_args(&hd->cpumask), hd->scaling);
+}
+#else /* CONFIG_ENERGY_MODEL */
+static inline void hybrid_add_to_domain(struct cpudata *cpudata) {}
+#endif /* !CONFIG_ENERGY_MODEL */
+
 static void hybrid_set_cpu_capacity(struct cpudata *cpu)
 {
 	arch_set_cpu_capacity(cpu->cpu, cpu->capacity_perf,
@@ -2273,6 +2329,7 @@ static void intel_pstate_get_cpu_pstates
 				intel_pstate_hybrid_hwp_adjust(cpu);
 				hwp_is_hybrid = true;
 			}
+			hybrid_add_to_domain(cpu);
 		} else {
 			cpu->pstate.scaling = perf_ctl_scaling;
 		}




