linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs
@ 2025-10-03 15:02 Ulf Hansson
  2025-10-03 15:02 ` [PATCH 1/3] smp: Introduce a weak helper function to check for pending IPIs Ulf Hansson
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-03 15:02 UTC (permalink / raw)
  To: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner
  Cc: Maulik Shah, Sudeep Holla, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel, Ulf Hansson

Platforms using the genpd governor for CPUs rely on it to find the optimal
idlestate for a group of CPUs. However, observations tell us that significant
improvements can be made in this area.

These improvements are based upon allowing us to take pending IPIs into account
for the group of CPUs that the genpd governor controls. If there is a pending
IPI for any of these CPUs, we should not request an idlestate that affects the
group, but rather pick a shallower state that affects only the CPU.

More details are available in the commit messages for each patch.

Kind regards
Ulf Hansson


Ulf Hansson (3):
  smp: Introduce a weak helper function to check for pending IPIs
  arm64: smp: Implement cpus_has_pending_ipi()
  pmdomain: Extend the genpd governor for CPUs to account for IPIs

 arch/arm64/kernel/smp.c     | 20 ++++++++++++++++++++
 drivers/pmdomain/governor.c | 20 +++++++++++++-------
 include/linux/smp.h         |  5 +++++
 kernel/smp.c                | 18 ++++++++++++++++++
 4 files changed, 56 insertions(+), 7 deletions(-)

-- 
2.43.0



^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/3] smp: Introduce a weak helper function to check for pending IPIs
  2025-10-03 15:02 [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Ulf Hansson
@ 2025-10-03 15:02 ` Ulf Hansson
  2025-10-03 15:02 ` [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi() Ulf Hansson
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-03 15:02 UTC (permalink / raw)
  To: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner
  Cc: Maulik Shah, Sudeep Holla, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel, Ulf Hansson

When the governors used during cpuidle try to find the optimal idlestate
for a CPU or a group of CPUs, they are known to fail quite often. One
reason for this is that we do not take into account whether an IPI has
been scheduled for any of the CPUs that are affected by the selected
idlestate.

To enable pending IPIs to be taken into account for cpuidle decisions,
let's introduce a new helper function, cpus_has_pending_ipi(). Moreover,
let's use the __weak attribute for the default implementation, to allow
it to be overridden on a per-architecture basis.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 include/linux/smp.h |  5 +++++
 kernel/smp.c        | 18 ++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index 18e9c918325e..476186e5e69c 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -168,6 +168,7 @@ int smp_call_function_any(const struct cpumask *mask,
 
 void kick_all_cpus_sync(void);
 void wake_up_all_idle_cpus(void);
+bool cpus_has_pending_ipi(const struct cpumask *mask);
 
 /*
  * Generic and arch helpers
@@ -216,6 +217,10 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
 
 static inline void kick_all_cpus_sync(void) {  }
 static inline void wake_up_all_idle_cpus(void) {  }
+static inline bool cpus_has_pending_ipi(const struct cpumask *mask)
+{
+	return false;
+}
 
 #define setup_max_cpus 0
 
diff --git a/kernel/smp.c b/kernel/smp.c
index 56f83aa58ec8..ec524db501b5 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -1088,6 +1088,24 @@ void wake_up_all_idle_cpus(void)
 }
 EXPORT_SYMBOL_GPL(wake_up_all_idle_cpus);
 
+/**
+ * cpus_has_pending_ipi - Check for pending IPIs for CPUs
+ * @mask: The CPU mask for the CPUs to check.
+ *
+ * This function may be overridden by an arch specific implementation, which
+ * should walk through the CPU-mask and check if there are any pending IPIs
+ * being scheduled for any of the CPUs in the CPU-mask.
+ *
+ * Note, the default implementation below doesn't have the capability to check
+ * for IPIs, hence it must return false.
+ *
+ * Returns true if there is a pending IPI scheduled.
+ */
+bool __weak cpus_has_pending_ipi(const struct cpumask *mask)
+{
+	return false;
+}
+
 /**
  * struct smp_call_on_cpu_struct - Call a function on a specific CPU
  * @work: &work_struct
-- 
2.43.0




* [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-03 15:02 [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Ulf Hansson
  2025-10-03 15:02 ` [PATCH 1/3] smp: Introduce a weak helper function to check for pending IPIs Ulf Hansson
@ 2025-10-03 15:02 ` Ulf Hansson
  2025-10-06 10:54   ` Sudeep Holla
                     ` (2 more replies)
  2025-10-03 15:02 ` [PATCH 3/3] pmdomain: Extend the genpd governor for CPUs to account for IPIs Ulf Hansson
  2025-10-06 15:36 ` [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Sudeep Holla
  3 siblings, 3 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-03 15:02 UTC (permalink / raw)
  To: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner
  Cc: Maulik Shah, Sudeep Holla, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel, Ulf Hansson

To add support for keeping track of whether there may be a pending IPI
scheduled for a CPU or a group of CPUs, let's implement
cpus_has_pending_ipi() for arm64.

Note, the implementation is intentionally lightweight and doesn't use any
additional lock. This is good enough for cpuidle based decisions.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 arch/arm64/kernel/smp.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 68cea3a4a35c..dd1acfa91d44 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -55,6 +55,8 @@
 
 #include <trace/events/ipi.h>
 
+static DEFINE_PER_CPU(bool, pending_ipi);
+
 /*
  * as from 2.5, kernels no longer have an init_tasks structure
  * so we need some other way of telling a new secondary core
@@ -1012,6 +1014,8 @@ static void do_handle_IPI(int ipinr)
 
 	if ((unsigned)ipinr < NR_IPI)
 		trace_ipi_exit(ipi_types[ipinr]);
+
+	per_cpu(pending_ipi, cpu) = false;
 }
 
 static irqreturn_t ipi_handler(int irq, void *data)
@@ -1024,10 +1028,26 @@ static irqreturn_t ipi_handler(int irq, void *data)
 
 static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
 {
+	unsigned int cpu;
+
+	for_each_cpu(cpu, target)
+		per_cpu(pending_ipi, cpu) = true;
+
 	trace_ipi_raise(target, ipi_types[ipinr]);
 	arm64_send_ipi(target, ipinr);
 }
 
+bool cpus_has_pending_ipi(const struct cpumask *mask)
+{
+	unsigned int cpu;
+
+	for_each_cpu(cpu, mask) {
+		if (per_cpu(pending_ipi, cpu))
+			return true;
+	}
+	return false;
+}
+
 static bool ipi_should_be_nmi(enum ipi_msg_type ipi)
 {
 	if (!system_uses_irq_prio_masking())
-- 
2.43.0




* [PATCH 3/3] pmdomain: Extend the genpd governor for CPUs to account for IPIs
  2025-10-03 15:02 [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Ulf Hansson
  2025-10-03 15:02 ` [PATCH 1/3] smp: Introduce a weak helper function to check for pending IPIs Ulf Hansson
  2025-10-03 15:02 ` [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi() Ulf Hansson
@ 2025-10-03 15:02 ` Ulf Hansson
  2025-10-06 15:36 ` [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Sudeep Holla
  3 siblings, 0 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-03 15:02 UTC (permalink / raw)
  To: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner
  Cc: Maulik Shah, Sudeep Holla, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel, Ulf Hansson

When the genpd governor for CPUs tries to select the optimal idlestate
for a group of CPUs managed in a PM domain, it fails far too often.

On a Dragonboard 410c, an arm64 based platform with 4 CPUs in one cluster
that is using PSCI OS-initiated mode, we can observe that we often fail
when trying to enter the selected idlestate. This is certainly suboptimal
behaviour that leads to many unnecessary requests being sent to the PSCI
FW.

A simple dd operation that reads from the eMMC, generating some IRQs and
I/O handling, helps us to understand the problem, while we also monitor
the rejected counters in debugfs for the corresponding idlestates of the
genpd in question.

 Menu governor:
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             1451           437        91         149        0
S1             65194          558        149        172        0
dd if=/dev/mmcblk0 of=/dev/null bs=1M count=500
524288000 bytes (500.0MB) copied, 3.562698 seconds, 140.3MB/s
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             2694           1073       265        892        1
S1             74567          829        561        790        0

 The dd completed in ~3.6 seconds and the rejects increased by 586.

 Teo governor:
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             4976           2096       392        1721       2
S1             160661         1893       1309       1904       0
dd if=/dev/mmcblk0 of=/dev/null bs=1M count=500
524288000 bytes (500.0MB) copied, 3.543225 seconds, 141.1MB/s
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             5192           2194       433        1830       2
S1             167677         2891       3184       4729       0

 The dd completed in ~3.6 seconds and the rejects increased by 1916.

The main reason for the above problem is a pending IPI for one of the CPUs
that is affected by the idlestate that the genpd governor selected, which
leads to the PSCI FW refusing to enter it. To improve the behaviour, let's
start taking pending IPIs into account in the genpd governor, falling back
to the shallower per-CPU idlestate when one is pending.

 Re-testing with this change shows significantly improved behaviour.

 - Menu governor:
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             1994           551        10         24         0
S1             115602         801        4          56         0
dd if=/dev/mmcblk0 of=/dev/null bs=1M count=500
524288000 bytes (500.0MB) copied, 3.622631 seconds, 138.0MB/s
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             2462           766        14         202        0
S1             119559         1031       9          253        0

 The dd completed in ~3.6 seconds and the rejects increased by 9.

 - Teo governor:
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             3212           990        16         245        0
S1             202442         2459       13         1184       0
dd if=/dev/mmcblk0 of=/dev/null bs=1M count=500
524288000 bytes (500.0MB) copied, 3.284563 seconds, 152.2MB/s
cat /sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
State          Time Spent(ms) Usage      Rejected   Above      Below
S0             3387           1046       16         265        0
S1             206074         2826       19         1524       0

 The dd completed in ~3.3 seconds and the rejects increased by 6.

Note that the rejected counters in genpd are also accumulated in the
rejected counters managed by cpuidle, although on a per-CPU idlestate
basis. Comparing these counters before/after this change, through
cpuidle's sysfs interface, shows similar improvements.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/pmdomain/governor.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/pmdomain/governor.c b/drivers/pmdomain/governor.c
index 39359811a930..7e81dc383269 100644
--- a/drivers/pmdomain/governor.c
+++ b/drivers/pmdomain/governor.c
@@ -404,15 +404,21 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd)
 		if ((idle_duration_ns >= (genpd->states[i].residency_ns +
 		    genpd->states[i].power_off_latency_ns)) &&
 		    (global_constraint >= (genpd->states[i].power_on_latency_ns +
-		    genpd->states[i].power_off_latency_ns))) {
-			genpd->state_idx = i;
-			genpd->gd->last_enter = now;
-			genpd->gd->reflect_residency = true;
-			return true;
-		}
+		    genpd->states[i].power_off_latency_ns)))
+			break;
+
 	} while (--i >= 0);
 
-	return false;
+	if (i < 0)
+		return false;
+
+	if (cpus_has_pending_ipi(genpd->cpus))
+		return false;
+
+	genpd->state_idx = i;
+	genpd->gd->last_enter = now;
+	genpd->gd->reflect_residency = true;
+	return true;
 }
 
 struct dev_power_governor pm_domain_cpu_gov = {
-- 
2.43.0




* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-03 15:02 ` [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi() Ulf Hansson
@ 2025-10-06 10:54   ` Sudeep Holla
  2025-10-06 12:22     ` Ulf Hansson
  2025-10-06 15:55   ` Marc Zyngier
  2025-10-17 14:01   ` Thomas Gleixner
  2 siblings, 1 reply; 16+ messages in thread
From: Sudeep Holla @ 2025-10-06 10:54 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Rafael J . Wysocki, Sudeep Holla, Catalin Marinas, Will Deacon,
	Mark Rutland, Thomas Gleixner, Maulik Shah, Daniel Lezcano,
	Vincent Guittot, linux-pm, linux-arm-kernel, linux-kernel

On Fri, Oct 03, 2025 at 05:02:44PM +0200, Ulf Hansson wrote:
> To add support for keeping track of whether there may be a pending IPI
> scheduled for a CPU or a group of CPUs, let's implement
> cpus_has_pending_ipi() for arm64.
> 
> Note, the implementation is intentionally lightweight and doesn't use any
> additional lock. This is good enough for cpuidle based decisions.
> 

I’m not completely against this change, but I’d like to discuss a few points
based on my understanding (which might also be incorrect):

1. For systems that don’t use PM domains for idle, wouldn’t this be
   unnecessary? It might be worth making this conditional if we decide to
   proceed.

2. I understand this is intended for the DragonBoard 410c, where the firmware
   can’t be updated. However, ideally, the PSCI firmware should handle checking
   for pending IPIs if that’s important for the platform. The firmware could
   perform this check at the CPU PPU/HW level and prevent entering the
   state if needed.

3. I’m not an expert, but on systems with a large number of CPUs, tracking
   this for idle (which may or may not be enabled) seems a bit excessive,
   especially under heavy load when the system isn’t really idling.

-- 
Regards,
Sudeep



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-06 10:54   ` Sudeep Holla
@ 2025-10-06 12:22     ` Ulf Hansson
  2025-10-06 14:41       ` Sudeep Holla
  0 siblings, 1 reply; 16+ messages in thread
From: Ulf Hansson @ 2025-10-06 12:22 UTC (permalink / raw)
  To: Sudeep Holla
  Cc: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner, Maulik Shah, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel

On Mon, 6 Oct 2025 at 12:54, Sudeep Holla <sudeep.holla@arm.com> wrote:
>
> On Fri, Oct 03, 2025 at 05:02:44PM +0200, Ulf Hansson wrote:
> > To add support for keeping track of whether there may be a pending IPI
> > scheduled for a CPU or a group of CPUs, let's implement
> > cpus_has_pending_ipi() for arm64.
> >
> > Note, the implementation is intentionally lightweight and doesn't use any
> > additional lock. This is good enough for cpuidle based decisions.
> >
>
> I’m not completely against this change, but I’d like to discuss a few points
> based on my understanding (which might also be incorrect):
>
> 1. For systems that don’t use PM domains for idle, wouldn’t this be
>    unnecessary? It might be worth making this conditional if we decide to
>    proceed.

For the non-PM-domain case, cpuidle_idle_call() calls need_resched()
and bails out if it returns true. I think that does the job for the
other, more common cases.

Making this conditional could make sense. I'm not sure how costly it is
to update the per-CPU variables.

>
> 2. I understand this is intended for the DragonBoard 410c, where the firmware
>    can’t be updated. However, ideally, the PSCI firmware should handle checking
>    for pending IPIs if that’s important for the platform. The firmware could
>    perform this check at the CPU PPU/HW level and prevent entering the
>    state if needed.

I think this is exactly what is happening on the Dragonboard 410c (see
the stats I shared in the commit message of patch 3).

The PSCI FW refuses to enter the suggested idlestate and the call fails.

>
> 3. I’m not an expert, but on systems with a large number of CPUs, tracking
>    this for idle (which may or may not be enabled) seems a bit excessive,
>    especially under heavy load when the system isn’t really idling.

Right, making the tracking mechanism conditional sounds worth
exploring. I guess the trick is to find a good way to dynamically
enable it.

Kind regards
Uffe



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-06 12:22     ` Ulf Hansson
@ 2025-10-06 14:41       ` Sudeep Holla
  2025-10-10  8:03         ` Ulf Hansson
  0 siblings, 1 reply; 16+ messages in thread
From: Sudeep Holla @ 2025-10-06 14:41 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Rafael J . Wysocki, Sudeep Holla, Catalin Marinas, Will Deacon,
	Mark Rutland, Thomas Gleixner, Maulik Shah, Daniel Lezcano,
	Vincent Guittot, linux-pm, linux-arm-kernel, linux-kernel

On Mon, Oct 06, 2025 at 02:22:49PM +0200, Ulf Hansson wrote:
> On Mon, 6 Oct 2025 at 12:54, Sudeep Holla <sudeep.holla@arm.com> wrote:
> >
> > 2. I understand this is intended for the DragonBoard 410c, where the firmware
> >    can’t be updated. However, ideally, the PSCI firmware should handle checking
> >    for pending IPIs if that’s important for the platform. The firmware could
> >    perform this check at the CPU PPU/HW level and prevent entering the
> >    state if needed.
> 
> I think this is exactly what is happening on Dragonboard 410c (see the
> stats I shared in the commit message in patch3).
> 
> The PSCI FW refuses to enter the suggested idlestate and the call fails.
> 

Ah OK, so the PSCI FW is doing its job correctly, and we are just
attempting to reduce the failures by catching a few cases earlier in the
OSPM itself? Sure, it only reduces the failures, but it can't eliminate
them, as an IPI might be issued after this check in the OSPM. I
understand that the call to the firmware can be prevented.

-- 
Regards,
Sudeep



* Re: [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs
  2025-10-03 15:02 [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Ulf Hansson
                   ` (2 preceding siblings ...)
  2025-10-03 15:02 ` [PATCH 3/3] pmdomain: Extend the genpd governor for CPUs to account for IPIs Ulf Hansson
@ 2025-10-06 15:36 ` Sudeep Holla
  2025-10-10  7:52   ` Ulf Hansson
  3 siblings, 1 reply; 16+ messages in thread
From: Sudeep Holla @ 2025-10-06 15:36 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Rafael J . Wysocki, Catalin Marinas, Sudeep Holla, Will Deacon,
	Mark Rutland, Thomas Gleixner, Maulik Shah, Daniel Lezcano,
	Vincent Guittot, linux-pm, linux-arm-kernel, linux-kernel

On Fri, Oct 03, 2025 at 05:02:42PM +0200, Ulf Hansson wrote:
> Platforms using the genpd governor for CPUs rely on it to find the optimal
> idlestate for a group of CPUs. However, observations tell us that significant
> improvements can be made in this area.
> 
> These improvements are based upon allowing us to take pending IPIs into account
> for the group of CPUs that the genpd governor controls. If there is a pending
> IPI for any of these CPUs, we should not request an idlestate that affects the
> group, but rather pick a shallower state that affects only the CPU.
>

Thinking about this further, I’m not sure this issue is really specific to
pmdomain. In my view, the proposed solution could apply equally well to
platforms that don’t use pmdomain for cpuidle. Also, I don’t see why the
solution needs to be architecture-specific.

Thoughts ?

I understand it won’t handle all IPI cases, but generic helpers like
local_softirq_pending() and irq_work_needs_cpu()
should already cover some of them in a platform-independent way.

-- 
Regards,
Sudeep



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-03 15:02 ` [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi() Ulf Hansson
  2025-10-06 10:54   ` Sudeep Holla
@ 2025-10-06 15:55   ` Marc Zyngier
  2025-10-10  8:30     ` Ulf Hansson
  2025-10-17 14:01   ` Thomas Gleixner
  2 siblings, 1 reply; 16+ messages in thread
From: Marc Zyngier @ 2025-10-06 15:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner, Maulik Shah, Sudeep Holla, Daniel Lezcano,
	Vincent Guittot, linux-pm, linux-arm-kernel, linux-kernel

On Fri, 03 Oct 2025 16:02:44 +0100,
Ulf Hansson <ulf.hansson@linaro.org> wrote:
> 
> To add support for keeping track of whether there may be a pending IPI
> scheduled for a CPU or a group of CPUs, let's implement
> cpus_has_pending_ipi() for arm64.
> 
> Note, the implementation is intentionally lightweight and doesn't use any
> additional lock. This is good enough for cpuidle based decisions.
> 
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> ---
>  arch/arm64/kernel/smp.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 68cea3a4a35c..dd1acfa91d44 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -55,6 +55,8 @@
>  
>  #include <trace/events/ipi.h>
>  
> +static DEFINE_PER_CPU(bool, pending_ipi);
> +
>  /*
>   * as from 2.5, kernels no longer have an init_tasks structure
>   * so we need some other way of telling a new secondary core
> @@ -1012,6 +1014,8 @@ static void do_handle_IPI(int ipinr)
>  
>  	if ((unsigned)ipinr < NR_IPI)
>  		trace_ipi_exit(ipi_types[ipinr]);
> +
> +	per_cpu(pending_ipi, cpu) = false;
>  }
>  
>  static irqreturn_t ipi_handler(int irq, void *data)
> @@ -1024,10 +1028,26 @@ static irqreturn_t ipi_handler(int irq, void *data)
>  
>  static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
>  {
> +	unsigned int cpu;
> +
> +	for_each_cpu(cpu, target)
> +		per_cpu(pending_ipi, cpu) = true;
> +

Why isn't all of this part of the core IRQ management? We already
track things like timers, I assume for similar reasons. If IPIs have
to be singled out, I'd rather this is done in common code, and not on
a per architecture basis.

>  	trace_ipi_raise(target, ipi_types[ipinr]);
>  	arm64_send_ipi(target, ipinr);
>  }
>  
> +bool cpus_has_pending_ipi(const struct cpumask *mask)
> +{
> +	unsigned int cpu;
> +
> +	for_each_cpu(cpu, mask) {
> +		if (per_cpu(pending_ipi, cpu))
> +			return true;
> +	}
> +	return false;
> +}
> +

The lack of memory barriers makes me wonder how reliable this is.
Maybe this is relying on the IPIs themselves acting as such, but
that's extremely racy no matter how you look at it.

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs
  2025-10-06 15:36 ` [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Sudeep Holla
@ 2025-10-10  7:52   ` Ulf Hansson
  0 siblings, 0 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-10  7:52 UTC (permalink / raw)
  To: Sudeep Holla
  Cc: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner, Maulik Shah, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel

On Mon, 6 Oct 2025 at 17:36, Sudeep Holla <sudeep.holla@arm.com> wrote:
>
> On Fri, Oct 03, 2025 at 05:02:42PM +0200, Ulf Hansson wrote:
> > Platforms using the genpd governor for CPUs rely on it to find the optimal
> > idlestate for a group of CPUs. However, observations tell us that significant
> > improvements can be made in this area.
> >
> > These improvements are based upon allowing us to take pending IPIs into account
> > for the group of CPUs that the genpd governor controls. If there is a pending
> > IPI for any of these CPUs, we should not request an idlestate that affects the
> > group, but rather pick a shallower state that affects only the CPU.
> >
>
> Thinking about this further, I’m not sure this issue is really specific to
> pmdomain. In my view, the proposed solution could apply equally well to
> platforms that don’t use pmdomain for cpuidle. Also, I don’t see why the
> solution needs to be architecture-specific.
>
> Thoughts ?

From a PSCI PC-mode point of view (I assume that's your main target
with the comment above?), it would *not* make sense to bail out for
idlestates that could affect other CPUs too, because the CPU only
votes for itself.

However, if there were an IPI pending for the current CPU that is
about to enter idle, we should bail out. Although, as stated in the
other thread, we already have the need_resched() check that helps out
with that, I think.

That said, I think this change is mostly interesting from pmdomain
point of view.

Let me comment on the architecture-specific part in the other thread,
as it seems like Marc also had some comments around that.

>
> I understand it won’t handle all IPI cases, but generic helpers like
> local_softirq_pending() and irq_work_needs_cpu()
> should already cover some of them in a platform-independent way.

Thanks for your suggestion, but unfortunately these don't really help,
as they only have information about the current CPU.

Kind regards
Uffe



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-06 14:41       ` Sudeep Holla
@ 2025-10-10  8:03         ` Ulf Hansson
  0 siblings, 0 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-10  8:03 UTC (permalink / raw)
  To: Sudeep Holla
  Cc: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner, Maulik Shah, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel

On Mon, 6 Oct 2025 at 16:41, Sudeep Holla <sudeep.holla@arm.com> wrote:
>
> On Mon, Oct 06, 2025 at 02:22:49PM +0200, Ulf Hansson wrote:
> > On Mon, 6 Oct 2025 at 12:54, Sudeep Holla <sudeep.holla@arm.com> wrote:
> > >
> > > 2. I understand this is intended for the DragonBoard 410c, where the firmware
> > >    can’t be updated. However, ideally, the PSCI firmware should handle checking
> > >    for pending IPIs if that’s important for the platform. The firmware could
> > >    perform this check at the CPU PPU/HW level and prevent entering the
> > >    state if needed.
> >
> > I think this is exactly what is happening on Dragonboard 410c (see the
> > stats I shared in the commit message in patch3).
> >
> > The PSCI FW refuses to enter the suggested idlestate and the call fails.
> >
>
> Ah OK, the PSCI FW is doing the job correctly, we are just attempting to
> reduce the failures by catching few cases earlier in the OSPM itself ?

Correct!

> Sure it only reduces the failures but it can't eliminate those as IPI might
> be issued after this check in the OSPM. I understand the call to firmware
> can be prevented.

Yes!

Although, it seems we end up playing a ping-pong game with the FW.
Note that if the FW responds with an error because we have tried to
enter an idlestate for a group of CPUs, nothing prevents idling the
CPU again, and hence we might retry with the same idlestate (at least
until the pending IPI gets delivered).

My point is, this problem is not negligible, as you can see from the
stats I have shared in the commit message of patch 3.

Kind regards
Uffe



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-06 15:55   ` Marc Zyngier
@ 2025-10-10  8:30     ` Ulf Hansson
  2025-10-10  9:48       ` Marc Zyngier
  2025-10-10  9:55       ` Mark Rutland
  0 siblings, 2 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-10  8:30 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner, Maulik Shah, Sudeep Holla, Daniel Lezcano,
	Vincent Guittot, linux-pm, linux-arm-kernel, linux-kernel

On Mon, 6 Oct 2025 at 17:55, Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 03 Oct 2025 16:02:44 +0100,
> Ulf Hansson <ulf.hansson@linaro.org> wrote:
> >
> > To add support for keeping track of whether there may be a pending IPI
> > scheduled for a CPU or a group of CPUs, let's implement
> > cpus_has_pending_ipi() for arm64.
> >
> > Note, the implementation is intentionally lightweight and doesn't use any
> > additional lock. This is good enough for cpuidle based decisions.
> >
> > Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> > ---
> >  arch/arm64/kernel/smp.c | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> >
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 68cea3a4a35c..dd1acfa91d44 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -55,6 +55,8 @@
> >
> >  #include <trace/events/ipi.h>
> >
> > +static DEFINE_PER_CPU(bool, pending_ipi);
> > +
> >  /*
> >   * as from 2.5, kernels no longer have an init_tasks structure
> >   * so we need some other way of telling a new secondary core
> > @@ -1012,6 +1014,8 @@ static void do_handle_IPI(int ipinr)
> >
> >       if ((unsigned)ipinr < NR_IPI)
> >               trace_ipi_exit(ipi_types[ipinr]);
> > +
> > +     per_cpu(pending_ipi, cpu) = false;
> >  }
> >
> >  static irqreturn_t ipi_handler(int irq, void *data)
> > @@ -1024,10 +1028,26 @@ static irqreturn_t ipi_handler(int irq, void *data)
> >
> >  static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
> >  {
> > +     unsigned int cpu;
> > +
> > +     for_each_cpu(cpu, target)
> > +             per_cpu(pending_ipi, cpu) = true;
> > +
>
> Why isn't all of this part of the core IRQ management? We already
> track things like timers, I assume for similar reasons. If IPIs have
> to be singled out, I'd rather this is done in common code, and not on
> a per architecture basis.

The idea was to start simple and avoid running code for architectures
that don't seem to need it, by using this opt-in and lightweight
approach.

I guess we could do this in generic IRQ code too, perhaps making it
conditional behind a Kconfig option, if required.

>
> >       trace_ipi_raise(target, ipi_types[ipinr]);
> >       arm64_send_ipi(target, ipinr);
> >  }
> >
> > +bool cpus_has_pending_ipi(const struct cpumask *mask)
> > +{
> > +     unsigned int cpu;
> > +
> > +     for_each_cpu(cpu, mask) {
> > +             if (per_cpu(pending_ipi, cpu))
> > +                     return true;
> > +     }
> > +     return false;
> > +}
> > +
>
> The lack of memory barriers makes me wonder how reliable this is.
> Maybe this is relying on the IPIs themselves acting as such, but
> that's extremely racy no matter how you look at it.

It's deliberately lightweight. I am worried about introducing
locking/barriers, as those could be costly and introduce latencies in
these paths.

Still, this is good enough to significantly improve cpuidle-based
decisions in this regard. Please have a look at the commit message of
patch 3.

That said, I am certainly open to suggestions on how to reduce the
"raciness", while still keeping it lightweight.

Kind regards
Uffe


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-10  8:30     ` Ulf Hansson
@ 2025-10-10  9:48       ` Marc Zyngier
  2025-10-10  9:55       ` Mark Rutland
  1 sibling, 0 replies; 16+ messages in thread
From: Marc Zyngier @ 2025-10-10  9:48 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Mark Rutland,
	Thomas Gleixner, Maulik Shah, Sudeep Holla, Daniel Lezcano,
	Vincent Guittot, linux-pm, linux-arm-kernel, linux-kernel

On Fri, 10 Oct 2025 09:30:11 +0100,
Ulf Hansson <ulf.hansson@linaro.org> wrote:
> 
> On Mon, 6 Oct 2025 at 17:55, Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Fri, 03 Oct 2025 16:02:44 +0100,
> > Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > >
> > > To add support for keeping track of whether there may be a pending IPI
> > > scheduled for a CPU or a group of CPUs, let's implement
> > > cpus_has_pending_ipi() for arm64.
> > >
> > > Note, the implementation is intentionally lightweight and doesn't use any
> > > additional lock. This is good enough for cpuidle based decisions.
> > >
> > > Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> > > ---
> > >  arch/arm64/kernel/smp.c | 20 ++++++++++++++++++++
> > >  1 file changed, 20 insertions(+)
> > >
> > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > index 68cea3a4a35c..dd1acfa91d44 100644
> > > --- a/arch/arm64/kernel/smp.c
> > > +++ b/arch/arm64/kernel/smp.c
> > > @@ -55,6 +55,8 @@
> > >
> > >  #include <trace/events/ipi.h>
> > >
> > > +static DEFINE_PER_CPU(bool, pending_ipi);
> > > +
> > >  /*
> > >   * as from 2.5, kernels no longer have an init_tasks structure
> > >   * so we need some other way of telling a new secondary core
> > > @@ -1012,6 +1014,8 @@ static void do_handle_IPI(int ipinr)
> > >
> > >       if ((unsigned)ipinr < NR_IPI)
> > >               trace_ipi_exit(ipi_types[ipinr]);
> > > +
> > > +     per_cpu(pending_ipi, cpu) = false;
> > >  }
> > >
> > >  static irqreturn_t ipi_handler(int irq, void *data)
> > > @@ -1024,10 +1028,26 @@ static irqreturn_t ipi_handler(int irq, void *data)
> > >
> > >  static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
> > >  {
> > > +     unsigned int cpu;
> > > +
> > > +     for_each_cpu(cpu, target)
> > > +             per_cpu(pending_ipi, cpu) = true;
> > > +
> >
> > Why isn't all of this part of the core IRQ management? We already
> > track things like timers, I assume for similar reasons. If IPIs have
> > to be singled out, I'd rather this is done in common code, and not on
> > a per architecture basis.
> 
> The idea was to start simple and to avoid running code on architectures
> that don't seem to need it, by using this opt-in, lightweight
> approach.

If this stuff is remotely useful, then it is useful to everyone, and I
don't see the point in littering the arch code with it. We have plenty
of buy-in features that can be selected by an architecture and ignored
by others if they see fit.

> 
> I guess we could do this in generic IRQ code too. Perhaps making it
> conditional behind a Kconfig, if required.
> 
> >
> > >       trace_ipi_raise(target, ipi_types[ipinr]);
> > >       arm64_send_ipi(target, ipinr);
> > >  }
> > >
> > > +bool cpus_has_pending_ipi(const struct cpumask *mask)
> > > +{
> > > +     unsigned int cpu;
> > > +
> > > +     for_each_cpu(cpu, mask) {
> > > +             if (per_cpu(pending_ipi, cpu))
> > > +                     return true;
> > > +     }
> > > +     return false;
> > > +}
> > > +
> >
> > The lack of memory barriers makes me wonder how reliable this is.
> > Maybe this is relying on the IPIs themselves acting as such, but
> > that's extremely racy no matter how you look at it.
> 
> It's deliberately lightweight. I am worried about introducing
> locking/barriers, as those could be costly and introduce latencies in
> these paths.

"I've made this car 10% faster by removing the brakes. It's great! Try
it!"

> Still, this is good enough to significantly improve cpuidle-based
> decisions in this regard. Please have a look at the commit message of
> patch 3.

If I can't see how this thing is *correct*, I really don't care how
fast it is. You might as well remove most locks and barriers from the
kernel -- it will be even faster!

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-10  8:30     ` Ulf Hansson
  2025-10-10  9:48       ` Marc Zyngier
@ 2025-10-10  9:55       ` Mark Rutland
  1 sibling, 0 replies; 16+ messages in thread
From: Mark Rutland @ 2025-10-10  9:55 UTC (permalink / raw)
  To: Ulf Hansson
  Cc: Marc Zyngier, Rafael J . Wysocki, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Maulik Shah, Sudeep Holla, Daniel Lezcano,
	Vincent Guittot, linux-pm, linux-arm-kernel, linux-kernel

On Fri, Oct 10, 2025 at 10:30:11AM +0200, Ulf Hansson wrote:
> On Mon, 6 Oct 2025 at 17:55, Marc Zyngier <maz@kernel.org> wrote:
> > On Fri, 03 Oct 2025 16:02:44 +0100,
> > Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > To add support for keeping track of whether there may be a pending IPI
> > > scheduled for a CPU or a group of CPUs, let's implement
> > > cpus_has_pending_ipi() for arm64.
> > >
> > > Note, the implementation is intentionally lightweight and doesn't use any
> > > additional lock. This is good enough for cpuidle based decisions.
> > >
> > > Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>

> > > +bool cpus_has_pending_ipi(const struct cpumask *mask)
> > > +{
> > > +     unsigned int cpu;
> > > +
> > > +     for_each_cpu(cpu, mask) {
> > > +             if (per_cpu(pending_ipi, cpu))
> > > +                     return true;
> > > +     }
> > > +     return false;
> > > +}
> > > +
> >
> > The lack of memory barriers makes me wonder how reliable this is.
> > Maybe this is relying on the IPIs themselves acting as such, but
> > that's extremely racy no matter how you look at it.
> 
> It's deliberately lightweight. I am worried about introducing
> locking/barriers, as those could be costly and introduce latencies in
> these paths.

I think the concern is that the naming implies a precise semantic that
the code doesn't actually provide. As written and commented, this
function definitely has false positives and false negatives.

The commit message says "This is good enough for cpuidle based
decisions", but doesn't say what those decisions require nor why this is
good enough.

If false positives and/or false negatives are ok, add a comment block
above the function to mention that those are acceptable. Presumably
there's some boundary at which incorrectness is not acceptable (e.g. if
it's wrong 50% of the time), and we'd want to understand how we can
ensure that we're the right side of that boundary.

Mark.



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-03 15:02 ` [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi() Ulf Hansson
  2025-10-06 10:54   ` Sudeep Holla
  2025-10-06 15:55   ` Marc Zyngier
@ 2025-10-17 14:01   ` Thomas Gleixner
  2025-10-20 13:15     ` Ulf Hansson
  2 siblings, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2025-10-17 14:01 UTC (permalink / raw)
  To: Ulf Hansson, Rafael J . Wysocki, Catalin Marinas, Will Deacon,
	Mark Rutland
  Cc: Maulik Shah, Sudeep Holla, Daniel Lezcano, Vincent Guittot,
	linux-pm, linux-arm-kernel, linux-kernel, Ulf Hansson

On Fri, Oct 03 2025 at 17:02, Ulf Hansson wrote:
> Note, the implementation is intentionally lightweight and doesn't use
> any

By some definition of lightweight.

>  static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
>  {
> +	unsigned int cpu;
> +
> +	for_each_cpu(cpu, target)
> +		per_cpu(pending_ipi, cpu) = true;

Iterating over a full cpumask on a big system is not necessarily
considered lightweight. And that comes on top of the loop in
smp_call_function_many_cond() plus the potential loop in
arm64_send_ipi()...

None of this is actually needed. If you want a lightweight racy check
whether there is an IPI en route to a set of CPUs then you can simply do
that in kernel/smp.c:

bool smp_pending_ipis_crystalball(const struct cpumask *mask)
{
	unsigned int cpu;

	for_each_cpu(cpu, mask) {
		if (!llist_empty(per_cpu_ptr(&call_single_queue, cpu)))
			return true;
	}
	return false;
}

No?

Thanks,

        tglx



* Re: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
  2025-10-17 14:01   ` Thomas Gleixner
@ 2025-10-20 13:15     ` Ulf Hansson
  0 siblings, 0 replies; 16+ messages in thread
From: Ulf Hansson @ 2025-10-20 13:15 UTC (permalink / raw)
  To: Thomas Gleixner, Mark Rutland, Marc Zyngier
  Cc: Rafael J . Wysocki, Catalin Marinas, Will Deacon, Maulik Shah,
	Sudeep Holla, Daniel Lezcano, Vincent Guittot, linux-pm,
	linux-arm-kernel, linux-kernel

+ Marc

On Fri, 17 Oct 2025 at 16:01, Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Fri, Oct 03 2025 at 17:02, Ulf Hansson wrote:
> > Note, the implementation is intentionally lightweight and doesn't use
> > any
>
> By some definition of lightweight.
>
> >  static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
> >  {
> > +     unsigned int cpu;
> > +
> > +     for_each_cpu(cpu, target)
> > +             per_cpu(pending_ipi, cpu) = true;
>
> Iterating over a full cpumask on a big system is not necessarily
> considered lightweight. And that comes on top of the loop in
> smp_call_function_many_cond() plus the potential loop in
> arm64_send_ipi()...
>
> None of this is actually needed. If you want a lightweight racy check
> whether there is an IPI en route to a set of CPUs then you can simply do
> that in kernel/smp.c:
>
> bool smp_pending_ipis_crystalball(const struct cpumask *mask)
> {
>         unsigned int cpu;
>
>         for_each_cpu(cpu, mask) {
>                 if (!llist_empty(per_cpu_ptr(&call_single_queue, cpu)))
>                         return true;
>         }
>         return false;
> }
>
> No?

Indeed this is way better, thanks for your suggestion!

I have also tried this out and can confirm that it gives the same
improved results on the Dragonboard 410c!

I will submit a new version of the series and I will try to
incorporate all the valuable feedback I have received.

Thanks everyone and kind regards
Uffe



end of thread, other threads:[~2025-10-20 13:16 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-03 15:02 [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Ulf Hansson
2025-10-03 15:02 ` [PATCH 1/3] smp: Introduce a weak helper function to check for pending IPIs Ulf Hansson
2025-10-03 15:02 ` [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi() Ulf Hansson
2025-10-06 10:54   ` Sudeep Holla
2025-10-06 12:22     ` Ulf Hansson
2025-10-06 14:41       ` Sudeep Holla
2025-10-10  8:03         ` Ulf Hansson
2025-10-06 15:55   ` Marc Zyngier
2025-10-10  8:30     ` Ulf Hansson
2025-10-10  9:48       ` Marc Zyngier
2025-10-10  9:55       ` Mark Rutland
2025-10-17 14:01   ` Thomas Gleixner
2025-10-20 13:15     ` Ulf Hansson
2025-10-03 15:02 ` [PATCH 3/3] pmdomain: Extend the genpd governor for CPUs to account for IPIs Ulf Hansson
2025-10-06 15:36 ` [PATCH 0/3] pmdomain: Improve idlestate selection for CPUs Sudeep Holla
2025-10-10  7:52   ` Ulf Hansson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).