* [RFC][PATCH v2 0/5] cpuidle,teo: Improve TEO vs NOHZ
@ 2023-08-02 13:24 Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 1/5] tick/nohz: Introduce tick_get_sleep_length() Peter Zijlstra
` (4 more replies)
0 siblings, 5 replies; 7+ messages in thread
From: Peter Zijlstra @ 2023-08-02 13:24 UTC (permalink / raw)
To: anna-maria, rafael, tglx, frederic, gautham.shenoy
Cc: linux-kernel, peterz, daniel.lezcano, linux-pm, mingo, juri.lelli,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
bristot, vschneid, kajetan.puchalski
Patches attempting to solve the TEO tick/nohz vs timer-pull issue.
Very lightly tested.
Since last time:
- 'split' tick_nohz_get_sleep_length()
- replaced the tick heuristic
* [RFC][PATCH v2 1/5] tick/nohz: Introduce tick_get_sleep_length()
2023-08-02 13:24 [RFC][PATCH v2 0/5] cpuidle,teo: Improve TEO vs NOHZ Peter Zijlstra
@ 2023-08-02 13:24 ` Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 2/5] cpuidle: Inject tick boundary state Peter Zijlstra
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2023-08-02 13:24 UTC (permalink / raw)
To: anna-maria, rafael, tglx, frederic, gautham.shenoy
Cc: linux-kernel, peterz, daniel.lezcano, linux-pm, mingo, juri.lelli,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
bristot, vschneid, kajetan.puchalski
Add a variant of tick_nohz_get_sleep_length() that conditionally does
the NOHZ part.
tick_get_sleep_length(false) returns what tick_nohz_get_sleep_length()
reports through its delta_next argument, while
tick_get_sleep_length(true) returns the regular return value of
tick_nohz_get_sleep_length().
This allows eliding tick_nohz_next_event() -- which is going to be
expensive with timer-pull.
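For illustration, a minimal governor-side sketch of the intended
calling pattern (this is not part of the patch; the surrounding
context is hypothetical):

	/*
	 * Sketch only: take the cheap "next hardware event" bound first and
	 * pay for the NOHZ computation only when that bound already reaches
	 * the tick, i.e. when stopping the tick could actually buy a longer
	 * sleep.
	 */
	ktime_t sleep_ns = tick_get_sleep_length(false); /* dev->next_event - now */

	if (sleep_ns >= TICK_NSEC)	/* nothing queued before the tick */
		sleep_ns = tick_get_sleep_length(true); /* may run tick_nohz_next_event() */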
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/linux/tick.h | 5 +++++
kernel/time/tick-sched.c | 35 +++++++++++++++++++++++------------
2 files changed, 28 insertions(+), 12 deletions(-)
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -136,6 +136,7 @@ extern void tick_nohz_irq_exit(void);
extern bool tick_nohz_idle_got_tick(void);
extern ktime_t tick_nohz_get_next_hrtimer(void);
extern ktime_t tick_nohz_get_sleep_length(ktime_t *delta_next);
+extern ktime_t tick_get_sleep_length(bool nohz);
extern unsigned long tick_nohz_get_idle_calls(void);
extern unsigned long tick_nohz_get_idle_calls_cpu(int cpu);
extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
@@ -168,6 +169,10 @@ static inline ktime_t tick_nohz_get_slee
*delta_next = TICK_NSEC;
return *delta_next;
}
+static inline ktime_t tick_get_sleep_length(bool nohz)
+{
+ return TICK_NSEC;
+}
static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -1218,17 +1218,7 @@ ktime_t tick_nohz_get_next_hrtimer(void)
return __this_cpu_read(tick_cpu_device.evtdev)->next_event;
}
-/**
- * tick_nohz_get_sleep_length - return the expected length of the current sleep
- * @delta_next: duration until the next event if the tick cannot be stopped
- *
- * Called from power state control code with interrupts disabled.
- *
- * The return value of this function and/or the value returned by it through the
- * @delta_next pointer can be negative which must be taken into account by its
- * callers.
- */
-ktime_t tick_nohz_get_sleep_length(ktime_t *delta_next)
+static ktime_t __tick_nohz_get_sleep_length(ktime_t *delta_next, bool nohz)
{
struct clock_event_device *dev = __this_cpu_read(tick_cpu_device.evtdev);
struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
@@ -1244,7 +1234,7 @@ ktime_t tick_nohz_get_sleep_length(ktime
*delta_next = ktime_sub(dev->next_event, now);
- if (!can_stop_idle_tick(cpu, ts))
+ if (!nohz || !can_stop_idle_tick(cpu, ts))
return *delta_next;
next_event = tick_nohz_next_event(ts, cpu);
@@ -1262,6 +1252,27 @@ ktime_t tick_nohz_get_sleep_length(ktime
}
/**
+ * tick_nohz_get_sleep_length - return the expected length of the current sleep
+ * @delta_next: duration until the next event if the tick cannot be stopped
+ *
+ * Called from power state control code with interrupts disabled.
+ *
+ * The return value of this function and/or the value returned by it through the
+ * @delta_next pointer can be negative which must be taken into account by its
+ * callers.
+ */
+ktime_t tick_nohz_get_sleep_length(ktime_t *delta_next)
+{
+ return __tick_nohz_get_sleep_length(delta_next, true);
+}
+
+ktime_t tick_get_sleep_length(bool nohz)
+{
+ ktime_t delta;
+ return __tick_nohz_get_sleep_length(&delta, nohz);
+}
+
+/**
* tick_nohz_get_idle_calls_cpu - return the current idle calls counter value
* for a particular CPU.
*
* [RFC][PATCH v2 2/5] cpuidle: Inject tick boundary state
2023-08-02 13:24 [RFC][PATCH v2 0/5] cpuidle,teo: Improve TEO vs NOHZ Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 1/5] tick/nohz: Introduce tick_get_sleep_length() Peter Zijlstra
@ 2023-08-02 13:24 ` Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 3/5] cpuidle/teo: Simplify a little Peter Zijlstra
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2023-08-02 13:24 UTC (permalink / raw)
To: anna-maria, rafael, tglx, frederic, gautham.shenoy
Cc: linux-kernel, peterz, daniel.lezcano, linux-pm, mingo, juri.lelli,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
bristot, vschneid, kajetan.puchalski
To help governors that track history in idle-state buckets (such as
TEO) make a useful decision about NOHZ, make sure there is a bucket
that counts sleeps of a full tick and longer.
To be inclusive of the tick itself (after all, if we do not disable
NOHZ we will sleep for a full tick), the actual boundary should be
just short of a full tick.
IOW, when registering the idle states, add one that is always
disabled, just to have a bucket.
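As a worked example (illustrative numbers, not from this patch): with
HZ=250, TICK_NSEC is 4,000,000 ns, so SHORT_TICK_NSEC becomes
4,000,000 - 4,000,000/32 = 3,875,000 ns. A hypothetical driver
registering states with target residencies of 1 us, 600 us and
10,000 us would then end up with:

	C1     1 us
	C2     600 us
	TICK   3,875 us   (always disabled, exists only as a history bucket)
	C3     10,000 us

whereas a driver whose deepest state is shallower than SHORT_TICK_NSEC
gets the TICK placeholder appended after its last state.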
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
drivers/cpuidle/cpuidle.h | 2 +
drivers/cpuidle/driver.c | 48 +++++++++++++++++++++++++++++++++++++++++++++-
include/linux/cpuidle.h | 2 -
3 files changed, 50 insertions(+), 2 deletions(-)
--- a/drivers/cpuidle/cpuidle.h
+++ b/drivers/cpuidle/cpuidle.h
@@ -72,4 +72,6 @@ static inline void cpuidle_coupled_unreg
}
#endif
+#define SHORT_TICK_NSEC (TICK_NSEC - TICK_NSEC/32)
+
#endif /* __DRIVER_CPUIDLE_H */
--- a/drivers/cpuidle/driver.c
+++ b/drivers/cpuidle/driver.c
@@ -147,13 +147,37 @@ static void cpuidle_setup_broadcast_time
tick_broadcast_disable();
}
+static int tick_enter(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv,
+ int index)
+{
+ return -ENODEV;
+}
+
+static void __cpuidle_state_init_tick(struct cpuidle_state *s)
+{
+ strcpy(s->name, "TICK");
+ strcpy(s->desc, "(no-op)");
+
+ s->target_residency_ns = SHORT_TICK_NSEC;
+ s->target_residency = div_u64(SHORT_TICK_NSEC, NSEC_PER_USEC);
+
+ s->exit_latency_ns = 0;
+ s->exit_latency = 0;
+
+ s->flags |= CPUIDLE_FLAG_UNUSABLE;
+
+ s->enter = tick_enter;
+ s->enter_s2idle = tick_enter;
+}
+
/**
* __cpuidle_driver_init - initialize the driver's internal data
* @drv: a valid pointer to a struct cpuidle_driver
*/
static void __cpuidle_driver_init(struct cpuidle_driver *drv)
{
- int i;
+ int tick = 0, i;
/*
* Use all possible CPUs as the default, because if the kernel boots
@@ -163,6 +187,9 @@ static void __cpuidle_driver_init(struct
if (!drv->cpumask)
drv->cpumask = (struct cpumask *)cpu_possible_mask;
+ if (WARN_ON_ONCE(drv->state_count >= CPUIDLE_STATE_MAX-2))
+ tick = 1;
+
for (i = 0; i < drv->state_count; i++) {
struct cpuidle_state *s = &drv->states[i];
@@ -192,6 +219,25 @@ static void __cpuidle_driver_init(struct
s->exit_latency_ns = 0;
else
s->exit_latency = div_u64(s->exit_latency_ns, NSEC_PER_USEC);
+
+ if (!tick && s->target_residency_ns >= SHORT_TICK_NSEC) {
+ tick = 1;
+
+ if (s->target_residency_ns == SHORT_TICK_NSEC)
+ continue;
+
+ memmove(&drv->states[i+1], &drv->states[i],
+ sizeof(struct cpuidle_state) * (CPUIDLE_STATE_MAX - i - 1));
+ __cpuidle_state_init_tick(s);
+ drv->state_count++;
+ i++;
+ }
+ }
+
+ if (!tick) {
+ struct cpuidle_state *s = &drv->states[i];
+ __cpuidle_state_init_tick(s);
+ drv->state_count++;
}
}
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -16,7 +16,7 @@
#include <linux/hrtimer.h>
#include <linux/context_tracking.h>
-#define CPUIDLE_STATE_MAX 10
+#define CPUIDLE_STATE_MAX 16
#define CPUIDLE_NAME_LEN 16
#define CPUIDLE_DESC_LEN 32
* [RFC][PATCH v2 3/5] cpuidle/teo: Simplify a little
2023-08-02 13:24 [RFC][PATCH v2 0/5] cpuidle,teo: Improve TEO vs NOHZ Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 1/5] tick/nohz: Introduce tick_get_sleep_length() Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 2/5] cpuidle: Inject tick boundary state Peter Zijlstra
@ 2023-08-02 13:24 ` Peter Zijlstra
2023-08-03 10:12 ` Kajetan Puchalski
2023-08-02 13:24 ` [RFC][PATCH v2 4/5] cpuidle/teo: Avoid tick_nohz_next_event() Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 5/5] cpuidle,teo: Improve state selection Peter Zijlstra
4 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2023-08-02 13:24 UTC (permalink / raw)
To: anna-maria, rafael, tglx, frederic, gautham.shenoy
Cc: linux-kernel, peterz, daniel.lezcano, linux-pm, mingo, juri.lelli,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
bristot, vschneid, kajetan.puchalski
Remove some of the early exit cases that rely on state_count, since we
now have the additional tick state. This declutters some of the next
patches; the exits can be re-instated later if desired.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
drivers/cpuidle/governors/teo.c | 31 +------------------------------
1 file changed, 1 insertion(+), 30 deletions(-)
--- a/drivers/cpuidle/governors/teo.c
+++ b/drivers/cpuidle/governors/teo.c
@@ -187,7 +187,6 @@ struct teo_bin {
* @next_recent_idx: Index of the next @recent_idx entry to update.
* @recent_idx: Indices of bins corresponding to recent "intercepts".
* @util_threshold: Threshold above which the CPU is considered utilized
- * @utilized: Whether the last sleep on the CPU happened while utilized
*/
struct teo_cpu {
s64 time_span_ns;
@@ -197,7 +196,6 @@ struct teo_cpu {
int next_recent_idx;
int recent_idx[NR_RECENT];
unsigned long util_threshold;
- bool utilized;
};
static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
@@ -379,33 +377,6 @@ static int teo_select(struct cpuidle_dri
duration_ns = tick_nohz_get_sleep_length(&delta_tick);
cpu_data->sleep_length_ns = duration_ns;
- /* Check if there is any choice in the first place. */
- if (drv->state_count < 2) {
- idx = 0;
- goto end;
- }
- if (!dev->states_usage[0].disable) {
- idx = 0;
- if (drv->states[1].target_residency_ns > duration_ns)
- goto end;
- }
-
- cpu_data->utilized = teo_cpu_is_utilized(dev->cpu, cpu_data);
- /*
- * If the CPU is being utilized over the threshold and there are only 2
- * states to choose from, the metrics need not be considered, so choose
- * the shallowest non-polling state and exit.
- */
- if (drv->state_count < 3 && cpu_data->utilized) {
- for (i = 0; i < drv->state_count; ++i) {
- if (!dev->states_usage[i].disable &&
- !(drv->states[i].flags & CPUIDLE_FLAG_POLLING)) {
- idx = i;
- goto end;
- }
- }
- }
-
/*
* Find the deepest idle state whose target residency does not exceed
* the current sleep length and the deepest idle state not deeper than
@@ -541,7 +512,7 @@ static int teo_select(struct cpuidle_dri
* If the CPU is being utilized over the threshold, choose a shallower
* non-polling state to improve latency
*/
- if (cpu_data->utilized)
+ if (teo_cpu_is_utilized(dev->cpu, cpu_data))
idx = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
end:
* [RFC][PATCH v2 4/5] cpuidle/teo: Avoid tick_nohz_next_event()
2023-08-02 13:24 [RFC][PATCH v2 0/5] cpuidle,teo: Improve TEO vs NOHZ Peter Zijlstra
` (2 preceding siblings ...)
2023-08-02 13:24 ` [RFC][PATCH v2 3/5] cpuidle/teo: Simplify a little Peter Zijlstra
@ 2023-08-02 13:24 ` Peter Zijlstra
2023-08-02 13:24 ` [RFC][PATCH v2 5/5] cpuidle,teo: Improve state selection Peter Zijlstra
4 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2023-08-02 13:24 UTC (permalink / raw)
To: anna-maria, rafael, tglx, frederic, gautham.shenoy
Cc: linux-kernel, peterz, daniel.lezcano, linux-pm, mingo, juri.lelli,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
bristot, vschneid, kajetan.puchalski
Use the new tick_get_sleep_length() call in conjunction with the new
TICK state to elide tick_nohz_next_event() when possible.
Specifically, start state selection from the existing next timer (tick
or earlier), and only ask for the NOHZ next timer when selection lands
on the TICK state.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
drivers/cpuidle/governors/teo.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
--- a/drivers/cpuidle/governors/teo.c
+++ b/drivers/cpuidle/governors/teo.c
@@ -139,6 +139,7 @@
#include <linux/sched/clock.h>
#include <linux/sched/topology.h>
#include <linux/tick.h>
+#include "../cpuidle.h"
/*
* The number of bits to shift the CPU's capacity by in order to determine
@@ -363,8 +364,7 @@ static int teo_select(struct cpuidle_dri
int constraint_idx = 0;
int idx0 = 0, idx = -1;
bool alt_intercepts, alt_recent;
- ktime_t delta_tick;
- s64 duration_ns;
+ s64 duration_ns, tick_ns;
int i;
if (dev->last_state_idx >= 0) {
@@ -374,8 +374,7 @@ static int teo_select(struct cpuidle_dri
cpu_data->time_span_ns = local_clock();
- duration_ns = tick_nohz_get_sleep_length(&delta_tick);
- cpu_data->sleep_length_ns = duration_ns;
+ duration_ns = tick_ns = tick_get_sleep_length(false);
/*
* Find the deepest idle state whose target residency does not exceed
@@ -407,6 +406,14 @@ static int teo_select(struct cpuidle_dri
if (s->target_residency_ns > duration_ns)
break;
+ if (s->target_residency_ns == SHORT_TICK_NSEC) {
+ /*
+ * We hit the tick state, see if it makes sense to
+ * disable the tick and go deeper still.
+ */
+ duration_ns = tick_get_sleep_length(true);
+ }
+
idx = i;
if (s->exit_latency_ns <= latency_req)
@@ -417,6 +424,8 @@ static int teo_select(struct cpuidle_dri
idx_recent_sum = recent_sum;
}
+ cpu_data->sleep_length_ns = duration_ns;
+
/* Avoid unnecessary overhead. */
if (idx < 0) {
idx = 0; /* No states enabled, must use 0. */
@@ -531,8 +540,8 @@ static int teo_select(struct cpuidle_dri
* that.
*/
if (idx > idx0 &&
- drv->states[idx].target_residency_ns > delta_tick)
- idx = teo_find_shallower_state(drv, dev, idx, delta_tick, false);
+ drv->states[idx].target_residency_ns > tick_ns)
+ idx = teo_find_shallower_state(drv, dev, idx, tick_ns, false);
}
return idx;
* [RFC][PATCH v2 5/5] cpuidle,teo: Improve state selection
2023-08-02 13:24 [RFC][PATCH v2 0/5] cpuidle,teo: Improve TEO vs NOHZ Peter Zijlstra
` (3 preceding siblings ...)
2023-08-02 13:24 ` [RFC][PATCH v2 4/5] cpuidle/teo: Avoid tick_nohz_next_event() Peter Zijlstra
@ 2023-08-02 13:24 ` Peter Zijlstra
4 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2023-08-02 13:24 UTC (permalink / raw)
To: anna-maria, rafael, tglx, frederic, gautham.shenoy
Cc: linux-kernel, peterz, daniel.lezcano, linux-pm, mingo, juri.lelli,
vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
bristot, vschneid, kajetan.puchalski
When selecting a state, stop descending once history tells us that 66%
of recent idle durations fell at or below the current state.
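A small worked example with made-up numbers: suppose cpu_data->total
currently holds 300 recorded events, so that

	thresh_sum = 2 * 300 / 3 = 200;

Walking the states from shallow to deep, the selection loop accumulates
the intercepts and hits of the bins shallower than the state being
examined; as soon as that running sum exceeds 200, i.e. roughly 66% of
the recorded history, it breaks out and no deeper state is considered.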
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
drivers/cpuidle/governors/teo.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/drivers/cpuidle/governors/teo.c
+++ b/drivers/cpuidle/governors/teo.c
@@ -361,6 +361,7 @@ static int teo_select(struct cpuidle_dri
unsigned int recent_sum = 0;
unsigned int idx_hit_sum = 0;
unsigned int hit_sum = 0;
+ unsigned int thresh_sum = 0;
int constraint_idx = 0;
int idx0 = 0, idx = -1;
bool alt_intercepts, alt_recent;
@@ -376,6 +377,8 @@ static int teo_select(struct cpuidle_dri
duration_ns = tick_ns = tick_get_sleep_length(false);
+ thresh_sum = 2 * cpu_data->total / 3; /* 66% */
+
/*
* Find the deepest idle state whose target residency does not exceed
* the current sleep length and the deepest idle state not deeper than
@@ -406,6 +409,9 @@ static int teo_select(struct cpuidle_dri
if (s->target_residency_ns > duration_ns)
break;
+ if (intercept_sum + hit_sum > thresh_sum)
+ break;
+
if (s->target_residency_ns == SHORT_TICK_NSEC) {
/*
* We hit the tick state, see if it makes sense to
* Re: [RFC][PATCH v2 3/5] cpuidle/teo: Simplify a little
2023-08-02 13:24 ` [RFC][PATCH v2 3/5] cpuidle/teo: Simplify a little Peter Zijlstra
@ 2023-08-03 10:12 ` Kajetan Puchalski
0 siblings, 0 replies; 7+ messages in thread
From: Kajetan Puchalski @ 2023-08-03 10:12 UTC (permalink / raw)
To: Peter Zijlstra
Cc: anna-maria, rafael, tglx, frederic, gautham.shenoy, linux-kernel,
daniel.lezcano, linux-pm, mingo, juri.lelli, vincent.guittot,
dietmar.eggemann, rostedt, bsegall, mgorman, bristot, vschneid,
kajetan.puchalski
On Wed, Aug 02, 2023 at 03:24:34PM +0200, Peter Zijlstra wrote:
> Remove some of the early exit cases that rely on state_count, since we
> have the additional tick state. Declutters some of the next patches, can
> possibly be re-instated later if desired.
How does having that added tick state compensate for not checking the
number of states that the governor can choose from in the first place?
[...]
> - /* Check if there is any choice in the first place. */
> - if (drv->state_count < 2) {
> - idx = 0;
> - goto end;
> - }
> - if (!dev->states_usage[0].disable) {
> - idx = 0;
> - if (drv->states[1].target_residency_ns > duration_ns)
> - goto end;
> - }
> -
> - cpu_data->utilized = teo_cpu_is_utilized(dev->cpu, cpu_data);
> - /*
> - * If the CPU is being utilized over the threshold and there are only 2
> - * states to choose from, the metrics need not be considered, so choose
> - * the shallowest non-polling state and exit.
> - */
> - if (drv->state_count < 3 && cpu_data->utilized) {
> - for (i = 0; i < drv->state_count; ++i) {
> - if (!dev->states_usage[i].disable &&
> - !(drv->states[i].flags & CPUIDLE_FLAG_POLLING)) {
> - idx = i;
> - goto end;
> - }
> - }
> - }
What exactly is the benefit of removing this part? On systems with 2
idle states this will just make the governor pointlessly execute the
metrics code when the result is already known regardless. Seems like
pure added overhead.
> -
> /*
> * Find the deepest idle state whose target residency does not exceed
> * the current sleep length and the deepest idle state not deeper than
> @@ -541,7 +512,7 @@ static int teo_select(struct cpuidle_dri
> * If the CPU is being utilized over the threshold, choose a shallower
> * non-polling state to improve latency
> */
> - if (cpu_data->utilized)
> + if (teo_cpu_is_utilized(dev->cpu, cpu_data))
> idx = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
>
> end:
>
>