public inbox for kvm@vger.kernel.org
* Re: [PATCH 17/24] sched/fair: Implement delayed dequeue
       [not found]     ` <5618d029-769a-4690-a581-2df8939f26a9@samsung.com>
@ 2024-10-10  2:49       ` Sean Christopherson
  2024-10-10  7:57         ` Mike Galbraith
  2024-10-10  8:19         ` Peter Zijlstra
  0 siblings, 2 replies; 7+ messages in thread
From: Sean Christopherson @ 2024-10-10  2:49 UTC (permalink / raw)
  To: Marek Szyprowski
  Cc: Peter Zijlstra, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	linux-kernel, kprateek.nayak, wuyun.abel, youssefesmat, tglx,
	efault, kvm

+KVM

On Thu, Aug 29, 2024, Marek Szyprowski wrote:
> On 27.07.2024 12:27, Peter Zijlstra wrote:
> > Extend / fix 86bfbb7ce4f6 ("sched/fair: Add lag based placement") by
> > noting that lag is fundamentally a temporal measure. It should not be
> > carried around indefinitely.
> >
> > OTOH it should also not be instantly discarded, doing so will allow a
> > task to game the system by purposefully (micro) sleeping at the end of
> > its time quantum.
> >
> > Since lag is intimately tied to the virtual time base, a wall-time
> > based decay is also insufficient, notably competition is required for
> > any of this to make sense.
> >
> > Instead, delay the dequeue and keep the 'tasks' on the runqueue,
> > competing until they are eligible.
> >
> > Strictly speaking, we only care about keeping them until the 0-lag
> > point, but that is a difficult proposition, instead carry them around
> > until they get picked again, and dequeue them at that point.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> This patch landed recently in linux-next as commit 152e11f6df29 
> ("sched/fair: Implement delayed dequeue"). In my tests on some of the 
> ARM 32bit boards it causes a regression in rtcwake tool behavior - from 
> time to time this simple call never ends:
> 
> # time rtcwake -s 10 -m on
> 
> Reverting this commit (together with its compile dependencies) on top of
> linux-next fixes this issue. Let me know how I can help debug this issue.

This commit broke KVM's posted interrupt handling (and other things), and the root
cause may be the same underlying issue.

TL;DR: Code that checks task_struct.on_rq may be broken by this commit.

KVM's breakage boils down to the preempt notifiers, i.e. kvm_sched_out(), being
invoked with current->on_rq "true" after KVM has explicitly called schedule().
kvm_sched_out() uses current->on_rq to determine if the vCPU is being preempted
(voluntarily or not, doesn't matter), and so waiting until some later point in
time to call __block_task() causes KVM to think the task was preempted, when in
reality it was not.

  static void kvm_sched_out(struct preempt_notifier *pn,
 			  struct task_struct *next)
  {
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	WRITE_ONCE(vcpu->scheduled_out, true);

	if (current->on_rq && vcpu->wants_to_run) {  <================
		WRITE_ONCE(vcpu->preempted, true);
		WRITE_ONCE(vcpu->ready, true);
	}
	kvm_arch_vcpu_put(vcpu);
	__this_cpu_write(kvm_running_vcpu, NULL);
  }

KVM uses vcpu->preempted for a variety of things, but the most visibly problematic
use is waking a vCPU from (virtual) HLT via posted interrupt wakeup.  When a vCPU
HLTs, KVM ultimately calls schedule() to schedule out the vCPU until it receives
a wake event.

So that a device or another vCPU can post an interrupt as a wake event, KVM mucks
with the blocking vCPU's posted interrupt descriptor: posted interrupts that
should be wake events get delivered on a dedicated host IRQ vector, which lets
KVM kick and wake the target vCPU.

But when vcpu->preempted is true, KVM suppresses posted interrupt notifications,
knowing that the vCPU will be scheduled back in.  Because a vCPU (task) can be
preempted while KVM is emulating HLT, KVM keys off vcpu->preempted to set PID.SN,
and doesn't exempt the blocking case.  In short, KVM uses vcpu->preempted, i.e.
current->on_rq, to differentiate between the vCPU getting preempted and KVM
executing schedule().

As a result, the false positive for vcpu->preempted causes KVM to suppress posted
interrupt notifications and the target vCPU never gets its wake event.


Peter,

Any thoughts on how best to handle this?  The below hack-a-fix resolves the issue,
but it's obviously not appropriate.  KVM uses vcpu->preempted for more than just
posted interrupts, so KVM needs functionality equivalent to current->on_rq as it
behaved before this commit.

@@ -6387,7 +6390,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 
        WRITE_ONCE(vcpu->scheduled_out, true);
 
-       if (current->on_rq && vcpu->wants_to_run) {
+       if (se_runnable(&current->se) && vcpu->wants_to_run) {
                WRITE_ONCE(vcpu->preempted, true);
                WRITE_ONCE(vcpu->ready, true);
        }

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH 17/24] sched/fair: Implement delayed dequeue
  2024-10-10  2:49       ` [PATCH 17/24] sched/fair: Implement delayed dequeue Sean Christopherson
@ 2024-10-10  7:57         ` Mike Galbraith
  2024-10-10 16:18           ` Sean Christopherson
  2024-10-10  8:19         ` Peter Zijlstra
  1 sibling, 1 reply; 7+ messages in thread
From: Mike Galbraith @ 2024-10-10  7:57 UTC (permalink / raw)
  To: Sean Christopherson, Marek Szyprowski
  Cc: Peter Zijlstra, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	linux-kernel, kprateek.nayak, wuyun.abel, youssefesmat, tglx, kvm

On Wed, 2024-10-09 at 19:49 -0700, Sean Christopherson wrote:
>
> Any thoughts on how best to handle this?  The below hack-a-fix resolves the issue,
> but it's obviously not appropriate.  KVM uses vcpu->preempted for more than just
> posted interrupts, so KVM needs equivalent functionality to current->on-rq as it
> was before this commit.
>
> @@ -6387,7 +6390,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>  
>         WRITE_ONCE(vcpu->scheduled_out, true);
>  
> -       if (current->on_rq && vcpu->wants_to_run) {
> +       if (se_runnable(&current->se) && vcpu->wants_to_run) {
>                 WRITE_ONCE(vcpu->preempted, true);
>                 WRITE_ONCE(vcpu->ready, true);
>         }

Why is that deemed "obviously not appropriate"?  ->on_rq in and of
itself meaning only "on rq" doesn't seem like a bad thing.

	-Mike


* Re: [PATCH 17/24] sched/fair: Implement delayed dequeue
  2024-10-10  2:49       ` [PATCH 17/24] sched/fair: Implement delayed dequeue Sean Christopherson
  2024-10-10  7:57         ` Mike Galbraith
@ 2024-10-10  8:19         ` Peter Zijlstra
  2024-10-10  9:18           ` Peter Zijlstra
  1 sibling, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2024-10-10  8:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marek Szyprowski, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	linux-kernel, kprateek.nayak, wuyun.abel, youssefesmat, tglx,
	efault, kvm

On Wed, Oct 09, 2024 at 07:49:54PM -0700, Sean Christopherson wrote:

> TL;DR: Code that checks task_struct.on_rq may be broken by this commit.

Correct, and while I did look at quite a few, I did miss KVM used it,
damn.

> Peter,
> 
> Any thoughts on how best to handle this?  The below hack-a-fix resolves the issue,
> but it's obviously not appropriate.  KVM uses vcpu->preempted for more than just
> posted interrupts, so KVM needs equivalent functionality to current->on-rq as it
> was before this commit.
> 
> @@ -6387,7 +6390,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>  
>         WRITE_ONCE(vcpu->scheduled_out, true);
>  
> -       if (current->on_rq && vcpu->wants_to_run) {
> +       if (se_runnable(&current->se) && vcpu->wants_to_run) {
>                 WRITE_ONCE(vcpu->preempted, true);
>                 WRITE_ONCE(vcpu->ready, true);
>         }

se_runnable() isn't quite right, but yes, a helper along those lines is
probably best. Let me try and grep more to see if there's others I
missed as well :/


* Re: [PATCH 17/24] sched/fair: Implement delayed dequeue
  2024-10-10  8:19         ` Peter Zijlstra
@ 2024-10-10  9:18           ` Peter Zijlstra
  2024-10-10 18:23             ` Sean Christopherson
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2024-10-10  9:18 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marek Szyprowski, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	linux-kernel, kprateek.nayak, wuyun.abel, youssefesmat, tglx,
	efault, kvm

On Thu, Oct 10, 2024 at 10:19:40AM +0200, Peter Zijlstra wrote:
> On Wed, Oct 09, 2024 at 07:49:54PM -0700, Sean Christopherson wrote:
> 
> > TL;DR: Code that checks task_struct.on_rq may be broken by this commit.
> 
> Correct, and while I did look at quite a few, I did miss KVM used it,
> damn.
> 
> > Peter,
> > 
> > Any thoughts on how best to handle this?  The below hack-a-fix resolves the issue,
> > but it's obviously not appropriate.  KVM uses vcpu->preempted for more than just
> > posted interrupts, so KVM needs equivalent functionality to current->on-rq as it
> > was before this commit.
> > 
> > @@ -6387,7 +6390,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
> >  
> >         WRITE_ONCE(vcpu->scheduled_out, true);
> >  
> > -       if (current->on_rq && vcpu->wants_to_run) {
> > +       if (se_runnable(&current->se) && vcpu->wants_to_run) {
> >                 WRITE_ONCE(vcpu->preempted, true);
> >                 WRITE_ONCE(vcpu->ready, true);
> >         }
> 
> se_runnable() isn't quite right, but yes, a helper along those lines is
> probably best. Let me try and grep more to see if there's others I
> missed as well :/

How's the below? I remember looking at the freezer thing before and
deciding it isn't a correctness thing, but given I added the helper, I
changed it anyway. I've added a bunch of comments; the perf thing is
similar to KVM in that it wants to know about preemptions, so that had
to change too.

---
 include/linux/sched.h         |  5 +++++
 kernel/events/core.c          |  2 +-
 kernel/freezer.c              |  7 ++++++-
 kernel/rcu/tasks.h            |  9 +++++++++
 kernel/sched/core.c           | 12 +++++++++---
 kernel/time/tick-sched.c      |  5 +++++
 kernel/trace/trace_selftest.c |  2 +-
 virt/kvm/kvm_main.c           |  2 +-
 8 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0053f0664847..2b1f454e4575 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2134,6 +2134,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+static inline bool task_is_runnable(struct task_struct *p)
+{
+	return p->on_rq && !p->se.sched_delayed;
+}
+
 extern bool sched_task_on_rq(struct task_struct *p);
 extern unsigned long get_wchan(struct task_struct *p);
 extern struct task_struct *cpu_curr_snapshot(int cpu);
diff --git a/kernel/events/core.c b/kernel/events/core.c
index e3589c4287cb..cdd09769e6c5 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9251,7 +9251,7 @@ static void perf_event_switch(struct task_struct *task,
 		},
 	};
 
-	if (!sched_in && task->on_rq) {
+	if (!sched_in && task_is_runnable(task)) {
 		switch_event.event_id.header.misc |=
 				PERF_RECORD_MISC_SWITCH_OUT_PREEMPT;
 	}
diff --git a/kernel/freezer.c b/kernel/freezer.c
index 44bbd7dbd2c8..8d530d0949ff 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -109,7 +109,12 @@ static int __set_task_frozen(struct task_struct *p, void *arg)
 {
 	unsigned int state = READ_ONCE(p->__state);
 
-	if (p->on_rq)
+	/*
+	 * Allow freezing the sched_delayed tasks; they will not execute until
+	 * ttwu() fixes them up, so it is safe to swap their state now, instead
+	 * of waiting for them to get fully dequeued.
+	 */
+	if (task_is_runnable(p))
 		return 0;
 
 	if (p != current && task_curr(p))
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 6333f4ccf024..4d7ee95df06e 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -985,6 +985,15 @@ static bool rcu_tasks_is_holdout(struct task_struct *t)
 	if (!READ_ONCE(t->on_rq))
 		return false;
 
+	/*
+	 * t->on_rq && t->se.sched_delayed *could* be considered sleeping but
+	 * since it is a spurious state (it will transition into the
+	 * traditional blocked state or get woken up without outside
+	 * dependencies), not considering it such should only affect timing.
+	 *
+	 * Be conservative for now and not include it.
+	 */
+
 	/*
 	 * Idle tasks (or idle injection) within the idle loop are RCU-tasks
 	 * quiescent states. But CPU boot code performed by the idle task
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0bacc5cd3693..be5c04eb5ba0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -548,6 +548,11 @@ sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags) { }
  *   ON_RQ_MIGRATING state is used for migration without holding both
  *   rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
  *
+ *   Additionally it is possible to be ->on_rq but still be considered not
+ *   runnable when p->se.sched_delayed is true. These tasks are on the runqueue
+ *   but will be dequeued as soon as they get picked again. See the
+ *   task_is_runnable() helper.
+ *
  * p->on_cpu <- { 0, 1 }:
  *
  *   is set by prepare_task() and cleared by finish_task() such that it will be
@@ -4358,9 +4363,10 @@ static bool __task_needs_rq_lock(struct task_struct *p)
  * @arg: Argument to function.
  *
  * Fix the task in it's current state by avoiding wakeups and or rq operations
- * and call @func(@arg) on it.  This function can use ->on_rq and task_curr()
- * to work out what the state is, if required.  Given that @func can be invoked
- * with a runqueue lock held, it had better be quite lightweight.
+ * and call @func(@arg) on it.  This function can use task_is_runnable() and
+ * task_curr() to work out what the state is, if required.  Given that @func
+ * can be invoked with a runqueue lock held, it had better be quite
+ * lightweight.
  *
  * Returns:
  *   Whatever @func returns
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 753a184c7090..59efa14ce185 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -435,6 +435,11 @@ static void tick_nohz_kick_task(struct task_struct *tsk)
 	 *   tick_nohz_task_switch()
 	 *     LOAD p->tick_dep_mask
 	 */
+	// XXX given a task picks up the dependency on schedule(), should we
+	// only care about tasks that are currently on the CPU instead of all
+	// that are on the runqueue?
+	//
+	// That is, does this want to be: task_on_cpu() / task_curr()?
 	if (!sched_task_on_rq(tsk))
 		return;
 
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index c4ad7cd7e778..1469dd8075fa 100644
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@ -1485,7 +1485,7 @@ trace_selftest_startup_wakeup(struct tracer *trace, struct trace_array *tr)
 	/* reset the max latency */
 	tr->max_latency = 0;
 
-	while (p->on_rq) {
+	while (task_is_runnable(p)) {
 		/*
 		 * Sleep to make sure the -deadline thread is asleep too.
 		 * On virtual machines we can't rely on timings,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 05cbb2548d99..0c666f1870af 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6387,7 +6387,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 
 	WRITE_ONCE(vcpu->scheduled_out, true);
 
-	if (current->on_rq && vcpu->wants_to_run) {
+	if (task_is_runnable(current) && vcpu->wants_to_run) {
 		WRITE_ONCE(vcpu->preempted, true);
 		WRITE_ONCE(vcpu->ready, true);
 	}


* Re: [PATCH 17/24] sched/fair: Implement delayed dequeue
  2024-10-10  7:57         ` Mike Galbraith
@ 2024-10-10 16:18           ` Sean Christopherson
  2024-10-10 17:12             ` Mike Galbraith
  0 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2024-10-10 16:18 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Marek Szyprowski, Peter Zijlstra, mingo, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	vschneid, linux-kernel, kprateek.nayak, wuyun.abel, youssefesmat,
	tglx, kvm

On Thu, Oct 10, 2024, Mike Galbraith wrote:
> On Wed, 2024-10-09 at 19:49 -0700, Sean Christopherson wrote:
> >
> > Any thoughts on how best to handle this?  The below hack-a-fix resolves the issue,
> > but it's obviously not appropriate.  KVM uses vcpu->preempted for more than just
> > posted interrupts, so KVM needs equivalent functionality to current->on-rq as it
> > was before this commit.
> >
> > @@ -6387,7 +6390,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
> >  
> >         WRITE_ONCE(vcpu->scheduled_out, true);
> >  
> > -       if (current->on_rq && vcpu->wants_to_run) {
> > +       if (se_runnable(&current->se) && vcpu->wants_to_run) {
> >                 WRITE_ONCE(vcpu->preempted, true);
> >                 WRITE_ONCE(vcpu->ready, true);
> >         }
> 
> Why is that deemed "obviously not appropriate"?  ->on_rq in and of
> itself meaning only "on rq" doesn't seem like a bad thing.

Doh, my wording was unclear.  I didn't mean the logic was inappropriate, I meant
that KVM shouldn't be poking into an internal sched/ helper.


* Re: [PATCH 17/24] sched/fair: Implement delayed dequeue
  2024-10-10 16:18           ` Sean Christopherson
@ 2024-10-10 17:12             ` Mike Galbraith
  0 siblings, 0 replies; 7+ messages in thread
From: Mike Galbraith @ 2024-10-10 17:12 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Marek Szyprowski, Peter Zijlstra, mingo, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	vschneid, linux-kernel, kprateek.nayak, wuyun.abel, youssefesmat,
	tglx, kvm

On Thu, 2024-10-10 at 09:18 -0700, Sean Christopherson wrote:
> On Thu, Oct 10, 2024, Mike Galbraith wrote:
> > On Wed, 2024-10-09 at 19:49 -0700, Sean Christopherson wrote:
> > >
> > > Any thoughts on how best to handle this?  The below hack-a-fix resolves the issue,
> > > but it's obviously not appropriate.  KVM uses vcpu->preempted for more than just
> > > posted interrupts, so KVM needs equivalent functionality to current->on-rq as it
> > > was before this commit.
> > >
> > > @@ -6387,7 +6390,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
> > >  
> > >         WRITE_ONCE(vcpu->scheduled_out, true);
> > >  
> > > -       if (current->on_rq && vcpu->wants_to_run) {
> > > +       if (se_runnable(&current->se) && vcpu->wants_to_run) {
> > >                 WRITE_ONCE(vcpu->preempted, true);
> > >                 WRITE_ONCE(vcpu->ready, true);
> > >         }
> >
> > Why is that deemed "obviously not appropriate"?  ->on_rq in and of
> > itself meaning only "on rq" doesn't seem like a bad thing.
>
> Doh, my wording was unclear.  I didn't mean the logic was inappropriate, I meant
> that KVM shouldn't be poking into an internal sched/ helper.

Ah, confusion all better.  (yeah, swiping others' toys is naughty)

	-Mike


* Re: [PATCH 17/24] sched/fair: Implement delayed dequeue
  2024-10-10  9:18           ` Peter Zijlstra
@ 2024-10-10 18:23             ` Sean Christopherson
  0 siblings, 0 replies; 7+ messages in thread
From: Sean Christopherson @ 2024-10-10 18:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Marek Szyprowski, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, vschneid,
	linux-kernel, kprateek.nayak, wuyun.abel, youssefesmat, tglx,
	efault, kvm

On Thu, Oct 10, 2024, Peter Zijlstra wrote:
> On Thu, Oct 10, 2024 at 10:19:40AM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 09, 2024 at 07:49:54PM -0700, Sean Christopherson wrote:
> > 
> > > TL;DR: Code that checks task_struct.on_rq may be broken by this commit.
> > 
> > Correct, and while I did look at quite a few, I did miss KVM used it,
> > damn.
> > 
> > > Peter,
> > > 
> > > Any thoughts on how best to handle this?  The below hack-a-fix resolves the issue,
> > > but it's obviously not appropriate.  KVM uses vcpu->preempted for more than just
> > > posted interrupts, so KVM needs equivalent functionality to current->on-rq as it
> > > was before this commit.
> > > 
> > > @@ -6387,7 +6390,7 @@ static void kvm_sched_out(struct preempt_notifier *pn,
> > >  
> > >         WRITE_ONCE(vcpu->scheduled_out, true);
> > >  
> > > -       if (current->on_rq && vcpu->wants_to_run) {
> > > +       if (se_runnable(&current->se) && vcpu->wants_to_run) {
> > >                 WRITE_ONCE(vcpu->preempted, true);
> > >                 WRITE_ONCE(vcpu->ready, true);
> > >         }
> > 
> > se_runnable() isn't quite right, but yes, a helper along those lines is
> > probably best. Let me try and grep more to see if there's others I
> > missed as well :/
> 
> How's the below? I remember looking at the freezer thing before and
> deciding it isn't a correctness thing, but given I added the helper, I
> changed it anyway. I've added a bunch of comments and the perf thing is
> similar to KVM, it wants to know about preemptions so that had to change
> too.

Fixes KVM's woes!  Thanks!


end of thread, other threads:[~2024-10-10 18:23 UTC | newest]

Thread overview: 7+ messages
     [not found] <20240727102732.960974693@infradead.org>
     [not found] ` <20240727105030.226163742@infradead.org>
     [not found]   ` <CGME20240828223802eucas1p16755f4531ed0611dc4871649746ea774@eucas1p1.samsung.com>
     [not found]     ` <5618d029-769a-4690-a581-2df8939f26a9@samsung.com>
2024-10-10  2:49       ` [PATCH 17/24] sched/fair: Implement delayed dequeue Sean Christopherson
2024-10-10  7:57         ` Mike Galbraith
2024-10-10 16:18           ` Sean Christopherson
2024-10-10 17:12             ` Mike Galbraith
2024-10-10  8:19         ` Peter Zijlstra
2024-10-10  9:18           ` Peter Zijlstra
2024-10-10 18:23             ` Sean Christopherson
