* [PATCH] sched/swait: Reduce lock contention in swake_up_all
@ 2026-05-05  9:04 lirongqing
  2026-05-05 16:05 ` K Prateek Nayak
  0 siblings, 1 reply; 2+ messages in thread
From: lirongqing @ 2026-05-05  9:04 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, linux-kernel
  Cc: Li RongQing

From: Li RongQing <lirongqing@baidu.com>

The entire task list has already been moved to a local list under the
lock, so it is unnecessary to hold the lock while waking the tasks.
This reduces lock operations from O(n) to O(1).

Move list_del_init() before wake_up_state() to prevent a potential
use-after-free in case the woken task exits immediately and its
memory is released.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 kernel/sched/swait.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
index 0fef649..ee4e658 100644
--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -66,19 +66,13 @@ void swake_up_all(struct swait_queue_head *q)
 
 	raw_spin_lock_irq(&q->lock);
 	list_splice_init(&q->task_list, &tmp);
+	raw_spin_unlock_irq(&q->lock);
 	while (!list_empty(&tmp)) {
 		curr = list_first_entry(&tmp, typeof(*curr), task_list);
 
-		wake_up_state(curr->task, TASK_NORMAL);
 		list_del_init(&curr->task_list);
-
-		if (list_empty(&tmp))
-			break;
-
-		raw_spin_unlock_irq(&q->lock);
-		raw_spin_lock_irq(&q->lock);
+		wake_up_state(curr->task, TASK_NORMAL);
 	}
-	raw_spin_unlock_irq(&q->lock);
 }
 EXPORT_SYMBOL(swake_up_all);
 
-- 
2.9.4



* Re: [PATCH] sched/swait: Reduce lock contention in swake_up_all
  2026-05-05  9:04 [PATCH] sched/swait: Reduce lock contention in swake_up_all lirongqing
@ 2026-05-05 16:05 ` K Prateek Nayak
  0 siblings, 0 replies; 2+ messages in thread
From: K Prateek Nayak @ 2026-05-05 16:05 UTC (permalink / raw)
  To: lirongqing, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, linux-kernel

Hello Li,

On 5/5/2026 2:34 PM, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
> 
> The entire task list has already been moved to a local list under the
> lock, so it is unnecessary to hold the lock while waking the tasks.
> This reduces lock operations from O(n) to O(1).
> 
> Move list_del_init() before wake_up_state() to prevent a potential
> use-after-free in case the woken task exits immediately and its
> memory is released.
> 
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  kernel/sched/swait.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
> index 0fef649..ee4e658 100644
> --- a/kernel/sched/swait.c
> +++ b/kernel/sched/swait.c
> @@ -66,19 +66,13 @@ void swake_up_all(struct swait_queue_head *q)
>  
>  	raw_spin_lock_irq(&q->lock);
>  	list_splice_init(&q->task_list, &tmp);
> +	raw_spin_unlock_irq(&q->lock);
>  	while (!list_empty(&tmp)) {
>  		curr = list_first_entry(&tmp, typeof(*curr), task_list);
>  
> -		wake_up_state(curr->task, TASK_NORMAL);
>  		list_del_init(&curr->task_list);
> -
> -		if (list_empty(&tmp))
> -			break;
> -
> -		raw_spin_unlock_irq(&q->lock);
> -		raw_spin_lock_irq(&q->lock);
> +		wake_up_state(curr->task, TASK_NORMAL);

So I'm not fully convinced this is safe. A quick scenario I can think
of is:


    CPU0: swake_up_all()                                                            CPU1: Signal task "curr"
    ====================                                                            ========================

swake_up_all(q)
  ...
  list_splice_init(&q->task_list, &tmp);
  raw_spin_unlock_irq(&q->lock);

  while (!list_empty(&tmp)) {
    curr = ...;                                                                 /* Task curr gets a signal */
    ======> Interrupted                                                           signal_wake_up(curr) /* same task as curr */
                                                                                <====== curr switches in
                                                                                finish_swait()
                                                                                  list_del_init(&curr->task_list)
                                                                                    __list_del(curr->task_list.prev, curr->task_list.next)
                                                                                      next->prev = prev;
                                                                                      prev->next = next;
                                                                                    INIT_LIST_HEAD(&curr->task_list)
                                                                                      WRITE_ONCE(curr->task_list.next, &curr->task_list);
                                                                                      ========> Interrupted

    /*
     * At this point curr->task_list, looks like:
     *
     *   curr->task_list.next = &curr->task_list
     *   curr->task_list.prev = &tmp
     */

    <===== Interrupt return
    list_del_init(&curr->task_list);
      __list_del(curr->task_list.prev, curr->task_list.next)
         next->prev = prev; /* Write &tmp back to curr->task_list.prev */
         prev->next = next; /* Writes tmp's next as curr's list head */
      INIT_LIST_HEAD(&curr->task_list)
         WRITE_ONCE(curr->task_list.next, &curr->task_list);
         WRITE_ONCE(curr->task_list.prev, &curr->task_list);


So at this point, your list looks like:

  tmp:              prev = /* tail of list */
                    next = &curr->task_list

  curr->task_list:  prev = &curr->task_list
                    next = &curr->task_list

  actual_next:      prev = &tmp
                    next = /* Next element */

  ...

which seems like a list corruption unless I'm missing something.
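
For reference, finish_swait() does its removal under q->lock; roughly,
from kernel/sched/swait.c:

void finish_swait(struct swait_queue_head *q, struct swait_queue *wait)
{
	__set_current_state(TASK_RUNNING);

	if (!list_empty_careful(&wait->task_list)) {
		unsigned long flags;

		raw_spin_lock_irqsave(&q->lock, flags);
		list_del_init(&wait->task_list);
		raw_spin_unlock_irqrestore(&q->lock, flags);
	}
}

That lock does not help here though, because with the patch applied the
swake_up_all() side touches the entry without taking q->lock at all.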

I think the wakeup can be done outside of "&q->lock", but the list
removal, even on the tmp list, has to be synchronized with &q->lock at
the very least. I also think there is some ordering required wrt the
wakeup and the list removal.
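
Something along these lines is roughly what I have in mind (completely
untested, just to illustrate the idea): keep the removal under q->lock,
pin the task, and only then do the wakeup outside the lock. The
get_task_struct() is there because once the entry is off the list, a
concurrent finish_swait() no longer serializes on q->lock, so the
waiter (and its on-stack swait_queue) can go away under us:

void swake_up_all(struct swait_queue_head *q)
{
	struct swait_queue *curr;
	struct task_struct *task;
	LIST_HEAD(tmp);

	raw_spin_lock_irq(&q->lock);
	list_splice_init(&q->task_list, &tmp);
	while (!list_empty(&tmp)) {
		curr = list_first_entry(&tmp, typeof(*curr), task_list);

		/* Remove under q->lock so finish_swait() cannot race with us */
		task = curr->task;
		get_task_struct(task);
		list_del_init(&curr->task_list);
		raw_spin_unlock_irq(&q->lock);

		/* Wake without the lock held to cut down the hold time */
		wake_up_state(task, TASK_NORMAL);
		put_task_struct(task);

		raw_spin_lock_irq(&q->lock);
	}
	raw_spin_unlock_irq(&q->lock);
}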

I'll let others comment on whether there are more subtleties involved
wrt the task wakeup itself - perhaps there are cases where the
task wakes up, decides to wait in exclusive mode for a
condition on another wake queue, and then:

- The new wake queue gets a swake_up_one() for its head.
- The previous swake_up_all() finishes and also wakes up this task.
- Both tasks see "condition" and begin running even though
  they opted for an exclusive wait, perhaps breaking some
  assumption.

>  	}
> -	raw_spin_unlock_irq(&q->lock);
>  }
>  EXPORT_SYMBOL(swake_up_all);
>  

-- 
Thanks and Regards,
Prateek


