Linux-mediatek Archive on lore.kernel.org
From: Kuyo Chang <kuyo.chang@mediatek.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	"Ben Segall" <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	"Valentin Schneider" <vschneid@redhat.com>,
	Matthias Brugger <matthias.bgg@gmail.com>,
	AngeloGioacchino Del Regno
	<angelogioacchino.delregno@collabora.com>,
	<linux-kernel@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>,
	<linux-mediatek@lists.infradead.org>
Subject: Re: [PATCH 1/1] sched/core: Fix migrate_swap() vs. hotplug
Date: Fri, 6 Jun 2025 11:46:57 +0800	[thread overview]
Message-ID: <8e1018116ad7c5c325eced2cb17b65c73ca2ceca.camel@mediatek.com> (raw)
In-Reply-To: <20250605100009.GO39944@noisy.programming.kicks-ass.net>

On Thu, 2025-06-05 at 12:00 +0200, Peter Zijlstra wrote:
> 
> On Mon, Jun 02, 2025 at 03:22:13PM +0800, Kuyo Chang wrote:
> 
> How easy can you reproduce this?
> 

The issue is very hard to reproduce; it shows up roughly once every
1~2 weeks.
I think it can only occur when all of the races below happen together:
1. stop_two_cpus() vs. hotplug
2. cpu1 schedule()
3. ttwu queue IPI latency

So my initial intention was to fix this by adding
cpus_read_lock()/cpus_read_unlock() around stop_two_cpus() (race 1).
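For reference, that first idea would look roughly like the sketch below
(a hypothetical, untested change around the stop_two_cpus() call in
migrate_swap(); the intent is to pin the hotplug state so neither CPU
can run sched_cpu_deactivate()/balance_push_set() while the stopper
works are being queued):

	/*
	 * Hold the hotplug read lock across the two-CPU stop so the
	 * race with balance_push() cannot start in the first place.
	 */
	cpus_read_lock();
	ret = stop_two_cpus(arg.dst_cpu, arg.src_cpu,
			    migrate_swap_stop, &arg);
	cpus_read_unlock();

The downside is that this serializes migrate_swap() against hotplug,
whereas your patch removes the wake_q usage from the stopper path
entirely.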

> > So, the potential race scenario is:
> > 
> >     CPU0                                    CPU1
> >     // doing migrate_swap(cpu0/cpu1)
> >     stop_two_cpus()
> >                                             ...
> >                                             // doing _cpu_down()
> >                                             sched_cpu_deactivate()
> >                                               set_cpu_active(cpu, false);
> >                                               balance_push_set(cpu, true);
> >     cpu_stop_queue_two_works
> >       __cpu_stop_queue_work(stopper1,...);
> >       __cpu_stop_queue_work(stopper2,..);
> >       stop_cpus_in_progress -> true
> >       preempt_enable();
> >                                             ...
> >                                             1st balance_push
> >                                               stop_one_cpu_nowait
> >                                                 cpu_stop_queue_work
> >                                                   __cpu_stop_queue_work
> >                                                     list_add_tail  -> 1st add push_work
> >                                                   wake_up_q(&wakeq);  -> wakeq is empty.
> >                                                     This implies that the stopper is
> >                                                     at wakeq@migrate_swap.
> >     preempt_disable
> >     wake_up_q(&wakeq);
> >       wake_up_process // wakeup migrate/0
> >         try_to_wake_up
> >           ttwu_queue
> >             ttwu_queue_cond -> meets the case below:
> >               if (cpu == smp_processor_id())
> >                   return false;
> >             ttwu_do_activate
> >             // migrate/0 wakeup done
> >       wake_up_process // wakeup migrate/1
> >         try_to_wake_up
> >           ttwu_queue
> >             ttwu_queue_cond
> >             ttwu_queue_wakelist
> >               __ttwu_queue_wakelist
> >                 __smp_call_single_queue
> >     preempt_enable();
> > 
> >                                             2nd balance_push
> >                                               stop_one_cpu_nowait
> >                                                 cpu_stop_queue_work
> >                                                   __cpu_stop_queue_work
> >                                                     list_add_tail  -> 2nd add push_work,
> >                                                       so the double list add is detected
> >                                             ...
> >                                             ...
> >                                             cpu1 gets the IPI, runs sched_ttwu_pending,
> >                                             wakes up migrate/1
> > 
> 
> So this balance_push() is part of schedule(), and schedule() is
> supposed
> to switch to stopper task, but because of this race condition,
> stopper
> task is stuck in WAKING state and not actually visible to be picked.
> 
> Therefore CPU1 can do another schedule() and end up doing another
> balance_push() even though the last one hasn't been done yet.
> 
> So how about we do something like this? Does this help?
> 

Thank you for your patch.
I believe this patch also effectively addresses this race condition.
I will queue it in our test pool for testing.

> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 62b3416f5e43..c37b80bd53e6 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3939,6 +3939,11 @@ static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
>         if (!scx_allow_ttwu_queue(p))
>                 return false;
> 
> +#ifdef CONFIG_SMP
> +       if (p->sched_class == &stop_sched_class)
> +               return false;
> +#endif
> +
>         /*
>          * Do not complicate things with the async wake_list while the CPU is
>          * in hotplug state.
> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
> index 5d2d0562115b..8855a50cc216 100644
> --- a/kernel/stop_machine.c
> +++ b/kernel/stop_machine.c
> @@ -82,18 +82,15 @@ static void cpu_stop_signal_done(struct cpu_stop_done *done)
>  }
> 
>  static void __cpu_stop_queue_work(struct cpu_stopper *stopper,
> -                                       struct cpu_stop_work *work,
> -                                       struct wake_q_head *wakeq)
> +                                 struct cpu_stop_work *work)
>  {
>         list_add_tail(&work->list, &stopper->works);
> -       wake_q_add(wakeq, stopper->thread);
>  }
> 
>  /* queue @work to @stopper.  if offline, @work is completed immediately */
>  static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
>  {
>         struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
> -       DEFINE_WAKE_Q(wakeq);
>         unsigned long flags;
>         bool enabled;
> 
> @@ -101,12 +98,12 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
>         raw_spin_lock_irqsave(&stopper->lock, flags);
>         enabled = stopper->enabled;
>         if (enabled)
> -               __cpu_stop_queue_work(stopper, work, &wakeq);
> +               __cpu_stop_queue_work(stopper, work);
>         else if (work->done)
>                 cpu_stop_signal_done(work->done);
>         raw_spin_unlock_irqrestore(&stopper->lock, flags);
> 
> -       wake_up_q(&wakeq);
> +       wake_up_process(stopper->thread);

BTW, should we add an enabled check here?
	if (enabled)
		wake_up_process(stopper->thread);

>         preempt_enable();
> 
>         return enabled;
> @@ -264,7 +261,6 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
>  {
>         struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1);
>         struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
> -       DEFINE_WAKE_Q(wakeq);
>         int err;
> 
>  retry:
> @@ -300,8 +296,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
>         }
> 
>         err = 0;
> -       __cpu_stop_queue_work(stopper1, work1, &wakeq);
> -       __cpu_stop_queue_work(stopper2, work2, &wakeq);
> +       __cpu_stop_queue_work(stopper1, work1);
> +       __cpu_stop_queue_work(stopper2, work2);
> 
>  unlock:
>         raw_spin_unlock(&stopper2->lock);
> @@ -316,7 +312,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
>                 goto retry;
>         }
> 
> -       wake_up_q(&wakeq);
> +       wake_up_process(stopper1->thread);
> +       wake_up_process(stopper2->thread);
>         preempt_enable();
> 
>         return err;


  reply	other threads:[~2025-06-06  3:47 UTC|newest]

Thread overview: 7+ messages
2025-06-02  7:22 [PATCH 1/1] sched/core: Fix migrate_swap() vs. hotplug Kuyo Chang
2025-06-05 10:00 ` Peter Zijlstra
2025-06-06  3:46   ` Kuyo Chang [this message]
2025-06-06  8:28     ` Peter Zijlstra
2025-06-13  7:47       ` Kuyo Chang
2025-06-26 12:53         ` Peter Zijlstra
2025-06-26  2:23       ` [SPAM]Re: " Kuyo Chang
