From: Padmanabha Srinivasaiah <treasure4paddy@gmail.com>
To: Tejun Heo <tj@kernel.org>
Cc: jiangshanlai@gmail.com, linux-kernel@vger.kernel.org,
	paulmck@kernel.org, linux-rpi-kernel@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] workqueue: Fix race in schedule and flush work
Date: Wed, 16 Feb 2022 19:49:39 +0100	[thread overview]
Message-ID: <20220216184939.GA3868@pswork> (raw)
In-Reply-To: <Ygqw+EHo//6VGs6q@slm.duckdns.org>

On Mon, Feb 14, 2022 at 09:43:52AM -1000, Tejun Heo wrote:
> Hello,
> 
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 33f1106b4f99..a3f53f859e9d 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -3326,28 +3326,38 @@ EXPORT_SYMBOL(cancel_delayed_work_sync);
> >   */
> >  int schedule_on_each_cpu(work_func_t func)
> >  {
> > -	int cpu;
> >  	struct work_struct __percpu *works;
> > +	cpumask_var_t sched_cpumask;
> > +	int cpu, ret = 0;
> >  
> > -	works = alloc_percpu(struct work_struct);
> > -	if (!works)
> > +	if (!alloc_cpumask_var(&sched_cpumask, GFP_KERNEL))
> >  		return -ENOMEM;
> >  
> > +	works = alloc_percpu(struct work_struct);
> > +	if (!works) {
> > +		ret = -ENOMEM;
> > +		goto free_cpumask;
> > +	}
> > +
> >  	cpus_read_lock();
> >  
> > -	for_each_online_cpu(cpu) {
> > +	cpumask_copy(sched_cpumask, cpu_online_mask);
> > +	for_each_cpu_and(cpu, sched_cpumask, cpu_online_mask) {
> 
> This definitely would need a comment explaining what's going on cuz it looks
> weird to be copying the cpumask which is supposed to stay stable due to the
> cpus_read_lock(). Given that it can only happen during early boot and the
> online cpus can only be expanding, maybe just add sth like:
> 
>         if (early_during_boot) {
>                 for_each_possible_cpu(cpu)
>                         INIT_WORK(per_cpu_ptr(works, cpu), func);
>         }
> 

Thanks, Tejun, for the reply and suggestions.

Yes, unfortunately cpus_read_lock() does not keep the cpumask stable
during secondary boot. I am not sure, but it may only guarantee that a
CPU does not go down under cpus_read_[lock/unlock].

As suggested, I will try out something like:
	if (system_state != SYSTEM_RUNNING) {
		:
	}
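
For completeness, a rough sketch of the direction I have in mind (the
system_state gate and the exact loop structure are my assumptions, not
a final patch): initialize the work of every possible CPU during early
boot, so that if the online mask grows between the scheduling loop and
the flush loop, flush_work() on a never-queued work is a harmless
no-op.

	int schedule_on_each_cpu(work_func_t func)
	{
		int cpu;
		struct work_struct __percpu *works;

		works = alloc_percpu(struct work_struct);
		if (!works)
			return -ENOMEM;

		cpus_read_lock();

		/*
		 * During early boot the online mask may still grow as
		 * secondary CPUs come up, so the flush loop below can see
		 * CPUs that the scheduling loop did not.  Initializing the
		 * work of every possible CPU keeps flush_work() on a
		 * never-queued work a harmless no-op.
		 */
		if (system_state != SYSTEM_RUNNING)
			for_each_possible_cpu(cpu)
				INIT_WORK(per_cpu_ptr(works, cpu), func);

		for_each_online_cpu(cpu) {
			struct work_struct *work = per_cpu_ptr(works, cpu);

			INIT_WORK(work, func);
			schedule_work_on(cpu, work);
		}

		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(works, cpu));

		cpus_read_unlock();
		free_percpu(works);
		return 0;
	}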
> BTW, who's calling schedule_on_each_cpu() that early during boot? It makes
> no sense to do this while the cpumasks can't be stabilized.
>
It is the implementation of CONFIG_TASKS_RUDE_RCU.
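
For context, the rude variant forces a context switch on every CPU by
scheduling a no-op work item on each of them, and it can run before all
secondary CPUs are online. Roughly (paraphrasing kernel/rcu/tasks.h of
that era from memory, so treat this as illustrative rather than exact):

	/* A no-op; merely running it on a CPU forces the context switch. */
	static void rcu_tasks_be_rude(struct work_struct *work)
	{
	}

	/* Wait for one rude RCU-tasks grace period. */
	static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
	{
		rtp->n_ipis += cpumask_weight(cpu_online_mask);
		schedule_on_each_cpu(rcu_tasks_be_rude);
	}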

> Thanks.
> 
> -- 
> tejun
