From: Frederic Weisbecker <frederic@kernel.org>
To: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: David Woodhouse <dwmw2@infradead.org>,
paulmck@kernel.org, josh@joshtriplett.org, rostedt@goodmis.org,
mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com,
joel@joelfernandes.org, rcu@vger.kernel.org,
linux-kernel@vger.kernel.org, urezki@gmail.com,
boqun.feng@gmail.com
Subject: Re: [PATCH v2] rcu/nocb: Handle concurrent nocb kthreads creation
Date: Mon, 13 Dec 2021 12:22:46 +0100 [thread overview]
Message-ID: <20211213112246.GA782195@lothringen> (raw)
In-Reply-To: <601ecb12-ae2e-9608-7127-c2cddc8038a6@quicinc.com>
On Mon, Dec 13, 2021 at 02:25:30PM +0530, Neeraj Upadhyay wrote:
> Hi David,
>
> Thanks for the review; some replies inline.
>
> On 12/13/2021 1:48 PM, David Woodhouse wrote:
> > On Sat, 2021-12-11 at 22:31 +0530, Neeraj Upadhyay wrote:
> > > When multiple CPUs in the same nocb gp/cb group concurrently
> > > come online, they might try to concurrently create the same
> > > rcuog kthread. Fix this by using nocb gp CPU's spawn mutex to
> > > provide mutual exclusion for the rcuog kthread creation code.
> > >
> > > Signed-off-by: Neeraj Upadhyay <quic_neeraju@quicinc.com>
> > > ---
> > > Change in v2:
> > > Fix missing mutex_unlock in nocb gp kthread creation err path.
> >
> > I think this ends up being not strictly necessary in the short term too
> > because we aren't currently planning to run rcutree_prepare_cpu()
> > concurrently anyway. But harmless and worth fixing in the longer term.
> >
> > Although, if I've already added a mutex for adding the boost thread,
> > could we manage to use the *same* mutex instead of adding another one?
> >
>
> Let me think about it; the nocb-gp and nocb-cb kthreads are grouped based on
> rcu_nocb_gp_stride, whereas boost kthreads are per-rnp. So, I need to see
> how we can use a common mutex for both.
>
>
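For reference, the mutual exclusion being added can be sketched in
userspace terms (a minimal pthread sketch with illustrative names, not
the actual kernel code, which uses kthread_run() and the rdp_gp
structure):

```c
#include <pthread.h>

/* Hypothetical stand-ins for the shared per-group kthread state. */
static pthread_t group_thread;
static int group_thread_created;        /* guarded by group_mutex */
static pthread_mutex_t group_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *group_worker(void *arg)
{
	(void)arg;
	return NULL;
}

/*
 * Called concurrently as CPUs in the same group come online.
 * The mutex ensures only the first caller creates the thread;
 * later callers see group_thread_created set and do nothing.
 * Note the unlock is reached on the error path too, which is
 * the fix folded into v2 of the patch.
 */
static int spawn_group_thread(void)
{
	int ret = 0;

	pthread_mutex_lock(&group_mutex);
	if (!group_thread_created) {
		ret = pthread_create(&group_thread, NULL,
				     group_worker, NULL);
		if (!ret)
			group_thread_created = 1;
	}
	pthread_mutex_unlock(&group_mutex);
	return ret;
}
```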
> > Acked-by: David Woodhouse <dwmw@amazon.co.uk>
> >
> > > +		mutex_unlock(&rdp_gp->nocb_gp_kthread_mutex);
> > > return;
> > > + }
> > > WRITE_ONCE(rdp_gp->nocb_gp_kthread, t);
> > > }
> > > + mutex_unlock(&rdp_gp->nocb_gp_kthread_mutex);
> > >
> > > /* Spawn the kthread for this CPU. */
> >
> > Some whitespace damage there.
>
> Will fix in next version.
I was about to ack the patch, but should we really add code that isn't going
to be necessary for a long while?

Thanks!
>
> Thanks
> Neeraj
>
> >
Thread overview: 7+ messages
2021-12-11 17:01 [PATCH v2] rcu/nocb: Handle concurrent nocb kthreads creation Neeraj Upadhyay
2021-12-13 8:18 ` [EXTERNAL] " David Woodhouse
2021-12-13 8:55 ` Neeraj Upadhyay
2021-12-13 11:22 ` Frederic Weisbecker [this message]
2021-12-13 11:28 ` David Woodhouse
2021-12-13 13:14 ` Frederic Weisbecker
2021-12-13 19:00 ` Paul E. McKenney