public inbox for linux-kernel@vger.kernel.org
From: Frederic Weisbecker <frederic@kernel.org>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>,
	Boqun Feng <boqun.feng@gmail.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Neeraj Upadhyay <neeraj.upadhyay@amd.com>,
	"Paul E . McKenney" <paulmck@kernel.org>,
	Uladzislau Rezki <urezki@gmail.com>, rcu <rcu@vger.kernel.org>
Subject: [PATCH 0/7] rcu: Fix expedited GP deadlock
Date: Fri, 12 Jan 2024 16:46:14 +0100	[thread overview]
Message-ID: <20240112154621.261852-1-frederic@kernel.org> (raw)

TREE04 can trigger a writer stall when run under memory pressure. This
is due to a circular dependency between waiting for an expedited grace
period and polling on an expedited grace period while workqueues fall
back to mayday serialization.

Here is a proposed fix.

Changes since v2 (no functional changes, just renames and reorganization):

_ Move nocb cleanups to their own series
_ Rename can_queue parameter to use_worker [2/7]
_ Better explain the rename of the mutex [3/7]
_ New commit with just code move to ease review [4/7]
_ Comment declaration of the new rnp->exp_worker field [5/7]

Thanks.

Frederic Weisbecker (7):
  rcu/exp: Fix RCU expedited parallel grace period kworker allocation
    failure recovery
  rcu/exp: Handle RCU expedited grace period kworker allocation failure
  rcu: s/boost_kthread_mutex/kthread_mutex
  rcu/exp: Move expedited kthread worker creation functions above
    rcutree_prepare_cpu()
  rcu/exp: Make parallel exp gp kworker per rcu node
  rcu/exp: Handle parallel exp gp kworkers affinity
  rcu/exp: Remove rcu_par_gp_wq

 kernel/rcu/rcu.h         |   5 --
 kernel/rcu/tree.c        | 175 +++++++++++++++++++++++++++------------
 kernel/rcu/tree.h        |  11 ++-
 kernel/rcu/tree_exp.h    |  78 +++--------------
 kernel/rcu/tree_plugin.h |  52 ++----------
 5 files changed, 142 insertions(+), 179 deletions(-)

-- 
2.34.1



Thread overview: 9+ messages
2024-01-12 15:46 Frederic Weisbecker [this message]
2024-01-12 15:46 ` [PATCH 1/7] rcu/exp: Fix RCU expedited parallel grace period kworker allocation failure recovery Frederic Weisbecker
2024-01-12 15:46 ` [PATCH 2/7] rcu/exp: Handle RCU expedited grace period kworker allocation failure Frederic Weisbecker
2024-01-12 15:46 ` [PATCH 3/7] rcu: s/boost_kthread_mutex/kthread_mutex Frederic Weisbecker
2024-01-12 15:46 ` [PATCH 4/7] rcu/exp: Move expedited kthread worker creation functions above rcutree_prepare_cpu() Frederic Weisbecker
2024-01-12 15:46 ` [PATCH 5/7] rcu/exp: Make parallel exp gp kworker per rcu node Frederic Weisbecker
2024-01-12 15:46 ` [PATCH 6/7] rcu/exp: Handle parallel exp gp kworkers affinity Frederic Weisbecker
2024-01-12 15:46 ` [PATCH 7/7] rcu/exp: Remove rcu_par_gp_wq Frederic Weisbecker
2024-01-16 19:11 ` [PATCH 0/7] rcu: Fix expedited GP deadlock Paul E. McKenney

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the thread as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20240112154621.261852-1-frederic@kernel.org \
    --to=frederic@kernel.org \
    --cc=boqun.feng@gmail.com \
    --cc=joel@joelfernandes.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=neeraj.upadhyay@amd.com \
    --cc=paulmck@kernel.org \
    --cc=rcu@vger.kernel.org \
    --cc=urezki@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line
before the message body.