public inbox for linux-kernel@vger.kernel.org
From: Rik van Riel <riel@surriel.com>
To: Tejun Heo <tj@kernel.org>, Jens Axboe <axboe@kernel.dk>
Cc: linux-kernel@vger.kernel.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Jan Kara <jack@suse.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	kernel-team@fb.com
Subject: Re: [PATCH] bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue
Date: Wed, 23 May 2018 18:03:15 -0400	[thread overview]
Message-ID: <1527112995.7898.31.camel@surriel.com> (raw)
In-Reply-To: <20180523175632.GO1718769@devbig577.frc2.facebook.com>


On Wed, 2018-05-23 at 10:56 -0700, Tejun Heo wrote:

> The events leading to the lockup are...
> 
> 1. A lot of cgwb_release_workfn() is queued at the same time and all
>    system_wq kworkers are assigned to execute them.
> 
> 2. They all end up calling synchronize_rcu_expedited().  One of them
>    wins and tries to perform the expedited synchronization.
> 
> 3. However, that involves queueing rcu_exp_work to system_wq and
>    waiting for it.  Because #1 is holding all available kworkers on
>    system_wq, rcu_exp_work can't be executed.  cgwb_release_workfn()
>    is waiting for synchronize_rcu_expedited() which in turn is
>    waiting for cgwb_release_workfn() to free up some of the kworkers.
> 
> We shouldn't be scheduling hundreds of cgwb_release_workfn() at the
> same time.  There's nothing to be gained from that.  This patch
> updates cgwb release path to use a dedicated percpu workqueue with
> @max_active of 1.

Dumb question.  Does setting max_active to 1 mean
that every cgwb_release_workfn() ends up forcing
another RCU grace period on the whole system, while
today you might have a bunch of them waiting on the
same RCU grace period advance?

Would it be faster to have some number of them
(up to 16?) push RCU once, at the same time,
instead of having each of them push RCU into the
next grace period one after another?

I may be overlooking something fundamental here,
but I thought I'd at least ask the question, just
in case :)

-- 
All Rights Reversed.



Thread overview: 10+ messages
2018-05-23 17:56 [PATCH] bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue Tejun Heo
2018-05-23 18:39 ` Paul E. McKenney
2018-05-23 18:51   ` Tejun Heo
2018-05-23 19:10     ` Paul E. McKenney
2018-05-23 21:29 ` Jens Axboe
2018-05-23 22:03 ` Rik van Riel [this message]
2018-05-23 23:17   ` Tejun Heo
2018-05-23 23:25 ` [PATCH] bdi: Increase the concurrecy level of cgwb_release_wq Tejun Heo
2018-05-24 10:19 ` [PATCH] bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue Jan Kara
2018-05-24 14:00   ` Tejun Heo
