From: "Darrick J. Wong" <djwong@kernel.org>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH v2.1 1/3] xfs: increase the default parallelism levels of pwork clients
Date: Tue, 26 Jan 2021 15:32:12 -0800
Message-ID: <20210126233212.GC7698@magnolia>
In-Reply-To: <20210126204612.GJ4662@dread.disaster.area>
On Wed, Jan 27, 2021 at 07:46:12AM +1100, Dave Chinner wrote:
> On Mon, Jan 25, 2021 at 09:04:52PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@kernel.org>
> >
> > Increase the parallelism level for pwork clients to the workqueue
> > defaults so that we can take advantage of computers with a lot of CPUs
> > and a lot of hardware. On fast systems this will speed up quotacheck by
> > a large factor, and the following posteof/cowblocks cleanup series will
> > use the functionality presented in this patch to run garbage collection
> > as quickly as possible.
> >
> > We do this by switching the pwork workqueue to unbounded, since the
> > current user (quotacheck) runs lengthy scans for each work item and we
> > don't care about dispatching the work on a warm cpu cache or anything
> > like that. Also set WQ_SYSFS so that we can monitor where the wq is
> > running.
> >
> > Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> > ---
> > v2.1: document the workqueue knobs, kill the nr_threads argument to
> > pwork, and convert it to unbounded all in one patch
> > ---
> > Documentation/admin-guide/xfs.rst | 33 +++++++++++++++++++++++++++++++++
> > fs/xfs/xfs_iwalk.c | 5 +----
> > fs/xfs/xfs_pwork.c | 25 +++++--------------------
> > fs/xfs/xfs_pwork.h | 4 +---
> > 4 files changed, 40 insertions(+), 27 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/xfs.rst b/Documentation/admin-guide/xfs.rst
> > index 86de8a1ad91c..5fd14556c6fe 100644
> > --- a/Documentation/admin-guide/xfs.rst
> > +++ b/Documentation/admin-guide/xfs.rst
> > @@ -495,3 +495,36 @@ the class and error context. For example, the default values for
> > "metadata/ENODEV" are "0" rather than "-1" so that this error handler defaults
> > to "fail immediately" behaviour. This is done because ENODEV is a fatal,
> > unrecoverable error no matter how many times the metadata IO is retried.
> > +
> > +Workqueue Concurrency
> > +=====================
> > +
> > +XFS uses kernel workqueues to parallelize metadata update processes. This
> > +enables it to take advantage of storage hardware that can service many IO
> > +operations simultaneously.
> > +
> > +The control knobs for a filesystem's workqueues are organized by task at hand
> > +and the short name of the data device. They can all be found in:
> > +
> > + /sys/bus/workqueue/devices/${task}!${device}
> > +
> > +================ ===========
> > + Task Description
> > +================ ===========
> > + xfs_iwalk-$pid Inode scans of the entire filesystem. Currently limited to
> > + mount time quotacheck.
> > +================ ===========
> > +
> > +For example, the knobs for the quotacheck workqueue for /dev/nvme0n1 would be
> > +found in /sys/bus/workqueue/devices/xfs_iwalk-1111!nvme0n1/.
> > +
> > +The interesting knobs for XFS workqueues are as follows:
> > +
> > +============ ===========
> > + Knob Description
> > +============ ===========
> > + max_active Maximum number of background threads that can be started to
> > + run the work.
> > + cpumask CPUs upon which the threads are allowed to run.
> > + nice Relative priority of scheduling the threads. These are the
> > + same nice levels that can be applied to userspace processes.
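
As a worked example (the pid and device name are hypothetical, echoing
the path shown above), tuning the quotacheck workqueue might look like
this; the knob formats are the generic workqueue sysfs ones, i.e. an
integer thread cap, a hex CPU mask, and a nice level:

    cd /sys/bus/workqueue/devices/xfs_iwalk-1111!nvme0n1
    echo 8 > max_active   # allow at most 8 concurrent scan workers
    echo f > cpumask      # confine the workers to CPUs 0-3
    echo 5 > nice         # deprioritize them relative to other work
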
>
> I'd suggest that a comment be added here along the lines of:
>
> This interface exposes an internal implementation detail of XFS, and
> as such is explicitly not part of any userspace API/ABI guarantee
> the kernel may give userspace. These are undocumented features of
> the generic workqueue implementation XFS uses for concurrency, and
> they are provided here purely for diagnostic and tuning purposes and
> may change at any time in the future.
>
> Otherwise looks ok.
Done, thanks for taking a look at this.
--D
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com