From: Dave Chinner <david@fromorbit.com>
To: linux-xfs@vger.kernel.org
Cc: chris@onthe.net.au
Subject: [RFC PATCH 0/2] xfs: non-blocking inodegc pushes
Date: Tue, 24 May 2022 16:38:00 +1000
Message-ID: <20220524063802.1938505-1-david@fromorbit.com>

Hi folks,

I've had time to forward port the non-blocking inodegc push changes
I had in a different line of development to the current for-next
tree. I've run it through the fstests auto group a couple of times
and it hasn't caused any space accounting related failures on the
machines I've run it on.

The first patch introduces a bound on the maximum time before queued
inodegc work starts - it's short, only 10ms (IIRC), but we don't
want to delay inodegc for an arbitrarily long period of time. This
means that work always starts quickly, which reduces the need for
statfs() to wait for background inodegc to start and complete to
catch space "freed" by recent unlinks.

The second patch converts statfs to use a "push" rather than a
"flush". The push simply schedules any pending work that hasn't yet
timed out to run immediately and returns. It does not wait for the
inodegc work to complete - that's what a flush does, and that's what
caused all the problems for statfs(). Hence statfs() is converted to
push semantics at the same time, thereby removing the blocking
behaviour it currently has.
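
In sketch form, the difference between the two looks like this (same
assumed structure and names as above; mod_delayed_work() and
flush_delayed_work() are the stock workqueue APIs):

	/*
	 * Push: expire any pending delay so queued work starts as
	 * soon as a worker thread is available, but don't wait for
	 * it to complete.
	 */
	static void inodegc_push(struct inodegc *gc)
	{
		mod_delayed_work(gc->wq, &gc->work, 0);
	}

	/*
	 * Flush: push, then block until the work has run to
	 * completion. This is the blocking behaviour that statfs()
	 * is being converted away from.
	 */
	static void inodegc_flush(struct inodegc *gc)
	{
		inodegc_push(gc);
		flush_delayed_work(&gc->work);
	}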

This should prevent most of the issues that Chris has been seeing
with lots of processes stuck in statfs() - that should no longer
happen. The only time user processes should get stuck now is when
the inodegc throttle kicks in (unlinks only at this point) or if we
are waiting for a long-running inodegc operation to release a lock.
We had those specific problems before background inodegc existed -
they manifested as unkillable unlink operations with everything
backed up behind them, rather than as background inodegc work with
everything backed up behind it.

Hence I think these patches largely restore the status quo that we
had before the background inodegc code was added.

Comments, thoughts and testing appreciated.

Cheers,

Dave.


Thread overview: 14+ messages
2022-05-24  6:38 Dave Chinner [this message]
2022-05-24  6:38 ` [PATCH 1/2] xfs: bound maximum wait time for inodegc work Dave Chinner
2022-05-24 16:54   ` Darrick J. Wong
2022-05-24 23:03     ` Dave Chinner
2022-05-26  9:05   ` [xfs] 55a3d6bbc5: aim7.jobs-per-min 19.8% improvement kernel test robot
2022-05-27  9:12   ` [xfs] 55a3d6bbc5: BUG:KASAN:use-after-free_in_xfs_attr3_node_inactive[xfs] kernel test robot
2022-05-24  6:38 ` [PATCH 2/2] xfs: introduce xfs_inodegc_push() Dave Chinner
2022-05-24 10:47   ` Amir Goldstein
2022-05-24 16:14     ` Darrick J. Wong
2022-05-24 18:05       ` Amir Goldstein
2022-05-24 23:17     ` Dave Chinner
2022-05-24 16:17   ` Darrick J. Wong
2022-05-24 23:07     ` Dave Chinner
2022-05-26  3:00   ` [xfs] 1e3a7e46a4: stress-ng.rename.ops_per_sec 248.5% improvement kernel test robot
