From: Dave Chinner <david@fromorbit.com>
To: linux-xfs@vger.kernel.org
Subject: [PATCH 0/2 V2] xfs: non-blocking inodegc pushes
Date: Thu, 16 Jun 2022 08:04:14 +1000	[thread overview]
Message-ID: <20220615220416.3681870-1-david@fromorbit.com> (raw)

Hi folks,

These patches introduce non-blocking inodegc pushes to fix long
hold-offs in statfs() when background inodegc is performing
long-running inode inactivation work.

The first patch introduces a bounded maximum start time for queued
inodegc work. The bound is short - only 1 jiffy (10ms) - because we
don't want to delay inodegc for an arbitrarily long period of time.
This means background work always starts quickly, which reduces the
need for statfs() to wait for background inodegc to start and
complete before it can see space "freed" by recent unlinks.
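
Roughly, the queueing side looks like this (a simplified sketch, not
the exact diff - the per-cpu gc structure, the backlog check and the
constant name here are illustrative stand-ins):

#include <linux/workqueue.h>

/*
 * Sketch: queue inodegc work with a small, bounded start delay so it
 * always runs soon after being queued, instead of sitting idle until
 * something issues a flush.
 */
static void
inodegc_queue_bounded(
	struct xfs_mount	*mp,
	struct xfs_inodegc	*gc)
{
	unsigned long		queue_delay = 1;	/* 1 jiffy cap */

	/* Expedite the work if the queue is already backed up. */
	if (gc->items > INODEGC_MAX_BACKLOG)
		queue_delay = 0;

	/*
	 * mod_delayed_work() shortens the timer of an already pending
	 * work item, so an expedited queue attempt pulls previously
	 * delayed work forward rather than leaving it to time out.
	 */
	mod_delayed_work(mp->m_inodegc_wq, &gc->work, queue_delay);
}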

The second patch converts statfs to use a "push" rather than a
"flush". The push simply schedules any pending work that hasn't yet
timed out to run immediately and returns. It does not wait for the
inodegc work to complete - that's what a flush does, and that's what
caused all the problems for statfs(). Hence statfs() is converted to
push semantics at the same time, thereby removing the blocking
behaviour it currently has.
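
In code terms, the split looks something like this (again a sketch
based on the description above rather than a quote of the patch;
xfs_inodegc_queue_all() is the existing helper that kicks all pending
per-cpu work items):

void
xfs_inodegc_push(
	struct xfs_mount	*mp)
{
	if (!xfs_is_inodegc_enabled(mp))
		return;
	/* Schedule pending work to run now, but don't wait for it. */
	xfs_inodegc_queue_all(mp);
}

void
xfs_inodegc_flush(
	struct xfs_mount	*mp)
{
	/* A flush is a push plus waiting for the workqueue to drain. */
	xfs_inodegc_push(mp);
	flush_workqueue(mp->m_inodegc_wq);
}

statfs() (and, with V2, quota reporting) then calls
xfs_inodegc_push() instead of xfs_inodegc_flush(), so it never sleeps
behind long running inactivation work.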

This should prevent most of the issues we have been seeing with lots
of processes stuck in statfs() - that should no longer happen. The
only time user processes should get stuck now is when the inodegc
throttle kicks in (unlinks only at this point) or when we are waiting
for a lock held by a long running inodegc operation to be released.
We had those specific problems before background inodegc existed -
they manifested as unkillable unlink operations that had everything
backed up behind them, rather than background inodegc work having
everything backed up behind it.

This series has been running in my test environment for nearly a
month now without regressions. While inodegc flush related hold-offs
are likely still possible in certain circumstances, nothing appears
to be impacting the correctness of fstests or creating new issues.
The 0-day kernel test robot also indicates that certain benchmarks
(such as aim7 and stress-ng.rename) run significantly faster with
bounded maximum delays and non-blocking statfs() operations.

Comments, thoughts and testing appreciated.

-Dave.

Version 2:
- Also convert quota reporting inodegc flushes to a push.




Thread overview: 13+ messages
2022-06-15 22:04 Dave Chinner [this message]
2022-06-15 22:04 ` [PATCH 1/2] xfs: bound maximum wait time for inodegc work Dave Chinner
2022-06-17 16:34   ` Brian Foster
2022-06-17 21:52     ` Dave Chinner
2022-06-22  5:20       ` Darrick J. Wong
2022-06-22 16:13         ` Brian Foster
2022-06-23  0:25           ` Darrick J. Wong
2022-06-23 11:49             ` Brian Foster
2022-06-23 19:56               ` Darrick J. Wong
2022-06-24 12:39                 ` Brian Foster
2022-06-25  1:03                   ` Darrick J. Wong
2022-06-15 22:04 ` [PATCH 2/2] xfs: introduce xfs_inodegc_push() Dave Chinner
2022-06-22  5:21   ` Darrick J. Wong
