* [RFC PATCH 0/2] xfs: non-blocking inodegc pushes
@ 2022-05-24  6:38 Dave Chinner
  2022-05-24  6:38 ` [PATCH 1/2] xfs: bound maximum wait time for inodegc work Dave Chinner
  2022-05-24  6:38 ` [PATCH 2/2] xfs: introduce xfs_inodegc_push() Dave Chinner
From: Dave Chinner @ 2022-05-24  6:38 UTC (permalink / raw)
  To: linux-xfs; +Cc: chris

Hi folks,

I've had time to forward port the non-blocking inodegc push changes
I had in a different line of development (LOD) to the current
for-next tree. I've run it through the fstests auto group a couple
of times and it hasn't caused any space accounting related failures
on the machines I've run it on.

The first patch introduces a bounded maximum work start time for the
inodegc queues - it's short, only 10ms (IIRC), because we don't want
to delay inodegc for an arbitrarily long period of time. It means
that work always starts quickly, which reduces the need for statfs()
to wait for background inodegc to start and complete in order to
catch space "freed" by recent unlinks.

The second patch converts statfs to use a "push" rather than a
"flush". The push simply schedules any pending work that hasn't yet
timed out to run immediately and returns. It does not wait for the
inodegc work to complete - that's what a flush does, and that's what
caused all the problems for statfs(). Hence statfs() is converted to
push semantics at the same time, thereby removing the blocking
behaviour it currently has.
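
Again purely as an illustration of those semantics (not the actual
xfs_inodegc_push()/xfs_inodegc_flush() code), a push expedites any
pending delayed work without waiting, while a flush additionally
blocks on the workqueue; the function and parameter names here are
assumptions:

	/*
	 * Sketch of push vs flush semantics - illustrative only. The
	 * dedicated inodegc workqueue passed in is an assumption for
	 * this example.
	 */
	static void
	example_inodegc_push(
		struct workqueue_struct	*inodegc_wq,
		struct delayed_work	*gc_work)
	{
		/* Expire the delay on any pending work so it runs now... */
		if (delayed_work_pending(gc_work))
			mod_delayed_work(inodegc_wq, gc_work, 0);
		/* ...but return without waiting for it to complete. */
	}

	static void
	example_inodegc_flush(
		struct workqueue_struct	*inodegc_wq,
		struct delayed_work	*gc_work)
	{
		example_inodegc_push(inodegc_wq, gc_work);

		/* A flush additionally blocks until all queued work has run. */
		flush_workqueue(inodegc_wq);
	}

With push semantics, statfs() only pays the cost of kicking the
timer; the bounded start delay from the first patch keeps the window
between "unlink returned" and "space visible in statfs()" small.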

This should prevent most of the issues that Chris has been seeing
with lots of processes stuck in statfs() - that will no longer
happen. The only time user processes should get stuck now is when
the inodegc throttle kicks in (on unlinks only, at this point) or
when we are waiting for a long running inodegc operation to release
a lock. We had those specific problems before background inodegc
existed - they manifested as unkillable unlink operations with
everything backed up behind them, rather than background inodegc
work with everything backed up behind it.

Hence I think these patches largely restore the status quo that we
had before the background inodegc code was added.

Comments, thoughts and testing appreciated.

Cheers,

Dave.



Thread overview: 14+ messages
2022-05-24  6:38 [RFC PATCH 0/2] xfs: non-blocking inodegc pushes Dave Chinner
2022-05-24  6:38 ` [PATCH 1/2] xfs: bound maximum wait time for inodegc work Dave Chinner
2022-05-24 16:54   ` Darrick J. Wong
2022-05-24 23:03     ` Dave Chinner
2022-05-26  9:05   ` [xfs] 55a3d6bbc5: aim7.jobs-per-min 19.8% improvement kernel test robot
2022-05-27  9:12   ` [xfs] 55a3d6bbc5: BUG:KASAN:use-after-free_in_xfs_attr3_node_inactive[xfs] kernel test robot
2022-05-24  6:38 ` [PATCH 2/2] xfs: introduce xfs_inodegc_push() Dave Chinner
2022-05-24 10:47   ` Amir Goldstein
2022-05-24 16:14     ` Darrick J. Wong
2022-05-24 18:05       ` Amir Goldstein
2022-05-24 23:17     ` Dave Chinner
2022-05-24 16:17   ` Darrick J. Wong
2022-05-24 23:07     ` Dave Chinner
2022-05-26  3:00   ` [xfs] 1e3a7e46a4: stress-ng.rename.ops_per_sec 248.5% improvement kernel test robot
