linux-xfs.vger.kernel.org archive mirror
From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 1/2] xfs: flush inodegc workqueue tasks before cancel
Date: Thu, 13 Jan 2022 10:35:28 -0800	[thread overview]
Message-ID: <20220113183528.GE19198@magnolia> (raw)
In-Reply-To: <20220113133701.629593-2-bfoster@redhat.com>

On Thu, Jan 13, 2022 at 08:37:00AM -0500, Brian Foster wrote:
> The xfs_inodegc_stop() helper performs a high level flush of pending
> work on the percpu queues and then runs a cancel_work_sync() on each
> of the percpu work tasks to ensure all work has completed before
> returning.  While cancel_work_sync() waits for in-progress work
> items to complete, it does not guarantee that queued work items have
> ever started. This means that the _stop() helper can queue and then
> instantly cancel a work item without the associated work having
> completed. This can be observed by
> tracepoint inspection of a simple "rm -f <file>; fsfreeze -f <mnt>"
> test:
> 
> 	xfs_destroy_inode: ... ino 0x83 ...
> 	xfs_inode_set_need_inactive: ... ino 0x83 ...
> 	xfs_inodegc_stop: ...
> 	...
> 	xfs_inodegc_start: ...
> 	xfs_inodegc_worker: ...
> 	xfs_inode_inactivating: ... ino 0x83 ...
> 
> The first few lines show that the inode is removed and need inactive
> state set, but the inactivation work has not completed before the
> inodegc mechanism stops. The inactivation doesn't actually occur
> until the fs is unfrozen and the gc mechanism starts back up. Note
> that this test requires fsfreeze to reproduce because xfs_freeze
> indirectly invokes xfs_fs_statfs(), which calls xfs_inodegc_flush().
> 
> When this occurs, the workqueue try_to_grab_pending() logic first
> tries to steal the pending bit, which does not succeed because the
> bit has been set by queue_work_on(). Subsequently, it checks for
> association of a pool workqueue from the work item under the pool
> lock. This association is set at the point a work item is queued and
> cleared when dequeued for processing. If the association exists, the
> work item is removed from the queue and cancel_work_sync() returns
> true. If the pwq association is cleared, the remove attempt assumes
> the task is busy and retries (eventually returning false to the
> caller after waiting for the work task to complete).
> 
> To avoid this race, we could flush each work item explicitly before
> cancelling it. However, since _queue_all() already schedules each
> underlying work item, the workqueue-level helpers are sufficient to
> achieve the same ordering effect. E.g., the inodegc enabled flag
> prevents scheduling any further work in the _stop() case. Use the
> drain_workqueue() helper in this particular case to make the intent
> a bit more self-explanatory.

/me wonders why he didn't think of drain/flush_workqueue in the first
place.  Hm.  Well, the logic looks sound so,

Reviewed-by: Darrick J. Wong <djwong@kernel.org>

and I'll go give this a spin.

--D

> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
>  fs/xfs/xfs_icache.c | 22 ++++------------------
>  1 file changed, 4 insertions(+), 18 deletions(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index d019c98eb839..7a2a5e2be3cf 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1852,28 +1852,20 @@ xfs_inodegc_worker(
>  }
>  
>  /*
> - * Force all currently queued inode inactivation work to run immediately, and
> - * wait for the work to finish. Two pass - queue all the work first pass, wait
> - * for it in a second pass.
> + * Force all currently queued inode inactivation work to run immediately and
> + * wait for the work to finish.
>   */
>  void
>  xfs_inodegc_flush(
>  	struct xfs_mount	*mp)
>  {
> -	struct xfs_inodegc	*gc;
> -	int			cpu;
>  
>  	if (!xfs_is_inodegc_enabled(mp))
>  		return;
>  
>  	trace_xfs_inodegc_flush(mp, __return_address);
>  
>  	xfs_inodegc_queue_all(mp);
> -
> -	for_each_online_cpu(cpu) {
> -		gc = per_cpu_ptr(mp->m_inodegc, cpu);
> -		flush_work(&gc->work);
> -	}
> +	flush_workqueue(mp->m_inodegc_wq);
>  }
>  
>  /*
> @@ -1884,18 +1876,12 @@ void
>  xfs_inodegc_stop(
>  	struct xfs_mount	*mp)
>  {
> -	struct xfs_inodegc	*gc;
> -	int			cpu;
> -
>  	if (!xfs_clear_inodegc_enabled(mp))
>  		return;
>  
>  	xfs_inodegc_queue_all(mp);
> +	drain_workqueue(mp->m_inodegc_wq);
>  
> -	for_each_online_cpu(cpu) {
> -		gc = per_cpu_ptr(mp->m_inodegc, cpu);
> -		cancel_work_sync(&gc->work);
> -	}
>  	trace_xfs_inodegc_stop(mp, __return_address);
>  }
>  
> -- 
> 2.31.1
> 


Thread overview: 26+ messages
2022-01-13 13:36 [PATCH 0/2] xfs: a couple misc/small deferred inactivation tweaks Brian Foster
2022-01-13 13:37 ` [PATCH 1/2] xfs: flush inodegc workqueue tasks before cancel Brian Foster
2022-01-13 18:35   ` Darrick J. Wong [this message]
2022-01-13 22:19   ` Dave Chinner
2022-01-13 13:37 ` [PATCH 2/2] xfs: run blockgc on freeze to avoid iget stalls after reclaim Brian Foster
2022-01-13 17:13   ` Darrick J. Wong
2022-01-13 19:58     ` Brian Foster
2022-01-13 20:43       ` Darrick J. Wong
2022-01-13 21:01         ` Darrick J. Wong
2022-01-13 22:38   ` Dave Chinner
2022-01-14 17:35     ` Darrick J. Wong
2022-01-14 19:45       ` Brian Foster
2022-01-14 21:30         ` Darrick J. Wong
2022-01-15  4:09           ` Darrick J. Wong
2022-01-15 22:40           ` Dave Chinner
2022-01-17 13:37           ` Brian Foster
2022-01-18 18:56             ` Darrick J. Wong
2022-01-19 20:07               ` Brian Foster
2022-01-20  0:36                 ` Darrick J. Wong
2022-01-20  5:18                   ` Dave Chinner
2022-01-24 16:57                   ` Brian Foster
2022-02-02  2:22                     ` Dave Chinner
2022-02-10 19:03                       ` Brian Foster
2022-02-10 23:08                         ` Dave Chinner
2022-02-15  1:54                           ` Darrick J. Wong
2022-02-15  9:26                             ` Dave Chinner
