From: Jeff Layton <jlayton@kernel.org>
To: ceph-devel@vger.kernel.org, idryomov@gmail.com
Subject: Re: [PATCH] ceph: ensure we flush delayed caps when unmounting
Date: Thu, 03 Jun 2021 12:57:22 -0400
Message-ID: <6cd5b19cbcee46474709a97b273c4270088fb241.camel@kernel.org>
In-Reply-To: <20210603134812.80276-1-jlayton@kernel.org>

On Thu, 2021-06-03 at 09:48 -0400, Jeff Layton wrote:
> I've seen some warnings in recent testing indicating that there are
> caps still sitting on the delay list even after we've started
> unmounting.
> 
> When checking delayed caps, process the whole list if we're unmounting,
> and check for delayed caps after setting the stopping flag and
> flushing dirty caps.
> 
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
>  fs/ceph/caps.c       | 3 ++-
>  fs/ceph/mds_client.c | 1 +
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
> index a5e93b185515..68b4c6dfe4db 100644
> --- a/fs/ceph/caps.c
> +++ b/fs/ceph/caps.c
> @@ -4236,7 +4236,8 @@ void ceph_check_delayed_caps(struct ceph_mds_client *mdsc)
>  		ci = list_first_entry(&mdsc->cap_delay_list,
>  				      struct ceph_inode_info,
>  				      i_cap_delay_list);
> -		if ((ci->i_ceph_flags & CEPH_I_FLUSH) == 0 &&
> +		if (!mdsc->stopping &&
> +		    (ci->i_ceph_flags & CEPH_I_FLUSH) == 0 &&
>  		    time_before(jiffies, ci->i_hold_caps_max))
>  			break;
>  		list_del_init(&ci->i_cap_delay_list);
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index e5af591d3bd4..916af5497829 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -4691,6 +4691,7 @@ void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc)
>  
>  	lock_unlock_sessions(mdsc);
>  	ceph_flush_dirty_caps(mdsc);
> +	ceph_check_delayed_caps(mdsc);
>  	wait_requests(mdsc);
>  
>  	/*

I'm going to self-NAK this patch for now. It initially looked good in
testing, but I think it's just papering over the real problem: that
ceph_async_iput can queue a job to a workqueue after the point where
we've already flushed that workqueue on umount.
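
Concretely, the interleaving I'm worried about is something like the
following. The function and field names match fs/ceph, but the exact
call sites here are illustrative, not a verified trace:

    umount task                          another task
    -----------                          ------------
    ceph_mdsc_pre_umount(mdsc)
      ceph_flush_dirty_caps(mdsc)
      ...
    flush_workqueue(fsc->inode_wq)
                                         ceph_async_iput(inode)
                                           queue_work(fsc->inode_wq,
                                                      &ci->i_work)
    /* teardown continues with an iput
       still pending on the workqueue */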

I think the right approach is to ensure that calling iput doesn't end
up taking these coarse-grained locks, so that we don't need to defer it
to a workqueue in so many codepaths.
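
For reference, ceph_async_iput currently looks roughly like this
(paraphrased, so treat the details as approximate):

    void ceph_async_iput(struct inode *inode)
    {
            if (!inode)
                    return;

            for (;;) {
                    /* drop a reference, unless it's the last one */
                    if (atomic_add_unless(&inode->i_count, -1, 1))
                            break;
                    /* last ref: defer the final iput to the inode wq */
                    if (queue_work(ceph_inode_to_client(inode)->inode_wq,
                                   &ceph_inode(inode)->i_work))
                            break;
                    /* queue_work failed, i_count must be 0; retry */
            }
    }

If the final iput could be made safe to call inline, most of the
callers that defer to the workqueue this way could just call iput
directly.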
-- 
Jeff Layton <jlayton@kernel.org>


