From: Trond Myklebust <trondmy@hammerspace.com>
To: "bcodding@redhat.com" <bcodding@redhat.com>
Cc: "linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
	"anna.schumaker@netapp.com" <anna.schumaker@netapp.com>
Subject: Re: [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence
Date: Wed, 23 Sep 2020 19:39:48 +0000
Message-ID: <3053ab24162fbb9431f4918373aaa919c3b55a34.camel@hammerspace.com>
In-Reply-To: <24324678-510B-4524-8C17-EB7365C2A3CD@redhat.com>

On Wed, 2020-09-23 at 15:29 -0400, Benjamin Coddington wrote:
> On 23 Sep 2020, at 14:53, Trond Myklebust wrote:
> 
> > On Wed, 2020-09-23 at 13:37 -0400, Benjamin Coddington wrote:
> > > Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
> > > CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a CLOSE
> > > races with the update of the nfs_state:
> > > 
> > > Process 1           Process 2           Server
> > > =========           =========           ========
> > >  OPEN file
> > >                     OPEN file
> > >                                         Reply OPEN (1)
> > >                                         Reply OPEN (2)
> > >  Update state (1)
> > >  CLOSE file (1)
> > >                                         Reply OLD_STATEID (1)
> > >  CLOSE file (2)
> > >                                         Reply CLOSE (-1)
> > >                     Update state (2)
> > >                     wait for state change
> > >  OPEN file
> > >                     wake
> > >  CLOSE file
> > >  OPEN file
> > >                     wake
> > >  CLOSE file
> > >  ...
> > >                     ...
> > > 
> > > As long as the first process continues updating state, the second
> > > process will fail to exit the loop in nfs_set_open_stateid_locked().
> > > This livelock has been observed in generic/168.
> > > 
> > > Fix this by detecting the case in nfs_need_update_open_stateid() and
> > > then exiting the loop if:
> > >  - the state is NFS_OPEN_STATE, and
> > >  - the stateid sequence is > 1, and
> > >  - the stateid doesn't match the current open stateid
> > > 
> > > Fixes: 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in CLOSE/OPEN_DOWNGRADE")
> > > Cc: stable@vger.kernel.org # v5.4+
> > > Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
> > > ---
> > >  fs/nfs/nfs4proc.c | 16 +++++++++-------
> > >  1 file changed, 9 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> > > index 6e95c85fe395..8c2bb91127ee 100644
> > > --- a/fs/nfs/nfs4proc.c
> > > +++ b/fs/nfs/nfs4proc.c
> > > @@ -1588,19 +1588,21 @@ static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state)
> > >  static bool nfs_need_update_open_stateid(struct nfs4_state *state,
> > >  		const nfs4_stateid *stateid)
> > >  {
> > > -	if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 ||
> > > -	    !nfs4_stateid_match_other(stateid, &state->open_stateid)) {
> > > +	if (test_bit(NFS_OPEN_STATE, &state->flags)) {
> > > +		/* The common case - we're updating to a new sequence number */
> > > +		if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
> > > +			nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
> > > +			nfs_state_log_out_of_order_open_stateid(state, stateid);
> > > +			return true;
> > > +		}
> > > +	} else {
> > > +		/* This is the first OPEN */
> > >  		if (stateid->seqid == cpu_to_be32(1))
> > >  			nfs_state_log_update_open_stateid(state);
> > >  		else
> > >  			set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
> > 
> > Isn't this going to cause a reopen of the file on the client if it
> > ends up processing the reply to the second OPEN after it processes the
> > successful CLOSE?
> 
> Yes, that's true - but that's a different bug that I haven't noticed or
> considered.  This patch isn't introducing it.
> 
> > Isn't the real problem here rather that the reply to CLOSE needs to be
> > processed in order too?
> 
> Not just the reply, the actual request as well.  If we have a way to
> properly serialize processing of CLOSE responses, we could just not
> send the CLOSE in the first place.
> 
> I'd rather not send the CLOSE if there's another OPEN in play, and if
> that's the barrier to getting this particular bug fixed, I'll work on
> that.  What mechanism can be used?  What if the client kept a separate
> "pending" stateid that could be updated before each operation that would
> attempt to predict what the server's resulting change would be?
> 
> Maybe better would be a counter that gets incremented for each
> transition to/from NFS_OPEN_STATE, so we can check if the stateid is in
> the same generation, and a counter for outstanding operations that are
> expected to bump the sequence.
> 

The client can't predict what is going to happen w.r.t. an OPEN call.
If it does an open by name, it doesn't even know which file is going to
get opened. That's why we have the wait loop
in nfs_set_open_stateid_locked(). Why should we not do the same in
CLOSE and OPEN_DOWNGRADE? It's the same problem.
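
For the sake of discussion, roughly the shape that would take (an
untested sketch, not what is in the tree today; the function name, the
use of state->waitq, and the "seqid + 1" sequential check are only
illustrative):

	/*
	 * Sketch: apply a CLOSE/OPEN_DOWNGRADE reply only once its stateid
	 * is sequential with state->open_stateid, mirroring the wait loop
	 * in nfs_set_open_stateid_locked().
	 */
	static void nfs_clear_open_stateid_ordered(struct nfs4_state *state,
			const nfs4_stateid *stateid)
	{
		DEFINE_WAIT(wait);

		spin_lock(&state->owner->so_lock);
		while (test_bit(NFS_OPEN_STATE, &state->flags) &&
		       nfs4_stateid_match_other(stateid, &state->open_stateid) &&
		       be32_to_cpu(stateid->seqid) !=
				be32_to_cpu(state->open_stateid.seqid) + 1) {
			/* An earlier OPEN reply has not been applied yet;
			 * wait for it instead of racing past it. */
			prepare_to_wait(&state->waitq, &wait, TASK_KILLABLE);
			spin_unlock(&state->owner->so_lock);
			schedule();
			finish_wait(&state->waitq, &wait);
			spin_lock(&state->owner->so_lock);
			if (fatal_signal_pending(current))
				break;
		}
		/* ...then clear/update the open stateid exactly as today. */
		spin_unlock(&state->owner->so_lock);
	}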

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



Thread overview: 8+ messages
2020-09-23 17:37 [PATCH 0/2 v3] a stateid race and a cleanup Benjamin Coddington
2020-09-23 17:37 ` [PATCH 1/2 v3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence Benjamin Coddington
2020-09-23 18:53   ` Trond Myklebust
2020-09-23 19:29     ` Benjamin Coddington
2020-09-23 19:39       ` Trond Myklebust [this message]
2020-09-23 19:46         ` Benjamin Coddington
2020-09-23 19:55           ` Trond Myklebust
2020-09-23 17:37 ` [PATCH 2/2 v3] NFSv4: cleanup unused zero_stateid copy Benjamin Coddington
