From: "Frank Filz" <ffilzlnx@mindspring.com>
To: "'Bruce Fields'" <bfields@redhat.com>, "'NeilBrown'" <neilb@suse.de>
Cc: "'Petr Vorel'" <pvorel@suse.cz>,
"'Linux NFS Mailing List'" <linux-nfs@vger.kernel.org>,
"'Yong Sun'" <yosun@suse.com>
Subject: RE: pynfs: [NFS 4.0] SEC7, LOCK24 test failures
Date: Tue, 25 Jan 2022 16:14:42 -0800 [thread overview]
Message-ID: <0b8e01d81249$b7c4f300$274ed900$@mindspring.com> (raw)
In-Reply-To: <CAPL3RVE8+zYOLotpUQ6QWFy5rYS8o1NV6XbKE4-D6XpVSoYw3w@mail.gmail.com>
Yes, I believe I wrote this test to recreate a condition we saw in the wild. There is no guarantee that the client won't send a LOCK with an OPEN stateid, requesting a new lock owner, when a LOCK stateid already exists for that lock owner. This test case forces that condition.
It looks like we were having trouble with FREE_STATEID racing with LOCK. A LOCK following a FREE_STATEID MUST use the OPEN stateid and ask for a new lock owner (since the LOCK stateid was freed), but if the LOCK wins the race, the old LOCK stateid still exists. So we get a LOCK with an OPEN stateid requesting a new lock owner in a situation where the lock stateid already exists.
Now maybe there's a different way to resolve the race, but if the LOCK truly arrives before Ganesha even sees the FREE_STATEID, then it has no knowledge that would allow it to delay the LOCK request. Before we made changes to allow this, I believe we replied with an error that broke things client side.
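To make the server-side choice concrete, here is a toy Python model of the behavior described above. It is illustrative only, not Ganesha's actual code: `StateTable`, `handle_lock_new_owner`, and the integer stateids are hypothetical stand-ins. The point is that a LOCK carrying an OPEN stateid and requesting a *new* lock owner may find a live LOCK stateid for that owner, and the server reuses it rather than replying with an error.

```python
# Toy model (hypothetical names) of the race resolution described above.
# A LOCK that names the OPEN stateid and asks for a new lock owner can
# arrive while the server still holds a LOCK stateid for that owner; the
# server hands back the existing stateid instead of erroring out.

import itertools

class StateTable:
    def __init__(self):
        self._ids = itertools.count(1)   # stand-in stateid generator
        self.lock_stateids = {}          # lockowner -> lock stateid

    def handle_lock_new_owner(self, open_stateid, lockowner):
        """Handle LOCK with an OPEN stateid, requesting a new lock owner."""
        existing = self.lock_stateids.get(lockowner)
        if existing is not None:
            # FREE_STATEID lost the race: the old LOCK stateid is still
            # live, so reuse it rather than fail the request.
            return existing
        stateid = next(self._ids)
        self.lock_stateids[lockowner] = stateid
        return stateid

server = StateTable()
first = server.handle_lock_new_owner(open_stateid=100, lockowner="lo1")
# The same request again (the LOCK24 condition): must not fail, and must
# come back with the already-existing lock stateid.
again = server.handle_lock_new_owner(open_stateid=100, lockowner="lo1")
assert first == again
```

This is the condition LOCK24 was written to exercise: the second call succeeds and returns the stateid created by the first, instead of an error.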
Here's a Ganesha patch trying to resolve the race and creating the condition that LOCK24 was then written to test:
https://github.com/nfs-ganesha/nfs-ganesha/commit/7d0fb8e9328c40fcfae03ac950a854f56689bb44
Of course the client may have changed to eliminate the race...
If need be, just change this from an "all" test to a "ganesha" test.
Frank
> -----Original Message-----
> From: Bruce Fields [mailto:bfields@redhat.com]
> Sent: Tuesday, January 25, 2022 2:47 PM
> To: NeilBrown <neilb@suse.de>
> Cc: Petr Vorel <pvorel@suse.cz>; Linux NFS Mailing List <linux-
> nfs@vger.kernel.org>; Yong Sun <yosun@suse.com>; Frank S. Filz
> <ffilzlnx@mindspring.com>
> Subject: Re: pynfs: [NFS 4.0] SEC7, LOCK24 test failures
>
> Frank added this test in 4299316fb357, and I don't really understand his
> description, but it *sounds* like he really wanted it to do the new-lockowner
> case. Frank?
>
> --b.
>
> On Tue, Jan 18, 2022 at 12:01 AM NeilBrown <neilb@suse.de> wrote:
> >
> > On Wed, 02 Jun 2021, J. Bruce Fields wrote:
> > > On Tue, Jun 01, 2021 at 04:01:08PM +0200, Petr Vorel wrote:
> > >
> > > > LOCK24 st_lock.testOpenUpgradeLock : FAILURE
> > > > OP_LOCK should return NFS4_OK, instead got
> > > > NFS4ERR_BAD_SEQID
> > >
> > > I suspect the server's actually OK here, but I need to look more
> > > closely.
> > >
> > I agree.
> > I think this patch fixes the test.
> >
> > NeilBrown
> >
> > From: NeilBrown <neilb@suse.de>
> > Date: Tue, 18 Jan 2022 15:50:37 +1100
> > Subject: [PATCH] Fix NFSv4.0 LOCK24 test
> >
> > Only the first lock request for a given open-owner can use lock_file.
> > Subsequent lock requests must use relock_file.
> >
> > Signed-off-by: NeilBrown <neilb@suse.de>
> > ---
> > nfs4.0/servertests/st_lock.py | 11 +++++++++--
> > 1 file changed, 9 insertions(+), 2 deletions(-)
> >
> > diff --git a/nfs4.0/servertests/st_lock.py b/nfs4.0/servertests/st_lock.py
> > index 468672403ffe..db08fbeedac4 100644
> > --- a/nfs4.0/servertests/st_lock.py
> > +++ b/nfs4.0/servertests/st_lock.py
> > @@ -886,6 +886,7 @@ class open_sequence:
> >          self.client = client
> >          self.owner = owner
> >          self.lockowner = lockowner
> > +        self.lockseq = 0
> >      def open(self, access):
> >          self.fh, self.stateid = self.client.create_confirm(self.owner,
> >                                                             access=access,
> > @@ -899,15 +900,21 @@ class open_sequence:
> >      def close(self):
> >          self.client.close_file(self.owner, self.fh, self.stateid)
> >      def lock(self, type):
> > -        res = self.client.lock_file(self.owner, self.fh, self.stateid,
> > -                                    type=type, lockowner=self.lockowner)
> > +        if self.lockseq == 0:
> > +            res = self.client.lock_file(self.owner, self.fh, self.stateid,
> > +                                        type=type, lockowner=self.lockowner)
> > +        else:
> > +            res = self.client.relock_file(self.lockseq, self.fh, self.lockstateid,
> > +                                          type=type)
> >          check(res)
> >          if res.status == NFS4_OK:
> >              self.lockstateid = res.lockid
> > +            self.lockseq = self.lockseq + 1
> >      def unlock(self):
> >          res = self.client.unlock_file(1, self.fh, self.lockstateid)
> >          if res.status == NFS4_OK:
> >              self.lockstateid = res.lockid
> > +            self.lockseq = self.lockseq + 1
> >
> >  def testOpenUpgradeLock(t, env):
> >      """Try open, lock, open, downgrade, close
> > --
> > 2.34.1
> >
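The NFSv4.0 rule Neil's patch encodes can be sketched stand-alone: the first LOCK for a lock owner is a "new lock owner" request via lock_file (carrying the OPEN stateid), and every later LOCK must be an "existing lock owner" request via relock_file (carrying the LOCK stateid and lock seqid). `FakeClient` below is an illustrative stand-in for the pynfs client, not its real API; only the branching mirrors the patch.

```python
# Illustrative sketch; FakeClient stands in for the pynfs NFS4Client.
class FakeClient:
    def lock_file(self, owner, fh, open_stateid, lockowner):
        # New lock owner: the request carries the OPEN stateid.
        return ("lock", lockowner)

    def relock_file(self, lockseq, fh, lock_stateid):
        # Existing lock owner: the request carries the LOCK stateid + seqid.
        return ("relock", lockseq)

class OpenSequence:
    def __init__(self, client, owner, lockowner):
        self.client = client
        self.owner = owner
        self.lockowner = lockowner
        self.lockseq = 0            # 0 => no LOCK stateid exists yet
        self.lockstateid = None

    def lock(self, fh, open_stateid):
        if self.lockseq == 0:
            # First LOCK for this lock owner: OPEN stateid, new owner.
            res = self.client.lock_file(self.owner, fh, open_stateid,
                                        lockowner=self.lockowner)
        else:
            # Every subsequent LOCK: LOCK stateid plus the lock seqid.
            res = self.client.relock_file(self.lockseq, fh, self.lockstateid)
        self.lockstateid = res
        self.lockseq += 1
        return res

seq = OpenSequence(FakeClient(), "owner", "lockowner")
assert seq.lock("fh", open_stateid=1)[0] == "lock"    # first: lock_file
assert seq.lock("fh", open_stateid=1)[0] == "relock"  # second: relock_file
```

Sending a second lock_file here instead of relock_file is exactly what made the original LOCK24 draw NFS4ERR_BAD_SEQID from a conforming server.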
Thread overview: 9+ messages
2021-06-01 14:01 pynfs: [NFS 4.0] SEC7, LOCK24 test failures Petr Vorel
2021-06-01 15:31 ` J. Bruce Fields
2021-06-02 7:58 ` Petr Vorel
2022-01-18 4:52 ` NeilBrown
2022-01-20 10:51 ` Petr Vorel
2022-01-25 22:46 ` Bruce Fields
2022-01-25 23:48 ` NeilBrown
2022-01-26 0:14 ` Frank Filz [this message]
2022-04-01 13:30 ` Petr Vorel