public inbox for linux-nfs@vger.kernel.org
From: Bruce Fields <bfields@fieldses.org>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: Trond Myklebust <trondmy@hammerspace.com>,
	Jeff Layton <jlayton@redhat.com>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: CPU lockup in or near new filecache code
Date: Fri, 3 Jan 2020 11:47:11 -0500	[thread overview]
Message-ID: <20200103164711.GB24306@fieldses.org> (raw)
In-Reply-To: <980CB8E4-0E7F-4F1D-B223-81176BE15A39@oracle.com>

On Wed, Dec 18, 2019 at 06:20:56PM -0500, Chuck Lever wrote:
> > On Dec 13, 2019, at 3:12 PM, Trond Myklebust <trondmy@hammerspace.com> wrote:
> > Does something like the following help?
> > 
> > 8<---------------------------------------------------
> > From caf515c82ed572e4f92ac8293e5da4818da0c6ce Mon Sep 17 00:00:00 2001
> > From: Trond Myklebust <trond.myklebust@hammerspace.com>
> > Date: Fri, 13 Dec 2019 15:07:33 -0500
> > Subject: [PATCH] nfsd: Fix a soft lockup race in
> > nfsd_file_mark_find_or_create()
> > 
> > If nfsd_file_mark_find_or_create() keeps winning the race for the
> > nfsd_file_fsnotify_group->mark_mutex against nfsd_file_mark_put()
> > then it can soft lock up, since fsnotify_add_inode_mark() ends
> > up always finding an existing entry.
> > 
> > Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> > ---
> > fs/nfsd/filecache.c | 8 ++++++--
> > 1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
> > index 9c2b29e07975..f275c11c4e28 100644
> > --- a/fs/nfsd/filecache.c
> > +++ b/fs/nfsd/filecache.c
> > @@ -132,9 +132,13 @@ nfsd_file_mark_find_or_create(struct nfsd_file *nf)
> > 						 struct nfsd_file_mark,
> > 						 nfm_mark));
> > 			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
> > -			fsnotify_put_mark(mark);
> > -			if (likely(nfm))
> > +			if (nfm) {
> > +				fsnotify_put_mark(mark);
> > 				break;
> > +			}
> > +			/* Avoid soft lockup race with nfsd_file_mark_put() */
> > +			fsnotify_destroy_mark(mark, nfsd_file_fsnotify_group);
> > +			fsnotify_put_mark(mark);
> > 		} else
> > 			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
> > 
> 
> I've tried to reproduce the lockup for three days with this patch
> applied to my server. No lockup.
> 
> Tested-by: Chuck Lever <chuck.lever@oracle.com>

I'm applying this for 5.5 with Chuck's tested-by and:

    Fixes: 65294c1f2c5e ("nfsd: add a new struct file caching facility to nfsd")

--b.


Thread overview: 11+ messages
2019-12-10 16:27 CPU lockup in or near new filecache code Chuck Lever
2019-12-10 18:49 ` Bruce Fields
2019-12-10 20:45 ` Trond Myklebust
2019-12-11 18:14   ` Chuck Lever
2019-12-11 20:01     ` Chuck Lever
2019-12-13 20:12       ` Trond Myklebust
2019-12-13 20:26         ` Chuck Lever
2019-12-18 23:20         ` Chuck Lever
2020-01-03 16:47           ` Bruce Fields [this message]
2020-01-03 18:01             ` Trond Myklebust
2020-01-03 18:40               ` bfields
