public inbox for linux-nfs@vger.kernel.org
From: Trond Myklebust <trondmy@hammerspace.com>
To: "chuck.lever@oracle.com" <chuck.lever@oracle.com>
Cc: "bfields@fieldses.org" <bfields@fieldses.org>,
	"jlayton@redhat.com" <jlayton@redhat.com>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>
Subject: Re: CPU lockup in or near new filecache code
Date: Fri, 13 Dec 2019 20:12:25 +0000	[thread overview]
Message-ID: <aa7857e4a9ac535e78353db53448efb1b58a57f9.camel@hammerspace.com> (raw)
In-Reply-To: <A7C348BD-2543-492A-B768-7E3666734A57@oracle.com>

On Wed, 2019-12-11 at 15:01 -0500, Chuck Lever wrote:
> OK, I finally got a hit. It took a long time. I've seen this
> particular
> stack trace before, several times.
> 
> Dec 11 14:58:34 klimt kernel: watchdog: BUG: soft lockup - CPU#0
> stuck for 22s! [nfsd:2005]
> Dec 11 14:58:34 klimt kernel: Modules linked in: rpcsec_gss_krb5
> ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager
> ocfs2_stackglue ib_umad ib_ipoib mlx4_ib sb_edac x86_pkg_temp_thermal
> kvm_intel coretemp kvm irqbypass crct10dif_pclmul crc32_pclmul
> ghash_clmulni_intel iTCO_wdt ext4 iTCO_vendor_support aesni_intel
> mbcache jbd2 glue_helper rpcrdma crypto_simd cryptd rdma_ucm ib_iser
> rdma_cm pcspkr iw_cm ib_cm mei_me raid0 libiscsi lpc_ich mei sg
> scsi_transport_iscsi i2c_i801 mfd_core wmi ipmi_si ipmi_devintf
> ipmi_msghandler ioatdma acpi_power_meter nfsd nfs_acl lockd
> auth_rpcgss grace sunrpc ip_tables xfs libcrc32c mlx4_en sr_mod
> sd_mod cdrom qedr ast drm_vram_helper drm_ttm_helper ttm crc32c_intel
> drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm igb
> dca i2c_algo_bit i2c_core mlx4_core ahci libahci libata nvme
> nvme_core qede qed dm_mirror dm_region_hash dm_log dm_mod crc8
> ib_uverbs dax ib_core
> Dec 11 14:58:34 klimt kernel: CPU: 0 PID: 2005 Comm: nfsd Tainted:
> G        W         5.5.0-rc1-00003-g170e7adc2317 #1401
> Dec 11 14:58:34 klimt kernel: Hardware name: Supermicro Super
> Server/X10SRL-F, BIOS 1.0c 09/09/2015
> Dec 11 14:58:34 klimt kernel: RIP: 0010:__srcu_read_lock+0x23/0x24
> Dec 11 14:58:34 klimt kernel: Code: 07 00 0f 1f 40 00 c3 0f 1f 44 00
> 00 8b 87 c8 c3 00 00 48 8b 97 f0 c3 00 00 83 e0 01 48 63 c8 65 48 ff
> 04 ca f0 83 44 24 fc 00 <c3> 0f 1f 44 00 00 f0 83 44 24 fc 00 48 63
> f6 48 8b 87 f0 c3 00 00
> Dec 11 14:58:34 klimt kernel: RSP: 0018:ffffc90001d97bd0 EFLAGS:
> 00000246 ORIG_RAX: ffffffffffffff13
> Dec 11 14:58:34 klimt kernel: RAX: 0000000000000001 RBX:
> ffff888830d0eb78 RCX: 0000000000000001
> Dec 11 14:58:34 klimt kernel: RDX: 0000000000030f00 RSI:
> ffff888853f4da00 RDI: ffffffff82815a40
> Dec 11 14:58:34 klimt kernel: RBP: ffff88883112d828 R08:
> ffff888843540000 R09: ffffffff8121d707
> Dec 11 14:58:34 klimt kernel: R10: ffffc90001d97bf0 R11:
> 0000000000001b84 R12: ffff888853f4da00
> Dec 11 14:58:34 klimt kernel: R13: ffff8888132a1410 R14:
> ffff88883112d7e0 R15: 00000000ffffffef
> Dec 11 14:58:34 klimt kernel: FS:  0000000000000000(0000)
> GS:ffff88885fc00000(0000) knlGS:0000000000000000
> Dec 11 14:58:34 klimt kernel: CS:  0010 DS: 0000 ES: 0000 CR0:
> 0000000080050033
> Dec 11 14:58:34 klimt kernel: CR2: 00007f2d6a2d8000 CR3:
> 0000000859b38004 CR4: 00000000001606f0
> Dec 11 14:58:34 klimt kernel: Call Trace:
> Dec 11 14:58:34 klimt kernel: fsnotify_grab_connector+0x16/0x4f
> Dec 11 14:58:34 klimt kernel: fsnotify_find_mark+0x11/0x6a
> Dec 11 14:58:34 klimt kernel: nfsd_file_acquire+0x3a9/0x5b2 [nfsd]
> Dec 11 14:58:34 klimt kernel: nfs4_get_vfs_file+0x14c/0x20f [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd4_process_open2+0xcd6/0xd98 [nfsd]
> Dec 11 14:58:34 klimt kernel: ? fh_verify+0x42e/0x4ef [nfsd]
> Dec 11 14:58:34 klimt kernel: ? nfsd4_process_open1+0x233/0x29d
> [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd4_open+0x500/0x5cb [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd4_proc_compound+0x32a/0x5c7 [nfsd]
> Dec 11 14:58:34 klimt kernel: nfsd_dispatch+0x102/0x1e2 [nfsd]
> Dec 11 14:58:34 klimt kernel: svc_process_common+0x3b3/0x65d [sunrpc]
> Dec 11 14:58:34 klimt kernel: ? svc_xprt_put+0x12/0x21 [sunrpc]
> Dec 11 14:58:34 klimt kernel: ? nfsd_svc+0x2be/0x2be [nfsd]
> Dec 11 14:58:34 klimt kernel: ? nfsd_destroy+0x51/0x51 [nfsd]
> Dec 11 14:58:34 klimt kernel: svc_process+0xf6/0x115 [sunrpc]
> Dec 11 14:58:34 klimt kernel: nfsd+0xf2/0x149 [nfsd]
> Dec 11 14:58:34 klimt kernel: kthread+0xf6/0xfb
> Dec 11 14:58:34 klimt kernel: ? kthread_queue_delayed_work+0x74/0x74
> Dec 11 14:58:34 klimt kernel: ret_from_fork+0x3a/0x50
> 

Does something like the following help?

8<---------------------------------------------------
From caf515c82ed572e4f92ac8293e5da4818da0c6ce Mon Sep 17 00:00:00 2001
From: Trond Myklebust <trond.myklebust@hammerspace.com>
Date: Fri, 13 Dec 2019 15:07:33 -0500
Subject: [PATCH] nfsd: Fix a soft lockup race in
 nfsd_file_mark_find_or_create()

If nfsd_file_mark_find_or_create() keeps winning the race for
nfsd_file_fsnotify_group->mark_mutex against nfsd_file_mark_put(),
then it can soft lock up, since fsnotify_add_inode_mark() always
ends up finding the existing entry.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 fs/nfsd/filecache.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index 9c2b29e07975..f275c11c4e28 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -132,9 +132,13 @@ nfsd_file_mark_find_or_create(struct nfsd_file *nf)
 						 struct nfsd_file_mark,
 						 nfm_mark));
 			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
-			fsnotify_put_mark(mark);
-			if (likely(nfm))
+			if (nfm) {
+				fsnotify_put_mark(mark);
 				break;
+			}
+			/* Avoid soft lockup race with nfsd_file_mark_put() */
+			fsnotify_destroy_mark(mark, nfsd_file_fsnotify_group);
+			fsnotify_put_mark(mark);
 		} else
 			mutex_unlock(&nfsd_file_fsnotify_group->mark_mutex);
 
-- 
2.23.0


-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@hammerspace.com



Thread overview: 11+ messages
2019-12-10 16:27 CPU lockup in or near new filecache code Chuck Lever
2019-12-10 18:49 ` Bruce Fields
2019-12-10 20:45 ` Trond Myklebust
2019-12-11 18:14   ` Chuck Lever
2019-12-11 20:01     ` Chuck Lever
2019-12-13 20:12       ` Trond Myklebust [this message]
2019-12-13 20:26         ` Chuck Lever
2019-12-18 23:20         ` Chuck Lever
2020-01-03 16:47           ` Bruce Fields
2020-01-03 18:01             ` Trond Myklebust
2020-01-03 18:40               ` bfields