Date: Thu, 30 May 2019 14:39:58 -0400
From: bfields@fieldses.org (J. Bruce Fields)
To: Alan Post
Cc: "linux-nfs@vger.kernel.org"
Subject: Re: User process NFS write hang followed by automount hang requiring reboot
Message-ID: <20190530183958.GA23001@fieldses.org>
In-Reply-To: <20190521192254.GN4158@turtle.email>
References: <20190520223324.GL4158@turtle.email> <20190521192254.GN4158@turtle.email>

On Tue, May 21, 2019 at 01:22:54PM -0600, Alan Post wrote:
> On Tue, May 21, 2019 at 03:46:03PM +0000, Trond Myklebust wrote:
> > > A representative sample of stack traces from hung user-submitted
> > > processes (jobs).  The first here is quite a lot more common than
> > > the latter two:
> > >
> > > $ sudo cat /proc/197520/stack
> > > [<0>] io_schedule+0x12/0x40
> > > [<0>] nfs_lock_and_join_requests+0x309/0x4c0 [nfs]
> > > [<0>] nfs_updatepage+0x2a2/0x8b0 [nfs]
> > > [<0>] nfs_write_end+0x63/0x4c0 [nfs]
> > > [<0>] generic_perform_write+0x138/0x1b0
> > > [<0>] nfs_file_write+0xdc/0x200 [nfs]
> > > [<0>] new_sync_write+0xfb/0x160
> > > [<0>] vfs_write+0xa5/0x1a0
> > > [<0>] ksys_write+0x4f/0xb0
> > > [<0>] do_syscall_64+0x53/0x100
> > > [<0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > > [<0>] 0xffffffffffffffff
> >
> > Have you tried upgrading to 4.19.44? There is a fix that went in not
> > too long ago that deals with a request leak that can cause stack traces
> > like the above that wait forever.
>
> That I haven't tried.  I gather you're talking about either or both
> of:
>
>     63b0ee126f7e
>     be74fddc976e
>
> which I do see went in after 4.19.24 (which I've tried) but didn't
> get into 4.20.9 (which I've also tried).  Let me see about trying the
> 4.19.44 kernel.
>
> > By the way, the above stack trace with "nfs_lock_and_join_requests"
> > usually means that you are using a very small rsize or wsize (less than
> > 4k). Is that the case? If so, you might want to look into just
> > increasing the I/O size.
>
> These exports have rsize and wsize set to 1048576.

Are you getting that from the mount command line?  It could be negotiated
down during mount.
I think you can get the negotiated values from the rsize= and wsize=
values on the opts: line in /proc/self/mountstats.

See also /proc/fs/nfsd/max_block_size.

--b.

> That decision was
> before my time, and I'll guess this value was picked to match
> NFSSVC_MAXBLKSIZE.
>
> Thank you for your help,
>
> -A
> --
> Alan Post | Xen VPS hosting for the technically adept
> PO Box 61688 | Sunnyvale, CA 94088-1681 | https://prgmr.com/
> email: adp@prgmr.com
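
For reference, a minimal sketch of how to check the two places Bruce points
at above, assuming a stock Linux NFS client and a server with the nfsd module
loaded (the grep pattern and which host you run each command on are only
illustrative):

    # On the client: the negotiated mount options, including rsize= and
    # wsize=, show up on the opts: line for each NFS mount.
    $ grep -E 'fstype nfs|opts:' /proc/self/mountstats

    # On the server: the maximum block size knfsd will offer to clients,
    # which caps whatever rsize/wsize the client asked for at mount time.
    $ cat /proc/fs/nfsd/max_block_size

If the rsize=/wsize= values printed for the mount are much smaller than what
was passed on the mount command line, the server negotiated them down.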