From: Dan Williams
Date: Mon, 11 Mar 2019 20:35:05 -0700
Subject: Re: Hang / zombie process from Xarray page-fault conversion (bisected)
To: Matthew Wilcox
Cc: Linux MM, linux-nvdimm, linux-fsdevel, "Barror, Robert"

On Mon, Mar 11, 2019 at 8:10 AM Matthew Wilcox wrote:
>
> On Thu, Mar 07, 2019 at 10:16:17PM -0800, Dan Williams wrote:
> > Hi Willy,
> >
> > We're seeing a case where RocksDB hangs and becomes defunct when
> > trying to kill the process. v4.19 succeeds and v4.20 fails. Robert was
> > able to bisect this to commit b15cd800682f "dax: Convert page fault
> > handlers to XArray".
> >
> > I see some direct usage of xa_index and wonder if there are some more
> > pmd fixups to do?
> >
> > Other thoughts?
>
> I don't see why killing a process would have much to do with PMD
> misalignment. The symptoms (hanging on a signal) smell much more like
> leaving a locked entry in the tree. Is this easy to reproduce? Can you
> get /proc/$pid/stack for a hung task?

It's fairly easy to reproduce; I'll see if I can package up all the
dependencies into something that fails in a VM. It's limited to xfs,
no failure on ext4 to date.

The hung process appears to be:

    kworker/53:1-xfs-sync/pmem0

...and then the rest of the database processes grind to a halt from
there. Robert was kind enough to capture /proc/$pid/stack, but nothing
interesting:

[<0>] worker_thread+0xb2/0x380
[<0>] kthread+0x112/0x130
[<0>] ret_from_fork+0x1f/0x40
[<0>] 0xffffffffffffffff
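
For anyone else chasing this, the trace above is just the contents of
/proc/$pid/stack for the suspect kworker. A minimal sketch of grabbing
that programmatically (not from this thread; plain userspace C, pid
passed on the command line, typically needs root):

#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64];
    char line[256];
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    /* e.g. /proc/1234/stack for the kworker's pid */
    snprintf(path, sizeof(path), "/proc/%s/stack", argv[1]);
    f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    /* One frame per line, e.g. "[<0>] worker_thread+0xb2/0x380" */
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);

    fclose(f);
    return 0;
}

Pointing it at the pid of the hung kworker should print the same sort of
frames quoted above.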
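
For context on the "pmd fixups" question quoted above: a PMD-sized DAX
entry covers many base pages (512 with 4KiB pages on x86-64), so any
place that uses a raw xa_index to look up or store such an entry has to
round the index down to the start of that range first. A toy
illustration of the alignment in question (the helper name is made up;
this is not the fs/dax.c code and not a claim about where the bug is):

#include <stdio.h>

/* With 4KiB pages, one 2MiB PMD entry spans 512 base pages. */
#define PMD_NR_PAGES 512UL

/* Hypothetical helper: round a page index down to the start of its
 * PMD-sized range. */
static unsigned long pmd_aligned_index(unsigned long index)
{
    return index & ~(PMD_NR_PAGES - 1UL);
}

int main(void)
{
    /* Page index 0x203 belongs to the PMD range starting at 0x200. */
    printf("0x%lx -> 0x%lx\n", 0x203UL, pmd_aligned_index(0x203UL));
    return 0;
}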