Date: Wed, 1 Nov 2023 19:23:05 +0800
From: Ming Lei
To: Hannes Reinecke
Cc: Ming Lei, Marek Marczykowski-Górecki, Jan Kara, Mikulas Patocka,
    Vlastimil Babka, Andrew Morton, Matthew Wilcox, Michal Hocko,
    stable@vger.kernel.org, regressions@lists.linux.dev, Alasdair Kergon,
    Mike Snitzer, dm-devel@lists.linux.dev, linux-mm@kvack.org,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: Intermittent storage (dm-crypt?) freeze - regression 6.4->6.5

On Wed, Nov 01, 2023 at 11:15:02AM +0100, Hannes Reinecke wrote:
> On 11/1/23 04:24, Ming Lei wrote:
> > On Wed, Nov 01, 2023 at 03:14:22AM +0100, Marek Marczykowski-Górecki wrote:
> > > On Wed, Nov 01, 2023 at 09:27:24AM +0800, Ming Lei wrote:
> > > > On Tue, Oct 31, 2023 at 11:42 PM Marek Marczykowski-Górecki wrote:
> > > > >
> > > > > On Tue, Oct 31, 2023 at 03:01:36PM +0100, Jan Kara wrote:
> > > > > > On Tue 31-10-23 04:48:44, Marek Marczykowski-Górecki wrote:
> > > > > > > Then tried:
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=4 - cannot reproduce,
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=5 - cannot reproduce,
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=6 - freeze rather quickly
> > > > > > >
> > > > > > > I've retried the PAGE_ALLOC_COSTLY_ORDER=4, order=5 case several
> > > > > > > times and I can't reproduce the issue there. I'm confused...
> > > > > >
> > > > > > And this kind of confirms that allocations > PAGE_ALLOC_COSTLY_ORDER
> > > > > > causing the hangs is most likely just a coincidence. Rather, something
> > > > > > either in the block layer or in the storage driver has problems
> > > > > > handling bios with sufficiently high-order pages attached. This is
> > > > > > going to be a bit painful to debug, I'm afraid. How long does it take
> > > > > > for you to trigger the hang? I'm asking to get a rough estimate of how
> > > > > > heavy tracing we can afford so that we don't overwhelm the system...
> > > > >
> > > > > Sometimes it freezes just after logging in, but in the worst case it
> > > > > takes me about 10 min of more or less `tar xz` + `dd`.
> > > >
> > > > blk-mq debugfs is usually helpful for hang issues in the block layer or
> > > > underlying drivers:
> > > >
> > > > (cd /sys/kernel/debug/block && find . -type f -exec grep -aH . {} \;)
> > > >
> > > > BTW, you can collect the logs of just the exact disks if you know what
> > > > is behind dm-crypt, which can be figured out with `lsblk`; it has to be
> > > > collected after the hang is triggered.
> > >
> > > dm-crypt lives on the nvme disk; this is what I collected when it hung:
> > >
> > ...
> > > nvme0n1/hctx4/cpu4/default_rq_list:000000000d41998f {.op=READ, .cmd_flags=, .rq_flags=IO_STAT, .state=idle, .tag=65, .internal_tag=-1}
> > > nvme0n1/hctx4/cpu4/default_rq_list:00000000d0d04ed2 {.op=READ, .cmd_flags=, .rq_flags=IO_STAT, .state=idle, .tag=70, .internal_tag=-1}
> >
> > Two requests stay in the sw queue, but they are not related to this issue.
> >
> > > nvme0n1/hctx4/type:default
> > > nvme0n1/hctx4/dispatch_busy:9
> >
> > A non-zero dispatch_busy means that nvme_queue_rq() has been returning
> > BLK_STS_RESOURCE recently, and frequently.
> >
> > > nvme0n1/hctx4/active:0
> > > nvme0n1/hctx4/run:20290468
> > ...
> >
> > > nvme0n1/hctx4/tags:nr_tags=1023
> > > nvme0n1/hctx4/tags:nr_reserved_tags=0
> > > nvme0n1/hctx4/tags:active_queues=0
> > > nvme0n1/hctx4/tags:bitmap_tags:
> > > nvme0n1/hctx4/tags:depth=1023
> > > nvme0n1/hctx4/tags:busy=3
> >
> > Just three requests are in flight: two are in the sw queue, the other is
> > in hctx->dispatch.
> >
> > ...
> >
> > > nvme0n1/hctx4/dispatch:00000000b335fa89 {.op=WRITE, .cmd_flags=NOMERGE, .rq_flags=DONTPREP|IO_STAT, .state=idle, .tag=78, .internal_tag=-1}
> > > nvme0n1/hctx4/flags:alloc_policy=FIFO SHOULD_MERGE
> > > nvme0n1/hctx4/state:SCHED_RESTART
> >
> > The request sitting in hctx->dispatch can't make progress, and
> > nvme_queue_rq() keeps returning BLK_STS_RESOURCE; you can verify this with
> > the following bpftrace one-liner once the hang is triggered:
> >
> > bpftrace -e 'kretfunc:nvme_queue_rq { @[retval, kstack]=count() }'
> >
> > It is very likely that a memory allocation inside nvme_queue_rq() keeps
> > failing, so blk-mq just has to retry by calling nvme_queue_rq() on the
> > above request.
> >
> And that is something I've been wondering (for quite some time now):
> What _is_ the appropriate error handling for -ENOMEM?

That is just my guess. Actually it shouldn't fail, since the sgl allocation
is backed by a memory pool, but there are also the dma pool allocations and
the dma mapping.

> At this time, we assume it to be a retryable error and re-run the queue
> in the hope that things will sort themselves out.

It should not be hard to figure out why nvme_queue_rq() can't make progress.

> But if they don't we're stuck.
> Can we somehow figure out if we make progress during submission, and (at
> least) issue a warning once we detect a stall?

That would need a retry counter on the request, and people often hate adding
anything to the request or bio in the fast path. Also, this kind of issue is
easy to show with blk-mq debugfs or bpftrace.

Thanks,
Ming
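
A rough sketch of the kind of observation described above (not a method from
the thread itself): the debugfs files quoted in Marek's dump plus the bpftrace
one-liner are enough to watch the retry loop live. The device and hctx number
(nvme0n1, hctx4) are taken from that dump and are only examples; the second
command assumes a bpftrace build with kfunc/kretfunc probe support.

  # Poll the blk-mq debugfs state of the stuck hctx while reproducing; a
  # dispatch_busy value that stays high and a 'run' counter that keeps
  # climbing while 'dispatch' keeps showing the same request point at
  # repeated BLK_STS_RESOURCE returns rather than a dead queue.
  cd /sys/kernel/debug/block/nvme0n1/hctx4
  while sleep 1; do
          grep -aH . dispatch_busy run state dispatch
  done

  # In a second shell, count nvme_queue_rq() return values per call stack; a
  # single stack returning a non-zero status over and over is the retry loop
  # described above.
  bpftrace -e 'kretfunc:nvme_queue_rq { @[retval, kstack]=count() }'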