From: Ming Lei <ming.lei@redhat.com>
To: Hannes Reinecke <hare@suse.de>
Cc: "Ming Lei" <tom.leiming@gmail.com>,
"Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>,
"Jan Kara" <jack@suse.cz>,
"Mikulas Patocka" <mpatocka@redhat.com>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Matthew Wilcox" <willy@infradead.org>,
"Michal Hocko" <mhocko@suse.com>,
stable@vger.kernel.org, regressions@lists.linux.dev,
"Alasdair Kergon" <agk@redhat.com>,
"Mike Snitzer" <snitzer@kernel.org>,
dm-devel@lists.linux.dev, linux-mm@kvack.org,
linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: Intermittent storage (dm-crypt?) freeze - regression 6.4->6.5
Date: Wed, 1 Nov 2023 19:23:05 +0800
Message-ID: <ZUI1GSSL4vdsFVHq@fedora>
In-Reply-To: <ab02413f-4bf2-4d92-baf7-62fbd106f5df@suse.de>
On Wed, Nov 01, 2023 at 11:15:02AM +0100, Hannes Reinecke wrote:
> On 11/1/23 04:24, Ming Lei wrote:
> > On Wed, Nov 01, 2023 at 03:14:22AM +0100, Marek Marczykowski-Górecki wrote:
> > > On Wed, Nov 01, 2023 at 09:27:24AM +0800, Ming Lei wrote:
> > > > On Tue, Oct 31, 2023 at 11:42 PM Marek Marczykowski-Górecki
> > > > <marmarek@invisiblethingslab.com> wrote:
> > > > >
> > > > > On Tue, Oct 31, 2023 at 03:01:36PM +0100, Jan Kara wrote:
> > > > > > On Tue 31-10-23 04:48:44, Marek Marczykowski-Górecki wrote:
> > > > > > > Then tried:
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=4 - cannot reproduce,
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=5 - cannot reproduce,
> > > > > > > - PAGE_ALLOC_COSTLY_ORDER=4, order=6 - freeze rather quickly
> > > > > > >
> > > > > > > I've retried the PAGE_ALLOC_COSTLY_ORDER=4,order=5 case several times
> > > > > > > and I can't reproduce the issue there. I'm confused...
> > > > > >
> > > > > > And this kind of confirms that allocations > PAGE_ALLOC_COSTLY_ORDER
> > > > > > causing hangs is most likely just a coincidence. Rather something either in
> > > > > > the block layer or in the storage driver has problems with handling bios
> > > > > > with sufficiently high order pages attached. This is going to be a bit
> > > > > > painful to debug I'm afraid. How long does it take for you trigger the
> > > > > > hang? I'm asking to get rough estimate how heavy tracing we can afford so
> > > > > > that we don't overwhelm the system...
> > > > >
> > > > > Sometimes it freezes just after logging in, but in worst case it takes
> > > > > me about 10min of more or less `tar xz` + `dd`.
> > > >
> > > > blk-mq debugfs is usually helpful for hang issues in the block layer or
> > > > underlying drivers:
> > > >
> > > > (cd /sys/kernel/debug/block && find . -type f -exec grep -aH . {} \;)
> > > >
> > > > BTW, you can collect logs for just the specific disks if you know which
> > > > ones are behind dm-crypt; that can be figured out with `lsblk`. The logs
> > > > have to be collected after the hang is triggered.
> > >
> > > dm-crypt lives on the nvme disk, this is what I collected when it
> > > hanged:
> > >
> > ...
> > > nvme0n1/hctx4/cpu4/default_rq_list:000000000d41998f {.op=READ, .cmd_flags=, .rq_flags=IO_STAT, .state=idle, .tag=65, .internal_tag=-1}
> > > nvme0n1/hctx4/cpu4/default_rq_list:00000000d0d04ed2 {.op=READ, .cmd_flags=, .rq_flags=IO_STAT, .state=idle, .tag=70, .internal_tag=-1}
> >
> > Two requests stay in the sw queue, but they are not related to this issue.
> >
> > > nvme0n1/hctx4/type:default
> > > nvme0n1/hctx4/dispatch_busy:9
> >
> > A non-zero dispatch_busy means that BLK_STS_RESOURCE has recently been
> > returned from nvme_queue_rq(), and frequently so.
> >
> > > nvme0n1/hctx4/active:0
> > > nvme0n1/hctx4/run:20290468
> >
> > ...
> >
> > > nvme0n1/hctx4/tags:nr_tags=1023
> > > nvme0n1/hctx4/tags:nr_reserved_tags=0
> > > nvme0n1/hctx4/tags:active_queues=0
> > > nvme0n1/hctx4/tags:bitmap_tags:
> > > nvme0n1/hctx4/tags:depth=1023
> > > nvme0n1/hctx4/tags:busy=3
> >
> > Just three requests are in flight: two are in the sw queue, and the other is in hctx->dispatch.
> >
> > ...
> >
> > > nvme0n1/hctx4/dispatch:00000000b335fa89 {.op=WRITE, .cmd_flags=NOMERGE, .rq_flags=DONTPREP|IO_STAT, .state=idle, .tag=78, .internal_tag=-1}
> > > nvme0n1/hctx4/flags:alloc_policy=FIFO SHOULD_MERGE
> > > nvme0n1/hctx4/state:SCHED_RESTART
> >
> > The request staying in hctx->dispatch can't make progress because
> > nvme_queue_rq() keeps returning BLK_STS_RESOURCE; you can verify this
> > with the following bpftrace once the hang is triggered:
> >
> > bpftrace -e 'kretfunc:nvme_queue_rq { @[retval, kstack]=count() }'
> >
> > It is very likely that a memory allocation inside nvme_queue_rq()
> > cannot succeed, so blk-mq just has to keep retrying by calling
> > nvme_queue_rq() on the above request.
> >
> And that is something I've been wondering (for quite some time now):
> What _is_ the appropriate error handling for -ENOMEM?
That is just my guess.

Actually it shouldn't fail, since the sgl allocation is backed by a
memory pool, but there are also the dma pool allocation and the dma
mapping.
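
If you want to rule out the dma pool side, a sketch like the one below
should show failing allocations (untested; it assumes dma_pool_alloc is
attachable via kretprobe on your build, and the stack seen from a
kretprobe can be shallow):

    bpftrace -e '
    // count NULL returns from the dma pool allocator, keyed by stack
    kretprobe:dma_pool_alloc
    /retval == 0/
    {
            @dma_pool_fail[kstack] = count();
    }'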
> At this time, we assume it to be a retryable error and re-run the queue
> in the hope that things will sort itself out.
It should not be hard to figure out why nvme_queue_rq() can't move on.
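
For example, a filtered variant of the bpftrace above would record only
the stalled case, together with the tag of the bounced request (just a
sketch: it assumes BLK_STS_RESOURCE is still 9 in
include/linux/blk_types.h and that kretfunc can reach the function
arguments through BTF on your kernel):

    bpftrace -e '
    // record only BLK_STS_RESOURCE (9) returns, keyed by request tag and stack
    kretfunc:nvme_queue_rq
    /retval == 9/
    {
            @sts_resource[args->bd->rq->tag, kstack] = count();
    }'

The tag can then be matched against the request shown in the debugfs
dispatch entry above.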
> But if they don't we're stuck.
> Can we somehow figure out if we make progress during submission, and (at
> least) issue a warning once we detect a stall?
That would require counting retries per request, and people often hate
adding anything to the request or bio in the fast path. Also, this kind
of issue is easy to observe with blk-mq debugfs or bpftrace.
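
As a rough sketch of the bpftrace side (it keys on the request pointer,
which can be reused after a request completes, so the numbers are only
meaningful while the hang is ongoing), counting how often each request
is handed to the driver needs nothing added to the fast path:

    bpftrace -e '
    // how many times each request pointer has been issued to nvme_queue_rq();
    // a stuck request shows up as one address with an ever-growing count
    kfunc:nvme_queue_rq
    {
            @issues[args->bd->rq] = count();
    }'
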
Thanks,
Ming