From: Laurence Oberman <loberman@redhat.com>
To: Ming Lei <ming.lei@redhat.com>, David Jeffery <djeffery@redhat.com>
Cc: linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly
Date: Mon, 08 Feb 2021 13:53:52 -0500
Message-ID: <0ea759fa112b495cff9e7e1da3f02e922e8cc6a0.camel@redhat.com>
In-Reply-To: <8ce70420d1dcb5dd0ffc73aaa38d8ce61eb19cff.camel@redhat.com>
On Thu, 2021-02-04 at 11:43 -0500, Laurence Oberman wrote:
> On Thu, 2021-02-04 at 10:27 +0800, Ming Lei wrote:
> > On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > > When a stacked block device inserts a request into another block
> > > device using blk_insert_cloned_request, the request's
> > > nr_phys_segments field gets recalculated by a call to
> > > blk_recalc_rq_segments in blk_cloned_rq_check_limits. But
> > > blk_recalc_rq_segments does not know how to handle multi-segment
> > > discards. For disk types which can handle multi-segment discards,
> > > such as nvme, this results in discard requests which claim a single
> > > segment when they should report several, triggering a warning in
> > > nvme and causing nvme to fail the discard from the invalid state.
> > >
> > > WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700
> > > nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > > ...
> > > nvme_setup_cmd+0x217/0x270 [nvme_core]
> > > nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > > __blk_mq_try_issue_directly+0xe7/0x1b0
> > > blk_mq_request_issue_directly+0x41/0x70
> > > ? blk_account_io_start+0x40/0x50
> > > dm_mq_queue_rq+0x200/0x3e0
> > > blk_mq_dispatch_rq_list+0x10a/0x7d0
> > > ? __sbitmap_queue_get+0x25/0x90
> > > ? elv_rb_del+0x1f/0x30
> > > ? deadline_remove_request+0x55/0xb0
> > > ? dd_dispatch_request+0x181/0x210
> > > __blk_mq_do_dispatch_sched+0x144/0x290
> > > ? bio_attempt_discard_merge+0x134/0x1f0
> > > __blk_mq_sched_dispatch_requests+0x129/0x180
> > > blk_mq_sched_dispatch_requests+0x30/0x60
> > > __blk_mq_run_hw_queue+0x47/0xe0
> > > __blk_mq_delay_run_hw_queue+0x15b/0x170
> > > blk_mq_sched_insert_requests+0x68/0xe0
> > > blk_mq_flush_plug_list+0xf0/0x170
> > > blk_finish_plug+0x36/0x50
> > > xlog_cil_committed+0x19f/0x290 [xfs]
> > > xlog_cil_process_committed+0x57/0x80 [xfs]
> > > xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > > xlog_ioend_work+0x2f/0x80 [xfs]
> > > process_one_work+0x1b6/0x350
> > > worker_thread+0x53/0x3e0
> > > ? process_one_work+0x350/0x350
> > > kthread+0x11b/0x140
> > > ? __kthread_bind_mask+0x60/0x60
> > > ret_from_fork+0x22/0x30
> > >
> > > This patch fixes blk_recalc_rq_segments to be aware of devices
> > > which can have multi-segment discards. It calculates the correct
> > > discard segment count by counting the number of bios, as each
> > > discard bio is considered its own segment.
> > >
> > > Signed-off-by: David Jeffery <djeffery@redhat.com>
> > > Tested-by: Laurence Oberman <loberman@redhat.com>
> > > ---
> > > block/blk-merge.c | 7 +++++++
> > > 1 file changed, 7 insertions(+)
> > >
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..fe7358bd5d09 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> > >
> > >  	switch (bio_op(rq->bio)) {
> > >  	case REQ_OP_DISCARD:
> > > +		if (queue_max_discard_segments(rq->q) > 1) {
> > > +			struct bio *bio = rq->bio;
> > > +			for_each_bio(bio)
> > > +				nr_phys_segs++;
> > > +			return nr_phys_segs;
> > > +		}
> > > +		/* fall through */
> > >  	case REQ_OP_SECURE_ERASE:
> > >  	case REQ_OP_WRITE_ZEROES:
> > >  		return 0;
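
For anyone following along, here is a minimal user-space sketch of what the
hunk above does (hypothetical simplified types; this is not the kernel's
struct bio or for_each_bio): once the queue advertises more than one discard
segment, every bio chained into the request counts as its own segment, so
the recalculated nr_phys_segments is simply the length of the bio list.

#include <stdio.h>

/* Hypothetical stand-in for struct bio; only the chain link matters here. */
struct fake_bio {
	struct fake_bio *bi_next;	/* next bio merged into the same request */
};

/* Mirrors the patched REQ_OP_DISCARD branch: one segment per chained bio. */
static unsigned int count_discard_segments(struct fake_bio *bio)
{
	unsigned int nr_phys_segs = 0;

	for (; bio; bio = bio->bi_next)
		nr_phys_segs++;

	return nr_phys_segs;
}

int main(void)
{
	/* Three discontiguous discard ranges merged into one request. */
	struct fake_bio c = { .bi_next = NULL };
	struct fake_bio b = { .bi_next = &c };
	struct fake_bio a = { .bi_next = &b };

	/* The old code reported 1 here; with the patch this reports 3. */
	printf("discard segments: %u\n", count_discard_segments(&a));
	return 0;
}

With the count matching the number of discard ranges, the segment-count
warning in nvme_setup_discard no longer fires.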
> >
> > blk_rq_nr_discard_segments() always returns >=1 segments, so no
> > similar
> > issue in case of single range discard.
> >
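As a side note (paraphrasing from memory rather than quoting the helper):
blk_rq_nr_discard_segments() reports roughly max(rq->nr_phys_segments, 1),
which is why a single-range discard never hits this mismatch. A tiny
user-space model of that assumed clamp:

#include <stdio.h>

/*
 * User-space model of the behaviour Ming describes (assumed, not the kernel
 * source): the discard segment count reported to the driver is never below 1.
 */
static unsigned int nr_discard_segments(unsigned int nr_phys_segments)
{
	return nr_phys_segments > 1 ? nr_phys_segments : 1;
}

int main(void)
{
	printf("single-range discard: %u segment(s)\n", nr_discard_segments(1));
	printf("not yet recalculated: %u segment(s)\n", nr_discard_segments(0));
	printf("three merged bios:    %u segment(s)\n", nr_discard_segments(3));
	return 0;
}
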
> > Reviewed-by: Ming Lei <ming.lei@redhat.com>
> >
> > And it can be thought as:
> >
> > Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard
> > bios into a single request")
> >
> >
>
> Great, can we get enough acks and push this through? It's urgent for me.
> Reviewed-by: Laurence Oberman <loberman@redhat.com>
Hate to ping again, but we can't take this into RHEL unless it's
upstream. Can we get enough acks to get this in?
Many Thanks
Laurence