From: Martin Wilck <mwilck@suse.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
Sagi Grimberg <sagi@grimberg.me>,
Niklas Cassel <cassel@kernel.org>, Hannes Reinecke <hare@suse.de>,
Daniel Wagner <dwagner@suse.de>,
Stuart Hayes <stuart.w.hayes@gmail.com>,
linux-nvme@lists.infradead.org
Subject: Re: [PATCH v3] nvme: core: shorten duration of multipath namespace rescan
Date: Tue, 27 Aug 2024 17:42:52 +0200 [thread overview]
Message-ID: <41331f243b0b3172cfdcf0ef33b8ed34e813bac2.camel@suse.com> (raw)
In-Reply-To: <20240827063332.GA11773@lst.de>
On Tue, 2024-08-27 at 08:33 +0200, Christoph Hellwig wrote:
> > +	/*
> > +	 * The controller queue is going to be frozen in
> > +	 * nvme_update_ns_info_{generic,block}(). Every freeze implies
> > +	 * waiting for an RCU grace period to pass. For multipath
> > +	 * devices, we need to freeze the multipath queue, too. Start
> > +	 * freezing the multipath queue now, lest we need to wait for
> > +	 * two grace periods.
> > +	 */
> > +	if (nvme_ns_head_multipath(ns->head))
> > +		blk_freeze_queue_start(ns->head->disk->queue);
>
> I really don't like how we keep this state around the calls into
> the command set specific nvme_update_ns_info_* calls, and how it
> feels very open coded.
>
> And while looking at it, I think we might even want the multipath
> queue frozen over the actual update as well, as there is no point
> in sending the I/O even to other paths while we are updating the
> information, as they'll probably fail.
>
> Here is what might make sense:
>
>  1) move the code past the csi switch statement in nvme_update_ns_info
>     into a new nvme_update_ns_info_common helper and call that
>     from nvme_update_ns_info_block and nvme_update_ns_info_generic.
>     Maybe having another helper just for the multipath updates might
>     also be worthwhile.
>  2) add new nvme_freeze_ns helpers that freeze the ns and nshead
>     queues and use those in nvme_update_ns_info_block and
>     nvme_update_ns_info_generic over the entire update
I'm not sure I understood you correctly, but a single helper for
unfreezing both the path and the multipath queues won't work. The
following code block
	if (blk_queue_is_zoned(ns->queue)) {
		ret = blk_revalidate_disk_zones(ns->disk);
		if (ret && !nvme_first_scan(ns->disk))
			goto out;
	}
must be called from nvme_update_ns_info_block() after unfreezing the
path queue, but its result is used in the code that must run with the
multipath queue still frozen. So we can freeze both queues at the same
time, but we must unfreeze them separately.
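To illustrate the ordering I have in mind (a sketch only; the exact
placement of the existing calls is simplified, and the error handling
is elided):

```c
	/* Freeze both queues up front, so the two RCU grace periods overlap. */
	blk_freeze_queue_start(ns->head->disk->queue);	/* multipath queue */
	blk_mq_freeze_queue(ns->queue);			/* path queue */

	/* ... update the namespace information ... */

	/* The path queue must be unfrozen before zone revalidation ... */
	blk_mq_unfreeze_queue(ns->queue);

	if (blk_queue_is_zoned(ns->queue)) {
		ret = blk_revalidate_disk_zones(ns->disk);
		if (ret && !nvme_first_scan(ns->disk))
			goto out;
	}

	/*
	 * ... but its result is consumed while the multipath queue is
	 * still frozen, so that queue is unfrozen only here.
	 */
	blk_mq_unfreeze_queue(ns->head->disk->queue);
```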
Still trying to work this out.
Martin
Thread overview: 4+ messages
2024-08-26 16:39 [PATCH v3] nvme: core: shorten duration of multipath namespace rescan Martin Wilck
2024-08-27 6:33 ` Christoph Hellwig
2024-08-27 15:42 ` Martin Wilck [this message]
2024-08-29 6:30 ` Christoph Hellwig