From: Hannes Reinecke <hare@suse.de>
To: Maurizio Lombardi <mlombard@redhat.com>, kbusch@kernel.org
Cc: mheyne@amazon.de, emilne@redhat.com, jmeneghi@redhat.com,
linux-nvme@lists.infradead.org, dwagner@suse.de,
mlombard@arkamax.eu, mkhalfella@purestorage.com,
chaitanyak@nvidia.com, hare@kernel.org, hch@lst.de
Subject: Re: [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl
Date: Mon, 11 May 2026 11:53:35 +0200
Message-ID: <837128f8-bff3-41da-8960-f43ef79797eb@suse.de>
In-Reply-To: <20260508133335.98612-8-mlombard@redhat.com>
On 5/8/26 15:33, Maurizio Lombardi wrote:
> Currently, the final reference for the fabrics admin queue (fabrics_q)
> is dropped inside nvme_remove_admin_tag_set(). However, the primary
> admin queue (admin_q) defers dropping its final reference until
> nvme_free_ctrl().
>
> Move the blk_put_queue() call for fabrics_q from nvme_remove_admin_tag_set()
> to nvme_free_ctrl(). This aligns the lifecycle management of both admin
> queues, ensuring they are freed symmetrically when the controller is finally
> torn down.
>
> Reviewed-by: Daniel Wagner <dwagner@suse.de>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
>  drivers/nvme/host/core.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 5d3200a66f8e..73575d087a07 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -4932,10 +4932,8 @@ void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
>  	 */
>  	nvme_stop_keep_alive(ctrl);
>  	blk_mq_destroy_queue(ctrl->admin_q);
> -	if (ctrl->ops->flags & NVME_F_FABRICS) {
> +	if (ctrl->ops->flags & NVME_F_FABRICS)
>  		blk_mq_destroy_queue(ctrl->fabrics_q);
> -		blk_put_queue(ctrl->fabrics_q);
> -	}
>  	blk_mq_free_tag_set(ctrl->admin_tagset);
>  }
>  EXPORT_SYMBOL_GPL(nvme_remove_admin_tag_set);
> @@ -5077,6 +5075,8 @@ static void nvme_free_ctrl(struct device *dev)
>
>  	if (ctrl->admin_q)
>  		blk_put_queue(ctrl->admin_q);
> +	if (ctrl->fabrics_q)
> +		blk_put_queue(ctrl->fabrics_q);
>  	if (!subsys || ctrl->instance != subsys->instance)
>  		ida_free(&nvme_instance_ida, ctrl->instance);
>  	nvme_free_cels(ctrl);
One wonders why we check for 'flags' in the first hunk, but for the
existence of 'fabrics_q' in the second hunk.
But anyway.
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich