From: Chaitanya Kulkarni <kch@nvidia.com>
To: <hch@lst.de>, <sagi@grimberg.me>, <roys@lightbitslabs.com>,
<kbusch@kernel.org>
Cc: <linux-nvme@lists.infradead.org>,
Chaitanya Kulkarni <kch@nvidia.com>, <stable@vger.kernel.org>
Subject: [PATCH] nvmet: avoid recursive nvmet-wq flush in nvmet_ctrl_free
Date: Wed, 8 Apr 2026 17:56:47 -0700
Message-ID: <20260409005647.112289-1-kch@nvidia.com>
nvmet_tcp_release_queue_work() runs on nvmet-wq and can drop the
final controller reference through nvmet_cq_put(). If that triggers
nvmet_ctrl_free(), the teardown path flushes ctrl->async_event_work on
the same nvmet-wq.
Call chain:

  nvmet_tcp_schedule_release_queue()
    kref_put(&queue->kref, nvmet_tcp_release_queue)
      nvmet_tcp_release_queue()
        queue_work(nvmet_wq, &queue->release_work)     <--- nvmet_wq
  process_one_work()
    nvmet_tcp_release_queue_work()
      nvmet_cq_put(&queue->nvme_cq)
        nvmet_cq_destroy()
          nvmet_ctrl_put(cq->ctrl)
            nvmet_ctrl_free()
              flush_work(&ctrl->async_event_work)      <--- nvmet_wq

The async_event_work being flushed was previously scheduled by:

  nvmet_add_async_event()
    queue_work(nvmet_wq, &ctrl->async_event_work);
This trips lockdep with a "possible recursive locking detected" warning:
[ 5223.015876] run blktests nvme/003 at 2026-04-07 20:53:55
[ 5223.061801] loop0: detected capacity change from 0 to 2097152
[ 5223.072206] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 5223.088368] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[ 5223.126086] nvmet: Created discovery controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
[ 5223.128453] nvme nvme1: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 127.0.0.1:4420, hostnqn: nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
[ 5233.199447] nvme nvme1: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
[ 5233.227718] ============================================
[ 5233.231283] WARNING: possible recursive locking detected
[ 5233.234696] 7.0.0-rc3nvme+ #20 Tainted: G O N
[ 5233.238434] --------------------------------------------
[ 5233.241852] kworker/u192:6/2413 is trying to acquire lock:
[ 5233.245429] ffff888111632548 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x26/0x90
[ 5233.251438]
but task is already holding lock:
[ 5233.255254] ffff888111632548 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: process_one_work+0x5cc/0x6e0
[ 5233.261125]
other info that might help us debug this:
[ 5233.265333] Possible unsafe locking scenario:
[ 5233.269217] CPU0
[ 5233.270795] ----
[ 5233.272436] lock((wq_completion)nvmet-wq);
[ 5233.275241] lock((wq_completion)nvmet-wq);
[ 5233.278020]
*** DEADLOCK ***
[ 5233.281793] May be due to missing lock nesting notation
[ 5233.286195] 3 locks held by kworker/u192:6/2413:
[ 5233.289192] #0: ffff888111632548 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: process_one_work+0x5cc/0x6e0
[ 5233.294569] #1: ffffc9000e2a7e40 ((work_completion)(&queue->release_work)){+.+.}-{0:0}, at: process_one_work+0x1c5/0x6e0
[ 5233.300128] #2: ffffffff82d7dc40 (rcu_read_lock){....}-{1:3}, at: __flush_work+0x62/0x530
[ 5233.304290]
stack backtrace:
[ 5233.306520] CPU: 4 UID: 0 PID: 2413 Comm: kworker/u192:6 Tainted: G O N 7.0.0-rc3nvme+ #20 PREEMPT(full)
[ 5233.306524] Tainted: [O]=OOT_MODULE, [N]=TEST
[ 5233.306525] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.17.0-0-gb52ca86e094d-prebuilt.qemu.org 04/01/2014
[ 5233.306527] Workqueue: nvmet-wq nvmet_tcp_release_queue_work [nvmet_tcp]
[ 5233.306532] Call Trace:
[ 5233.306534] <TASK>
[ 5233.306536] dump_stack_lvl+0x73/0xb0
[ 5233.306552] print_deadlock_bug+0x225/0x2f0
[ 5233.306556] __lock_acquire+0x13f0/0x2290
[ 5233.306563] lock_acquire+0xd0/0x300
[ 5233.306565] ? touch_wq_lockdep_map+0x26/0x90
[ 5233.306571] ? __flush_work+0x20b/0x530
[ 5233.306573] ? touch_wq_lockdep_map+0x26/0x90
[ 5233.306577] touch_wq_lockdep_map+0x3b/0x90
[ 5233.306580] ? touch_wq_lockdep_map+0x26/0x90
[ 5233.306583] ? __flush_work+0x20b/0x530
[ 5233.306585] __flush_work+0x268/0x530
[ 5233.306588] ? __pfx_wq_barrier_func+0x10/0x10
[ 5233.306594] ? xen_error_entry+0x30/0x60
[ 5233.306600] nvmet_ctrl_free+0x140/0x310 [nvmet]
[ 5233.306617] nvmet_cq_put+0x74/0x90 [nvmet]
[ 5233.306629] nvmet_tcp_release_queue_work+0x19f/0x360 [nvmet_tcp]
[ 5233.306634] process_one_work+0x206/0x6e0
[ 5233.306640] worker_thread+0x184/0x320
[ 5233.306643] ? __pfx_worker_thread+0x10/0x10
[ 5233.306646] kthread+0xf1/0x130
[ 5233.306648] ? __pfx_kthread+0x10/0x10
[ 5233.306651] ret_from_fork+0x355/0x450
[ 5233.306653] ? __pfx_kthread+0x10/0x10
[ 5233.306656] ret_from_fork_asm+0x1a/0x30
[ 5233.306664] </TASK>
There is also no need to flush async_event_work from controller
teardown: the admin queue teardown already fails any outstanding AER
requests before the final controller put:

  nvmet_sq_destroy(admin sq)
    nvmet_async_events_failall(ctrl)
The controller has already been removed from the subsystem list before
nvmet_ctrl_free() quiesces outstanding work.
Replace flush_work() with cancel_work_sync() so a pending
async_event_work item is canceled and a running instance is waited on
without recursing into the same workqueue.
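The hazard is the generic "never flush your own workqueue from a work
item running on it" rule. As a purely illustrative userspace analogy
(not kernel code; the single-worker executor stands in for nvmet-wq),
cancelling a pending item avoids having to wait for the queue to drain
from inside the queue itself:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# One worker thread plays the role of the nvmet-wq worker.
wq = ThreadPoolExecutor(max_workers=1)

def blocker():
    # Stands in for the release_work item currently occupying the worker.
    time.sleep(0.2)

def async_event_work():
    return "aer"

wq.submit(blocker)                 # worker is now busy
aer = wq.submit(async_event_work)  # queued behind it, like a pending AEN

# cancel_work_sync() analogue: remove the pending item instead of
# waiting for the whole queue to drain. Waiting on the queue from a
# task running *inside* this same single worker would never complete.
print(aer.cancel())                # True: pending item removed unrun

wq.shutdown()
```

This mirrors the fix: cancel_work_sync() dequeues a pending
async_event_work (or waits only for the one running instance) rather
than flushing, which lockdep models as re-acquiring the workqueue's
completion "lock" already held by the running work item.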
Fixes: 06406d81a2d7 ("nvmet: cancel fatal error and flush async work before free controller")
Cc: stable@vger.kernel.org
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
drivers/nvme/target/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 33db6c5534e2..a87567f40c91 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1749,7 +1749,7 @@ static void nvmet_ctrl_free(struct kref *ref)
nvmet_stop_keep_alive_timer(ctrl);
- flush_work(&ctrl->async_event_work);
+ cancel_work_sync(&ctrl->async_event_work);
cancel_work_sync(&ctrl->fatal_err_work);
nvmet_destroy_auth(ctrl);
--
2.39.5