public inbox for linux-nvme@lists.infradead.org
From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: "kbusch@kernel.org" <kbusch@kernel.org>
Cc: "wagi@monom.org" <wagi@monom.org>, "hch@lst.de" <hch@lst.de>,
	"shinichiro.kawasaki@wdc.com" <shinichiro.kawasaki@wdc.com>,
	"sagi@grimberg.me" <sagi@grimberg.me>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Chaitanya Kulkarni <chaitanyak@nvidia.com>
Subject: Re: [PATCH V2] nvmet: move async event work off nvmet-wq
Date: Mon, 13 Apr 2026 00:11:03 +0000	[thread overview]
Message-ID: <3c93edf3-87fd-486f-bb00-8db2cf925641@nvidia.com> (raw)
In-Reply-To: <010359e2-c219-4e2e-8b5b-5e86eda5653f@nvidia.com>

Keith,

On 3/9/26 22:44, Chaitanya Kulkarni wrote:
> On 2/25/26 20:30, Chaitanya Kulkarni wrote:
>> On the target side, nvmet_ctrl_free() flushes ctrl->async_event_work.
>> If nvmet_ctrl_free() itself runs on nvmet-wq, the flush re-enters
>> workqueue completion for the same worker:
>>
>> A. Async event work queued on nvmet-wq (prior to disconnect):
>>    nvmet_execute_async_event()
>>       queue_work(nvmet_wq, &ctrl->async_event_work)
>>
>>    nvmet_add_async_event()
>>       queue_work(nvmet_wq, &ctrl->async_event_work)
>>
>> B. Full pre-work chain (RDMA CM path):
>>    nvmet_rdma_cm_handler()
>>       nvmet_rdma_queue_disconnect()
>>         __nvmet_rdma_queue_disconnect()
>>           queue_work(nvmet_wq, &queue->release_work)
>>             process_one_work()
>>               lock((wq_completion)nvmet-wq)  <--------- 1st
>>               nvmet_rdma_release_queue_work()
>>
>> C. Recursive path (same worker):
>>    nvmet_rdma_release_queue_work()
>>       nvmet_rdma_free_queue()
>>         nvmet_sq_destroy()
>>           nvmet_ctrl_put()
>>             nvmet_ctrl_free()
>>               flush_work(&ctrl->async_event_work)
>>                 __flush_work()
>>                   touch_wq_lockdep_map()
>>                   lock((wq_completion)nvmet-wq) <--------- 2nd
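The cycle above can be mimicked in userspace with a minimal Python analogue (this is an illustration only, not the nvmet code; all names below are made up). A single-worker pool stands in for one nvmet-wq worker context: the "release work" queues an "async event work" item on the same pool and then blocks waiting for it, so the wait can never make progress because the only worker is the one doing the waiting.

```python
# Userspace analogue of the recursive-flush hazard (hypothetical names,
# not the nvmet code): one worker thread, like a single nvmet-wq context.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

pool = ThreadPoolExecutor(max_workers=1)

def async_event_work():
    return "aen done"

def release_work():
    # flush_work() analogue: queue on the same pool and block on completion.
    # The inner item can never run while this worker is blocked here.
    inner = pool.submit(async_event_work)
    return inner.result(timeout=1)

outer = pool.submit(release_work)
try:
    outer.result(timeout=5)
    deadlocked = False
except TimeoutError:
    deadlocked = True

pool.shutdown(wait=False, cancel_futures=True)
print(deadlocked)
```

On the real nvmet-wq there may be concurrency available, so the flush can still complete in practice; that is why this surfaces as a lockdep warning about a possible deadlock rather than an immediate hang. The single-worker pool just makes the dependency cycle deterministic.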
>>
>> Lockdep splat:
>>
>>    ============================================
>>    WARNING: possible recursive locking detected
>>    6.19.0-rc3nvme+ #14 Tainted: G                 N
>>    --------------------------------------------
>>    kworker/u192:42/44933 is trying to acquire lock:
>>    ffff888118a00948 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: touch_wq_lockdep_map+0x26/0x90
>>
>>    but task is already holding lock:
>>    ffff888118a00948 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: process_one_work+0x53e/0x660
>>
>>    3 locks held by kworker/u192:42/44933:
>>     #0: ffff888118a00948 ((wq_completion)nvmet-wq){+.+.}-{0:0}, at: process_one_work+0x53e/0x660
>>     #1: ffffc9000e6cbe28 ((work_completion)(&queue->release_work)){+.+.}-{0:0}, at: process_one_work+0x1c5/0x660
>>     #2: ffffffff82d4db60 (rcu_read_lock){....}-{1:3}, at: __flush_work+0x62/0x530
>>
>>    Workqueue: nvmet-wq nvmet_rdma_release_queue_work [nvmet_rdma]
>>    Call Trace:
>>     __flush_work+0x268/0x530
>>     nvmet_ctrl_free+0x140/0x310 [nvmet]
>>     nvmet_cq_put+0x74/0x90 [nvmet]
>>     nvmet_rdma_free_queue+0x23/0xe0 [nvmet_rdma]
>>     nvmet_rdma_release_queue_work+0x19/0x50 [nvmet_rdma]
>>     process_one_work+0x206/0x660
>>     worker_thread+0x184/0x320
>>     kthread+0x10c/0x240
>>     ret_from_fork+0x319/0x390
>>
>> Move async event work to a dedicated nvmet-aen-wq to avoid reentrant
>> flush on nvmet-wq.
>>
>> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
>> ---
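The effect of the fix can be sketched in the same userspace analogue (again, hypothetical names for illustration): the blocking flush now waits on a work item owned by a separate pool, standing in for the dedicated nvmet-aen-wq, so the waiting worker no longer depends on itself and the flush completes.

```python
# Analogue of the fix (hypothetical names): the flush targets a
# *different* pool, so the worker never waits on its own queue.
from concurrent.futures import ThreadPoolExecutor

wq = ThreadPoolExecutor(max_workers=1)      # stands in for nvmet-wq
aen_wq = ThreadPoolExecutor(max_workers=1)  # stands in for nvmet-aen-wq

def async_event_work():
    return "aen done"

def release_work():
    # flush_work() analogue: the wait is now on a separate pool,
    # which has its own idle worker available to run the item.
    return aen_wq.submit(async_event_work).result(timeout=5)

result = wq.submit(release_work).result(timeout=5)
wq.shutdown()
aen_wq.shutdown()
print(result)
```

Breaking the self-dependency this way also breaks the lockdep class recursion, since the flush now touches a different workqueue's lockdep map than the one the worker already holds.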
>
>
> Can we please merge this?
>
> -ck
>
>
It looks like this patch has not been merged; can you please pick it up?
It has Christoph's Reviewed-by:

https://lists.infradead.org/pipermail/linux-nvme/2026-February/061381.html

-ck




Thread overview: 7+ messages
2026-02-26  4:30 [PATCH V2] nvmet: move async event work off nvmet-wq Chaitanya Kulkarni
2026-02-26 15:33 ` Christoph Hellwig
2026-03-10  5:44 ` Chaitanya Kulkarni
2026-04-13  0:11   ` Chaitanya Kulkarni [this message]
2026-04-13 15:24     ` Keith Busch
2026-04-13 17:18       ` Chaitanya Kulkarni
2026-03-10 14:23 ` Keith Busch

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=3c93edf3-87fd-486f-bb00-8db2cf925641@nvidia.com \
    --to=chaitanyak@nvidia.com \
    --cc=hch@lst.de \
    --cc=kbusch@kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=sagi@grimberg.me \
    --cc=shinichiro.kawasaki@wdc.com \
    --cc=wagi@monom.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox