public inbox for linux-block@vger.kernel.org
From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Jens Axboe <axboe@kernel.dk>, Omar Sandoval <osandov@fb.com>,
	Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>, Yu Kuai <yukuai@fnnas.com>,
	Nilay Shroff <nilay@linux.ibm.com>,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	Mohamed Khalfella <mkhalfella@purestorage.com>
Subject: [PATCH] blk-mq-debugfs: Fix warning about possible deadlock
Date: Fri, 13 Feb 2026 14:45:04 -0800	[thread overview]
Message-ID: <20260213224504.3990506-1-mkhalfella@purestorage.com> (raw)

Commit 65d466b62984 ("blk-mq-debugfs: warn about possible deadlock")
added a WARN_ON_ONCE() to debugfs_create_files() to flag a potential
deadlock when debugfs entries are created while the queue is frozen. We
hit this warning with the stacktrace below.

WARNING: block/blk-mq-debugfs.c:620 at debugfs_create_files+0x9c/0x1d0
Workqueue: nvme-wq nvme_tcp_reconnect_ctrl_work [nvme_tcp]
RIP: 0010:debugfs_create_files+0x9c/0x1d0
Call Trace:
 <TASK>
 blk_mq_debugfs_register_hctx+0x186/0x320
 blk_mq_debugfs_register_hctxs+0x80/0xa0
 blk_mq_update_nr_hw_queues+0xc8f/0xd50
 nvme_tcp_setup_ctrl+0xcfa/0xda0 [nvme_tcp]
 nvme_tcp_reconnect_ctrl_work+0x51/0x180 [nvme_tcp]
 process_scheduled_works+0x840/0xd80
 worker_thread+0x36d/0x4a0
 kthread+0x366/0x380
 ret_from_fork+0x7b/0x670
 ret_from_fork_asm+0x1a/0x30
 </TASK>

Here __blk_mq_update_nr_hw_queues() sets the PF_MEMALLOC_NOIO flag
before calling blk_mq_debugfs_register_hctxs(), which guarantees that
the described deadlock cannot happen: debugfs allocations made under
this flag will not recurse into the I/O path. Warn only if the queue is
frozen and PF_MEMALLOC_NOIO is not set.

Fixes: 65d466b62984 ("blk-mq-debugfs: warn about possible deadlock")
Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
---
 block/blk-mq-debugfs.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index faeaa1fc86a7..ec6670dc0a3d 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -612,12 +612,14 @@ static void debugfs_create_files(struct request_queue *q, struct dentry *parent,
 				 void *data,
 				 const struct blk_mq_debugfs_attr *attr)
 {
+	unsigned int pflags = READ_ONCE(current->flags);
+
 	lockdep_assert_held(&q->debugfs_mutex);
 	/*
 	 * Creating new debugfs entries with queue freezed has the risk of
 	 * deadlock.
 	 */
-	WARN_ON_ONCE(q->mq_freeze_depth != 0);
+	WARN_ON_ONCE((q->mq_freeze_depth != 0) && !(pflags & PF_MEMALLOC_NOIO));
 	/*
 	 * debugfs_mutex should not be nested under other locks that can be
 	 * grabbed while queue is frozen.

base-commit: cd7a5651db263b5384aef1950898e5e889425134
-- 
2.52.0


Thread overview: 3 messages
2026-02-13 22:45 Mohamed Khalfella [this message]
2026-02-15 13:36 ` [PATCH] blk-mq-debugfs: Fix warning about possible deadlock Nilay Shroff
2026-02-16  2:45   ` Mohamed Khalfella
