From: Ming Lei <ming.lei@redhat.com>
To: Yi Zhang <yi.zhang@redhat.com>
Cc: Chaitanya Kulkarni <chaitanyak@nvidia.com>,
justintee8345@gmail.com,
Chaitanya Kulkarni <ckulkarnilinux@gmail.com>,
Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>,
"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>,
linux-block <linux-block@vger.kernel.org>,
Daniel Wagner <dwagner@suse.de>, Keith Busch <kbusch@kernel.org>
Subject: Re: [bug report] kmemleak observed during blktests nvme/fc
Date: Fri, 30 Jan 2026 15:45:21 +0800 [thread overview]
Message-ID: <aXxhkUXXOlrRw1sG@fedora> (raw)
In-Reply-To: <CAHj4cs-84wsWQKeBS2wkd-_Y4Xe7wcTPqzutisNbXtpcAdh8yw@mail.gmail.com>
On Thu, Jan 15, 2026 at 05:24:58PM +0800, Yi Zhang wrote:
> Hi Justin and Chaitanya
>
> It turns out that the kmemleak was caused by nvme-loop. It was
> observed during a stress run of the nvme loop/tcp/fc tests [1], but
> the kmemleak log was only reported during the nvme/fc part of the
> run. That is why I could not reproduce it with a stress nvme/fc test
> alone before.
>
> [1]
> nvme_trtype=loop ./check nvme/
> nvme_trtype=tcp ./check nvme/
> nvme_trtype=fc ./check nvme/
>
> unreferenced object 0xffff8881295fd000 (size 1024):
>   comm "nvme", pid 101335, jiffies 4299282670
>   hex dump (first 32 bytes):
>     00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00  .....N..........
>     ff ff ff ff ff ff ff ff e0 3c 57 af ff ff ff ff  .........<W.....
>   backtrace (crc 414bcfcd):
>     __kmalloc_cache_node_noprof+0x5f9/0x840
>     blk_mq_alloc_hctx+0x52/0x810
>     blk_mq_alloc_and_init_hctx+0x5b9/0x840
>     __blk_mq_realloc_hw_ctxs+0x20a/0x610
>     blk_mq_init_allocated_queue+0x2e9/0x1210
>     blk_mq_alloc_queue+0x17f/0x230
>     nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
>     nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
>     nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
>     nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
>     nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
>     vfs_write+0x1d0/0xfd0
>     ksys_write+0xf9/0x1d0
>     do_syscall_64+0x95/0x520
>     entry_SYSCALL_64_after_hwframe+0x76/0x7e
This looks like a regression from commit 03b3bcd319b3 ("nvme: fix admin
request_queue lifetime"). Can you try the following fix?
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 19b67cf5d550..64db8e3d8fd8 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4848,6 +4848,15 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 	struct queue_limits lim = {};
 	int ret;
 
+	/*
+	 * If a previous admin queue exists (e.g., from before a reset),
+	 * put it now before allocating a new one to avoid orphaning it.
+	 */
+	if (ctrl->admin_q) {
+		blk_put_queue(ctrl->admin_q);
+		ctrl->admin_q = NULL;
+	}
+
 	memset(set, 0, sizeof(*set));
 	set->ops = ops;
 	set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
Thanks,
Ming
Thread overview: 7+ messages
2025-12-11 15:40 [bug report] kmemleak observed during blktests nvme/fc Yi Zhang
2025-12-15 3:44 ` Chaitanya Kulkarni
2025-12-18 19:41 ` Chaitanya Kulkarni
2025-12-27 12:10 ` Yi Zhang
2026-01-15 9:24 ` Yi Zhang
2026-01-30 7:45 ` Ming Lei [this message]
2026-01-31 13:00 ` Yi Zhang