* [bug report] kmemleak observed during blktests nvme/fc
@ 2025-12-11 15:40 Yi Zhang
2025-12-15 3:44 ` Chaitanya Kulkarni
0 siblings, 1 reply; 7+ messages in thread
From: Yi Zhang @ 2025-12-11 15:40 UTC (permalink / raw)
To: linux-block, open list:NVM EXPRESS DRIVER
Cc: Shinichiro Kawasaki, Daniel Wagner
Hi,

The following kmemleak was observed during blktests nvme/fc. Please help
check it, and let me know if you need any more info or testing from me.
Thanks.
commit d678712ead7318d5650158aa00113f63ccd4e210
Merge: 95ed689e9f30 a0750fae73c5
Author: Jens Axboe <axboe@kernel.dk>
Date: Wed Dec 10 13:41:17 2025 -0700
Merge branch 'block-6.19' into for-next
* block-6.19:
blk-mq-dma: always initialize dma state
# cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff88826cab51c0 (size 2488):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
60 1a be c1 ff ff ff ff c0 2b 05 73 77 60 00 00 `........+.sw`..
backtrace (crc 155ec6c5):
kmem_cache_alloc_node_noprof+0x5e4/0x830
blk_alloc_queue+0x30/0x700
blk_mq_alloc_queue+0x14b/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8883428ec400 (size 96):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes):
00 c4 8e 42 83 88 ff ff 00 c4 8e 42 83 88 ff ff ...B.......B....
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
backtrace (crc 1deeea82):
__kmalloc_cache_noprof+0x5de/0x820
blk_alloc_queue_stats+0x3f/0x100
blk_alloc_queue+0xc0/0x700
blk_mq_alloc_queue+0x14b/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object (percpu) 0x60777301a898 (size 8):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 8 bytes on cpu 9):
00 00 00 00 00 00 00 00 ........
backtrace (crc 0):
pcpu_alloc_noprof+0x5e0/0xf10
percpu_ref_init+0x2c/0x330
blk_alloc_queue+0x533/0x700
blk_mq_alloc_queue+0x14b/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8881a20fbf80 (size 64):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 80 9e db 8f ff ff ff ff ................
00 00 00 00 00 00 00 00 03 00 00 00 00 00 00 00 ................
backtrace (crc 8cfdd87d):
__kmalloc_cache_noprof+0x5de/0x820
percpu_ref_init+0xbf/0x330
blk_alloc_queue+0x533/0x700
blk_mq_alloc_queue+0x14b/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8883428ec600 (size 96):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 08 c6 8e 42 83 88 ff ff ...........B....
08 c6 8e 42 83 88 ff ff 00 00 00 00 00 00 00 00 ...B............
backtrace (crc af4dc711):
__kmalloc_cache_noprof+0x5de/0x820
blk_mq_init_allocated_queue+0xce/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object (percpu) 0x607773052bc0 (size 256):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes on cpu 9):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff 60 3c 17 97 ff ff ff ff ........`<......
backtrace (crc ce57ad5e):
pcpu_alloc_noprof+0x5e0/0xf10
blk_mq_init_allocated_queue+0xf0/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8881459079e0 (size 8):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 8 bytes):
00 a0 9e 43 82 88 ff ff ...C....
backtrace (crc 69c4a0b3):
__kmalloc_node_noprof+0x6ab/0x970
__blk_mq_realloc_hw_ctxs+0x361/0x5a0
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8882439ea000 (size 1024):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff e0 3c 17 97 ff ff ff ff .........<......
backtrace (crc 66835ea5):
__kmalloc_cache_node_noprof+0x5f9/0x840
blk_mq_alloc_hctx+0x52/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x5a0
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8881459072a0 (size 8):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 8 bytes):
ff ff 00 00 00 00 00 00 ........
backtrace (crc b47d4cd6):
__kmalloc_node_noprof+0x6ab/0x970
alloc_cpumask_var_node+0x56/0xb0
blk_mq_alloc_hctx+0x74/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x5a0
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff88814b47b400 (size 128):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes):
c0 6b f1 fb ff e8 ff ff c0 6b 31 fc ff e8 ff ff .k.......k1.....
c0 6b 71 fc ff e8 ff ff c0 6b b1 fc ff e8 ff ff .kq......k......
backtrace (crc d04b4dbc):
__kmalloc_node_noprof+0x6ab/0x970
blk_mq_alloc_hctx+0x43a/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x5a0
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff888256326c00 (size 512):
comm "nvme", pid 84134, jiffies 4304631753
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace (crc a9e88d35):
__kvmalloc_node_noprof+0x814/0xb30
sbitmap_init_node+0x184/0x730
blk_mq_alloc_hctx+0x4b3/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x5a0
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
0xffffffffc11de07f
0xffffffffc11dfc28
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
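As a quick aside, all of the records above come from the same pid and the
same allocation path, and each leak site is identified by its backtrace CRC,
so a dump like this can be summarized by grouping on that CRC. A minimal
sketch (the `sample` string is an illustrative stand-in for the real
/sys/kernel/debug/kmemleak contents, not the full dump):

```python
import re
from collections import Counter

def summarize_kmemleak(report: str) -> Counter:
    """Count kmemleak records per backtrace CRC (one CRC per leak site)."""
    return Counter(re.findall(r"backtrace \(crc ([0-9a-f]+)\):", report))

# Illustrative stand-in for `cat /sys/kernel/debug/kmemleak` output.
sample = """\
unreferenced object 0xffff88826cab51c0 (size 2488):
  backtrace (crc 155ec6c5):
    blk_alloc_queue+0x30/0x700
unreferenced object 0xffff8883428ec400 (size 96):
  backtrace (crc 1deeea82):
    blk_alloc_queue_stats+0x3f/0x100
"""
print(summarize_kmemleak(sample))
```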
--
Best Regards,
Yi Zhang
^ permalink raw reply [flat|nested] 7+ messages in thread

* Re: [bug report] kmemleak observed during blktests nvme/fc
2025-12-11 15:40 [bug report] kmemleak observed during blktests nvme/fc Yi Zhang
@ 2025-12-15 3:44 ` Chaitanya Kulkarni
2025-12-18 19:41 ` Chaitanya Kulkarni
0 siblings, 1 reply; 7+ messages in thread
From: Chaitanya Kulkarni @ 2025-12-15 3:44 UTC (permalink / raw)
To: Yi Zhang
Cc: Shinichiro Kawasaki, open list:NVM EXPRESS DRIVER, linux-block,
Daniel Wagner, Chaitanya Kulkarni
On 12/11/25 07:40, Yi Zhang wrote:
> Hi
> The following kmemleak was observed during blktests nvme/fc, please
> help check it and let me know if you need any info/test for it,
> thanks.
>
> commit d678712ead7318d5650158aa00113f63ccd4e210
> Merge: 95ed689e9f30 a0750fae73c5
> Author: Jens Axboe <axboe@kernel.dk>
> Date: Wed Dec 10 13:41:17 2025 -0700
>
> Merge branch 'block-6.19' into for-next
>
> * block-6.19:
> blk-mq-dma: always initialize dma state
>
> # cat /sys/kernel/debug/kmemleak
> unreferenced object 0xffff88826cab51c0 (size 2488):
> comm "nvme", pid 84134, jiffies 4304631753
> hex dump (first 32 bytes):
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> 60 1a be c1 ff ff ff ff c0 2b 05 73 77 60 00 00 `........+.sw`..
> backtrace (crc 155ec6c5):
> kmem_cache_alloc_node_noprof+0x5e4/0x830
> blk_alloc_queue+0x30/0x700
> blk_mq_alloc_queue+0x14b/0x230
> nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
> 0xffffffffc11de07f
> 0xffffffffc11dfc28
> nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
> nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
> vfs_write+0x1d0/0xfd0
> ksys_write+0xf9/0x1d0
> do_syscall_64+0x95/0x520
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
Can you try the following? FYI: potential fix, only compile tested.
From b3c2e350ae741b18c04abe489dcf9d325537c01c Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
Date: Sun, 14 Dec 2025 19:29:24 -0800
Subject: [PATCH COMPILE TESTED ONLY] nvme-fc: release admin tagset if init fails
nvme_fabrics creates an NVMe/FC controller in the following path:
nvmf_dev_write()
-> nvmf_create_ctrl()
-> nvme_fc_create_ctrl()
-> nvme_fc_init_ctrl()
Check ctrl->ctrl.admin_tagset in the fail_ctrl path and call
nvme_remove_admin_tag_set() to release the resources.
Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
---
drivers/nvme/host/fc.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index bc455fa98246..6948de3f438a 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3587,6 +3587,8 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
 
 	ctrl->ctrl.opts = NULL;
 
+	if (ctrl->ctrl.admin_tagset)
+		nvme_remove_admin_tag_set(&ctrl->ctrl);
 	/* initiate nvme ctrl ref counting teardown */
 	nvme_uninit_ctrl(&ctrl->ctrl);
--
2.40.0
-ck
^ permalink raw reply related [flat|nested] 7+ messages in thread

* Re: [bug report] kmemleak observed during blktests nvme/fc
2025-12-15 3:44 ` Chaitanya Kulkarni
@ 2025-12-18 19:41 ` Chaitanya Kulkarni
2025-12-27 12:10 ` Yi Zhang
0 siblings, 1 reply; 7+ messages in thread
From: Chaitanya Kulkarni @ 2025-12-18 19:41 UTC (permalink / raw)
To: Chaitanya Kulkarni, Yi Zhang
Cc: Shinichiro Kawasaki, open list:NVM EXPRESS DRIVER, linux-block,
Daniel Wagner, Chaitanya Kulkarni
On 12/14/25 7:44 PM, Chaitanya Kulkarni wrote:
> On 12/11/25 07:40, Yi Zhang wrote:
>> Hi
>> The following kmemleak was observed during blktests nvme/fc, please
>> help check it and let me know if you need any info/test for it,
>> thanks.
>>
>> commit d678712ead7318d5650158aa00113f63ccd4e210
>> Merge: 95ed689e9f30 a0750fae73c5
>> Author: Jens Axboe <axboe@kernel.dk>
>> Date: Wed Dec 10 13:41:17 2025 -0700
>>
>> Merge branch 'block-6.19' into for-next
>>
>> * block-6.19:
>> blk-mq-dma: always initialize dma state
>>
>> # cat /sys/kernel/debug/kmemleak
>> unreferenced object 0xffff88826cab51c0 (size 2488):
>> comm "nvme", pid 84134, jiffies 4304631753
>> hex dump (first 32 bytes):
>> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
>> 60 1a be c1 ff ff ff ff c0 2b 05 73 77 60 00 00 `........+.sw`..
>> backtrace (crc 155ec6c5):
>> kmem_cache_alloc_node_noprof+0x5e4/0x830
>> blk_alloc_queue+0x30/0x700
>> blk_mq_alloc_queue+0x14b/0x230
>> nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
>> 0xffffffffc11de07f
>> 0xffffffffc11dfc28
>> nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
>> nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
>> vfs_write+0x1d0/0xfd0
>> ksys_write+0xf9/0x1d0
>> do_syscall_64+0x95/0x520
>> entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
>
> Can you try following ? FYI : - Potential fix, only compile tested.
>
> From b3c2e350ae741b18c04abe489dcf9d325537c01c Mon Sep 17 00:00:00 2001
> From: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
> Date: Sun, 14 Dec 2025 19:29:24 -0800
> Subject: [PATCH COMPILE TESTED ONLY] nvme-fc: release admin tagset if
> init fails
>
> nvme_fabrics creates an NVMe/FC controller in following path:
>
> nvmf_dev_write()
> -> nvmf_create_ctrl()
> -> nvme_fc_create_ctrl()
> -> nvme_fc_init_ctrl()
>
> Check ctrl->ctrl.admin_tagset in the fail_ctrl path and call
> nvme_remove_admin_tag_set() to release the resources.
>
> Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
> ---
> drivers/nvme/host/fc.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index bc455fa98246..6948de3f438a 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -3587,6 +3587,8 @@ nvme_fc_init_ctrl(struct device *dev, struct
> nvmf_ctrl_options *opts,
>
> ctrl->ctrl.opts = NULL;
>
> + if (ctrl->ctrl.admin_tagset)
> + nvme_remove_admin_tag_set(&ctrl->ctrl);
> /* initiate nvme ctrl ref counting teardown */
> nvme_uninit_ctrl(&ctrl->ctrl);
>
Did you get a chance to try this?
-ck
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [bug report] kmemleak observed during blktests nvme/fc
2025-12-18 19:41 ` Chaitanya Kulkarni
@ 2025-12-27 12:10 ` Yi Zhang
2026-01-15 9:24 ` Yi Zhang
0 siblings, 1 reply; 7+ messages in thread
From: Yi Zhang @ 2025-12-27 12:10 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: Chaitanya Kulkarni, Shinichiro Kawasaki,
open list:NVM EXPRESS DRIVER, linux-block, Daniel Wagner
> > Can you try following ? FYI : - Potential fix, only compile tested.
> >
> > From b3c2e350ae741b18c04abe489dcf9d325537c01c Mon Sep 17 00:00:00 2001
> > From: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
> > Date: Sun, 14 Dec 2025 19:29:24 -0800
> > Subject: [PATCH COMPILE TESTED ONLY] nvme-fc: release admin tagset if
> > init fails
> >
> > nvme_fabrics creates an NVMe/FC controller in following path:
> >
> > nvmf_dev_write()
> > -> nvmf_create_ctrl()
> > -> nvme_fc_create_ctrl()
> > -> nvme_fc_init_ctrl()
> >
> > Check ctrl->ctrl.admin_tagset in the fail_ctrl path and call
> > nvme_remove_admin_tag_set() to release the resources.
> >
> > Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
> > ---
> > drivers/nvme/host/fc.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> > index bc455fa98246..6948de3f438a 100644
> > --- a/drivers/nvme/host/fc.c
> > +++ b/drivers/nvme/host/fc.c
> > @@ -3587,6 +3587,8 @@ nvme_fc_init_ctrl(struct device *dev, struct
> > nvmf_ctrl_options *opts,
> >
> > ctrl->ctrl.opts = NULL;
> >
> > + if (ctrl->ctrl.admin_tagset)
> > + nvme_remove_admin_tag_set(&ctrl->ctrl);
> > /* initiate nvme ctrl ref counting teardown */
> > nvme_uninit_ctrl(&ctrl->ctrl);
> >
> did you get a chance to try this ?
Hi Chaitanya,

Sorry for the late response. I tried to reproduce this issue recently,
but had no luck reproducing it again.
During the stress blktests nvme/fc testing I did reproduce several panic
issues; I will report them later once I have more info.
>
> -ck
>
--
Best Regards,
Yi Zhang
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [bug report] kmemleak observed during blktests nvme/fc
2025-12-27 12:10 ` Yi Zhang
@ 2026-01-15 9:24 ` Yi Zhang
2026-01-30 7:45 ` Ming Lei
0 siblings, 1 reply; 7+ messages in thread
From: Yi Zhang @ 2026-01-15 9:24 UTC (permalink / raw)
To: Chaitanya Kulkarni, justintee8345
Cc: Chaitanya Kulkarni, Shinichiro Kawasaki,
open list:NVM EXPRESS DRIVER, linux-block, Daniel Wagner
Hi Justin and Chaitanya,

It turns out that the kmemleak was caused by nvme-loop. It was observed
during the stress nvme loop/tcp/fc[1] tests, but the kmemleak log happened
to be reported during the nvme/fc run; that's why I couldn't reproduce it
with the stress nvme/fc test alone before.
[1]
nvme_trtype=loop ./check nvme/
nvme_trtype=tcp ./check nvme/
nvme_trtype=fc ./check nvme/
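The [1] sequence can be driven in a loop for stress runs; the sketch below
only prints the commands rather than executing them (a real run would invoke
each one from a blktests checkout):

```python
def stress_commands(passes=1, trtypes=("loop", "tcp", "fc")):
    """Yield one blktests nvme invocation per transport, per pass."""
    for _ in range(passes):
        for trtype in trtypes:
            yield f"nvme_trtype={trtype} ./check nvme/"

# Only a sketch: print the commands instead of running them.
for cmd in stress_commands():
    print(cmd)
```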
unreferenced object 0xffff8881295fd000 (size 1024):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff e0 3c 57 af ff ff ff ff .........<W.....
backtrace (crc 414bcfcd):
__kmalloc_cache_node_noprof+0x5f9/0x840
blk_mq_alloc_hctx+0x52/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8881c24db660 (size 8):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 8 bytes):
ff ff 00 00 00 00 00 00 ........
backtrace (crc b47d4cd6):
__kmalloc_node_noprof+0x6ab/0x970
alloc_cpumask_var_node+0x56/0xb0
blk_mq_alloc_hctx+0x74/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff8882752cd300 (size 128):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 32 bytes):
00 bf f0 fb ff e8 ff ff 00 bf 30 fc ff e8 ff ff ..........0.....
00 bf 70 fc ff e8 ff ff 00 bf b0 fc ff e8 ff ff ..p.............
backtrace (crc caffc16d):
__kmalloc_node_noprof+0x6ab/0x970
blk_mq_alloc_hctx+0x43a/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
unreferenced object 0xffff88827d5d7800 (size 512):
comm "nvme", pid 101335, jiffies 4299282670
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace (crc 93cf34af):
__kvmalloc_node_noprof+0x814/0xb30
sbitmap_init_node+0x184/0x730
blk_mq_alloc_hctx+0x4b3/0x810
blk_mq_alloc_and_init_hctx+0x5b9/0x840
__blk_mq_realloc_hw_ctxs+0x20a/0x610
blk_mq_init_allocated_queue+0x2e9/0x1210
blk_mq_alloc_queue+0x17f/0x230
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
vfs_write+0x1d0/0xfd0
ksys_write+0xf9/0x1d0
do_syscall_64+0x95/0x520
entry_SYSCALL_64_after_hwframe+0x76/0x7e
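What distinguishes these records from the earlier nvme/fc dump is the
[nvme_loop] frames in the backtraces. A small sketch of pulling the module
tags out of a backtrace (the `trace` string is an excerpt from the records
above, not a full record):

```python
import re

def module_frames(backtrace: str) -> list:
    """Return the [module] tags in a kmemleak backtrace, in call order."""
    return re.findall(r"\[(\w+)\]", backtrace)

trace = """\
nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
"""
print(module_frames(trace))
```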
On Sat, Dec 27, 2025 at 8:10 PM Yi Zhang <yi.zhang@redhat.com> wrote:
>
> > > Can you try following ? FYI : - Potential fix, only compile tested.
> > >
> > > From b3c2e350ae741b18c04abe489dcf9d325537c01c Mon Sep 17 00:00:00 2001
> > > From: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
> > > Date: Sun, 14 Dec 2025 19:29:24 -0800
> > > Subject: [PATCH COMPILE TESTED ONLY] nvme-fc: release admin tagset if
> > > init fails
> > >
> > > nvme_fabrics creates an NVMe/FC controller in following path:
> > >
> > > nvmf_dev_write()
> > > -> nvmf_create_ctrl()
> > > -> nvme_fc_create_ctrl()
> > > -> nvme_fc_init_ctrl()
> > >
> > > Check ctrl->ctrl.admin_tagset in the fail_ctrl path and call
> > > nvme_remove_admin_tag_set() to release the resources.
> > >
> > > Signed-off-by: Chaitanya Kulkarni <ckulkarnilinux@gmail.com>
> > > ---
> > > drivers/nvme/host/fc.c | 2 ++
> > > 1 file changed, 2 insertions(+)
> > >
> > > diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> > > index bc455fa98246..6948de3f438a 100644
> > > --- a/drivers/nvme/host/fc.c
> > > +++ b/drivers/nvme/host/fc.c
> > > @@ -3587,6 +3587,8 @@ nvme_fc_init_ctrl(struct device *dev, struct
> > > nvmf_ctrl_options *opts,
> > >
> > > ctrl->ctrl.opts = NULL;
> > >
> > > + if (ctrl->ctrl.admin_tagset)
> > > + nvme_remove_admin_tag_set(&ctrl->ctrl);
> > > /* initiate nvme ctrl ref counting teardown */
> > > nvme_uninit_ctrl(&ctrl->ctrl);
> > >
> > did you get a chance to try this ?
>
> Hi Chaitanya
>
> Sorry for the late response, I tried to reproduce this issue recently
> but with no luck to reproduce it again.
> And during the stress blktests nvme/fc test, I reproduced several panic issue.
> I will report it later after I get more info.
>
>
> >
> > -ck
> >
>
>
> --
> Best Regards,
> Yi Zhang
--
Best Regards,
Yi Zhang
^ permalink raw reply [flat|nested] 7+ messages in thread

* Re: [bug report] kmemleak observed during blktests nvme/fc
2026-01-15 9:24 ` Yi Zhang
@ 2026-01-30 7:45 ` Ming Lei
2026-01-31 13:00 ` Yi Zhang
0 siblings, 1 reply; 7+ messages in thread
From: Ming Lei @ 2026-01-30 7:45 UTC (permalink / raw)
To: Yi Zhang
Cc: Chaitanya Kulkarni, justintee8345, Chaitanya Kulkarni,
Shinichiro Kawasaki, open list:NVM EXPRESS DRIVER, linux-block,
Daniel Wagner, Keith Busch
On Thu, Jan 15, 2026 at 05:24:58PM +0800, Yi Zhang wrote:
> Hi Justin and Chaitanya
>
> It turns out that the kmemleak was caused by nvme-loop. It was
> observed during the stress nvme loop/tcp/fc[1] test, but the kmemleak
> log was reported during the nvme/fc test. That's why I didn't
> reproduce it with the stress nvme/fc test before.
>
> [1]
> nvme_trtype=loop ./check nvme/
> nvme_trtype=tcp ./check nvme/
> nvme_trtype=fc ./check nvme/
>
> unreferenced object 0xffff8881295fd000 (size 1024):
> comm "nvme", pid 101335, jiffies 4299282670
> hex dump (first 32 bytes):
> 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> ff ff ff ff ff ff ff ff e0 3c 57 af ff ff ff ff .........<W.....
> backtrace (crc 414bcfcd):
> __kmalloc_cache_node_noprof+0x5f9/0x840
> blk_mq_alloc_hctx+0x52/0x810
> blk_mq_alloc_and_init_hctx+0x5b9/0x840
> __blk_mq_realloc_hw_ctxs+0x20a/0x610
> blk_mq_init_allocated_queue+0x2e9/0x1210
> blk_mq_alloc_queue+0x17f/0x230
> nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
> nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
> nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
> nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
> nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
> vfs_write+0x1d0/0xfd0
> ksys_write+0xf9/0x1d0
> do_syscall_64+0x95/0x520
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
It seems to be a regression from 03b3bcd319b3 ("nvme: fix admin
request_queue lifetime"); can you try the following fix?
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 19b67cf5d550..64db8e3d8fd8 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4848,6 +4848,15 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 	struct queue_limits lim = {};
 	int ret;
 
+	/*
+	 * If a previous admin queue exists (e.g., from before a reset),
+	 * put it now before allocating a new one to avoid orphaning it.
+	 */
+	if (ctrl->admin_q) {
+		blk_put_queue(ctrl->admin_q);
+		ctrl->admin_q = NULL;
+	}
+
 	memset(set, 0, sizeof(*set));
 	set->ops = ops;
 	set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
Thanks,
Ming
^ permalink raw reply related [flat|nested] 7+ messages in thread* Re: [bug report] kmemleak observed during blktests nvme/fc
2026-01-30 7:45 ` Ming Lei
@ 2026-01-31 13:00 ` Yi Zhang
0 siblings, 0 replies; 7+ messages in thread
From: Yi Zhang @ 2026-01-31 13:00 UTC (permalink / raw)
To: Ming Lei
Cc: Chaitanya Kulkarni, justintee8345, Chaitanya Kulkarni,
Shinichiro Kawasaki, open list:NVM EXPRESS DRIVER, linux-block,
Daniel Wagner, Keith Busch
On Fri, Jan 30, 2026 at 3:45 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Thu, Jan 15, 2026 at 05:24:58PM +0800, Yi Zhang wrote:
> > Hi Justin and Chaitanya
> >
> > It turns out that the kmemleak was caused by nvme-loop. It was
> > observed during the stress nvme loop/tcp/fc[1] test, but the kmemleak
> > log was reported during the nvme/fc test. That's why I didn't
> > reproduce it with the stress nvme/fc test before.
> >
> > [1]
> > nvme_trtype=loop ./check nvme/
> > nvme_trtype=tcp ./check nvme/
> > nvme_trtype=fc ./check nvme/
> >
> > unreferenced object 0xffff8881295fd000 (size 1024):
> > comm "nvme", pid 101335, jiffies 4299282670
> > hex dump (first 32 bytes):
> > 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> > ff ff ff ff ff ff ff ff e0 3c 57 af ff ff ff ff .........<W.....
> > backtrace (crc 414bcfcd):
> > __kmalloc_cache_node_noprof+0x5f9/0x840
> > blk_mq_alloc_hctx+0x52/0x810
> > blk_mq_alloc_and_init_hctx+0x5b9/0x840
> > __blk_mq_realloc_hw_ctxs+0x20a/0x610
> > blk_mq_init_allocated_queue+0x2e9/0x1210
> > blk_mq_alloc_queue+0x17f/0x230
> > nvme_alloc_admin_tag_set+0x352/0x670 [nvme_core]
> > nvme_loop_configure_admin_queue+0xdf/0x2d0 [nvme_loop]
> > nvme_loop_create_ctrl+0x428/0xb13 [nvme_loop]
> > nvmf_create_ctrl+0x2ec/0x620 [nvme_fabrics]
> > nvmf_dev_write+0xd5/0x180 [nvme_fabrics]
> > vfs_write+0x1d0/0xfd0
> > ksys_write+0xf9/0x1d0
> > do_syscall_64+0x95/0x520
> > entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> It seems regression from 03b3bcd319b3 ("nvme: fix admin request_queue
> lifetime"), can you try the following fix?
>
I've verified that the issue can no longer be reproduced with this fix
applied. Thanks.
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 19b67cf5d550..64db8e3d8fd8 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -4848,6 +4848,15 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
> struct queue_limits lim = {};
> int ret;
>
> + /*
> + * If a previous admin queue exists (e.g., from before a reset),
> + * put it now before allocating a new one to avoid orphaning it.
> + */
> + if (ctrl->admin_q) {
> + blk_put_queue(ctrl->admin_q);
> + ctrl->admin_q = NULL;
> + }
> +
> memset(set, 0, sizeof(*set));
> set->ops = ops;
> set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
>
>
>
>
> Thanks,
> Ming
>
--
Best Regards,
Yi Zhang
^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2026-01-31 13:00 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-11 15:40 [bug report] kmemleak observed during blktests nvme/fc Yi Zhang
2025-12-15 3:44 ` Chaitanya Kulkarni
2025-12-18 19:41 ` Chaitanya Kulkarni
2025-12-27 12:10 ` Yi Zhang
2026-01-15 9:24 ` Yi Zhang
2026-01-30 7:45 ` Ming Lei
2026-01-31 13:00 ` Yi Zhang
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox