* [bug report] kmemleak observed during blktests nvme-tcp
@ 2023-04-21 0:37 Yi Zhang
2023-04-23 14:15 ` Sagi Grimberg
0 siblings, 1 reply; 12+ messages in thread
From: Yi Zhang @ 2023-04-21 0:37 UTC (permalink / raw)
To: linux-block, open list:NVM EXPRESS DRIVER; +Cc: Hannes Reinecke, Sagi Grimberg
Hello,
The kmemleak reports below were observed after running blktests nvme-tcp; please help check them. Thanks.
commit: linux-block/for-next
aaf9cff31abe (origin/for-next) Merge branch 'for-6.4/io_uring' into for-next
unreferenced object 0xffff88821f0cc880 (size 32):
comm "kworker/1:2H", pid 3067, jiffies 4295825061 (age 12918.254s)
hex dump (first 32 bytes):
82 0c 38 08 00 ea ff ff 00 00 00 00 00 10 00 00 ..8.............
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<ffffffff86f646ab>] __kmalloc+0x4b/0x190
[<ffffffff8776d0bf>] sgl_alloc_order+0x7f/0x360
[<ffffffffc0ba9875>] 0xffffffffc0ba9875
[<ffffffffc0bb068f>] 0xffffffffc0bb068f
[<ffffffffc0bb2038>] 0xffffffffc0bb2038
[<ffffffffc0bb257c>] 0xffffffffc0bb257c
[<ffffffffc0bb2de3>] 0xffffffffc0bb2de3
[<ffffffff86897f49>] process_one_work+0x8b9/0x1550
[<ffffffff8689919c>] worker_thread+0x5ac/0xed0
[<ffffffff868b2222>] kthread+0x2a2/0x340
[<ffffffff866063ac>] ret_from_fork+0x2c/0x50
unreferenced object 0xffff88823abb7c00 (size 512):
comm "nvme", pid 6312, jiffies 4295856007 (age 12887.309s)
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff a0 53 5f 8e ff ff ff ff .........S_.....
backtrace:
[<ffffffff86f63da7>] kmalloc_trace+0x27/0xe0
[<ffffffff87d61205>] device_add+0x645/0x12f0
[<ffffffff871c2a73>] cdev_device_add+0xf3/0x230
[<ffffffffc09ed7c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core]
[<ffffffffc0b54e0c>] 0xffffffffc0b54e0c
[<ffffffffc086b177>] 0xffffffffc086b177
[<ffffffffc086b613>] 0xffffffffc086b613
[<ffffffff871b41e6>] vfs_write+0x216/0xc60
[<ffffffff871b5479>] ksys_write+0xf9/0x1d0
[<ffffffff88ba8f9c>] do_syscall_64+0x5c/0x90
[<ffffffff88c000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88810ccc9b80 (size 96):
comm "nvme", pid 6312, jiffies 4295856008 (age 12887.308s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<ffffffff86f63da7>] kmalloc_trace+0x27/0xe0
[<ffffffff87d918e0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
[<ffffffffc09ed83c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
[<ffffffffc0b54e0c>] 0xffffffffc0b54e0c
[<ffffffffc086b177>] 0xffffffffc086b177
[<ffffffffc086b613>] 0xffffffffc086b613
[<ffffffff871b41e6>] vfs_write+0x216/0xc60
[<ffffffff871b5479>] ksys_write+0xf9/0x1d0
[<ffffffff88ba8f9c>] do_syscall_64+0x5c/0x90
[<ffffffff88c000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881d1fdb780 (size 64):
comm "check", pid 6358, jiffies 4295859851 (age 12883.466s)
hex dump (first 32 bytes):
44 48 48 43 2d 31 3a 30 30 3a 4e 46 76 44 6d 75 DHHC-1:00:NFvDmu
52 58 77 79 54 79 62 57 78 70 43 4a 45 4a 68 36 RXwyTybWxpCJEJh6
backtrace:
[<ffffffff86f646ab>] __kmalloc+0x4b/0x190
[<ffffffffc09fb710>] nvme_ctrl_dhchap_secret_store+0x110/0x350 [nvme_core]
[<ffffffff873cc848>] kernfs_fop_write_iter+0x358/0x530
[<ffffffff871b47d2>] vfs_write+0x802/0xc60
[<ffffffff871b5479>] ksys_write+0xf9/0x1d0
[<ffffffff88ba8f9c>] do_syscall_64+0x5c/0x90
[<ffffffff88c000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881d1fdb600 (size 64):
comm "check", pid 6358, jiffies 4295859908 (age 12883.409s)
hex dump (first 32 bytes):
44 48 48 43 2d 31 3a 30 30 3a 4e 46 76 44 6d 75 DHHC-1:00:NFvDmu
52 58 77 79 54 79 62 57 78 70 43 4a 45 4a 68 36 RXwyTybWxpCJEJh6
backtrace:
[<ffffffff86f646ab>] __kmalloc+0x4b/0x190
[<ffffffffc09fb710>] nvme_ctrl_dhchap_secret_store+0x110/0x350 [nvme_core]
[<ffffffff873cc848>] kernfs_fop_write_iter+0x358/0x530
[<ffffffff871b47d2>] vfs_write+0x802/0xc60
[<ffffffff871b5479>] ksys_write+0xf9/0x1d0
[<ffffffff88ba8f9c>] do_syscall_64+0x5c/0x90
[<ffffffff88c000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed during blktests nvme-tcp
From: Sagi Grimberg @ 2023-04-23 14:15 UTC (permalink / raw)
To: Yi Zhang, linux-block, open list:NVM EXPRESS DRIVER; +Cc: Hannes Reinecke

> commit: linux-block/for-next
> aaf9cff31abe (origin/for-next) Merge branch 'for-6.4/io_uring' into for-next

Hey Yi,

Is this a regression? And can you correlate to specific tests that
trigger this?

> [...]
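Correlating leaks to individual tests, as asked above, can be done with kmemleak's debugfs control file. This is a sketch of one possible workflow (it assumes a kernel built with CONFIG_DEBUG_KMEMLEAK=y and debugfs mounted at /sys/kernel/debug; the commands are the ones documented in Documentation/dev-tools/kmemleak.rst):

```shell
# Forget all previously reported leaks, run one test case in isolation,
# then force an immediate scan: anything reported afterwards was leaked
# by that test alone.
echo clear > /sys/kernel/debug/kmemleak
./check nvme/044
echo scan > /sys/kernel/debug/kmemleak
cat /sys/kernel/debug/kmemleak
```

Repeating the clear/run/scan cycle per test case narrows each report down to a single blktests case.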
* Re: [bug report] kmemleak observed during blktests nvme-tcp
From: Yi Zhang @ 2023-04-25 9:54 UTC (permalink / raw)
To: Sagi Grimberg; +Cc: linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke

On Sun, Apr 23, 2023 at 10:15 PM Sagi Grimberg <sagi@grimberg.me> wrote:
>
> Is this a regression?

I'm not sure, but both can be reproduced on 6.2.0.

> And can you correlate to specific tests that trigger this?

Yes, running the blktests nvme-tcp cases nvme/044 and nvme/045 triggers them:

nvme/044
unreferenced object 0xffff8881911f7800 (size 512):
comm "nvme", pid 8233, jiffies 4295443413 (age 157.206s)
hex dump (first 32 bytes):
00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
ff ff ff ff ff ff ff ff 60 70 79 9b ff ff ff ff ........`py.....
backtrace:
[<ffffffff93767af7>] kmalloc_trace+0x27/0xe0
[<ffffffff94568e85>] device_add+0x645/0x12f0
[<ffffffff939c6fa3>] cdev_device_add+0xf3/0x230
[<ffffffffc0a697c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core]
[<ffffffffc1f6ce0c>] 0xffffffffc1f6ce0c
[<ffffffffc1f4d177>] 0xffffffffc1f4d177
[<ffffffffc1f4d613>] 0xffffffffc1f4d613
[<ffffffff939b8716>] vfs_write+0x216/0xc60
[<ffffffff939b99a9>] ksys_write+0xf9/0x1d0
[<ffffffff95378c4c>] do_syscall_64+0x5c/0x90
[<ffffffff954000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8882297fc780 (size 96):
comm "nvme", pid 8233, jiffies 4295443414 (age 157.205s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<ffffffff93767af7>] kmalloc_trace+0x27/0xe0
[<ffffffff94599560>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
[<ffffffffc0a6983c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
[<ffffffffc1f6ce0c>] 0xffffffffc1f6ce0c
[<ffffffffc1f4d177>] 0xffffffffc1f4d177
[<ffffffffc1f4d613>] 0xffffffffc1f4d613
[<ffffffff939b8716>] vfs_write+0x216/0xc60
[<ffffffff939b99a9>] ksys_write+0xf9/0x1d0
[<ffffffff95378c4c>] do_syscall_64+0x5c/0x90
[<ffffffff954000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc

nvme/045
unreferenced object 0xffff8881e3b32200 (size 64):
comm "check", pid 8335, jiffies 4295703407 (age 177.101s)
hex dump (first 32 bytes):
44 48 48 43 2d 31 3a 30 30 3a 77 59 5a 2f 37 4f DHHC-1:00:wYZ/7O
4f 33 2b 71 34 74 6c 38 45 6c 73 71 59 68 55 41 O3+q4tl8ElsqYhUA
backtrace:
[<ffffffff937683fb>] __kmalloc+0x4b/0x190
[<ffffffffc0a77830>] nvme_ctrl_dhchap_secret_store+0x110/0x350 [nvme_core]
[<ffffffff93bd1708>] kernfs_fop_write_iter+0x358/0x530
[<ffffffff939b8d02>] vfs_write+0x802/0xc60
[<ffffffff939b99a9>] ksys_write+0xf9/0x1d0
[<ffffffff95378c4c>] do_syscall_64+0x5c/0x90
[<ffffffff954000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881e3b32100 (size 64):
comm "check", pid 8335, jiffies 4295703468 (age 177.040s)
hex dump (first 32 bytes):
44 48 48 43 2d 31 3a 30 30 3a 77 59 5a 2f 37 4f DHHC-1:00:wYZ/7O
4f 33 2b 71 34 74 6c 38 45 6c 73 71 59 68 55 41 O3+q4tl8ElsqYhUA
backtrace:
[<ffffffff937683fb>] __kmalloc+0x4b/0x190
[<ffffffffc0a77830>] nvme_ctrl_dhchap_secret_store+0x110/0x350 [nvme_core]
[<ffffffff93bd1708>] kernfs_fop_write_iter+0x358/0x530
[<ffffffff939b8d02>] vfs_write+0x802/0xc60
[<ffffffff939b99a9>] ksys_write+0xf9/0x1d0
[<ffffffff95378c4c>] do_syscall_64+0x5c/0x90
[<ffffffff954000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc

> [...]

--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed during blktests nvme-tcp
From: Chaitanya Kulkarni @ 2023-04-26 8:23 UTC (permalink / raw)
To: Yi Zhang, Sagi Grimberg
Cc: linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke

>>> [<ffffffff86f646ab>] __kmalloc+0x4b/0x190
>>> [<ffffffffc09fb710>] nvme_ctrl_dhchap_secret_store+0x110/0x350 [nvme_core]
>>> [<ffffffff873cc848>] kernfs_fop_write_iter+0x358/0x530
>>> [<ffffffff871b47d2>] vfs_write+0x802/0xc60
>>> [<ffffffff871b5479>] ksys_write+0xf9/0x1d0
>>> [<ffffffff88ba8f9c>] do_syscall_64+0x5c/0x90
>>> [<ffffffff88c000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc

Can you check whether the following fixes your problem for dhchap?

linux-block (for-next) # git diff
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1bfd52eae2ee..0e22d048de3c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3825,8 +3825,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
 	int ret;

 	ret = nvme_auth_generate_key(dhchap_secret, &key);
-	if (ret)
+	if (ret) {
+		kfree(dhchap_secret);
 		return ret;
+	}
 	kfree(opts->dhchap_secret);
 	opts->dhchap_secret = dhchap_secret;
 	host_key = ctrl->host_key;
@@ -3879,8 +3881,10 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
 	int ret;

 	ret = nvme_auth_generate_key(dhchap_secret, &key);
-	if (ret)
+	if (ret) {
+		kfree(dhchap_secret);
 		return ret;
+	}
 	kfree(opts->dhchap_ctrl_secret);
 	opts->dhchap_ctrl_secret = dhchap_secret;
 	ctrl_key = ctrl->ctrl_key;

-ck
* Re: [bug report] kmemleak observed during blktests nvme-tcp
From: Chaitanya Kulkarni @ 2023-04-26 8:34 UTC (permalink / raw)
To: Yi Zhang, Sagi Grimberg
Cc: linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke, Chaitanya Kulkarni

On 4/26/23 01:23, Chaitanya Kulkarni wrote:
> [...]

Sorry, I forgot to add the ida changes; please ignore the earlier patch
and try this one:

linux-block (for-next) # git diff
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1bfd52eae2ee..bb376cc6a5a3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3825,8 +3825,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
 	int ret;

 	ret = nvme_auth_generate_key(dhchap_secret, &key);
-	if (ret)
+	if (ret) {
+		kfree(dhchap_secret);
 		return ret;
+	}
 	kfree(opts->dhchap_secret);
 	opts->dhchap_secret = dhchap_secret;
 	host_key = ctrl->host_key;
@@ -3879,8 +3881,10 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
 	int ret;

 	ret = nvme_auth_generate_key(dhchap_secret, &key);
-	if (ret)
+	if (ret) {
+		kfree(dhchap_secret);
 		return ret;
+	}
 	kfree(opts->dhchap_ctrl_secret);
 	opts->dhchap_ctrl_secret = dhchap_secret;
 	ctrl_key = ctrl->ctrl_key;
@@ -4042,8 +4046,10 @@ int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
 	cdev_init(cdev, fops);
 	cdev->owner = owner;
 	ret = cdev_device_add(cdev, cdev_device);
-	if (ret)
+	if (ret) {
 		put_device(cdev_device);
+		ida_free(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
+	}

 	return ret;
 }

With the above patch I was able to get this:

blktests (master) # ./check nvme/044
nvme/044 (Test bi-directional authentication)                [passed]
    runtime  1.729s  ...  1.892s
blktests (master) # ./check nvme/045
nvme/045 (Test re-authentication)                            [passed]
    runtime  4.798s  ...  6.303s

-ck
* Re: [bug report] kmemleak observed during blktests nvme-tcp
From: Yi Zhang @ 2023-04-27 7:24 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: Sagi Grimberg, linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke

Hi Chaitanya,

The kmemleaks in [1] are fixed by your patch, but there is still one
remaining [2]; would you mind checking it? Thanks.

[1]
nvme_ctrl_dhchap_secret_store
cdev_device_add

[2]
unreferenced object 0xffff888288a53b80 (size 96):
comm "nvme", pid 1934, jiffies 4294932237 (age 237.359s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<ffffffff86564437>] kmalloc_trace+0x27/0xe0
[<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
[<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
[<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
[<ffffffffc0d38177>] 0xffffffffc0d38177
[<ffffffffc0d38613>] 0xffffffffc0d38613
[<ffffffff867b5056>] vfs_write+0x216/0xc60
[<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
[<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
[<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc

On Wed, Apr 26, 2023 at 4:34 PM Chaitanya Kulkarni
<chaitanyak@nvidia.com> wrote:
> [...]

--
Best Regards,
Yi Zhang
* Re: [bug report] kmemleak observed during blktests nvme-tcp 2023-04-27 7:24 ` Yi Zhang @ 2023-04-27 7:39 ` Yi Zhang 2023-04-27 10:58 ` Chaitanya Kulkarni 0 siblings, 1 reply; 12+ messages in thread From: Yi Zhang @ 2023-04-27 7:39 UTC (permalink / raw) To: Chaitanya Kulkarni Cc: Sagi Grimberg, linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke oops, the kmemleak still exists: # cat /sys/kernel/debug/kmemleak unreferenced object 0xffff8882a4cc6000 (size 4096): comm "kworker/u32:6", pid 116, jiffies 4294699939 (age 1614.355s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 03 10 03 1f 00 00 00 ................ backtrace: [<ffffffff86564437>] kmalloc_trace+0x27/0xe0 [<ffffffffc08cc68e>] nvme_identify_ns+0xae/0x230 [nvme_core] [<ffffffffc08cc8b9>] nvme_ns_info_from_identify+0x99/0x4a0 [nvme_core] [<ffffffffc08e0696>] nvme_scan_ns+0x1b6/0x460 [nvme_core] [<ffffffffc08e0ae2>] nvme_scan_ns_list+0x192/0x4f0 [nvme_core] [<ffffffffc08e1271>] nvme_scan_work+0x2f1/0xa30 [nvme_core] [<ffffffff85e98629>] process_one_work+0x8b9/0x1550 [<ffffffff85e9987c>] worker_thread+0x5ac/0xed0 [<ffffffff85eb2902>] kthread+0x2a2/0x340 [<ffffffff85c062cc>] ret_from_fork+0x2c/0x50 unreferenced object 0xffff88829782bc00 (size 512): comm "nvme", pid 1539, jiffies 4294914967 (age 1399.449s) hex dump (first 32 bytes): 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N.......... ff ff ff ff ff ff ff ff a0 73 bf 8d ff ff ff ff .........s...... 
backtrace: [<ffffffff86564437>] kmalloc_trace+0x27/0xe0 [<ffffffff873658c5>] device_add+0x645/0x12f0 [<ffffffff867c38e3>] cdev_device_add+0xf3/0x230 [<ffffffffc08c77c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core] [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c [<ffffffffc0d38177>] 0xffffffffc0d38177 [<ffffffffc0d38613>] 0xffffffffc0d38613 [<ffffffff867b5056>] vfs_write+0x216/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc unreferenced object 0xffff88824216a880 (size 96): comm "nvme", pid 1539, jiffies 4294914968 (age 1399.448s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [<ffffffff86564437>] kmalloc_trace+0x27/0xe0 [<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200 [<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core] [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c [<ffffffffc0d38177>] 0xffffffffc0d38177 [<ffffffffc0d38613>] 0xffffffffc0d38613 [<ffffffff867b5056>] vfs_write+0x216/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc unreferenced object 0xffff8881b00f4900 (size 64): comm "check", pid 1587, jiffies 4294922730 (age 1391.686s) hex dump (first 32 bytes): 44 48 48 43 2d 31 3a 30 30 3a 79 68 33 70 6f 45 DHHC-1:00:yh3poE 61 47 37 31 68 45 69 2f 33 42 41 75 54 2f 61 6c aG71hEi/3BAuT/al backtrace: [<ffffffff86564d3b>] __kmalloc+0x4b/0x190 [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core] [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530 [<ffffffff867b5642>] vfs_write+0x802/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc unreferenced object 0xffff8882b4567700 (size 64): comm "check", 
pid 1587, jiffies 4294922738 (age 1391.678s) hex dump (first 32 bytes): 44 48 48 43 2d 31 3a 30 30 3a 79 68 33 70 6f 45 DHHC-1:00:yh3poE 61 47 37 31 68 45 69 2f 33 42 41 75 54 2f 61 6c aG71hEi/3BAuT/al backtrace: [<ffffffff86564d3b>] __kmalloc+0x4b/0x190 [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core] [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530 [<ffffffff867b5642>] vfs_write+0x802/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc unreferenced object 0xffff8882b6fbe000 (size 512): comm "nvme", pid 1934, jiffies 4294932235 (age 1382.239s) hex dump (first 32 bytes): 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N.......... ff ff ff ff ff ff ff ff a0 73 bf 8d ff ff ff ff .........s...... backtrace: [<ffffffff86564437>] kmalloc_trace+0x27/0xe0 [<ffffffff873658c5>] device_add+0x645/0x12f0 [<ffffffff867c38e3>] cdev_device_add+0xf3/0x230 [<ffffffffc08c77c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core] [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c [<ffffffffc0d38177>] 0xffffffffc0d38177 [<ffffffffc0d38613>] 0xffffffffc0d38613 [<ffffffff867b5056>] vfs_write+0x216/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc unreferenced object 0xffff888288a53b80 (size 96): comm "nvme", pid 1934, jiffies 4294932237 (age 1382.237s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 
backtrace: [<ffffffff86564437>] kmalloc_trace+0x27/0xe0 [<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200 [<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core] [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c [<ffffffffc0d38177>] 0xffffffffc0d38177 [<ffffffffc0d38613>] 0xffffffffc0d38613 [<ffffffff867b5056>] vfs_write+0x216/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc unreferenced object 0xffff88829e6a3b80 (size 64): comm "check", pid 1981, jiffies 4294936167 (age 1378.307s) hex dump (first 32 bytes): 44 48 48 43 2d 31 3a 30 30 3a 61 56 6f 56 44 4f DHHC-1:00:aVoVDO 79 69 31 6c 59 33 74 79 77 47 33 6a 4f 6e 37 33 yi1lY3tywG3jOn73 backtrace: [<ffffffff86564d3b>] __kmalloc+0x4b/0x190 [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core] [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530 [<ffffffff867b5642>] vfs_write+0x802/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc unreferenced object 0xffff88829e6a3a80 (size 64): comm "check", pid 1981, jiffies 4294936885 (age 1377.589s) hex dump (first 32 bytes): 44 48 48 43 2d 31 3a 30 30 3a 61 56 6f 56 44 4f DHHC-1:00:aVoVDO 79 69 31 6c 59 33 74 79 77 47 33 6a 4f 6e 37 33 yi1lY3tywG3jOn73 backtrace: [<ffffffff86564d3b>] __kmalloc+0x4b/0x190 [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core] [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530 [<ffffffff867b5642>] vfs_write+0x802/0xc60 [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0 [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90 [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc On Thu, Apr 27, 2023 at 3:24 PM Yi Zhang <yi.zhang@redhat.com> wrote: > > Hi Chaitanya > > The kmemleak in [1] is fixed by your patch, but there still has > one[2], would you mind checking it, thanks. 
>
> [1]
> nvme_ctrl_dhchap_secret_store
> cdev_device_add
>
> [2]
> unreferenced object 0xffff888288a53b80 (size 96):
> comm "nvme", pid 1934, jiffies 4294932237 (age 237.359s)
> hex dump (first 32 bytes):
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> backtrace:
> [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> [<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
> [<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
> [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> [<ffffffffc0d38177>] 0xffffffffc0d38177
> [<ffffffffc0d38613>] 0xffffffffc0d38613
> [<ffffffff867b5056>] vfs_write+0x216/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>
> On Wed, Apr 26, 2023 at 4:34 PM Chaitanya Kulkarni
> <chaitanyak@nvidia.com> wrote:
> >
> > On 4/26/23 01:23, Chaitanya Kulkarni wrote:
> > >
> > >>>> [<ffffffff86f646ab>] __kmalloc+0x4b/0x190
> > >>>> [<ffffffffc09fb710>] nvme_ctrl_dhchap_secret_store+0x110/0x350 [nvme_core]
> > >>>> [<ffffffff873cc848>] kernfs_fop_write_iter+0x358/0x530
> > >>>> [<ffffffff871b47d2>] vfs_write+0x802/0xc60
> > >>>> [<ffffffff871b5479>] ksys_write+0xf9/0x1d0
> > >>>> [<ffffffff88ba8f9c>] do_syscall_64+0x5c/0x90
> > >>>> [<ffffffff88c000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > >
> > > Can you check if the following fixes your problem for dhchap?
> > >
> > >
> > > linux-block (for-next) # git diff
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index 1bfd52eae2ee..0e22d048de3c 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -3825,8 +3825,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
> > >          int ret;
> > >
> > >          ret = nvme_auth_generate_key(dhchap_secret, &key);
> > > -       if (ret)
> > > +       if (ret) {
> > > +               kfree(dhchap_secret);
> > >                  return ret;
> > > +       }
> > >          kfree(opts->dhchap_secret);
> > >          opts->dhchap_secret = dhchap_secret;
> > >          host_key = ctrl->host_key;
> > > @@ -3879,8 +3881,10 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
> > >          int ret;
> > >
> > >          ret = nvme_auth_generate_key(dhchap_secret, &key);
> > > -       if (ret)
> > > +       if (ret) {
> > > +               kfree(dhchap_secret);
> > >                  return ret;
> > > +       }
> > >          kfree(opts->dhchap_ctrl_secret);
> > >          opts->dhchap_ctrl_secret = dhchap_secret;
> > >          ctrl_key = ctrl->ctrl_key;
> > >
> > > -ck
> > >
> > >
> >
> > Sorry, I forgot to add the ida changes; please ignore the earlier one
> > and try this:
> >
> > linux-block (for-next) # git diff
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index 1bfd52eae2ee..bb376cc6a5a3 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -3825,8 +3825,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
> >          int ret;
> >
> >          ret = nvme_auth_generate_key(dhchap_secret, &key);
> > -       if (ret)
> > +       if (ret) {
> > +               kfree(dhchap_secret);
> >                  return ret;
> > +       }
> >          kfree(opts->dhchap_secret);
> >          opts->dhchap_secret = dhchap_secret;
> >          host_key = ctrl->host_key;
> > @@ -3879,8 +3881,10 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
> >          int ret;
> >
> >          ret = nvme_auth_generate_key(dhchap_secret, &key);
> > -       if (ret)
> > +       if (ret) {
> > +               kfree(dhchap_secret);
> >                  return ret;
> > +       }
> >          kfree(opts->dhchap_ctrl_secret);
> >          opts->dhchap_ctrl_secret = dhchap_secret;
> >          ctrl_key = ctrl->ctrl_key;
> > @@ -4042,8 +4046,10 @@ int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
> >          cdev_init(cdev, fops);
> >          cdev->owner = owner;
> >          ret = cdev_device_add(cdev, cdev_device);
> > -       if (ret)
> > +       if (ret) {
> >                  put_device(cdev_device);
> > +               ida_free(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt));
> > +       }
> >
> >          return ret;
> >  }
> >
> >
> > With the above patch I was able to get this:
> >
> > blktests (master) # ./check nvme/044
> > nvme/044 (Test bi-directional authentication) [passed]
> >     runtime 1.729s ... 1.892s
> > blktests (master) # ./check nvme/045
> > nvme/045 (Test re-authentication) [passed]
> >     runtime 4.798s ... 6.303s
> >
> > -ck
> >
> >
>
> --
> Best Regards,
> Yi Zhang

--
Best Regards,
Yi Zhang

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] kmemleak observed during blktests nvme-tcp
2023-04-27 7:39 ` Yi Zhang
@ 2023-04-27 10:58 ` Chaitanya Kulkarni
2023-04-27 15:57 ` Yi Zhang
0 siblings, 1 reply; 12+ messages in thread
From: Chaitanya Kulkarni @ 2023-04-27 10:58 UTC (permalink / raw)
To: Yi Zhang
Cc: Sagi Grimberg, linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke

On 4/27/23 00:39, Yi Zhang wrote:
> oops, the kmemleak still exists:

Hmmm, the problem is that I'm not able to reproduce the
nvme_ctrl_dhchap_secret_store() leak; I could only get the cdev and
dev_pm_qos ones. Let's see if the following fixes the
nvme_ctrl_dhchap_secret_store() case, as I've added one missing
kfree() to the earlier fix.

Once you confirm, I'd like to send the nvme_ctrl_dhchap_secret_store()
fix first, and meanwhile keep looking into the cdev and dev_pm_qos
leaks:

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1bfd52eae2ee..663f8c215d7b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3825,8 +3825,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
         int ret;

         ret = nvme_auth_generate_key(dhchap_secret, &key);
-       if (ret)
+       if (ret) {
+               kfree(dhchap_secret);
                 return ret;
+       }
         kfree(opts->dhchap_secret);
         opts->dhchap_secret = dhchap_secret;
         host_key = ctrl->host_key;
@@ -3834,7 +3836,8 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
                 ctrl->host_key = key;
                 mutex_unlock(&ctrl->dhchap_auth_mutex);
                 nvme_auth_free_key(host_key);
-       }
+       } else
+               kfree(dhchap_secret);
         /* Start re-authentication */
         dev_info(ctrl->device, "re-authenticating controller\n");
         queue_work(nvme_wq, &ctrl->dhchap_auth_work);
@@ -3879,8 +3882,10 @@ static ssize_t
nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
         int ret;

         ret = nvme_auth_generate_key(dhchap_secret, &key);
-       if (ret)
+       if (ret) {
+               kfree(dhchap_secret);
                 return ret;
+       }
         kfree(opts->dhchap_ctrl_secret);
         opts->dhchap_ctrl_secret = dhchap_secret;
         ctrl_key = ctrl->ctrl_key;
@@ -3888,7 +3893,8 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
                 ctrl->ctrl_key = key;
                 mutex_unlock(&ctrl->dhchap_auth_mutex);
                 nvme_auth_free_key(ctrl_key);
-       }
+       } else
+               kfree(dhchap_secret);
         /* Start re-authentication */
         dev_info(ctrl->device, "re-authenticating controller\n");
         queue_work(nvme_wq, &ctrl->dhchap_auth_work);

> # cat /sys/kernel/debug/kmemleak
> unreferenced object 0xffff8882a4cc6000 (size 4096):
> comm "kworker/u32:6", pid 116, jiffies 4294699939 (age 1614.355s)
> hex dump (first 32 bytes):
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> 00 00 00 00 00 00 00 00 00 03 10 03 1f 00 00 00 ................
> backtrace:
> [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> [<ffffffffc08cc68e>] nvme_identify_ns+0xae/0x230 [nvme_core]
> [<ffffffffc08cc8b9>] nvme_ns_info_from_identify+0x99/0x4a0 [nvme_core]
> [<ffffffffc08e0696>] nvme_scan_ns+0x1b6/0x460 [nvme_core]
> [<ffffffffc08e0ae2>] nvme_scan_ns_list+0x192/0x4f0 [nvme_core]
> [<ffffffffc08e1271>] nvme_scan_work+0x2f1/0xa30 [nvme_core]
> [<ffffffff85e98629>] process_one_work+0x8b9/0x1550
> [<ffffffff85e9987c>] worker_thread+0x5ac/0xed0
> [<ffffffff85eb2902>] kthread+0x2a2/0x340
> [<ffffffff85c062cc>] ret_from_fork+0x2c/0x50
> unreferenced object 0xffff88829782bc00 (size 512):
> comm "nvme", pid 1539, jiffies 4294914967 (age 1399.449s)
> hex dump (first 32 bytes):
> 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> ff ff ff ff ff ff ff ff a0 73 bf 8d ff ff ff ff .........s......
> backtrace:
> [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> [<ffffffff873658c5>] device_add+0x645/0x12f0
> [<ffffffff867c38e3>] cdev_device_add+0xf3/0x230
> [<ffffffffc08c77c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core]
> [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> [<ffffffffc0d38177>] 0xffffffffc0d38177
> [<ffffffffc0d38613>] 0xffffffffc0d38613
> [<ffffffff867b5056>] vfs_write+0x216/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> unreferenced object 0xffff88824216a880 (size 96):
> comm "nvme", pid 1539, jiffies 4294914968 (age 1399.448s)
> hex dump (first 32 bytes):
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> backtrace:
> [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> [<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
> [<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
> [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> [<ffffffffc0d38177>] 0xffffffffc0d38177
> [<ffffffffc0d38613>] 0xffffffffc0d38613
> [<ffffffff867b5056>] vfs_write+0x216/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> unreferenced object 0xffff8881b00f4900 (size 64):
> comm "check", pid 1587, jiffies 4294922730 (age 1391.686s)
> hex dump (first 32 bytes):
> 44 48 48 43 2d 31 3a 30 30 3a 79 68 33 70 6f 45 DHHC-1:00:yh3poE
> 61 47 37 31 68 45 69 2f 33 42 41 75 54 2f 61 6c aG71hEi/3BAuT/al
> backtrace:
> [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> [<ffffffff867b5642>] vfs_write+0x802/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> unreferenced object 0xffff8882b4567700 (size 64):
> comm "check", pid 1587, jiffies 4294922738 (age 1391.678s)
> hex dump (first 32 bytes):
> 44 48 48 43 2d 31 3a 30 30 3a 79 68 33 70 6f 45 DHHC-1:00:yh3poE
> 61 47 37 31 68 45 69 2f 33 42 41 75 54 2f 61 6c aG71hEi/3BAuT/al
> backtrace:
> [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> [<ffffffff867b5642>] vfs_write+0x802/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> unreferenced object 0xffff8882b6fbe000 (size 512):
> comm "nvme", pid 1934, jiffies 4294932235 (age 1382.239s)
> hex dump (first 32 bytes):
> 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> ff ff ff ff ff ff ff ff a0 73 bf 8d ff ff ff ff .........s......
> backtrace:
> [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> [<ffffffff873658c5>] device_add+0x645/0x12f0
> [<ffffffff867c38e3>] cdev_device_add+0xf3/0x230
> [<ffffffffc08c77c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core]
> [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> [<ffffffffc0d38177>] 0xffffffffc0d38177
> [<ffffffffc0d38613>] 0xffffffffc0d38613
> [<ffffffff867b5056>] vfs_write+0x216/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> unreferenced object 0xffff888288a53b80 (size 96):
> comm "nvme", pid 1934, jiffies 4294932237 (age 1382.237s)
> hex dump (first 32 bytes):
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> backtrace:
> [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> [<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
> [<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
> [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> [<ffffffffc0d38177>] 0xffffffffc0d38177
> [<ffffffffc0d38613>] 0xffffffffc0d38613
> [<ffffffff867b5056>] vfs_write+0x216/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> unreferenced object 0xffff88829e6a3b80 (size 64):
> comm "check", pid 1981, jiffies 4294936167 (age 1378.307s)
> hex dump (first 32 bytes):
> 44 48 48 43 2d 31 3a 30 30 3a 61 56 6f 56 44 4f DHHC-1:00:aVoVDO
> 79 69 31 6c 59 33 74 79 77 47 33 6a 4f 6e 37 33 yi1lY3tywG3jOn73
> backtrace:
> [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> [<ffffffff867b5642>] vfs_write+0x802/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> unreferenced object 0xffff88829e6a3a80 (size 64):
> comm "check", pid 1981, jiffies 4294936885 (age 1377.589s)
> hex dump (first 32 bytes):
> 44 48 48 43 2d 31 3a 30 30 3a 61 56 6f 56 44 4f DHHC-1:00:aVoVDO
> 79 69 31 6c 59 33 74 79 77 47 33 6a 4f 6e 37 33 yi1lY3tywG3jOn73
> backtrace:
> [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> [<ffffffff867b5642>] vfs_write+0x802/0xc60
> [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
>
> [..]

-ck

^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [bug report] kmemleak observed during blktests nvme-tcp
2023-04-27 10:58 ` Chaitanya Kulkarni
@ 2023-04-27 15:57 ` Yi Zhang
2023-05-01 8:23 ` Chaitanya Kulkarni
0 siblings, 1 reply; 12+ messages in thread
From: Yi Zhang @ 2023-04-27 15:57 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: Sagi Grimberg, linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke

On Thu, Apr 27, 2023 at 6:58 PM Chaitanya Kulkarni
<chaitanyak@nvidia.com> wrote:
>
> On 4/27/23 00:39, Yi Zhang wrote:
> > oops, the kmemleak still exists:
>
> Hmmm, the problem is that I'm not able to reproduce the
> nvme_ctrl_dhchap_secret_store() leak; I could only get the cdev and
> dev_pm_qos ones. Let's see if the following fixes the
> nvme_ctrl_dhchap_secret_store() case, as I've added one missing
> kfree() to the earlier fix.

Hi Chaitanya

The kmemleak in nvme_ctrl_dhchap_secret_store was fixed with the
change, feel free to add:

Tested-by: Yi Zhang <yi.zhang@redhat.com>

>
> Once you confirm, I'd like to send the nvme_ctrl_dhchap_secret_store()
> fix first, and meanwhile keep looking into the cdev and dev_pm_qos
> leaks:
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 1bfd52eae2ee..663f8c215d7b 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3825,8 +3825,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
>          int ret;
>
>          ret = nvme_auth_generate_key(dhchap_secret, &key);
> -       if (ret)
> +       if (ret) {
> +               kfree(dhchap_secret);
>                 return ret;
> +       }
>          kfree(opts->dhchap_secret);
>          opts->dhchap_secret = dhchap_secret;
>          host_key = ctrl->host_key;
> @@ -3834,7 +3836,8 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
>                 ctrl->host_key = key;
>                 mutex_unlock(&ctrl->dhchap_auth_mutex);
>                 nvme_auth_free_key(host_key);
> -       }
> +       } else
> +               kfree(dhchap_secret);
>          /* Start re-authentication */
>          dev_info(ctrl->device, "re-authenticating controller\n");
>          queue_work(nvme_wq, &ctrl->dhchap_auth_work);
> @@ -3879,8 +3882,10 @@ static ssize_t
> nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
>          int ret;
>
>          ret = nvme_auth_generate_key(dhchap_secret, &key);
> -       if (ret)
> +       if (ret) {
> +               kfree(dhchap_secret);
>                 return ret;
> +       }
>          kfree(opts->dhchap_ctrl_secret);
>          opts->dhchap_ctrl_secret = dhchap_secret;
>          ctrl_key = ctrl->ctrl_key;
> @@ -3888,7 +3893,8 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
>                 ctrl->ctrl_key = key;
>                 mutex_unlock(&ctrl->dhchap_auth_mutex);
>                 nvme_auth_free_key(ctrl_key);
> -       }
> +       } else
> +               kfree(dhchap_secret);
>          /* Start re-authentication */
>          dev_info(ctrl->device, "re-authenticating controller\n");
>          queue_work(nvme_wq, &ctrl->dhchap_auth_work);
>
>
> > # cat /sys/kernel/debug/kmemleak
> > unreferenced object 0xffff8882a4cc6000 (size 4096):
> > comm "kworker/u32:6", pid 116, jiffies 4294699939 (age 1614.355s)
> > hex dump (first 32 bytes):
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> > 00 00 00 00 00 00 00 00 00 03 10 03 1f 00 00 00 ................
> > backtrace:
> > [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> > [<ffffffffc08cc68e>] nvme_identify_ns+0xae/0x230 [nvme_core]
> > [<ffffffffc08cc8b9>] nvme_ns_info_from_identify+0x99/0x4a0 [nvme_core]
> > [<ffffffffc08e0696>] nvme_scan_ns+0x1b6/0x460 [nvme_core]
> > [<ffffffffc08e0ae2>] nvme_scan_ns_list+0x192/0x4f0 [nvme_core]
> > [<ffffffffc08e1271>] nvme_scan_work+0x2f1/0xa30 [nvme_core]
> > [<ffffffff85e98629>] process_one_work+0x8b9/0x1550
> > [<ffffffff85e9987c>] worker_thread+0x5ac/0xed0
> > [<ffffffff85eb2902>] kthread+0x2a2/0x340
> > [<ffffffff85c062cc>] ret_from_fork+0x2c/0x50
> > unreferenced object 0xffff88829782bc00 (size 512):
> > comm "nvme", pid 1539, jiffies 4294914967 (age 1399.449s)
> > hex dump (first 32 bytes):
> > 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> > ff ff ff ff ff ff ff ff a0 73 bf 8d ff ff ff ff .........s......
> > backtrace:
> > [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> > [<ffffffff873658c5>] device_add+0x645/0x12f0
> > [<ffffffff867c38e3>] cdev_device_add+0xf3/0x230
> > [<ffffffffc08c77c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core]
> > [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> > [<ffffffffc0d38177>] 0xffffffffc0d38177
> > [<ffffffffc0d38613>] 0xffffffffc0d38613
> > [<ffffffff867b5056>] vfs_write+0x216/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > unreferenced object 0xffff88824216a880 (size 96):
> > comm "nvme", pid 1539, jiffies 4294914968 (age 1399.448s)
> > hex dump (first 32 bytes):
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> > backtrace:
> > [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> > [<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
> > [<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
> > [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> > [<ffffffffc0d38177>] 0xffffffffc0d38177
> > [<ffffffffc0d38613>] 0xffffffffc0d38613
> > [<ffffffff867b5056>] vfs_write+0x216/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > unreferenced object 0xffff8881b00f4900 (size 64):
> > comm "check", pid 1587, jiffies 4294922730 (age 1391.686s)
> > hex dump (first 32 bytes):
> > 44 48 48 43 2d 31 3a 30 30 3a 79 68 33 70 6f 45 DHHC-1:00:yh3poE
> > 61 47 37 31 68 45 69 2f 33 42 41 75 54 2f 61 6c aG71hEi/3BAuT/al
> > backtrace:
> > [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> > [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> > [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> > [<ffffffff867b5642>] vfs_write+0x802/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > unreferenced object 0xffff8882b4567700 (size 64):
> > comm "check", pid 1587, jiffies 4294922738 (age 1391.678s)
> > hex dump (first 32 bytes):
> > 44 48 48 43 2d 31 3a 30 30 3a 79 68 33 70 6f 45 DHHC-1:00:yh3poE
> > 61 47 37 31 68 45 69 2f 33 42 41 75 54 2f 61 6c aG71hEi/3BAuT/al
> > backtrace:
> > [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> > [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> > [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> > [<ffffffff867b5642>] vfs_write+0x802/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > unreferenced object 0xffff8882b6fbe000 (size 512):
> > comm "nvme", pid 1934, jiffies 4294932235 (age 1382.239s)
> > hex dump (first 32 bytes):
> > 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N..........
> > ff ff ff ff ff ff ff ff a0 73 bf 8d ff ff ff ff .........s......
> > backtrace:
> > [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> > [<ffffffff873658c5>] device_add+0x645/0x12f0
> > [<ffffffff867c38e3>] cdev_device_add+0xf3/0x230
> > [<ffffffffc08c77c6>] nvme_init_ctrl+0xbe6/0x1140 [nvme_core]
> > [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> > [<ffffffffc0d38177>] 0xffffffffc0d38177
> > [<ffffffffc0d38613>] 0xffffffffc0d38613
> > [<ffffffff867b5056>] vfs_write+0x216/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > unreferenced object 0xffff888288a53b80 (size 96):
> > comm "nvme", pid 1934, jiffies 4294932237 (age 1382.237s)
> > hex dump (first 32 bytes):
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> > backtrace:
> > [<ffffffff86564437>] kmalloc_trace+0x27/0xe0
> > [<ffffffff87395fa0>] dev_pm_qos_update_user_latency_tolerance+0xe0/0x200
> > [<ffffffffc08c783c>] nvme_init_ctrl+0xc5c/0x1140 [nvme_core]
> > [<ffffffffc1ab0e0c>] 0xffffffffc1ab0e0c
> > [<ffffffffc0d38177>] 0xffffffffc0d38177
> > [<ffffffffc0d38613>] 0xffffffffc0d38613
> > [<ffffffff867b5056>] vfs_write+0x216/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > unreferenced object 0xffff88829e6a3b80 (size 64):
> > comm "check", pid 1981, jiffies 4294936167 (age 1378.307s)
> > hex dump (first 32 bytes):
> > 44 48 48 43 2d 31 3a 30 30 3a 61 56 6f 56 44 4f DHHC-1:00:aVoVDO
> > 79 69 31 6c 59 33 74 79 77 47 33 6a 4f 6e 37 33 yi1lY3tywG3jOn73
> > backtrace:
> > [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> > [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> > [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> > [<ffffffff867b5642>] vfs_write+0x802/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> > unreferenced object 0xffff88829e6a3a80 (size 64):
> > comm "check", pid 1981, jiffies 4294936885 (age 1377.589s)
> > hex dump (first 32 bytes):
> > 44 48 48 43 2d 31 3a 30 30 3a 61 56 6f 56 44 4f DHHC-1:00:aVoVDO
> > 79 69 31 6c 59 33 74 79 77 47 33 6a 4f 6e 37 33 yi1lY3tywG3jOn73
> > backtrace:
> > [<ffffffff86564d3b>] __kmalloc+0x4b/0x190
> > [<ffffffffc08d5841>] nvme_ctrl_dhchap_secret_store+0x111/0x360 [nvme_core]
> > [<ffffffff869ce038>] kernfs_fop_write_iter+0x358/0x530
> > [<ffffffff867b5642>] vfs_write+0x802/0xc60
> > [<ffffffff867b62e9>] ksys_write+0xf9/0x1d0
> > [<ffffffff881adc4c>] do_syscall_64+0x5c/0x90
> > [<ffffffff882000aa>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> >
> >
> > [..]
>
> -ck
>

--
Best Regards,
Yi Zhang

^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [bug report] kmemleak observed during blktests nvme-tcp
2023-04-27 15:57 ` Yi Zhang
@ 2023-05-01 8:23 ` Chaitanya Kulkarni
2023-05-01 8:44 ` Sagi Grimberg
0 siblings, 1 reply; 12+ messages in thread
From: Chaitanya Kulkarni @ 2023-05-01 8:23 UTC (permalink / raw)
To: Yi Zhang
Cc: Sagi Grimberg, linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke

On 4/27/23 08:57, Yi Zhang wrote:
> On Thu, Apr 27, 2023 at 6:58 PM Chaitanya Kulkarni
> <chaitanyak@nvidia.com> wrote:
>> On 4/27/23 00:39, Yi Zhang wrote:
>>> oops, the kmemleak still exists:
>> Hmmm, the problem is that I'm not able to reproduce the
>> nvme_ctrl_dhchap_secret_store() leak; I could only get the cdev and
>> dev_pm_qos ones. Let's see if the following fixes the
>> nvme_ctrl_dhchap_secret_store() case, as I've added one missing
>> kfree() to the earlier fix.
> Hi Chaitanya
>
> The kmemleak in nvme_ctrl_dhchap_secret_store was fixed with the
> change, feel free to add:
>
> Tested-by: Yi Zhang <yi.zhang@redhat.com>
>
>

I was able to fix the remaining memleaks for blktests nvme/044-nvme/045
with the nvme-loop and nvme-tcp transports.
I've tested the following patch with blktests and kmemleak enabled, and
also specifically tested the two testcases in question. Whenever you
have time, please see if this fixes all the issues. Below are the logs
from my testing; here is the patch:

linux-block (for-next) # git diff
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 42e90d00fc40..245a832f4df5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -5151,6 +5151,10 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
         BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) >
                         PAGE_SIZE);

+       ret = nvme_auth_init_ctrl(ctrl);
+       if (ret)
+               return ret;
+
         ctrl->discard_page = alloc_page(GFP_KERNEL);
         if (!ctrl->discard_page) {
                 ret = -ENOMEM;
@@ -5195,13 +5199,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
         nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
         nvme_mpath_init_ctrl(ctrl);
-       ret = nvme_auth_init_ctrl(ctrl);
-       if (ret)
-               goto out_free_cdev;

         return 0;
-out_free_cdev:
-       cdev_device_del(&ctrl->cdev, ctrl->device);
 out_free_name:
         nvme_put_ctrl(ctrl);
         kfree_const(ctrl->device->kobj.name);

-ck

linux-block (for-next) # git diff
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 42e90d00fc40..245a832f4df5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -5151,6 +5151,10 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
         BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) >
                         PAGE_SIZE);

+       ret = nvme_auth_init_ctrl(ctrl);
+       if (ret)
+               return ret;
+
         ctrl->discard_page = alloc_page(GFP_KERNEL);
         if (!ctrl->discard_page) {
                 ret = -ENOMEM;
@@ -5195,13 +5199,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
         nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
         nvme_mpath_init_ctrl(ctrl);
-       ret = nvme_auth_init_ctrl(ctrl);
-       if (ret)
-               goto out_free_cdev;

         return 0;
-out_free_cdev:
-       cdev_device_del(&ctrl->cdev, ctrl->device);
 out_free_name:
         nvme_put_ctrl(ctrl);
         kfree_const(ctrl->device->kobj.name);

linux-block (for-next) #
linux-block (for-next) # cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-6.3.0+ root=UUID=e5f9bccb-cc5d-4577-8f74-fddb710fae7f ro rootflags=subvol=root rhgb quiet console=ttyS0,115200 kgdboc=ttyS0,115200 nokaslr kmemleak=on
linux-block (for-next) #
linux-block (for-next) # ./compile_nvme.sh
+ umount /mnt/nvme0n1
+ clear_dmesg
./compile_nvme.sh: line 3: clear_dmesg: command not found
umount: /mnt/nvme0n1: no mount point specified.
+ rmmod host/tets_verify.ko
rmmod: ERROR: Module host/tets_verify is not currently loaded
+ modprobe -r nvme-fabrics
+ modprobe -r nvme_loop
+ modprobe -r nvmet
+ modprobe -r nvme
+ sleep 1
+ modprobe -r nvme-core
+ lsmod
+ grep nvme
++ nproc
+ make -j 48 M=drivers/nvme/ modules
/lib/modules/6.3.0+/kernel/drivers/nvme/host/:
total 7.5M
-rw-r--r--. 1 root root 3.5M May 1 00:49 nvme-core.ko
-rw-r--r--. 1 root root 477K May 1 00:49 nvme-fabrics.ko
-rw-r--r--. 1 root root 974K May 1 00:49 nvme-fc.ko
-rw-r--r--. 1 root root 783K May 1 00:49 nvme.ko
-rw-r--r--. 1 root root 926K May 1 00:49 nvme-rdma.ko
-rw-r--r--. 1 root root 902K May 1 00:49 nvme-tcp.ko
/lib/modules/6.3.0+/kernel/drivers/nvme/target//:
total 7.4M
-rw-r--r--. 1 root root 532K May 1 00:49 nvme-fcloop.ko
-rw-r--r--. 1 root root 469K May 1 00:49 nvme-loop.ko
-rw-r--r--. 1 root root 799K May 1 00:49 nvmet-fc.ko
-rw-r--r--. 1 root root 4.0M May 1 00:49 nvmet.ko
-rw-r--r--. 1 root root 892K May 1 00:49 nvmet-rdma.ko
-rw-r--r--. 1 root root 753K May 1 00:49 nvmet-tcp.ko
+ modprobe nvme
+ dmesg -c
[ 124.012407] nvme 0000:00:04.0: vgaarb: pci_notify
[ 124.274357] pci 0000:00:04.0: vgaarb: pci_notify
[ 127.055054] nvme 0000:00:04.0: vgaarb: pci_notify
[ 127.055093] nvme 0000:00:04.0: runtime IRQ mapping not provided by arch
[ 127.056594] nvme nvme0: pci function 0000:00:04.0
[ 127.258570] nvme 0000:00:04.0: enabling bus mastering
[ 127.259413] nvme 0000:00:04.0: saving config space at offset 0x0 (reading 0x101b36)
[ 127.259433] nvme 0000:00:04.0: saving config space at offset 0x4 (reading 0x100507)
[ 127.259439] nvme 0000:00:04.0: saving config space at offset 0x8 (reading 0x1080202)
[ 127.259444] nvme 0000:00:04.0: saving config space at offset 0xc (reading 0x0)
[ 127.259448] nvme 0000:00:04.0: saving config space at offset 0x10 (reading 0xfebd0004)
[ 127.259453] nvme 0000:00:04.0: saving config space at offset 0x14 (reading 0x0)
[ 127.259458] nvme 0000:00:04.0: saving config space at offset 0x18 (reading 0x0)
[ 127.259463] nvme 0000:00:04.0: saving config space at offset 0x1c (reading 0x0)
[ 127.259467] nvme 0000:00:04.0: saving config space at offset 0x20 (reading 0x0)
[ 127.259473] nvme 0000:00:04.0: saving config space at offset 0x24 (reading 0x0)
[ 127.259477] nvme 0000:00:04.0: saving config space at offset 0x28 (reading 0x0)
[ 127.259482] nvme 0000:00:04.0: saving config space at offset 0x2c (reading 0x11001af4)
[ 127.259486] nvme 0000:00:04.0: saving config space at offset 0x30 (reading 0x0)
[ 127.259491] nvme 0000:00:04.0: saving config space at offset 0x34 (reading 0x40)
[ 127.259495] nvme 0000:00:04.0: saving config space at offset 0x38 (reading 0x0)
[ 127.259500] nvme 0000:00:04.0: saving config space at offset 0x3c (reading 0x10b)
[ 127.279599] nvme nvme0: 48/0/0 default/read/poll queues
[ 127.286872] nvme nvme0: Ignoring bogus Namespace Identifiers
[ 127.301631] nvme 0000:00:04.0: vgaarb: pci_notify
linux-block (for-next) # cdblktests
blktests (master) # sh ./test-memleak
modprobe: FATAL: Module kmemleak-test not found in directory /lib/modules/6.3.0+
modprobe: FATAL: Module kmemleak-test not found.
+ for transport in loop tcp
+ echo '################nvme_trtype=loop############'
################nvme_trtype=loop############
++ seq 1 10
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.486s ... 2.195s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.716s ... 6.905s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.195s ... 2.086s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 6.905s ... 4.687s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.086s ... 2.094s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.687s ... 4.746s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.094s ... 2.125s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.746s ... 5.240s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.125s ... 2.110s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.240s ... 4.592s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.110s ... 2.075s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.592s ... 4.734s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.075s ... 2.054s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.734s ... 4.757s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.054s ... 2.084s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.757s ... 4.751s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.084s ... 2.100s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.751s ... 4.832s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.100s ... 2.065s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.832s ... 4.814s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for transport in loop tcp
+ echo '################nvme_trtype=tcp############'
################nvme_trtype=tcp############
++ seq 1 10
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.065s ... 1.472s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.814s ... 5.601s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.472s ... 1.478s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.601s ... 5.615s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.478s ... 1.494s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.615s ... 5.638s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.494s ... 1.488s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.638s ... 5.572s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.488s ... 1.489s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.572s ... 5.572s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.489s ... 1.457s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.572s ... 5.597s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.457s ... 1.500s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.597s ... 5.583s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.500s ... 1.480s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.583s ... 5.597s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.480s ... 1.491s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.597s ... 5.584s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for i in `seq 1 10`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.491s ... 1.471s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.584s ... 5.586s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for transport in loop tcp
+ echo '################nvme_trtype=loop############'
################nvme_trtype=loop############
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/
nvme/002 (create many subsystems and test discovery) [passed] runtime ... 35.769s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed] runtime 10.155s ... 10.152s
nvme/004 (test nvme and nvmet UUID NS descriptors) [passed] runtime 1.282s ... 1.588s
nvme/005 (reset local loopback target) [passed] runtime 1.368s ... 1.957s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed] runtime 0.107s ... 0.114s
nvme/007 (create an NVMeOF target with a file-backed ns) [passed] runtime 0.083s ... 0.067s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed] runtime 1.297s ... 1.642s
nvme/009 (create an NVMeOF host with a file-backed ns) [passed] runtime 1.278s ... 1.577s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed] runtime 102.287s ... 87.668s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed] runtime 93.770s ... 83.243s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed] runtime 80.527s ... 74.513s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed] runtime 83.463s ... 82.201s
nvme/014 (flush a NVMeOF block device-backed ns) [passed] runtime 5.563s ... 5.878s
nvme/015 (unit test for NVMe flush for file backed ns) [passed] runtime 4.043s ... 4.513s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed] runtime ... 18.275s
nvme/017 (create/delete many file-ns and test discovery) [passed] runtime ... 18.579s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed] runtime 1.275s ... 1.582s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed] runtime 1.286s ... 1.599s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed] runtime 1.271s ... 1.571s
nvme/021 (test NVMe list command on NVMeOF file-backed ns) [passed] runtime 1.258s ... 1.555s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns) [passed] runtime 1.327s ... 1.936s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed] runtime 1.302s ... 1.610s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed] runtime 1.253s ... 1.572s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed] runtime 1.250s ... 1.560s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed] runtime 1.255s ... 1.567s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed] runtime 1.270s ... 1.594s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed] runtime 1.282s ... 1.573s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed] runtime 1.562s ... 1.893s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed] runtime 0.274s ... 0.328s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed] runtime 1.427s ... 4.514s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed] runtime 0.044s ... 0.031s
nvme/040 (test nvme fabrics controller reset/disconnect operation during I/O) [passed] runtime 7.377s ... 8.120s
nvme/041 (Create authenticated connections) [passed] runtime 1.088s ... 1.351s
nvme/042 (Test dhchap key types for authenticated connections) [passed] runtime 6.691s ... 8.555s
nvme/043 (Test hash and DH group variations for authenticated connections) [passed] runtime 1.346s ... 5.423s
nvme/044 (Test bi-directional authentication) [passed] runtime 1.471s ... 2.156s
nvme/045 (Test re-authentication) [passed] runtime 5.586s ... 4.742s
nvme/047 (test different queue types for fabric transports) [not run] runtime 1.785s ...
    nvme_trtype=loop is not supported in this test
nvme/048 (Test queue count changes on reconnect) [not run] runtime 5.549s ...
    nvme_trtype=loop is not supported in this test
+ echo scan
+ cat /sys/kernel/debug/kmemleak
+ for transport in loop tcp
+ echo '################nvme_trtype=tcp############'
################nvme_trtype=tcp############
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/
nvme/002 (create many subsystems and test discovery) [not run] runtime 35.769s ...
    nvme_trtype=tcp is not supported in this test
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed] runtime 10.152s ... 10.166s
nvme/004 (test nvme and nvmet UUID NS descriptors) [passed] runtime 1.588s ... 1.280s
nvme/005 (reset local loopback target) [passed] runtime 1.957s ... 1.378s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed] runtime 0.114s ... 0.109s
nvme/007 (create an NVMeOF target with a file-backed ns) [passed] runtime 0.067s ... 0.074s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed] runtime 1.642s ... 1.289s
nvme/009 (create an NVMeOF host with a file-backed ns) [passed] runtime 1.577s ... 1.266s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed] runtime 87.668s ... 79.153s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed] runtime 83.243s ... 91.240s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed] runtime 74.513s ... 79.656s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed] runtime 82.201s ... 88.545s
nvme/014 (flush a NVMeOF block device-backed ns) [passed] runtime 5.878s ... 5.557s
nvme/015 (unit test for NVMe flush for file backed ns) [passed] runtime 4.513s ... 3.911s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [not run] runtime 18.275s ...
    nvme_trtype=tcp is not supported in this test
nvme/017 (create/delete many file-ns and test discovery) [not run] runtime 18.579s ...
    nvme_trtype=tcp is not supported in this test
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed] runtime 1.582s ... 1.268s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed] runtime 1.599s ... 1.294s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed] runtime 1.571s ... 1.267s
nvme/021 (test NVMe list command on NVMeOF file-backed ns) [passed] runtime 1.555s ... 1.258s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns) [passed] runtime 1.936s ... 1.350s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed] runtime 1.610s ... 1.301s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed] runtime 1.572s ... 1.260s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed] runtime 1.560s ... 1.267s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed] runtime 1.567s ... 1.270s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed] runtime 1.594s ... 1.272s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed] runtime 1.573s ... 1.260s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed] runtime 1.893s ... 1.540s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed] runtime 0.328s ... 0.266s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed] runtime 4.514s ... 1.385s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed] runtime 0.031s ... 0.040s
nvme/040 (test nvme fabrics controller reset/disconnect operation during I/O) [passed] runtime 8.120s ... 7.384s
nvme/041 (Create authenticated connections) [passed] runtime 1.351s ... 1.035s
nvme/042 (Test dhchap key types for authenticated connections) [passed] runtime 8.555s ... 6.533s
nvme/043 (Test hash and DH group variations for authenticated connections) [passed] runtime 5.423s ... 1.321s
nvme/044 (Test bi-directional authentication) [passed] runtime 2.156s ... 1.497s
nvme/045 (Test re-authentication) [passed] runtime 4.742s ... 5.633s
nvme/047 (test different queue types for fabric transports) [passed] runtime ... 2.213s
nvme/048 (Test queue count changes on reconnect) [passed] runtime ... 5.507s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
blktests (master) #

Without the fix :-

blktests (master) # sh test-memleak
+ for transport in loop tcp
+ echo '################nvme_trtype=loop############'
################nvme_trtype=loop############
++ seq 1 2
+ for i in `seq 1 2`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.117s ... 2.063s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.642s ... 4.638s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff88817949cb40 (size 96):
  comm "nvme", pid 38795, jiffies 4296649589 (age 10.273s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<00000000bb3e4509>] kmalloc_trace+0x25/0x90
    [<000000000c352c8d>] dev_pm_qos_update_user_latency_tolerance+0x6f/0x100
    [<0000000014aec0f3>] nvme_init_ctrl+0x38a/0x400 [nvme_core]
    [<000000001ada0f52>] 0xffffffffc0a428b3
    [<000000000b6031a5>] 0xffffffffc055a4cb
    [<000000008adf2100>] vfs_write+0xc5/0x3c0
    [<000000005037c347>] ksys_write+0x5f/0xe0
    [<000000007b6b8e18>] do_syscall_64+0x3b/0x90
    [<00000000fef52c0f>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
+ for i in `seq 1 2`
+ echo clear
+ nvme_trtype=loop
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.063s ... 2.126s
+ nvme_trtype=loop
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.638s ... 4.740s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff8881210e7e00 (size 256):
  comm "nvme", pid 38795, jiffies 4296649589 (age 23.146s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 08 7e 0e 21 81 88 ff ff  .........~.!....
    08 7e 0e 21 81 88 ff ff 40 d7 9c 81 ff ff ff ff  .~.!....@.......
  backtrace:
    [<00000000bb3e4509>] kmalloc_trace+0x25/0x90
    [<00000000e8f2ff7e>] device_add+0x4cf/0x850
    [<0000000025c90eb3>] cdev_device_add+0x44/0x90
    [<000000004bb481f7>] nvme_init_ctrl+0x352/0x400 [nvme_core]
    [<000000001ada0f52>] 0xffffffffc0a428b3
    [<000000000b6031a5>] 0xffffffffc055a4cb
    [<000000008adf2100>] vfs_write+0xc5/0x3c0
    [<000000005037c347>] ksys_write+0x5f/0xe0
    [<000000007b6b8e18>] do_syscall_64+0x3b/0x90
    [<00000000fef52c0f>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88810ddbade0 (size 96):
  comm "nvme", pid 39119, jiffies 4296662383 (age 10.353s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<00000000bb3e4509>] kmalloc_trace+0x25/0x90
    [<000000000c352c8d>] dev_pm_qos_update_user_latency_tolerance+0x6f/0x100
    [<0000000014aec0f3>] nvme_init_ctrl+0x38a/0x400 [nvme_core]
    [<000000001ada0f52>] 0xffffffffc0a428b3
    [<000000000b6031a5>] 0xffffffffc055a4cb
    [<000000008adf2100>] vfs_write+0xc5/0x3c0
    [<000000005037c347>] ksys_write+0x5f/0xe0
    [<000000007b6b8e18>] do_syscall_64+0x3b/0x90
    [<00000000fef52c0f>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
+ for transport in loop tcp
+ echo '################nvme_trtype=tcp############'
################nvme_trtype=tcp############
++ seq 1 2
+ for i in `seq 1 2`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 2.126s ... 1.473s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 4.740s ... 5.701s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff8882682e6000 (size 256):
  comm "nvme", pid 39119, jiffies 4296662383 (age 23.568s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 08 60 2e 68 82 88 ff ff  .........`.h....
    08 60 2e 68 82 88 ff ff 40 d7 9c 81 ff ff ff ff  .`.h....@.......
  backtrace:
    [<00000000bb3e4509>] kmalloc_trace+0x25/0x90
    [<00000000e8f2ff7e>] device_add+0x4cf/0x850
    [<0000000025c90eb3>] cdev_device_add+0x44/0x90
    [<000000004bb481f7>] nvme_init_ctrl+0x352/0x400 [nvme_core]
    [<000000001ada0f52>] 0xffffffffc0a428b3
    [<000000000b6031a5>] 0xffffffffc055a4cb
    [<000000008adf2100>] vfs_write+0xc5/0x3c0
    [<000000005037c347>] ksys_write+0x5f/0xe0
    [<000000007b6b8e18>] do_syscall_64+0x3b/0x90
    [<00000000fef52c0f>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881061229c0 (size 96):
  comm "nvme", pid 39541, jiffies 4296674602 (age 11.349s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<00000000bb3e4509>] kmalloc_trace+0x25/0x90
    [<000000000c352c8d>] dev_pm_qos_update_user_latency_tolerance+0x6f/0x100
    [<0000000014aec0f3>] nvme_init_ctrl+0x38a/0x400 [nvme_core]
    [<00000000dd15d26e>] 0xffffffffc09fc9ae
    [<000000000b6031a5>] 0xffffffffc055a4cb
    [<000000008adf2100>] vfs_write+0xc5/0x3c0
    [<000000005037c347>] ksys_write+0x5f/0xe0
    [<000000007b6b8e18>] do_syscall_64+0x3b/0x90
    [<00000000fef52c0f>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
+ for i in `seq 1 2`
+ echo clear
+ nvme_trtype=tcp
+ ./check nvme/044
nvme/044 (Test bi-directional authentication) [passed] runtime 1.473s ... 1.506s
+ nvme_trtype=tcp
+ ./check nvme/045
nvme/045 (Test re-authentication) [passed] runtime 5.701s ... 5.652s
+ echo scan
+ cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff888248f3fb00 (size 256):
  comm "nvme", pid 39541, jiffies 4296674601 (age 24.645s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 08 fb f3 48 82 88 ff ff  ...........H....
    08 fb f3 48 82 88 ff ff 40 d7 9c 81 ff ff ff ff  ...H....@.......
  backtrace:
    [<00000000bb3e4509>] kmalloc_trace+0x25/0x90
    [<00000000e8f2ff7e>] device_add+0x4cf/0x850
    [<0000000025c90eb3>] cdev_device_add+0x44/0x90
    [<000000004bb481f7>] nvme_init_ctrl+0x352/0x400 [nvme_core]
    [<00000000dd15d26e>] 0xffffffffc09fc9ae
    [<000000000b6031a5>] 0xffffffffc055a4cb
    [<000000008adf2100>] vfs_write+0xc5/0x3c0
    [<000000005037c347>] ksys_write+0x5f/0xe0
    [<000000007b6b8e18>] do_syscall_64+0x3b/0x90
    [<00000000fef52c0f>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff8881050bb540 (size 96):
  comm "nvme", pid 40046, jiffies 4296687837 (age 11.409s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<00000000bb3e4509>] kmalloc_trace+0x25/0x90
    [<000000000c352c8d>] dev_pm_qos_update_user_latency_tolerance+0x6f/0x100
    [<0000000014aec0f3>] nvme_init_ctrl+0x38a/0x400 [nvme_core]
    [<00000000dd15d26e>] 0xffffffffc09fc9ae
    [<000000000b6031a5>] 0xffffffffc055a4cb
    [<000000008adf2100>] vfs_write+0xc5/0x3c0
    [<000000005037c347>] ksys_write+0x5f/0xe0
    [<000000007b6b8e18>] do_syscall_64+0x3b/0x90
    [<00000000fef52c0f>] entry_SYSCALL_64_after_hwframe+0x72/0xdc

^ permalink raw reply related [flat|nested] 12+ messages in thread
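As an aside for anyone scripting triage of reports like the ones above: the `/sys/kernel/debug/kmemleak` output follows a regular shape (an `unreferenced object <addr> (size N):` header, a `comm "...", pid ..., jiffies ...` line, a hex dump, and a backtrace). The parser below is a hypothetical helper sketched against the reports quoted in this thread, not against any documented format guarantee:

```python
import re

# Matches the header pair at the start of each kmemleak report:
#   unreferenced object 0x... (size N):
#     comm "name", pid P, jiffies J (age ...)
REPORT_RE = re.compile(
    r'unreferenced object (0x[0-9a-f]+) \(size (\d+)\):\s*\n'
    r'\s*comm "([^"]+)", pid (\d+), jiffies (\d+)'
)

def parse_kmemleak(text):
    """Return one dict per 'unreferenced object' report in a kmemleak dump."""
    return [
        {
            "addr": m.group(1),
            "size": int(m.group(2)),
            "comm": m.group(3),
            "pid": int(m.group(4)),
            "jiffies": int(m.group(5)),
        }
        for m in REPORT_RE.finditer(text)
    ]
```

Grouping the resulting records by `(comm, size)` makes recurring leaks easy to spot, e.g. the size-96 `dev_pm_qos` and size-256 `device_add` objects that repeat across the runs above.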
* Re: [bug report] kmemleak observed during blktests nvme-tcp
  2023-05-01  8:23 ` Chaitanya Kulkarni
@ 2023-05-01  8:44 ` Sagi Grimberg
  2023-05-02  2:35 ` Chaitanya Kulkarni
  0 siblings, 1 reply; 12+ messages in thread
From: Sagi Grimberg @ 2023-05-01 8:44 UTC (permalink / raw)
To: Chaitanya Kulkarni, Yi Zhang
Cc: linux-block, open list:NVM EXPRESS DRIVER, Hannes Reinecke

On 5/1/23 11:23, Chaitanya Kulkarni wrote:
> On 4/27/23 08:57, Yi Zhang wrote:
>> On Thu, Apr 27, 2023 at 6:58 PM Chaitanya Kulkarni
>> <chaitanyak@nvidia.com> wrote:
>>> On 4/27/23 00:39, Yi Zhang wrote:
>>>> oops, the kmemleak still exists:
>>> hmmm, problem is I'm not able to reproduce
>>> nvme_ctrl_dhchap_secret_store(), I could only get
>>> cdev and dev_pm_ops_xxxx. Let's see if the following fixes the
>>> nvme_ctrl_dhchap_secret_store() case? I've added one
>>> missing kfree() from the earlier fix ..
>> Hi Chaitanya
>>
>> The kmemleak in nvme_ctrl_dhchap_secret_store was fixed with the
>> change, feel free to add:
>>
>> Tested-by: Yi Zhang <yi.zhang@redhat.com>
>>
>
> I was able to fix the remaining memleaks for blktests
> nvme/044-nvme/045 with the nvme-loop and nvme-tcp transports. I've tested
> the following patch with blktests and kmemleak on, and also specifically
> tested the two testcases in question. Whenever you have time, see if this
> fixes all the issues; below are the logs from my testing, here is the
> patch :-
>
> linux-block (for-next) # git diff
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 42e90d00fc40..245a832f4df5 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -5151,6 +5151,10 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
>
>  	BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) > PAGE_SIZE);
> +	ret = nvme_auth_init_ctrl(ctrl);
> +	if (ret)
> +		return ret;
> +
>  	ctrl->discard_page = alloc_page(GFP_KERNEL);
>  	if (!ctrl->discard_page) {
>  		ret = -ENOMEM;
> @@ -5195,13 +5199,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
>
>  	nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
>  	nvme_mpath_init_ctrl(ctrl);
> -	ret = nvme_auth_init_ctrl(ctrl);
> -	if (ret)
> -		goto out_free_cdev;

This does not seem to me like a fix, but a particular way to hide the
issue.
* Re: [bug report] kmemleak observed during blktests nvme-tcp
  2023-05-01  8:44 ` Sagi Grimberg
@ 2023-05-02  2:35 ` Chaitanya Kulkarni
  0 siblings, 0 replies; 12+ messages in thread
From: Chaitanya Kulkarni @ 2023-05-02 2:35 UTC (permalink / raw)
To: Sagi Grimberg
Cc: linux-block, Yi Zhang, open list:NVM EXPRESS DRIVER, Hannes Reinecke

On 5/1/23 01:44, Sagi Grimberg wrote:
>
>
> On 5/1/23 11:23, Chaitanya Kulkarni wrote:
>> On 4/27/23 08:57, Yi Zhang wrote:
>>> On Thu, Apr 27, 2023 at 6:58 PM Chaitanya Kulkarni
>>> <chaitanyak@nvidia.com> wrote:
>>>> On 4/27/23 00:39, Yi Zhang wrote:
>>>>> oops, the kmemleak still exists:
>>>> hmmm, problem is I'm not able to reproduce
>>>> nvme_ctrl_dhchap_secret_store(), I could only get
>>>> cdev and dev_pm_ops_xxxx. Let's see if the following fixes the
>>>> nvme_ctrl_dhchap_secret_store() case? I've added one
>>>> missing kfree() from the earlier fix ..
>>> Hi Chaitanya
>>>
>>> The kmemleak in nvme_ctrl_dhchap_secret_store was fixed with the
>>> change, feel free to add:
>>>
>>> Tested-by: Yi Zhang <yi.zhang@redhat.com>
>>>
>>
>> I was able to fix the remaining memleaks for blktests
>> nvme/044-nvme/045 with the nvme-loop and nvme-tcp transports. I've tested
>> the following patch with blktests and kmemleak on, and also specifically
>> tested the two testcases in question. Whenever you have time, see if this
>> fixes all the issues; below are the logs from my testing, here is the
>> patch :-
>>
>> linux-block (for-next) # git diff
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 42e90d00fc40..245a832f4df5 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -5151,6 +5151,10 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
>>
>>  	BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) > PAGE_SIZE);
>> +	ret = nvme_auth_init_ctrl(ctrl);
>> +	if (ret)
>> +		return ret;
>> +
>>  	ctrl->discard_page = alloc_page(GFP_KERNEL);
>>  	if (!ctrl->discard_page) {
>>  		ret = -ENOMEM;
>> @@ -5195,13 +5199,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
>>
>>  	nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device));
>>  	nvme_mpath_init_ctrl(ctrl);
>> -	ret = nvme_auth_init_ctrl(ctrl);
>> -	if (ret)
>> -		goto out_free_cdev;
>
> This does not seem to me like a fix, but a particular way to hide the
> issue.

Agree, but right now Irvin is working on fixing the nvme_init_ctrl()
issue(s), and the current block tree has the memleak. Shouldn't we fix it
before it gets merged into linux/for-next? If that is not the case, we can
safely drop this ...

-ck
end of thread, other threads:[~2023-05-02  4:45 UTC | newest]

Thread overview: 12+ messages:
2023-04-21  0:37 [bug report] kmemleak observed during blktests nvme-tcp Yi Zhang
2023-04-23 14:15 ` Sagi Grimberg
2023-04-25  9:54   ` Yi Zhang
2023-04-26  8:23     ` Chaitanya Kulkarni
2023-04-26  8:34       ` Chaitanya Kulkarni
2023-04-27  7:24         ` Yi Zhang
2023-04-27  7:39           ` Yi Zhang
2023-04-27 10:58             ` Chaitanya Kulkarni
2023-04-27 15:57               ` Yi Zhang
2023-05-01  8:23                 ` Chaitanya Kulkarni
2023-05-01  8:44                   ` Sagi Grimberg
2023-05-02  2:35                     ` Chaitanya Kulkarni