From: Dave Jones <davej@codemonkey.org.uk>
To: linux-scsi@vger.kernel.org
Subject: blk-mq vs kmemleak
Date: Fri, 3 Jul 2015 12:11:37 -0400
Message-ID: <20150703161137.GA10438@codemonkey.org.uk>
After a recent fuzzing run, I noticed that the machine had oom'd and killed
everything, yet 3GB of memory was still in use, which I couldn't reclaim
even with /proc/sys/vm/drop_caches.

So I enabled kmemleak. After applying this:
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index cf79f110157c..6dc18dbad9ec 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -553,8 +553,8 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
 	if (!object) {
-		pr_warning("Cannot allocate a kmemleak_object structure\n");
-		kmemleak_disable();
+		//pr_warning("Cannot allocate a kmemleak_object structure\n");
+		//kmemleak_disable();
 		return NULL;
 	}
otherwise it would disable itself within a minute of runtime.
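(A slightly tidier version of the same workaround, if anyone wants to
reproduce this, would be to keep a rate-limited warning instead of
commenting things out entirely; untested sketch only:)

	object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
	if (!object) {
		/*
		 * Untested sketch: warn (rate-limited) but don't call
		 * kmemleak_disable(), so a long fuzzing run keeps producing
		 * kmemleak reports even if this allocation occasionally fails.
		 */
		pr_warn_ratelimited("Cannot allocate a kmemleak_object structure\n");
		return NULL;
	}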
Now I'm seeing a lot of traces like this:
unreferenced object 0xffff8800ba8202c0 (size 320):
  comm "kworker/u4:1", pid 38, jiffies 4294741176 (age 46887.690s)
  hex dump (first 32 bytes):
    21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00  !Ce.............
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff8969b80e>] kmemleak_alloc+0x4e/0xb0
    [<ffffffff891b3e37>] kmem_cache_alloc+0x107/0x200
    [<ffffffff8916528d>] mempool_alloc_slab+0x1d/0x30
    [<ffffffff89165963>] mempool_alloc+0x63/0x180
    [<ffffffff8945f85a>] scsi_sg_alloc+0x4a/0x50
    [<ffffffff89323f0e>] __sg_alloc_table+0x11e/0x180
    [<ffffffff8945dc03>] scsi_alloc_sgtable+0x43/0x90
    [<ffffffff8945dc81>] scsi_init_sgtable+0x31/0x80
    [<ffffffff8945dd1a>] scsi_init_io+0x4a/0x1c0
    [<ffffffff8946da59>] sd_init_command+0x59/0xe40
    [<ffffffff8945df81>] scsi_setup_cmnd+0xf1/0x160
    [<ffffffff8945e75c>] scsi_queue_rq+0x57c/0x6a0
    [<ffffffff892f60b8>] __blk_mq_run_hw_queue+0x1d8/0x390
    [<ffffffff892f5e5e>] blk_mq_run_hw_queue+0x9e/0x120
    [<ffffffff892f7524>] blk_mq_insert_requests+0xd4/0x1a0
    [<ffffffff892f8273>] blk_mq_flush_plug_list+0x123/0x140
unreferenced object 0xffff8800ba824800 (size 640):
  comm "trinity-c2", pid 3687, jiffies 4294843075 (age 46785.966s)
  hex dump (first 32 bytes):
    21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00  !Ce.............
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff8969b80e>] kmemleak_alloc+0x4e/0xb0
    [<ffffffff891b3e37>] kmem_cache_alloc+0x107/0x200
    [<ffffffff8916528d>] mempool_alloc_slab+0x1d/0x30
    [<ffffffff89165963>] mempool_alloc+0x63/0x180
    [<ffffffff8945f85a>] scsi_sg_alloc+0x4a/0x50
    [<ffffffff89323f0e>] __sg_alloc_table+0x11e/0x180
    [<ffffffff8945dc03>] scsi_alloc_sgtable+0x43/0x90
    [<ffffffff8945dc81>] scsi_init_sgtable+0x31/0x80
    [<ffffffff8945dd1a>] scsi_init_io+0x4a/0x1c0
    [<ffffffff8946da59>] sd_init_command+0x59/0xe40
    [<ffffffff8945df81>] scsi_setup_cmnd+0xf1/0x160
    [<ffffffff8945e75c>] scsi_queue_rq+0x57c/0x6a0
    [<ffffffff892f60b8>] __blk_mq_run_hw_queue+0x1d8/0x390
    [<ffffffff892f5e5e>] blk_mq_run_hw_queue+0x9e/0x120
    [<ffffffff892f7524>] blk_mq_insert_requests+0xd4/0x1a0
    [<ffffffff892f8273>] blk_mq_flush_plug_list+0x123/0x140
unreferenced object 0xffff8800a9fe6780 (size 2560):
  comm "kworker/1:1H", pid 171, jiffies 4294843118 (age 46785.923s)
  hex dump (first 32 bytes):
    21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00  !Ce.............
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff8969b80e>] kmemleak_alloc+0x4e/0xb0
    [<ffffffff891b3e37>] kmem_cache_alloc+0x107/0x200
    [<ffffffff8916528d>] mempool_alloc_slab+0x1d/0x30
    [<ffffffff89165963>] mempool_alloc+0x63/0x180
    [<ffffffff8945f85a>] scsi_sg_alloc+0x4a/0x50
    [<ffffffff89323f0e>] __sg_alloc_table+0x11e/0x180
    [<ffffffff8945dc03>] scsi_alloc_sgtable+0x43/0x90
    [<ffffffff8945dc81>] scsi_init_sgtable+0x31/0x80
    [<ffffffff8945dd1a>] scsi_init_io+0x4a/0x1c0
    [<ffffffff8946da59>] sd_init_command+0x59/0xe40
    [<ffffffff8945df81>] scsi_setup_cmnd+0xf1/0x160
    [<ffffffff8945e75c>] scsi_queue_rq+0x57c/0x6a0
    [<ffffffff892f60b8>] __blk_mq_run_hw_queue+0x1d8/0x390
    [<ffffffff892f66b2>] blk_mq_run_work_fn+0x12/0x20
    [<ffffffff8908eba7>] process_one_work+0x147/0x420
    [<ffffffff8908f209>] worker_thread+0x69/0x470
The sizes vary, but the hex dump is always the same.
What's the usual completion path where these would get deallocated?
I'm wondering if there's just some annotation missing to appease kmemleak,
because I'm seeing thousands of these.
Or it could be a real leak, but it seems surprising that no one else is
complaining.
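If it's just a missing annotation, I'd guess the fix amounts to a one-liner
after the allocation telling kmemleak not to report the object. Purely
illustrative and untested, with made-up variable names, roughly where
scsi_sg_alloc() does its mempool_alloc() (assuming the pool and gfp mask
already in scope there):

	/* needs #include <linux/kmemleak.h> */
	struct scatterlist *sgl;

	sgl = mempool_alloc(sgp->pool, gfp_mask);
	if (sgl)
		/* illustrative: tell kmemleak this object is accounted for */
		kmemleak_not_leak(sgl);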
Dave