From: Matti Vaittinen <mazziesaccount@gmail.com>
To: Jonathan Cameron <jic23@kernel.org>, Jinjie Ruan <ruanjinjie@huawei.com>
Cc: lars@metafoo.de, linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] iio: gts-helper: Fix memory leaks in iio_gts_build_avail_scale_table()
Date: Tue, 15 Oct 2024 09:26:54 +0300	[thread overview]
Message-ID: <8671e3dc-dda6-4a45-864e-b01c668cd9d9@gmail.com> (raw)
In-Reply-To: <20241012171559.2c118a2f@jic23-huawei>

Hi Jinjie, Jonathan,

On 12/10/2024 19:15, Jonathan Cameron wrote:
> On Fri, 11 Oct 2024 17:55:12 +0800
> Jinjie Ruan <ruanjinjie@huawei.com> wrote:
> 
>> modprobe iio-test-gts and rmmod it, then the following memory leak
>> occurs:
>>
>> 	unreferenced object 0xffffff80c810be00 (size 64):
>> 	  comm "kunit_try_catch", pid 1654, jiffies 4294913981
>> 	  hex dump (first 32 bytes):
>> 	    02 00 00 00 08 00 00 00 20 00 00 00 40 00 00 00  ........ ...@...
>> 	    80 00 00 00 00 02 00 00 00 04 00 00 00 08 00 00  ................
>> 	  backtrace (crc a63d875e):
>> 	    [<0000000028c1b3c2>] kmemleak_alloc+0x34/0x40
>> 	    [<000000001d6ecc87>] __kmalloc_noprof+0x2bc/0x3c0
>> 	    [<00000000393795c1>] devm_iio_init_iio_gts+0x4b4/0x16f4
>> 	    [<0000000071bb4b09>] 0xffffffdf052a62e0
>> 	    [<000000000315bc18>] 0xffffffdf052a6488
>> 	    [<00000000f9dc55b5>] kunit_try_run_case+0x13c/0x3ac
>> 	    [<00000000175a3fd4>] kunit_generic_run_threadfn_adapter+0x80/0xec
>> 	    [<00000000f505065d>] kthread+0x2e8/0x374
>> 	    [<00000000bbfb0e5d>] ret_from_fork+0x10/0x20
>> 	unreferenced object 0xffffff80cbfe9e70 (size 16):
>> 	  comm "kunit_try_catch", pid 1658, jiffies 4294914015
>> 	  hex dump (first 16 bytes):
>> 	    10 00 00 00 40 00 00 00 80 00 00 00 00 00 00 00  ....@...........
>> 	  backtrace (crc 857f0cb4):
>> 	    [<0000000028c1b3c2>] kmemleak_alloc+0x34/0x40
>> 	    [<000000001d6ecc87>] __kmalloc_noprof+0x2bc/0x3c0
>> 	    [<00000000393795c1>] devm_iio_init_iio_gts+0x4b4/0x16f4
>> 	    [<0000000071bb4b09>] 0xffffffdf052a62e0
>> 	    [<000000007d089d45>] 0xffffffdf052a6864
>> 	    [<00000000f9dc55b5>] kunit_try_run_case+0x13c/0x3ac
>> 	    [<00000000175a3fd4>] kunit_generic_run_threadfn_adapter+0x80/0xec
>> 	    [<00000000f505065d>] kthread+0x2e8/0x374
>> 	    [<00000000bbfb0e5d>] ret_from_fork+0x10/0x20
>> 	......
>>
>> The log shows 5*5 "size 64" memory leaks, which correspond to the 5
>> test_init_iio_gain_scale() calls with gts_test_gains of size 10
>> (10 * sizeof(int)) and gts_test_itimes of size 5. It also shows 5*1
>> "size 16" memory leaks, which correspond to the single
>> __test_init_iio_gain_scale() call with gts_test_gains_gain_low of
>> size 3 (3 * sizeof(int)) and gts_test_itimes of size 5.
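
As a quick sanity check of those numbers - assuming a 4-byte int and the
usual kmalloc size-class rounding - the arithmetic works out:

	10 gains * 4 bytes = 40 bytes -> a 64-byte slab object;
	5 itimes * 5 test runs = 25 "size 64" leaks.

	3 gains * 4 bytes = 12 bytes -> a 16-byte slab object;
	5 itimes * 1 test run = 5 "size 16" leaks.
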
>>
>> The reason is that per_time_gains[i], which is allocated in the
>> "gts->num_itime" for loop in iio_gts_build_avail_scale_table(), is
>> never freed.
>>
>> Cc: stable@vger.kernel.org
>> Fixes: 38416c28e168 ("iio: light: Add gain-time-scale helpers")
>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> Hi Jinjie,
> 
> Your explanation looks correct to me.  I'll wait a while though to give Matti time
> to take a look as well.

Sorry for the late reply - I spent the last couple of days walking 
through the swamps and forests in the wilderness.

Something about this was bothering me. I browsed through the code and 
all the allocations, and I cannot see any reason why these arrays 
shouldn't be freed. I even went through the versions I've sent to the 
list, trying to find out what was bothering me.

Well, I found nothing. Version 2 had some code which looped through 
these arrays and freed them - but only in an error path, not in the 
successful case. It looks as if I had thought these values needed to be 
kept around - but I really can't see why.

So - thanks. I suppose leaving the memory unfreed is simply a bug :) 
Nice that you killed this one :)
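
To spell out why the fix is asymmetric: the per_time_scales[i] tables 
(and the pointer array holding them) are handed over to the gts as 
gts->per_time_avail_scale_tables and stay alive, whereas the 
per_time_gains[i] tables are pure scratch space once the scales have 
been computed. Roughly (a sketch mirroring the quoted diff below; the 
scale tables are presumably released later by the gts cleanup):

	/* The gain tables were only needed to compute the scales. */
	for (i = 0; i < gts->num_itime; i++)
		kfree(per_time_gains[i]);
	kfree(per_time_gains);

	/* The scale tables live on in the gts. */
	gts->per_time_avail_scale_tables = per_time_scales;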

Reviewed-by: Matti Vaittinen <mazziesaccount@gmail.com>

> 
> Thanks,
> 
> Jonathan
> 
>> ---
>>   drivers/iio/industrialio-gts-helper.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
>> index 59d7615c0f56..7326c7949244 100644
>> --- a/drivers/iio/industrialio-gts-helper.c
>> +++ b/drivers/iio/industrialio-gts-helper.c
>> @@ -307,6 +307,8 @@ static int iio_gts_build_avail_scale_table(struct iio_gts *gts)
>>   	if (ret)
>>   		goto err_free_out;
>>   
>> +	for (i = 0; i < gts->num_itime; i++)
>> +		kfree(per_time_gains[i]);
>>   	kfree(per_time_gains);
>>   	gts->per_time_avail_scale_tables = per_time_scales;
>>   
> 

