public inbox for linux-kernel@vger.kernel.org
* [PATCH] ceph: fix memory leak in MDS client cap_auths
@ 2024-08-14 10:17 Luis Henriques (SUSE)
  2024-08-19  1:06 ` Xiubo Li
  0 siblings, 1 reply; 3+ messages in thread
From: Luis Henriques (SUSE) @ 2024-08-14 10:17 UTC (permalink / raw)
  To: Xiubo Li, Ilya Dryomov, Milind Changire
  Cc: ceph-devel, linux-kernel, Luis Henriques (SUSE)

The cap_auths that are allocated during an MDS session opening are never
released, causing a memory leak detected by kmemleak.  Fix this by freeing
the memory allocated when shutting down the mds client.

Fixes: 1d17de9534cb ("ceph: save cap_auths in MDS client when session is opened")
Signed-off-by: Luis Henriques (SUSE) <luis.henriques@linux.dev>
---
 fs/ceph/mds_client.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 276e34ab3e2c..d798d0a5b2b1 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -6015,6 +6015,20 @@ static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
 		ceph_mdsmap_destroy(mdsc->mdsmap);
 	kfree(mdsc->sessions);
 	ceph_caps_finalize(mdsc);
+
+	if (mdsc->s_cap_auths) {
+		int i;
+
+		mutex_lock(&mdsc->mutex);
+		for (i = 0; i < mdsc->s_cap_auths_num; i++) {
+			kfree(mdsc->s_cap_auths[i].match.gids);
+			kfree(mdsc->s_cap_auths[i].match.path);
+			kfree(mdsc->s_cap_auths[i].match.fs_name);
+		}
+		kfree(mdsc->s_cap_auths);
+		mutex_unlock(&mdsc->mutex);
+	}
+
 	ceph_pool_perm_destroy(mdsc);
 }
 

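For context, kmemleak reports like the one behind this patch are read through the documented debugfs interface (this requires a kernel built with CONFIG_DEBUG_KMEMLEAK and debugfs mounted at /sys/kernel/debug):

```shell
# Trigger an on-demand scan (kmemleak also scans periodically by default)
echo scan > /sys/kernel/debug/kmemleak
# Read back the list of suspected leaks, each with an allocation backtrace
cat /sys/kernel/debug/kmemleak
# Clear the current list of suspects before re-testing a fix
echo clear > /sys/kernel/debug/kmemleak
```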

* Re: [PATCH] ceph: fix memory leak in MDS client cap_auths
  2024-08-14 10:17 [PATCH] ceph: fix memory leak in MDS client cap_auths Luis Henriques (SUSE)
@ 2024-08-19  1:06 ` Xiubo Li
  2024-08-19  9:30   ` Luis Henriques
  0 siblings, 1 reply; 3+ messages in thread
From: Xiubo Li @ 2024-08-19  1:06 UTC (permalink / raw)
  To: Luis Henriques (SUSE), Ilya Dryomov, Milind Changire
  Cc: ceph-devel, linux-kernel

Hi Luis,

Good catch!

On 8/14/24 18:17, Luis Henriques (SUSE) wrote:
> The cap_auths that are allocated during an MDS session opening are never
> released, causing a memory leak detected by kmemleak.  Fix this by freeing
> the memory allocated when shutting down the mds client.
>
> Fixes: 1d17de9534cb ("ceph: save cap_auths in MDS client when session is opened")
> Signed-off-by: Luis Henriques (SUSE) <luis.henriques@linux.dev>
> ---
>   fs/ceph/mds_client.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
>
> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> index 276e34ab3e2c..d798d0a5b2b1 100644
> --- a/fs/ceph/mds_client.c
> +++ b/fs/ceph/mds_client.c
> @@ -6015,6 +6015,20 @@ static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
>   		ceph_mdsmap_destroy(mdsc->mdsmap);
>   	kfree(mdsc->sessions);
>   	ceph_caps_finalize(mdsc);
> +
> +	if (mdsc->s_cap_auths) {
> +		int i;
> +
> +		mutex_lock(&mdsc->mutex);

BTW, is the lock really needed here?

IMO it should be safe to remove it: once we get here the sb has already
been killed, and 'mdsc->stopping' guarantees that no other thread will
access 'mdsc'. Isn't that right?

Otherwise we would need to take the lock from the beginning of this
function.

Thanks

- Xiubo

> +		for (i = 0; i < mdsc->s_cap_auths_num; i++) {
> +			kfree(mdsc->s_cap_auths[i].match.gids);
> +			kfree(mdsc->s_cap_auths[i].match.path);
> +			kfree(mdsc->s_cap_auths[i].match.fs_name);
> +		}
> +		kfree(mdsc->s_cap_auths);
> +		mutex_unlock(&mdsc->mutex);
> +	}
> +
>   	ceph_pool_perm_destroy(mdsc);
>   }
>   
>



* Re: [PATCH] ceph: fix memory leak in MDS client cap_auths
  2024-08-19  1:06 ` Xiubo Li
@ 2024-08-19  9:30   ` Luis Henriques
  0 siblings, 0 replies; 3+ messages in thread
From: Luis Henriques @ 2024-08-19  9:30 UTC (permalink / raw)
  To: Xiubo Li; +Cc: Ilya Dryomov, Milind Changire, ceph-devel, linux-kernel

On Mon, Aug 19 2024, Xiubo Li wrote:

> Hi Luis,
>
> Good catch!
>
> On 8/14/24 18:17, Luis Henriques (SUSE) wrote:
>> The cap_auths that are allocated during an MDS session opening are never
>> released, causing a memory leak detected by kmemleak.  Fix this by freeing
>> the memory allocated when shutting down the mds client.
>>
>> Fixes: 1d17de9534cb ("ceph: save cap_auths in MDS client when session is opened")
>> Signed-off-by: Luis Henriques (SUSE) <luis.henriques@linux.dev>
>> ---
>>   fs/ceph/mds_client.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
>> index 276e34ab3e2c..d798d0a5b2b1 100644
>> --- a/fs/ceph/mds_client.c
>> +++ b/fs/ceph/mds_client.c
>> @@ -6015,6 +6015,20 @@ static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
>>   		ceph_mdsmap_destroy(mdsc->mdsmap);
>>   	kfree(mdsc->sessions);
>>   	ceph_caps_finalize(mdsc);
>> +
>> +	if (mdsc->s_cap_auths) {
>> +		int i;
>> +
>> +		mutex_lock(&mdsc->mutex);
>
> BTW, is the lock really needed here?
>
> IMO it should be safe to remove it: once we get here the sb has already been
> killed, and 'mdsc->stopping' guarantees that no other thread will access
> 'mdsc'. Isn't that right?

Ah, yes, good point.  Thanks, I'll send v2 shortly.

Cheers,
-- 
Luís


> Otherwise we would need to take the lock from the beginning of this function.
>
> Thanks
>
> - Xiubo
>
>> +		for (i = 0; i < mdsc->s_cap_auths_num; i++) {
>> +			kfree(mdsc->s_cap_auths[i].match.gids);
>> +			kfree(mdsc->s_cap_auths[i].match.path);
>> +			kfree(mdsc->s_cap_auths[i].match.fs_name);
>> +		}
>> +		kfree(mdsc->s_cap_auths);
>> +		mutex_unlock(&mdsc->mutex);
>> +	}
>> +
>>   	ceph_pool_perm_destroy(mdsc);
>>   }
>>   
>



end of thread (newest message: 2024-08-19  9:30 UTC)

Thread overview: 3+ messages
2024-08-14 10:17 [PATCH] ceph: fix memory leak in MDS client cap_auths Luis Henriques (SUSE)
2024-08-19  1:06 ` Xiubo Li
2024-08-19  9:30   ` Luis Henriques
