cluster-devel.redhat.com archive mirror
* [Cluster-devel] [GFS2 PATCH] [bz878476] - Fix race in gfs2_rs_alloc
@ 2012-12-19 15:48 Abhijith Das
  2012-12-19 16:23 ` Steven Whitehouse
From: Abhijith Das @ 2012-12-19 15:48 UTC (permalink / raw)
  To: cluster-devel.redhat.com

QE aio tests uncovered a race condition in gfs2_rs_alloc where it is possible to come out of the function with a valid ip->i_res allocation that is then freed before it is used, resulting in a NULL pointer dereference.

This patch moves the initial short-circuit check for a non-NULL ip->i_res inside the ip->i_rw_mutex write lock, so the check and the assignment of ip->i_res happen under the same lock. With this patch I was able to run the reproducer test successfully multiple times.
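
For context, the window is the unlocked fast path at the top of the old function (as it appears in the hunk below): ip->i_res is read without ip->i_rw_mutex held, so the reservation observed there can be freed before the caller gets to use it.

	if (ip->i_res)		/* ip->i_res is read without ip->i_rw_mutex held */
		return 0;	/* the reservation seen here can be freed before
				 * the caller gets to use it */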

Resolves: rhbz#878476
Signed-off-by: Abhi Das <adas@redhat.com>

diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
index 37ee061..738b388 100644
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -557,22 +557,20 @@ void gfs2_free_clones(struct gfs2_rgrpd *rgd)
  */
 int gfs2_rs_alloc(struct gfs2_inode *ip)
 {
-	struct gfs2_blkreserv *res;
+	int error = 0;
 
+	down_write(&ip->i_rw_mutex);
 	if (ip->i_res)
-		return 0;
-
-	res = kmem_cache_zalloc(gfs2_rsrv_cachep, GFP_NOFS);
-	if (!res)
-		return -ENOMEM;
+		goto out;
 
-	RB_CLEAR_NODE(&res->rs_node);
+	ip->i_res = kmem_cache_zalloc(gfs2_rsrv_cachep, GFP_NOFS);
+	if (!ip->i_res) {
+		error = -ENOMEM;
+		goto out;
+	}
 
-	down_write(&ip->i_rw_mutex);
-	if (ip->i_res)
-		kmem_cache_free(gfs2_rsrv_cachep, res);
-	else
-		ip->i_res = res;
+	RB_CLEAR_NODE(&ip->i_res->rs_node);
+out:
 	up_write(&ip->i_rw_mutex);
-	return 0;
+	return error;
 }
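
For readers who prefer the end state over the diff, gfs2_rs_alloc with this patch applied reads as follows (reconstructed from the hunk above): both the short-circuit check and the assignment of ip->i_res now happen under the ip->i_rw_mutex write lock, and an allocation failure is propagated as -ENOMEM.

int gfs2_rs_alloc(struct gfs2_inode *ip)
{
	int error = 0;

	down_write(&ip->i_rw_mutex);
	if (ip->i_res)
		goto out;

	ip->i_res = kmem_cache_zalloc(gfs2_rsrv_cachep, GFP_NOFS);
	if (!ip->i_res) {
		error = -ENOMEM;
		goto out;
	}

	RB_CLEAR_NODE(&ip->i_res->rs_node);
out:
	up_write(&ip->i_rw_mutex);
	return error;
}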




* [Cluster-devel] [GFS2 PATCH] [bz878476] - Fix race in gfs2_rs_alloc
  2012-12-19 15:48 [Cluster-devel] [GFS2 PATCH] [bz878476] - Fix race in gfs2_rs_alloc Abhijith Das
@ 2012-12-19 16:23 ` Steven Whitehouse
From: Steven Whitehouse @ 2012-12-19 16:23 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

I've added that to my tree of pending patches. That will be in the -nmw
tree just as soon as -rc1 is out. Thanks,

Steve.

On Wed, 2012-12-19 at 10:48 -0500, Abhijith Das wrote:
> QE aio tests uncovered a race condition in gfs2_rs_alloc where it is possible to come out of the function with a valid ip->i_res allocation that is then freed before it is used, resulting in a NULL pointer dereference.
> 
> This patch moves the initial short-circuit check for a non-NULL ip->i_res inside the ip->i_rw_mutex write lock, so the check and the assignment of ip->i_res happen under the same lock. With this patch I was able to run the reproducer test successfully multiple times.
> 
> Resolves: rhbz#878476
> Signed-off-by: Abhi Das <adas@redhat.com>
> 
> diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
> index 37ee061..738b388 100644
> --- a/fs/gfs2/rgrp.c
> +++ b/fs/gfs2/rgrp.c
> @@ -557,22 +557,20 @@ void gfs2_free_clones(struct gfs2_rgrpd *rgd)
>   */
>  int gfs2_rs_alloc(struct gfs2_inode *ip)
>  {
> -	struct gfs2_blkreserv *res;
> +	int error = 0;
>  
> +	down_write(&ip->i_rw_mutex);
>  	if (ip->i_res)
> -		return 0;
> -
> -	res = kmem_cache_zalloc(gfs2_rsrv_cachep, GFP_NOFS);
> -	if (!res)
> -		return -ENOMEM;
> +		goto out;
>  
> -	RB_CLEAR_NODE(&res->rs_node);
> +	ip->i_res = kmem_cache_zalloc(gfs2_rsrv_cachep, GFP_NOFS);
> +	if (!ip->i_res) {
> +		error = -ENOMEM;
> +		goto out;
> +	}
>  
> -	down_write(&ip->i_rw_mutex);
> -	if (ip->i_res)
> -		kmem_cache_free(gfs2_rsrv_cachep, res);
> -	else
> -		ip->i_res = res;
> +	RB_CLEAR_NODE(&ip->i_res->rs_node);
> +out:
>  	up_write(&ip->i_rw_mutex);
> -	return 0;
> +	return error;
>  }
> 



