public inbox for linux-kernel@vger.kernel.org
* Re: [PATCH] shm_destroy lock hang
  2002-06-27 19:17 [PATCH] shm_destroy lock hang Hugh Dickins
@ 2002-06-27 19:01 ` Marcelo Tosatti
  2002-06-27 20:24   ` David Schwartz
  0 siblings, 1 reply; 4+ messages in thread
From: Marcelo Tosatti @ 2002-06-27 19:01 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Linus Torvalds, Dave Jones, Alan Cox, Martin Schwidefsky,
	Andrew Morton, Christoph Rohland, linux-kernel


Hi,

Just please avoid doing that locking nastiness:

function() {
	unlock();
}


lock();
if (something)
	function();
else
	unlock();


In this case it's OK, but please avoid it in general.

On Thu, 27 Jun 2002, Hugh Dickins wrote:

> Martin reported "Bug with shared memory" to LKML 14 May: hang due to
> schedule in truncate_list_pages called from .... shm_destroy holding
> the shm_lock spinlock.  shm_destroy needs that lock for shm_rmid, but
> it can be safely unlocked once link from id to shp has been removed.
> Patch against 2.4.19-rc1 or -pre10-ac2, applies at offset to 2.5.24.
>
> --- 2.4.19-rc1/ipc/shm.c	Mon Jun 24 19:14:58 2002
> +++ linux/ipc/shm.c	Thu Jun 27 19:34:51 2002
> @@ -117,12 +117,14 @@
>   *
>   * @shp: struct to free
>   *
> - * It has to be called with shp and shm_ids.sem locked
> + * It has to be called with shp and shm_ids.sem locked,
> + * but returns with shp unlocked and freed.
>   */
>  static void shm_destroy (struct shmid_kernel *shp)
>  {
>  	shm_tot -= (shp->shm_segsz + PAGE_SIZE - 1) >> PAGE_SHIFT;
>  	shm_rmid (shp->id);
> +	shm_unlock(shp->id);
>  	shmem_lock(shp->shm_file, 0);
>  	fput (shp->shm_file);
>  	kfree (shp);
> @@ -150,8 +152,8 @@
>  	if(shp->shm_nattch == 0 &&
>  	   shp->shm_flags & SHM_DEST)
>  		shm_destroy (shp);
> -
> -	shm_unlock(id);
> +	else
> +		shm_unlock(id);
>  	up (&shm_ids.sem);
>  }
>
> @@ -511,11 +513,9 @@
>  			shp->shm_flags |= SHM_DEST;
>  			/* Do not find it any more */
>  			shp->shm_perm.key = IPC_PRIVATE;
> +			shm_unlock(shmid);
>  		} else
>  			shm_destroy (shp);
> -
> -		/* Unlock */
> -		shm_unlock(shmid);
>  		up(&shm_ids.sem);
>  		return err;
>  	}
> @@ -653,7 +653,8 @@
>  	if(shp->shm_nattch == 0 &&
>  	   shp->shm_flags & SHM_DEST)
>  		shm_destroy (shp);
> -	shm_unlock(shmid);
> +	else
> +		shm_unlock(shmid);
>  	up (&shm_ids.sem);
>
>  	*raddr = (unsigned long) user_addr;
>



* [PATCH] shm_destroy lock hang
@ 2002-06-27 19:17 Hugh Dickins
  2002-06-27 19:01 ` Marcelo Tosatti
  0 siblings, 1 reply; 4+ messages in thread
From: Hugh Dickins @ 2002-06-27 19:17 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Linus Torvalds, Dave Jones, Alan Cox, Martin Schwidefsky,
	Andrew Morton, Christoph Rohland, linux-kernel

Martin reported "Bug with shared memory" to LKML 14 May: hang due to
schedule in truncate_list_pages called from .... shm_destroy holding
the shm_lock spinlock.  shm_destroy needs that lock for shm_rmid, but
it can be safely unlocked once link from id to shp has been removed.
Patch against 2.4.19-rc1 or -pre10-ac2, applies at offset to 2.5.24.

--- 2.4.19-rc1/ipc/shm.c	Mon Jun 24 19:14:58 2002
+++ linux/ipc/shm.c	Thu Jun 27 19:34:51 2002
@@ -117,12 +117,14 @@
  *
  * @shp: struct to free
  *
- * It has to be called with shp and shm_ids.sem locked
+ * It has to be called with shp and shm_ids.sem locked,
+ * but returns with shp unlocked and freed.
  */
 static void shm_destroy (struct shmid_kernel *shp)
 {
 	shm_tot -= (shp->shm_segsz + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	shm_rmid (shp->id);
+	shm_unlock(shp->id);
 	shmem_lock(shp->shm_file, 0);
 	fput (shp->shm_file);
 	kfree (shp);
@@ -150,8 +152,8 @@
 	if(shp->shm_nattch == 0 &&
 	   shp->shm_flags & SHM_DEST)
 		shm_destroy (shp);
-
-	shm_unlock(id);
+	else
+		shm_unlock(id);
 	up (&shm_ids.sem);
 }
 
@@ -511,11 +513,9 @@
 			shp->shm_flags |= SHM_DEST;
 			/* Do not find it any more */
 			shp->shm_perm.key = IPC_PRIVATE;
+			shm_unlock(shmid);
 		} else
 			shm_destroy (shp);
-
-		/* Unlock */
-		shm_unlock(shmid);
 		up(&shm_ids.sem);
 		return err;
 	}
@@ -653,7 +653,8 @@
 	if(shp->shm_nattch == 0 &&
 	   shp->shm_flags & SHM_DEST)
 		shm_destroy (shp);
-	shm_unlock(shmid);
+	else
+		shm_unlock(shmid);
 	up (&shm_ids.sem);
 
 	*raddr = (unsigned long) user_addr;



* Re: [PATCH] shm_destroy lock hang
  2002-06-27 20:24   ` David Schwartz
@ 2002-06-27 20:11     ` Marcelo Tosatti
  0 siblings, 0 replies; 4+ messages in thread
From: Marcelo Tosatti @ 2002-06-27 20:11 UTC (permalink / raw)
  To: David Schwartz; +Cc: Hugh Dickins, linux-kernel



On Thu, 27 Jun 2002, David Schwartz wrote:

>
> >Just please avoid doing that locking nastiness:
> >
> >function() {
> >unlock();
> >}
> >
> >
> >lock();
> >if (something)
> >    function();
> >else
> >    unlock();
>
> 	What do you do in cases where 'function' looks like this:
>
> function() {
>  something();
>  unlock();
>  something_else();
> }
>

Move something() outside function().



* Re: [PATCH] shm_destroy lock hang
  2002-06-27 19:01 ` Marcelo Tosatti
@ 2002-06-27 20:24   ` David Schwartz
  2002-06-27 20:11     ` Marcelo Tosatti
  0 siblings, 1 reply; 4+ messages in thread
From: David Schwartz @ 2002-06-27 20:24 UTC (permalink / raw)
  To: marcelo, Hugh Dickins; +Cc: linux-kernel


>Just please avoid doing that locking nastiness:
>
>function() {
>unlock();
>}
>
>
>lock();
>if (something)
>    function();
>else
>    unlock();

	What do you do in cases where 'function' looks like this:

function() {
 something();
 unlock();
 something_else();
}

	DS



