From: Arnd Bergmann <arnd@arndb.de>
To: Paul Mackerras <paulus@samba.org>
Cc: linuxppc64-dev@ozlabs.org, linux-kernel@vger.kernel.org,
Al Viro <viro@ftp.linux.org.uk>, Mark Nutter <mnutter@us.ibm.com>,
Arnd Bergmann <arndb@de.ibm.com>
Subject: [PATCH 01/13] spufs: fix locking in spu_acquire_runnable
Date: Wed, 04 Jan 2006 20:31:21 +0100
Message-ID: <20060104194500.180477000@localhost>
In-Reply-To: 20060104193120.050539000@localhost
[-- Attachment #1: spufs-lock.diff --]
[-- Type: text/plain, Size: 1195 bytes --]
We need to check the validity of ctx->owner while holding the state
semaphore for writing; down_read() is not enough, because the context
can be torn down in the window between dropping the read lock and
taking the write lock.

Noticed by Al Viro.
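The hazard is the usual read-to-write lock "upgrade": anything
established under the read lock can go stale the moment that lock is
dropped, so the owner check has to happen under the write lock.  As a
rough userspace sketch of the same pattern (illustration only, using
pthread rwlocks rather than the kernel rw_semaphore; the struct and
field names merely mirror the spufs code and are not real interfaces):

/* Illustration only, not kernel code.  Compile with: cc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>

struct ctx {
	pthread_rwlock_t state_lock;	/* stand-in for ctx->state_sema */
	void *owner;			/* cleared when the context is freed */
	int runnable;
};

static int acquire_runnable(struct ctx *c)
{
	pthread_rwlock_rdlock(&c->state_lock);
	if (c->runnable)
		return 0;			/* fast path: return holding the read lock */
	pthread_rwlock_unlock(&c->state_lock);

	/* window: another thread may clear c->owner right here ... */

	pthread_rwlock_wrlock(&c->state_lock);
	if (!c->owner) {			/* ... so it must be checked under the write lock */
		pthread_rwlock_unlock(&c->state_lock);
		return -1;
	}
	c->runnable = 1;
	/* pthreads cannot downgrade a write lock; the kernel patch does so here */
	return 0;				/* slow path: return holding the write lock */
}

int main(void)
{
	struct ctx c = { .owner = &c };

	pthread_rwlock_init(&c.state_lock, NULL);
	if (acquire_runnable(&c) == 0) {
		puts("acquired");
		pthread_rwlock_unlock(&c.state_lock);
	}
	pthread_rwlock_destroy(&c.state_lock);
	return 0;
}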
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Index: linux-cg/arch/powerpc/platforms/cell/spufs/context.c
===================================================================
--- linux-cg.orig/arch/powerpc/platforms/cell/spufs/context.c 2005-12-22 12:10:15.000000000 +0000
+++ linux-cg/arch/powerpc/platforms/cell/spufs/context.c 2005-12-22 12:10:20.000000000 +0000
@@ -120,27 +120,29 @@
 		ctx->spu->prio = current->prio;
 		return 0;
 	}
+	up_read(&ctx->state_sema);
+
+	down_write(&ctx->state_sema);
 	/* ctx is about to be freed, can't acquire any more */
 	if (!ctx->owner) {
 		ret = -EINVAL;
 		goto out;
 	}
-	up_read(&ctx->state_sema);
-	down_write(&ctx->state_sema);
 	if (ctx->state == SPU_STATE_SAVED) {
 		ret = spu_activate(ctx, 0);
 		ctx->state = SPU_STATE_RUNNABLE;
 	}
-	downgrade_write(&ctx->state_sema);
 	if (ret)
 		goto out;
+	downgrade_write(&ctx->state_sema);
 	/* On success, we return holding the lock */
+
 	return ret;
 out:
 	/* Release here, to simplify calling code. */
-	up_read(&ctx->state_sema);
+	up_write(&ctx->state_sema);
 	return ret;
 }
--