From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ingo Molnar
Date: Thu, 20 Jan 2005 16:14:50 +0000
Subject: [patch] stricter type-checking rwlock primitives, x86
Message-Id: <20050120161450.GC13812@elte.hu>
List-Id:
References: <20050119092013.GA2045@elte.hu>
 <16878.54402.344079.528038@cargo.ozlabs.ibm.com>
 <20050120023445.GA3475@taniwha.stupidest.org>
 <20050119190104.71f0a76f.akpm@osdl.org>
 <20050120031854.GA8538@taniwha.stupidest.org>
 <16879.29449.734172.893834@wombat.chubb.wattle.id.au>
 <20050120160839.GA13067@elte.hu>
 <20050120161116.GA13812@elte.hu>
 <20050120161259.GB13812@elte.hu>
In-Reply-To: <20050120161259.GB13812@elte.hu>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Linus Torvalds
Cc: Peter Chubb, Chris Wedgwood, Andrew Morton, paulus@samba.org,
 linux-kernel@vger.kernel.org, tony.luck@intel.com, dsw@gelato.unsw.edu.au,
 benh@kernel.crashing.org, linux-ia64@vger.kernel.org, hch@infradead.org,
 wli@holomorphy.com, jbarnes@sgi.com

[patch respun with s/trylock_test/can_lock/]

--

Turn the x86 rwlock macros into inline functions, to get stricter
type-checking. Test-built/booted on x86. (This patch applies on top of
all the previous spinlock patches.)

	Ingo

Signed-off-by: Ingo Molnar

--- linux/include/asm-i386/spinlock.h.orig
+++ linux/include/asm-i386/spinlock.h
@@ -198,21 +198,33 @@ typedef struct {
 
 #define RW_LOCK_UNLOCKED (rwlock_t) { RW_LOCK_BIAS RWLOCK_MAGIC_INIT }
 
-#define rwlock_init(x)	do { *(x) = RW_LOCK_UNLOCKED; } while(0)
+static inline void rwlock_init(rwlock_t *rw)
+{
+	*rw = RW_LOCK_UNLOCKED;
+}
 
-#define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS)
+static inline int rwlock_is_locked(rwlock_t *rw)
+{
+	return rw->lock != RW_LOCK_BIAS;
+}
 
 /**
  * read_can_lock - would read_trylock() succeed?
  * @lock: the rwlock in question.
  */
-#define read_can_lock(x) (atomic_read((atomic_t *)&(x)->lock) > 0)
+static inline int read_can_lock(rwlock_t *rw)
+{
+	return atomic_read((atomic_t *)&rw->lock) > 0;
+}
 
 /**
  * write_can_lock - would write_trylock() succeed?
  * @lock: the rwlock in question.
  */
-#define write_can_lock(x) ((x)->lock == RW_LOCK_BIAS)
+static inline int write_can_lock(rwlock_t *rw)
+{
+	return atomic_read((atomic_t *)&rw->lock) == RW_LOCK_BIAS;
+}
 
 /*
  * On x86, we implement read-write locks as a 32-bit counter
@@ -241,8 +253,16 @@ static inline void _raw_write_lock(rwloc
 	__build_write_lock(rw, "__write_lock_failed");
 }
 
-#define _raw_read_unlock(rw)	asm volatile("lock ; incl %0" :"=m" ((rw)->lock) : : "memory")
-#define _raw_write_unlock(rw)	asm volatile("lock ; addl $" RW_LOCK_BIAS_STR ",%0":"=m" ((rw)->lock) : : "memory")
+static inline void _raw_read_unlock(rwlock_t *rw)
+{
+	asm volatile("lock ; incl %0" :"=m" (rw->lock) : : "memory");
+}
+
+static inline void _raw_write_unlock(rwlock_t *rw)
+{
+	asm volatile("lock ; addl $" RW_LOCK_BIAS_STR
+			",%0":"=m" (rw->lock) : : "memory");
+}
 
 static inline int _raw_read_trylock(rwlock_t *lock)
 {