* *_trylock return on success?
@ 2000-11-25 15:07 Roger Larsson
2000-11-25 17:49 ` Rik van Riel
0 siblings, 1 reply; 9+ messages in thread
From: Roger Larsson @ 2000-11-25 15:07 UTC (permalink / raw)
To: Linux Kernel Mailing List, Nigel Gamble
Hi,
Background information:
Compiled and tested test11 with the MontaVista preemptive patch.
After pressing Magic-SysRq-M, all processes that tried to do IO hung in 'D' state.
The last message printed was "Buffer memory ..."
Pressing Magic-SysRq-M again, all the hung processes continued...
Checking the patch, it looks like this:
printk("Buffer memory: %6dkB\n",
atomic_read(&buffermem_pages) << (PAGE_SHIFT-10));
-#ifdef CONFIG_SMP /* trylock does nothing on UP and so we could deadlock */
- if (!spin_trylock(&lru_list_lock))
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+ if (!mutex_trylock(&lru_list_mtx))
return;
for(nlist = 0; nlist < NR_LIST; nlist++) {
Ok, so I now run some more code than before (UP system with PREEMPT).
mutex_trylock is defined as:
+#define mutex_trylock(x) down_trylock(x)
I noticed that if the trylock returns 0 on success, I get exactly the
behavior I see:
- not printing the buffer info the first time;
- holding the lock, stopping other fs processes;
- failing the mutex_trylock on the next attempt, interpreting that as success;
- continuing and printing the buffer info;
- finally releasing the mutex.
I removed the not (!) and now it works as expected.
Questions:
What are _trylocks supposed to return?
Do spin_trylock and down_trylock behave differently?
Why isn't the expected return value documented?
/RogerL
--
Home page:
http://www.norran.net/nra02596/
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Please read the FAQ at http://www.tux.org/lkml/
* Re: *_trylock return on success?
2000-11-25 15:07 Roger Larsson
@ 2000-11-25 17:49 ` Rik van Riel
2000-11-25 18:30 ` Philipp Rumpf
2000-11-25 18:58 ` Roger Larsson
0 siblings, 2 replies; 9+ messages in thread
From: Rik van Riel @ 2000-11-25 17:49 UTC (permalink / raw)
To: Roger Larsson; +Cc: Linux Kernel Mailing List, Nigel Gamble
On Sat, 25 Nov 2000, Roger Larsson wrote:
> Questions:
> What are _trylocks supposed to return?
It depends on the type of _trylock ;(
> Does spin_trylock and down_trylock behave differently?
> Why isn't the expected return value documented?
The whole trylock stuff is, IMHO, a big mess. When you
change from one type of trylock to another, you may be
forced to invert the logic of your code since the return
code from the different locks is different.
For bitflags, for example, the trylock returns the state the bit had
before the lock (i.e. 1 if the thing was already locked).
For spinlocks, it'll probably return something else ;/
regards,
Rik
--
Hollywood goes for world dumbination,
Trailer at 11.
http://www.conectiva.com/ http://www.surriel.com/
* Re: *_trylock return on success?
2000-11-25 17:49 ` Rik van Riel
@ 2000-11-25 18:30 ` Philipp Rumpf
2000-11-25 19:03 ` Roger Larsson
2000-11-25 18:58 ` Roger Larsson
1 sibling, 1 reply; 9+ messages in thread
From: Philipp Rumpf @ 2000-11-25 18:30 UTC (permalink / raw)
To: Rik van Riel; +Cc: Roger Larsson, Linux Kernel Mailing List, Nigel Gamble
On Sat, Nov 25, 2000 at 03:49:25PM -0200, Rik van Riel wrote:
> On Sat, 25 Nov 2000, Roger Larsson wrote:
>
> > Questions:
> > What are _trylocks supposed to return?
>
> It depends on the type of _trylock ;(
>
> > Does spin_trylock and down_trylock behave differently?
> > Why isn't the expected return value documented?
>
> The whole trylock stuff is, IMHO, a big mess. When you
> change from one type of trylock to another, you may be
> forced to invert the logic of your code since the return
> code from the different locks is different.
>
> For bitflags, for example, the trylock returns the state
> the bit had before the lock (ie. 1 if the thing was already
> locked).
I assume you're talking about test_and_{set,clear}_bit here. Their return
value isn't consistent with the other _trylock functions since they're not
_trylock functions.
I think the real problem is that people use test_and_set_bit for locks,
which is almost never[1] a good idea. The overhead for a semaphore shouldn't
be too much in most cases, and that way it is obvious what you want to do -
and, hopefully, even more obvious if you end up with a semaphore that can
be turned into a spinlock without further changes.
> For spinlocks, it'll probably return something else ;/
_trylock functions return 0 for success.
* Re: *_trylock return on success?
2000-11-25 17:49 ` Rik van Riel
2000-11-25 18:30 ` Philipp Rumpf
@ 2000-11-25 18:58 ` Roger Larsson
1 sibling, 0 replies; 9+ messages in thread
From: Roger Larsson @ 2000-11-25 18:58 UTC (permalink / raw)
To: Rik van Riel, Linus Torvalds, Nigel Gamble; +Cc: Linux Kernel Mailing List
On Saturday 25 November 2000 18:49, Rik van Riel wrote:
> On Sat, 25 Nov 2000, Roger Larsson wrote:
> > Questions:
> > What are _trylocks supposed to return?
>
> It depends on the type of _trylock ;(
>
> > Does spin_trylock and down_trylock behave differently?
> > Why isn't the expected return value documented?
>
> The whole trylock stuff is, IMHO, a big mess. When you
> change from one type of trylock to another, you may be
> forced to invert the logic of your code since the return
> code from the different locks is different.
>
> For bitflags, for example, the trylock returns the state
> the bit had before the lock (ie. 1 if the thing was already
> locked).
>
This holds for down_trylock too.
It looks like it is the spinlocks that are wrong... :-(
Since most return values tend to be error codes, returning 0 on
success also matches how other code works.
>
> For spinlocks, it'll probably return something else ;/
It does...
I guess fixing this is too much too late?
It looks like ppc mixes the conventions... from arch/ppc/lib/locks.c:46:
int spin_trylock(spinlock_t *lock)
{
	if (__spin_trylock(&lock->lock))	/* nonzero on failure */
		return 0;			/* zero on failure */
	lock->owner_cpu = smp_processor_id();
	lock->owner_pc = (unsigned long)__builtin_return_address(0);
	return 1;				/* nonzero on success */
}
BUT anyway...
The thing I hit is not a bug in the kernel proper - it is in the preemptive
stuff.
/RogerL
--
Home page:
http://www.norran.net/nra02596/
* Re: *_trylock return on success?
2000-11-25 18:30 ` Philipp Rumpf
@ 2000-11-25 19:03 ` Roger Larsson
2000-11-25 19:22 ` Philipp Rumpf
0 siblings, 1 reply; 9+ messages in thread
From: Roger Larsson @ 2000-11-25 19:03 UTC (permalink / raw)
To: Philipp Rumpf; +Cc: Linux Kernel Mailing List
On Saturday 25 November 2000 19:30, Philipp Rumpf wrote:
> On Sat, Nov 25, 2000 at 03:49:25PM -0200, Rik van Riel wrote:
> > On Sat, 25 Nov 2000, Roger Larsson wrote:
> > > Questions:
> > > What are _trylocks supposed to return?
> >
> > It depends on the type of _trylock ;(
> >
> > > Does spin_trylock and down_trylock behave differently?
> > > Why isn't the expected return value documented?
> >
> > The whole trylock stuff is, IMHO, a big mess. When you
> > change from one type of trylock to another, you may be
> > forced to invert the logic of your code since the return
> > code from the different locks is different.
> >
> > For bitflags, for example, the trylock returns the state
> > the bit had before the lock (ie. 1 if the thing was already
> > locked).
>
> I assume you're talking about test_and_{set,clear}_bit here. Their return
> value isn't consistent with the other _trylock functions since they're not
> _trylock functions.
>
> I think the real problem is that people use test_and_set_bit for locks,
> which is almost never[1] a good idea. The overhead for a semaphore
> shouldn't be too much in most cases, and that way it is obvious what you
> want to do - and, hopefully, even more obvious if you end up with a
> semaphore that can be turned into a spinlock without further changes.
>
> > For spinlocks, it'll probably return something else ;/
>
> _trylock functions return 0 for success.
Not spin_trylock.
A simple example, from include/asm-mips/spinlock.h:65:
#define spin_trylock(lock)	(!test_and_set_bit(0,(lock)))
/RogerL
--
Home page:
http://www.norran.net/nra02596/
* Re: *_trylock return on success?
2000-11-25 19:03 ` Roger Larsson
@ 2000-11-25 19:22 ` Philipp Rumpf
2000-11-25 21:05 ` Roger Larsson
0 siblings, 1 reply; 9+ messages in thread
From: Philipp Rumpf @ 2000-11-25 19:22 UTC (permalink / raw)
To: Roger Larsson; +Cc: Linux Kernel Mailing List
On Sat, Nov 25, 2000 at 08:03:49PM +0100, Roger Larsson wrote:
> > _trylock functions return 0 for success.
>
> Not spin_trylock
Argh, I missed the (recent ?) change to make x86 spinlocks use 1 to mean
unlocked. You're correct, and obviously this should be fixed.
* Re: *_trylock return on success?
2000-11-25 19:22 ` Philipp Rumpf
@ 2000-11-25 21:05 ` Roger Larsson
2000-11-28 1:07 ` Roger Larsson
0 siblings, 1 reply; 9+ messages in thread
From: Roger Larsson @ 2000-11-25 21:05 UTC (permalink / raw)
To: Philipp Rumpf, Linus Torvalds; +Cc: Linux Kernel Mailing List, Nigel Gamble
On Saturday 25 November 2000 20:22, Philipp Rumpf wrote:
> On Sat, Nov 25, 2000 at 08:03:49PM +0100, Roger Larsson wrote:
> > > _trylock functions return 0 for success.
> >
> > Not spin_trylock
>
> Argh, I missed the (recent ?) change to make x86 spinlocks use 1 to mean
> unlocked. You're correct, and obviously this should be fixed.
If this is to change in 2.4, I would suggest renaming it to mutex_lock
(as in Nigel's preemptive kernel patch).
Why?
A) The name spin_lock describes how the function is implemented, not
its intended purpose.
B) With a preemptive kernel we will have four intended purposes:
1) SMP - spin_lock, preventing two processors from running the code concurrently.
2) UP - not used, the code can only be executed by one thread.
3) PREEMPTIVE - lock a region against preemption to avoid concurrent execution.
4) debug - addition of debug checks.
With Nigel's patch most are changed, with some additional stuff...
My suggestion: change the name to mutex_lock, and let mutex_trylock
follow the example of the other _trylocks (returning 0 for success).
Ok?
If it is ok, I can prepare a patch (Monday at the earliest).
/RogerL
--
Home page:
http://www.norran.net/nra02596/
* Re: *_trylock return on success?
2000-11-25 21:05 ` Roger Larsson
@ 2000-11-28 1:07 ` Roger Larsson
0 siblings, 0 replies; 9+ messages in thread
From: Roger Larsson @ 2000-11-28 1:07 UTC (permalink / raw)
To: Philipp Rumpf, Linus Torvalds; +Cc: Linux Kernel Mailing List
[-- Attachment #1: Type: text/plain, Size: 1647 bytes --]
On Saturday 25 November 2000 22:05, Roger Larsson wrote:
> On Saturday 25 November 2000 20:22, Philipp Rumpf wrote:
> > On Sat, Nov 25, 2000 at 08:03:49PM +0100, Roger Larsson wrote:
> > > > _trylock functions return 0 for success.
> > >
> > > Not spin_trylock
> >
> > Argh, I missed the (recent ?) change to make x86 spinlocks use 1 to mean
> > unlocked. You're correct, and obviously this should be fixed.
Have looked more into this now...
tasklet_trylock is also wrong (but there are only four of them).
Is this 2.4 only, or were there spin locks earlier too?
My suggestion now is a few steps:
1) Release a kernel version that has corrected _trylocks:
spin2_trylock and tasklet2_trylock.
[with corresponding updates in as many places as possible:
s/!spin_trylock/spin2_trylock/g
s/spin_trylock/!spin2_trylock/g
. . .]
(ready for spin_trylock, not done for tasklet yet... attached -
hope it got included OK, not fully used to kmail)
2) This will break only in-house drivers or compilations that use
these calls in some strange way...
3a) (DANGEROUS) globally rename spin2_trylock to spin_trylock
[no logic change this time - only the name]
3b) (dangerous) add a compatibility interface:
#define spin_trylock(L) (!spin2_trylock(L))
Probably not necessary, since it can not be linked against.
Binary modules will contain their own compatibility code :-)
Probably preferred by those who maintain drivers for several
releases; 2.2, 2.4, ...
3c) do not do anything more...
Alternative:
1b) do nothing at all - suffer later
/RogerL
--
Home page:
http://www.norran.net/nra02596/
[-- Attachment #2: patch-2.4.0-test11-spin2_trylock --]
[-- Type: text/plain, Size: 23376 bytes --]
diff -Naur v2.4.0-test11-linux/arch/alpha/kernel/alpha_ksyms.c linux/arch/alpha/kernel/alpha_ksyms.c
--- v2.4.0-test11-linux/arch/alpha/kernel/alpha_ksyms.c Mon Nov 27 21:06:14 2000
+++ linux/arch/alpha/kernel/alpha_ksyms.c Tue Nov 28 00:35:13 2000
@@ -198,7 +198,7 @@
#if DEBUG_SPINLOCK
EXPORT_SYMBOL(spin_unlock);
EXPORT_SYMBOL(debug_spin_lock);
-EXPORT_SYMBOL(debug_spin_trylock);
+EXPORT_SYMBOL(debug_spin2_trylock);
#endif
#if DEBUG_RWLOCK
EXPORT_SYMBOL(write_lock);
diff -Naur v2.4.0-test11-linux/arch/alpha/kernel/irq_smp.c linux/arch/alpha/kernel/irq_smp.c
--- v2.4.0-test11-linux/arch/alpha/kernel/irq_smp.c Thu Sep 21 19:53:32 2000
+++ linux/arch/alpha/kernel/irq_smp.c Tue Nov 28 00:35:31 2000
@@ -96,7 +96,7 @@
if (!local_bh_count(cpu)
&& spin_is_locked(&global_bh_lock))
continue;
- if (spin_trylock(&global_irq_lock))
+ if (!spin2_trylock(&global_irq_lock))
break;
}
}
@@ -105,7 +105,7 @@
static inline void
get_irqlock(int cpu, void* where)
{
- if (!spin_trylock(&global_irq_lock)) {
+ if (spin2_trylock(&global_irq_lock)) {
/* Do we already hold the lock? */
if (cpu == global_irq_holder)
return;
diff -Naur v2.4.0-test11-linux/arch/alpha/kernel/smp.c linux/arch/alpha/kernel/smp.c
--- v2.4.0-test11-linux/arch/alpha/kernel/smp.c Fri Oct 13 00:57:30 2000
+++ linux/arch/alpha/kernel/smp.c Tue Nov 28 00:34:59 2000
@@ -1078,10 +1078,10 @@
}
int
-debug_spin_trylock(spinlock_t * lock, const char *base_file, int line_no)
+debug_spin2_trylock(spinlock_t * lock, const char *base_file, int line_no)
{
int ret;
- if ((ret = !test_and_set_bit(0, lock))) {
+ if (!(ret = test_and_set_bit(0, lock))) {
lock->on_cpu = smp_processor_id();
lock->previous = __builtin_return_address(0);
lock->task = current;
diff -Naur v2.4.0-test11-linux/arch/mips64/sgi-ip27/ip27-irq.c linux/arch/mips64/sgi-ip27/ip27-irq.c
--- v2.4.0-test11-linux/arch/mips64/sgi-ip27/ip27-irq.c Thu Sep 21 19:53:32 2000
+++ linux/arch/mips64/sgi-ip27/ip27-irq.c Tue Nov 28 00:44:48 2000
@@ -480,7 +480,7 @@
continue;
if (!local_bh_count(cpu) && spin_is_locked(&global_bh_lock))
continue;
- if (spin_trylock(&global_irq_lock))
+ if (!spin2_trylock(&global_irq_lock))
break;
}
}
@@ -497,7 +497,7 @@
static inline void get_irqlock(int cpu)
{
- if (!spin_trylock(&global_irq_lock)) {
+ if (spin2_trylock(&global_irq_lock)) {
/* do we already hold the lock? */
if ((unsigned char) cpu == global_irq_holder)
return;
diff -Naur v2.4.0-test11-linux/arch/ppc/kernel/misc.S linux/arch/ppc/kernel/misc.S
--- v2.4.0-test11-linux/arch/ppc/kernel/misc.S Mon Nov 27 21:06:14 2000
+++ linux/arch/ppc/kernel/misc.S Tue Nov 28 00:40:02 2000
@@ -434,6 +434,7 @@
* Environments Manual suggests not doing unnecessary stcwx.'s
* since they may inhibit forward progress by other CPUs in getting
* a lock.
+ * Returns: 0 on success ???
*/
_GLOBAL(__spin_trylock)
mr r4,r3
diff -Naur v2.4.0-test11-linux/arch/ppc/kernel/ppc_ksyms.c linux/arch/ppc/kernel/ppc_ksyms.c
--- v2.4.0-test11-linux/arch/ppc/kernel/ppc_ksyms.c Mon Nov 27 21:06:14 2000
+++ linux/arch/ppc/kernel/ppc_ksyms.c Tue Nov 28 00:37:23 2000
@@ -197,7 +197,7 @@
EXPORT_SYMBOL(__global_restore_flags);
EXPORT_SYMBOL(_spin_lock);
EXPORT_SYMBOL(_spin_unlock);
-EXPORT_SYMBOL(spin_trylock);
+EXPORT_SYMBOL(spin2_trylock);
EXPORT_SYMBOL(_read_lock);
EXPORT_SYMBOL(_read_unlock);
EXPORT_SYMBOL(_write_lock);
diff -Naur v2.4.0-test11-linux/arch/ppc/lib/locks.c linux/arch/ppc/lib/locks.c
--- v2.4.0-test11-linux/arch/ppc/lib/locks.c Thu Oct 7 19:17:08 1999
+++ linux/arch/ppc/lib/locks.c Tue Nov 28 00:41:25 2000
@@ -43,13 +43,13 @@
lock->owner_cpu = cpu;
}
-int spin_trylock(spinlock_t *lock)
+int spin2_trylock(spinlock_t *lock)
{
if (__spin_trylock(&lock->lock))
- return 0;
+ return 1;
lock->owner_cpu = smp_processor_id();
lock->owner_pc = (unsigned long)__builtin_return_address(0);
- return 1;
+ return 0;
}
diff -Naur v2.4.0-test11-linux/arch/sparc/kernel/sparc_ksyms.c linux/arch/sparc/kernel/sparc_ksyms.c
--- v2.4.0-test11-linux/arch/sparc/kernel/sparc_ksyms.c Thu Sep 21 19:54:24 2000
+++ linux/arch/sparc/kernel/sparc_ksyms.c Tue Nov 28 00:35:38 2000
@@ -101,7 +101,7 @@
#ifdef SPIN_LOCK_DEBUG
EXPORT_SYMBOL(_do_spin_lock);
EXPORT_SYMBOL(_do_spin_unlock);
-EXPORT_SYMBOL(_spin_trylock);
+EXPORT_SYMBOL(_spin2_trylock);
EXPORT_SYMBOL(_do_read_lock);
EXPORT_SYMBOL(_do_read_unlock);
EXPORT_SYMBOL(_do_write_lock);
diff -Naur v2.4.0-test11-linux/arch/sparc/lib/debuglocks.c linux/arch/sparc/lib/debuglocks.c
--- v2.4.0-test11-linux/arch/sparc/lib/debuglocks.c Fri Sep 10 20:06:19 1999
+++ linux/arch/sparc/lib/debuglocks.c Tue Nov 28 00:36:37 2000
@@ -85,7 +85,7 @@
lock->owner_pc = (cpu & 3) | (caller & ~3);
}
-int _spin_trylock(spinlock_t *lock)
+int _spin2_trylock(spinlock_t *lock)
{
unsigned long val;
unsigned long caller;
@@ -98,7 +98,7 @@
/* We got it, record our identity for debugging. */
lock->owner_pc = (cpu & 3) | (caller & ~3);
}
- return val == 0;
+ return val != 0;
}
void _do_spin_unlock(spinlock_t *lock)
diff -Naur v2.4.0-test11-linux/arch/sparc64/kernel/sparc64_ksyms.c linux/arch/sparc64/kernel/sparc64_ksyms.c
--- v2.4.0-test11-linux/arch/sparc64/kernel/sparc64_ksyms.c Mon Nov 27 21:06:14 2000
+++ linux/arch/sparc64/kernel/sparc64_ksyms.c Tue Nov 28 00:48:13 2000
@@ -99,7 +99,7 @@
#ifdef SPIN_LOCK_DEBUG
extern void _do_spin_lock (spinlock_t *lock, char *str);
extern void _do_spin_unlock (spinlock_t *lock);
-extern int _spin_trylock (spinlock_t *lock);
+extern int _spin2_trylock (spinlock_t *lock);
extern void _do_read_lock(rwlock_t *rw, char *str);
extern void _do_read_unlock(rwlock_t *rw, char *str);
extern void _do_write_lock(rwlock_t *rw, char *str);
@@ -142,7 +142,7 @@
#ifdef SPIN_LOCK_DEBUG
EXPORT_SYMBOL(_do_spin_lock);
EXPORT_SYMBOL(_do_spin_unlock);
-EXPORT_SYMBOL(_spin_trylock);
+EXPORT_SYMBOL(_spin2_trylock);
EXPORT_SYMBOL(_do_read_lock);
EXPORT_SYMBOL(_do_read_unlock);
EXPORT_SYMBOL(_do_write_lock);
diff -Naur v2.4.0-test11-linux/arch/sparc64/lib/debuglocks.c linux/arch/sparc64/lib/debuglocks.c
--- v2.4.0-test11-linux/arch/sparc64/lib/debuglocks.c Wed May 31 20:13:23 2000
+++ linux/arch/sparc64/lib/debuglocks.c Tue Nov 28 00:44:30 2000
@@ -78,7 +78,7 @@
lock->owner_cpu = cpu;
}
-int _spin_trylock(spinlock_t *lock)
+int _spin2_trylock(spinlock_t *lock)
{
unsigned long val, caller;
int cpu = smp_processor_id();
@@ -93,7 +93,7 @@
lock->owner_pc = ((unsigned int)caller);
lock->owner_cpu = cpu;
}
- return val == 0;
+ return val != 0;
}
void _do_spin_unlock(spinlock_t *lock)
diff -Naur v2.4.0-test11-linux/drivers/net/ppp_async.c linux/drivers/net/ppp_async.c
--- v2.4.0-test11-linux/drivers/net/ppp_async.c Fri Apr 21 22:31:10 2000
+++ linux/drivers/net/ppp_async.c Tue Nov 28 00:34:01 2000
@@ -33,9 +33,9 @@
#include <linux/init.h>
#include <asm/uaccess.h>
-#ifndef spin_trylock_bh
-#define spin_trylock_bh(lock) ({ int __r; local_bh_disable(); \
- __r = spin_trylock(lock); \
+#ifndef spin2_trylock_bh
+#define spin2_trylock_bh(lock) ({ int __r; local_bh_disable(); \
+ __r = spin2_trylock(lock); \
- if (!__r) local_bh_enable(); \
+ if (__r) local_bh_enable(); \
__r; })
#endif
@@ -637,7 +637,7 @@
int tty_stuffed = 0;
set_bit(XMIT_WAKEUP, &ap->xmit_flags);
- if (!spin_trylock_bh(&ap->xmit_lock))
+ if (spin2_trylock_bh(&ap->xmit_lock))
return 0;
for (;;) {
if (test_and_clear_bit(XMIT_WAKEUP, &ap->xmit_flags))
@@ -666,7 +666,7 @@
if (!(test_bit(XMIT_WAKEUP, &ap->xmit_flags)
|| (!tty_stuffed && ap->tpkt != 0)))
break;
- if (!spin_trylock_bh(&ap->xmit_lock))
+ if (spin2_trylock_bh(&ap->xmit_lock))
break;
}
return done;
diff -Naur v2.4.0-test11-linux/drivers/net/ppp_synctty.c linux/drivers/net/ppp_synctty.c
--- v2.4.0-test11-linux/drivers/net/ppp_synctty.c Wed May 31 20:13:26 2000
+++ linux/drivers/net/ppp_synctty.c Tue Nov 28 00:34:32 2000
@@ -44,9 +44,9 @@
#include <linux/init.h>
#include <asm/uaccess.h>
-#ifndef spin_trylock_bh
-#define spin_trylock_bh(lock) ({ int __r; local_bh_disable(); \
- __r = spin_trylock(lock); \
+#ifndef spin2_trylock_bh
+#define spin2_trylock_bh(lock) ({ int __r; local_bh_disable(); \
+ __r = spin2_trylock(lock); \
- if (!__r) local_bh_enable(); \
+ if (__r) local_bh_enable(); \
__r; })
#endif
@@ -590,7 +590,7 @@
int tty_stuffed = 0;
set_bit(XMIT_WAKEUP, &ap->xmit_flags);
- if (!spin_trylock_bh(&ap->xmit_lock))
+ if (spin2_trylock_bh(&ap->xmit_lock))
return 0;
for (;;) {
if (test_and_clear_bit(XMIT_WAKEUP, &ap->xmit_flags))
@@ -615,7 +615,7 @@
if (!(test_bit(XMIT_WAKEUP, &ap->xmit_flags)
|| (!tty_stuffed && ap->tpkt != 0)))
break;
- if (!spin_trylock_bh(&ap->xmit_lock))
+ if (spin2_trylock_bh(&ap->xmit_lock))
break;
}
return done;
diff -Naur v2.4.0-test11-linux/fs/buffer.c linux/fs/buffer.c
--- v2.4.0-test11-linux/fs/buffer.c Mon Nov 27 21:06:27 2000
+++ linux/fs/buffer.c Tue Nov 28 00:12:17 2000
@@ -2364,7 +2364,7 @@
atomic_read(&buffermem_pages) << (PAGE_SHIFT-10));
#ifdef CONFIG_SMP /* trylock does nothing on UP and so we could deadlock */
- if (!spin_trylock(&lru_list_lock))
+ if (spin2_trylock(&lru_list_lock))
return;
for(nlist = 0; nlist < NR_LIST; nlist++) {
found = locked = dirty = used = lastused = protected = 0;
diff -Naur v2.4.0-test11-linux/include/asm-alpha/spinlock.h linux/include/asm-alpha/spinlock.h
--- v2.4.0-test11-linux/include/asm-alpha/spinlock.h Mon Nov 27 21:06:28 2000
+++ linux/include/asm-alpha/spinlock.h Tue Nov 28 00:22:55 2000
@@ -41,10 +41,10 @@
#if DEBUG_SPINLOCK
extern void spin_unlock(spinlock_t * lock);
extern void debug_spin_lock(spinlock_t * lock, const char *, int);
-extern int debug_spin_trylock(spinlock_t * lock, const char *, int);
+extern int debug_spin2_trylock(spinlock_t * lock, const char *, int);
#define spin_lock(LOCK) debug_spin_lock(LOCK, __BASE_FILE__, __LINE__)
-#define spin_trylock(LOCK) debug_spin_trylock(LOCK, __BASE_FILE__, __LINE__)
+#define spin2_trylock(LOCK) debug_spin2_trylock(LOCK, __BASE_FILE__, __LINE__)
#define spin_lock_own(LOCK, LOCATION) \
do { \
@@ -84,7 +84,7 @@
: "m"(lock->lock) : "memory");
}
-#define spin_trylock(lock) (!test_and_set_bit(0,(lock)))
+#define spin2_trylock(lock) (test_and_set_bit(0,(lock)))
#define spin_lock_own(LOCK, LOCATION) ((void)0)
#endif /* DEBUG_SPINLOCK */
diff -Naur v2.4.0-test11-linux/include/asm-i386/spinlock.h linux/include/asm-i386/spinlock.h
--- v2.4.0-test11-linux/include/asm-i386/spinlock.h Fri Oct 13 00:58:05 2000
+++ linux/include/asm-i386/spinlock.h Tue Nov 28 00:21:56 2000
@@ -65,14 +65,14 @@
#define spin_unlock_string \
"movb $1,%0"
-static inline int spin_trylock(spinlock_t *lock)
+static inline int spin2_trylock(spinlock_t *lock)
{
char oldval;
__asm__ __volatile__(
"xchgb %b0,%1"
:"=q" (oldval), "=m" (lock->lock)
:"0" (0) : "memory");
- return oldval > 0;
+ return oldval == 0;
}
static inline void spin_lock(spinlock_t *lock)
diff -Naur v2.4.0-test11-linux/include/asm-ia64/spinlock.h linux/include/asm-ia64/spinlock.h
--- v2.4.0-test11-linux/include/asm-ia64/spinlock.h Mon Nov 27 21:04:56 2000
+++ linux/include/asm-ia64/spinlock.h Tue Nov 28 00:29:40 2000
@@ -48,7 +48,7 @@
: "ar.ccv", "ar.pfs", "b7", "p15", "r28", "r29", "r30", "memory"); \
}
-#define spin_trylock(x) \
+#define spin2_trylock(x) \
({ \
register char *addr __asm__ ("r31") = (char *) &(x)->lock; \
register long result; \
@@ -59,7 +59,7 @@
";;\n" \
IA64_SEMFIX"cmpxchg1.acq %0=[%1],r30,ar.ccv\n" \
: "=r"(result) : "r"(addr) : "ar.ccv", "r30", "memory"); \
- (result == 0); \
+ (result != 0); \
})
#define spin_is_locked(x) ((x)->lock != 0)
@@ -98,7 +98,7 @@
#define spin_is_locked(x) ((x)->lock != 0)
#define spin_unlock(x) do {((spinlock_t *) x)->lock = 0; barrier(); } while (0)
-#define spin_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) == 0)
+#define spin2_trylock(x) (cmpxchg_acq(&(x)->lock, 0, 1) != 0)
#define spin_unlock_wait(x) do { barrier(); } while ((x)->lock)
#endif /* !NEW_LOCK */
diff -Naur v2.4.0-test11-linux/include/asm-mips/spinlock.h linux/include/asm-mips/spinlock.h
--- v2.4.0-test11-linux/include/asm-mips/spinlock.h Wed May 31 20:13:32 2000
+++ linux/include/asm-mips/spinlock.h Tue Nov 28 00:22:05 2000
@@ -62,7 +62,7 @@
: "memory");
}
-#define spin_trylock(lock) (!test_and_set_bit(0,(lock)))
+#define spin2_trylock(lock) (test_and_set_bit(0,(lock)))
/*
* Read-write spinlocks, allowing multiple readers but only one writer.
diff -Naur v2.4.0-test11-linux/include/asm-mips64/spinlock.h linux/include/asm-mips64/spinlock.h
--- v2.4.0-test11-linux/include/asm-mips64/spinlock.h Wed May 31 20:13:33 2000
+++ linux/include/asm-mips64/spinlock.h Tue Nov 28 00:31:05 2000
@@ -65,12 +65,12 @@
: "memory");
}
-static inline unsigned int spin_trylock(spinlock_t *lock)
+static inline unsigned int spin2_trylock(spinlock_t *lock)
{
unsigned int temp, res;
__asm__ __volatile__(
- ".set\tnoreorder\t\t\t# spin_trylock\n\t"
+ ".set\tnoreorder\t\t\t# spin2_trylock\n\t"
"1:\tll\t%0, %1\n\t"
"or\t%2, %0, %3\n\t"
"sc\t%2, %1\n\t"
@@ -81,7 +81,7 @@
:"r" (1), "m" (*lock)
: "memory");
- return res == 0;
+ return res != 0;
}
/*
diff -Naur v2.4.0-test11-linux/include/asm-ppc/spinlock.h linux/include/asm-ppc/spinlock.h
--- v2.4.0-test11-linux/include/asm-ppc/spinlock.h Mon Nov 27 21:06:30 2000
+++ linux/include/asm-ppc/spinlock.h Tue Nov 28 00:28:14 2000
@@ -19,12 +19,12 @@
extern void _spin_lock(spinlock_t *lock);
extern void _spin_unlock(spinlock_t *lock);
-extern int spin_trylock(spinlock_t *lock);
+extern int spin2_trylock(spinlock_t *lock);
#define spin_lock(lp) _spin_lock(lp)
#define spin_unlock(lp) _spin_unlock(lp)
-extern unsigned long __spin_trylock(volatile unsigned long *lock);
+extern unsigned long __spin2_trylock(volatile unsigned long *lock);
/*
* Read-write spinlocks, allowing multiple readers
diff -Naur v2.4.0-test11-linux/include/asm-s390/spinlock.h linux/include/asm-s390/spinlock.h
--- v2.4.0-test11-linux/include/asm-s390/spinlock.h Fri May 12 20:41:44 2000
+++ linux/include/asm-s390/spinlock.h Tue Nov 28 01:04:34 2000
@@ -37,7 +37,7 @@
: "0" (lp->lock) : "0", "1");
}
-extern inline int spin_trylock(spinlock_t *lp)
+extern inline int spin2_trylock(spinlock_t *lp)
{
unsigned long result;
__asm__ __volatile(" slr %1,%1\n"
@@ -45,7 +45,7 @@
"0: cs %1,0,%0"
: "=m" (lp->lock), "=&d" (result)
: "0" (lp->lock) : "0");
- return !result;
+ return result != 0;
}
diff -Naur v2.4.0-test11-linux/include/asm-sparc/spinlock.h linux/include/asm-sparc/spinlock.h
--- v2.4.0-test11-linux/include/asm-sparc/spinlock.h Thu Sep 21 19:54:16 2000
+++ linux/include/asm-sparc/spinlock.h Tue Nov 28 00:47:42 2000
@@ -33,10 +33,10 @@
#define spin_unlock_wait(lp) do { barrier(); } while(*(volatile unsigned char *)(&(lp)->lock))
extern void _do_spin_lock(spinlock_t *lock, char *str);
-extern int _spin_trylock(spinlock_t *lock);
+extern int _spin2_trylock(spinlock_t *lock);
extern void _do_spin_unlock(spinlock_t *lock);
-#define spin_trylock(lp) _spin_trylock(lp)
+#define spin2_trylock(lp) _spin2_trylock(lp)
#define spin_lock(lock) _do_spin_lock(lock, "spin_lock")
#define spin_unlock(lock) _do_spin_unlock(lock)
@@ -115,14 +115,14 @@
: "g2", "memory", "cc");
}
-extern __inline__ int spin_trylock(spinlock_t *lock)
+extern __inline__ int spin2_trylock(spinlock_t *lock)
{
unsigned int result;
__asm__ __volatile__("ldstub [%1], %0"
: "=r" (result)
: "r" (lock)
: "memory");
- return (result == 0);
+ return (result != 0);
}
extern __inline__ void spin_unlock(spinlock_t *lock)
diff -Naur v2.4.0-test11-linux/include/asm-sparc64/spinlock.h linux/include/asm-sparc64/spinlock.h
--- v2.4.0-test11-linux/include/asm-sparc64/spinlock.h Thu Sep 21 19:54:16 2000
+++ linux/include/asm-sparc64/spinlock.h Tue Nov 28 00:29:10 2000
@@ -55,7 +55,7 @@
: "g7", "memory");
}
-extern __inline__ int spin_trylock(spinlock_t *lock)
+extern __inline__ int spin2_trylock(spinlock_t *lock)
{
unsigned int result;
__asm__ __volatile__("ldstub [%1], %0\n\t"
@@ -63,7 +63,7 @@
: "=r" (result)
: "r" (lock)
: "memory");
- return (result == 0);
+ return (result != 0);
}
extern __inline__ void spin_unlock(spinlock_t *lock)
@@ -95,9 +95,9 @@
extern void _do_spin_lock (spinlock_t *lock, char *str);
extern void _do_spin_unlock (spinlock_t *lock);
-extern int _spin_trylock (spinlock_t *lock);
+extern int _spin2_trylock (spinlock_t *lock);
-#define spin_trylock(lp) _spin_trylock(lp)
+#define spin2_trylock(lp) _spin2_trylock(lp)
#define spin_lock(lock) _do_spin_lock(lock, "spin_lock")
#define spin_unlock(lock) _do_spin_unlock(lock)
diff -Naur v2.4.0-test11-linux/include/linux/spinlock.h linux/include/linux/spinlock.h
--- v2.4.0-test11-linux/include/linux/spinlock.h Thu Sep 21 19:53:48 2000
+++ linux/include/linux/spinlock.h Tue Nov 28 01:07:34 2000
@@ -58,7 +58,7 @@
#define spin_lock_init(lock) do { } while(0)
#define spin_lock(lock) (void)(lock) /* Not "unused variable". */
#define spin_is_locked(lock) (0)
-#define spin_trylock(lock) ({1; })
+#define spin2_trylock(lock) ({0; }) /* Note: returns 0 on success */
#define spin_unlock_wait(lock) do { } while(0)
#define spin_unlock(lock) do { } while(0)
@@ -71,7 +71,9 @@
#define spin_lock_init(x) do { (x)->lock = 0; } while (0)
#define spin_is_locked(lock) (test_bit(0,(lock)))
-#define spin_trylock(lock) (!test_and_set_bit(0,(lock)))
+#define spin2_trylock(lock)	(test_and_set_bit(0,(lock)))	/* returns 0 on success */
#define spin_lock(x) do { (x)->lock = 1; } while (0)
#define spin_unlock_wait(x) do { } while (0)
@@ -90,7 +92,8 @@
#define spin_lock_init(x) do { (x)->lock = 0; } while (0)
#define spin_is_locked(lock) (test_bit(0,(lock)))
-#define spin_trylock(lock) (!test_and_set_bit(0,(lock)))
+#define spin2_trylock(lock)	(test_and_set_bit(0,(lock)))	/* returns 0 on success */
#define spin_lock(x) do {unsigned long __spinflags; save_flags(__spinflags); cli(); if ((x)->lock&&(x)->babble) {printk("%s:%d: spin_lock(%s:%p) already locked\n", __BASE_FILE__,__LINE__, (x)->module, (x));(x)->babble--;} (x)->lock = 1; restore_flags(__spinflags);} while (0)
#define spin_unlock_wait(x) do {unsigned long __spinflags; save_flags(__spinflags); cli(); if ((x)->lock&&(x)->babble) {printk("%s:%d: spin_unlock_wait(%s:%p) deadlock\n", __BASE_FILE__,__LINE__, (x)->module, (x));(x)->babble--;} restore_flags(__spinflags);} while (0)
diff -Naur v2.4.0-test11-linux/kernel/softirq.c linux/kernel/softirq.c
--- v2.4.0-test11-linux/kernel/softirq.c Fri Oct 13 00:58:09 2000
+++ linux/kernel/softirq.c Tue Nov 28 00:12:31 2000
@@ -246,7 +246,7 @@
{
int cpu = smp_processor_id();
- if (!spin_trylock(&global_bh_lock))
+ if (spin2_trylock(&global_bh_lock))
goto resched;
if (!hardirq_trylock(cpu))
diff -Naur v2.4.0-test11-linux/net/core/dev.c linux/net/core/dev.c
--- v2.4.0-test11-linux/net/core/dev.c Mon Nov 27 21:06:32 2000
+++ linux/net/core/dev.c Tue Nov 28 00:32:08 2000
@@ -1251,7 +1251,7 @@
smp_mb__before_clear_bit();
clear_bit(__LINK_STATE_SCHED, &dev->state);
- if (spin_trylock(&dev->queue_lock)) {
+ if (!spin2_trylock(&dev->queue_lock)) {
qdisc_run(dev);
spin_unlock(&dev->queue_lock);
} else {
diff -Naur v2.4.0-test11-linux/net/core/dst.c linux/net/core/dst.c
--- v2.4.0-test11-linux/net/core/dst.c Wed May 3 01:48:16 2000
+++ linux/net/core/dst.c Tue Nov 28 00:32:18 2000
@@ -45,7 +45,7 @@
int delayed = 0;
struct dst_entry * dst, **dstp;
- if (!spin_trylock(&dst_lock)) {
+ if (spin2_trylock(&dst_lock)) {
mod_timer(&dst_gc_timer, jiffies + HZ/10);
return;
}
diff -Naur v2.4.0-test11-linux/net/ipv4/icmp.c linux/net/ipv4/icmp.c
--- v2.4.0-test11-linux/net/ipv4/icmp.c Thu Sep 21 19:53:51 2000
+++ linux/net/ipv4/icmp.c Tue Nov 28 00:32:24 2000
@@ -349,7 +349,7 @@
static int icmp_xmit_lock_bh(void)
{
- if (!spin_trylock(&icmp_socket->sk->lock.slock)) {
+ if (spin2_trylock(&icmp_socket->sk->lock.slock)) {
if (icmp_xmit_holder == smp_processor_id())
return -EAGAIN;
spin_lock(&icmp_socket->sk->lock.slock);
diff -Naur v2.4.0-test11-linux/net/ipv4/ipmr.c linux/net/ipv4/ipmr.c
--- v2.4.0-test11-linux/net/ipv4/ipmr.c Thu Sep 21 19:53:51 2000
+++ linux/net/ipv4/ipmr.c Tue Nov 28 00:32:31 2000
@@ -319,7 +319,7 @@
unsigned long expires;
struct mfc_cache *c, **cp;
- if (!spin_trylock(&mfc_unres_lock)) {
+ if (spin2_trylock(&mfc_unres_lock)) {
mod_timer(&ipmr_expire_timer, jiffies+HZ/10);
return;
}
diff -Naur v2.4.0-test11-linux/net/ipv6/icmp.c linux/net/ipv6/icmp.c
--- v2.4.0-test11-linux/net/ipv6/icmp.c Mon Mar 27 20:35:57 2000
+++ linux/net/ipv6/icmp.c Tue Nov 28 00:32:37 2000
@@ -92,7 +92,7 @@
static int icmpv6_xmit_lock_bh(void)
{
- if (!spin_trylock(&icmpv6_socket->sk->lock.slock)) {
+ if (spin2_trylock(&icmpv6_socket->sk->lock.slock)) {
if (icmpv6_xmit_holder == smp_processor_id())
return -EAGAIN;
spin_lock(&icmpv6_socket->sk->lock.slock);
diff -Naur v2.4.0-test11-linux/net/ipv6/ip6_fib.c linux/net/ipv6/ip6_fib.c
--- v2.4.0-test11-linux/net/ipv6/ip6_fib.c Fri Oct 13 00:58:10 2000
+++ linux/net/ipv6/ip6_fib.c Tue Nov 28 00:32:43 2000
@@ -1174,7 +1174,7 @@
gc_args.timeout = (int)dummy;
} else {
local_bh_disable();
- if (!spin_trylock(&fib6_gc_lock)) {
+ if (spin2_trylock(&fib6_gc_lock)) {
mod_timer(&ip6_fib_timer, jiffies + HZ);
local_bh_enable();
return;
diff -Naur v2.4.0-test11-linux/net/sched/sch_generic.c linux/net/sched/sch_generic.c
--- v2.4.0-test11-linux/net/sched/sch_generic.c Thu Sep 21 19:54:19 2000
+++ linux/net/sched/sch_generic.c Tue Nov 28 00:32:51 2000
@@ -81,7 +81,7 @@
/* Dequeue packet */
if ((skb = q->dequeue(q)) != NULL) {
- if (spin_trylock(&dev->xmit_lock)) {
+ if (!spin2_trylock(&dev->xmit_lock)) {
/* Remember that the driver is grabbed by us. */
dev->xmit_lock_owner = smp_processor_id();
diff -Naur v2.4.0-test11-linux/net/sched/sch_teql.c linux/net/sched/sch_teql.c
--- v2.4.0-test11-linux/net/sched/sch_teql.c Fri Jul 28 23:45:29 2000
+++ linux/net/sched/sch_teql.c Tue Nov 28 00:32:59 2000
@@ -302,7 +302,7 @@
switch (teql_resolve(skb, skb_res, slave)) {
case 0:
- if (spin_trylock(&slave->xmit_lock)) {
+ if (!spin2_trylock(&slave->xmit_lock)) {
slave->xmit_lock_owner = smp_processor_id();
if (!netif_queue_stopped(slave) &&
slave->hard_start_xmit(skb, slave) == 0) {
* Re: *_trylock return on success?
@ 2000-12-04 19:46 george anzinger
0 siblings, 0 replies; 9+ messages in thread
From: george anzinger @ 2000-12-04 19:46 UTC (permalink / raw)
To: linux-kernel@vger.redhat.com, Rik van Riel, Philipp Rumpf,
Roger Larsson
So what is a coder to do? We need to define pi_mutex_trylock(). If
I understand this thread, it should return 0 on success. Is this
correct?
George
On Saturday 25 November 2000 22:05, Roger Larsson wrote:
> On Saturday 25 November 2000 20:22, Philipp Rumpf wrote:
> > On Sat, Nov 25, 2000 at 08:03:49PM +0100, Roger Larsson wrote:
> > > > _trylock functions return 0 for success.
> > >
> > > Not spin_trylock
> >
> > Argh, I missed the (recent ?) change to make x86 spinlocks use 1 to mean
> > unlocked. You're correct, and obviously this should be fixed.
Have looked more into this now...
tasklet_trylock is also wrong (but there are only four of them)
Is this 2.4 only, or were there spin locks earlier too?
My suggestion now is a few steps:
1) to release a kernel version that has corrected _trylocks;
spin2_trylock and tasklet2_trylock.
[with corresponding updates in as many places as possible:
s/!spin_trylock/spin2_trylock/g
s/spin_trylock/!spin2_trylock/g
. . .]
(ready for spin_trylock, not done for tasklet yet...; patch attached,
hope it got included OK - not fully used to KMail)
2) This will only break in-house drivers or compilations that use
these calls in some strange way...
3a) (DANGEROUS) global rename spin2_trylock to spin_trylock
[no logic change this time - only name]
3b) (dangerous) add compatibility interface
#define spin_trylock(L) (!spin2_trylock(L))
Probably not necessary, since it cannot be linked against.
Binary modules will contain their own compatibility code :-)
Probably preferred by those who maintain drivers for several
releases; 2.2, 2.4, ...
3c) do not do anything more...
Alternative:
1b) do nothing at all - suffer later
/RogerL