* [RFC PATCH V2 FIX] Fix deadlock during find
@ 2008-11-20 13:45 Michael Trimarchi
2008-11-25 17:06 ` Paul Mundt
` (5 more replies)
0 siblings, 6 replies; 7+ messages in thread
From: Michael Trimarchi @ 2008-11-20 13:45 UTC (permalink / raw)
To: linux-sh
Fix a deadlock during find.
While continuing with testing I found a deadlock, and implemented the errata workaround in the unlock path. This patch fixes the deadlock.
Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
---
arch/sh/include/asm/mutex_sh.h | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/sh/include/asm/mutex_sh.h b/arch/sh/include/asm/mutex_sh.h
index c98e875..c65f19e 100644
--- a/arch/sh/include/asm/mutex_sh.h
+++ b/arch/sh/include/asm/mutex_sh.h
@@ -48,18 +48,18 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
static inline void
__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
- int __res, __orig;
+ int __res;
__asm__ __volatile__ (
- "movli.l @%2, %0 \n\t"
- "mov %0, %1 \n\t"
+ "1: movli.l @%1, %0 \n\t"
"add #1, %0 \n\t"
- "movco.l %0, @%2 "
- : "=&z" (__res), "=&r" (__orig)
+ "movco.l %0, @%1 \n\t"
+ "bf 1b\n\t"
+ : "=&z" (__res)
: "r" (&(count)->counter)
: "t" );
- if (unlikely(__orig != 0))
+ if (unlikely(__res <= 0))
fail_fn(count);
}
--
1.5.6.5
^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [RFC PATCH V2 FIX] Fix deadlock during find
2008-11-20 13:45 [RFC PATCH V2 FIX] Fix deadlock during find Michael Trimarchi
@ 2008-11-25 17:06 ` Paul Mundt
2008-11-26 13:04 ` Michael Trimarchi
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Paul Mundt @ 2008-11-25 17:06 UTC (permalink / raw)
To: linux-sh
On Thu, Nov 20, 2008 at 02:45:02PM +0100, Michael Trimarchi wrote:
> Fix a deadlock during find.
>
> While continuing with testing I found a deadlock, and implemented the errata workaround in the unlock path. This patch fixes the deadlock.
>
> Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
>
> ---
> arch/sh/include/asm/mutex_sh.h | 12 ++++++------
> 1 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/sh/include/asm/mutex_sh.h b/arch/sh/include/asm/mutex_sh.h
> index c98e875..c65f19e 100644
> --- a/arch/sh/include/asm/mutex_sh.h
> +++ b/arch/sh/include/asm/mutex_sh.h
> @@ -48,18 +48,18 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
> static inline void
> __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
> {
> - int __res, __orig;
> + int __res;
>
> __asm__ __volatile__ (
> - "movli.l @%2, %0 \n\t"
> - "mov %0, %1 \n\t"
> + "1: movli.l @%1, %0 \n\t"
> "add #1, %0 \n\t"
> - "movco.l %0, @%2 "
> - : "=&z" (__res), "=&r" (__orig)
> + "movco.l %0, @%1 \n\t"
> + "bf 1b\n\t"
> + : "=&z" (__res)
> : "r" (&(count)->counter)
> : "t" );
>
> - if (unlikely(__orig != 0))
> + if (unlikely(__res <= 0))
> fail_fn(count);
> }
>
Making __mutex_fastpath_unlock() loop seems counter-intuitive. I think
the initial test on __orig is what was causing you issues rather than the
need for looping. I do see why ARM did it this way, but we don't have
precisely the same semantics there.
Does the following patch against current git pass your test cases?
---
arch/sh/include/asm/mutex-llsc.h | 37 +++++++++++++++++++++----------------
1 file changed, 21 insertions(+), 16 deletions(-)
diff --git a/arch/sh/include/asm/mutex-llsc.h b/arch/sh/include/asm/mutex-llsc.h
index 7c75af5..a91990c 100644
--- a/arch/sh/include/asm/mutex-llsc.h
+++ b/arch/sh/include/asm/mutex-llsc.h
@@ -21,16 +21,18 @@
static inline void
__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
- int __res;
+ int __ex_flag, __res;
__asm__ __volatile__ (
- "movli.l @%1, %0 \n"
- "dt %0 \n"
- "movco.l %0, @%1 \n"
- : "=&z" (__res)
+ "movli.l @%2, %0 \n"
+ "add #-1, %0 \n"
+ "movco.l %0, @%2 \n"
+ "movt %1 \n"
+ : "=&z" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "t");
+ __res |= !__ex_flag;
if (unlikely(__res != 0))
fail_fn(count);
}
@@ -38,16 +40,18 @@ __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
static inline int
__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
{
- int __res;
+ int __ex_flag, __res;
__asm__ __volatile__ (
- "movli.l @%1, %0 \n"
- "dt %0 \n"
- "movco.l %0, @%1 \n"
- : "=&z" (__res)
+ "movli.l @%2, %0 \n"
+ "add #-1, %0 \n"
+ "movco.l %0, @%2 \n"
+ "movt %1 \n"
+ : "=&z" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "t");
+ __res |= !__ex_flag;
if (unlikely(__res != 0))
__res = fail_fn(count);
@@ -57,18 +61,19 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
static inline void
__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
- int __res;
+ int __ex_flag, __res;
__asm__ __volatile__ (
- "1: movli.l @%1, %0 \n\t"
+ "movli.l @%2, %0 \n\t"
"add #1, %0 \n\t"
- "movco.l %0, @%1 \n\t"
- "bf 1b\n\t"
- : "=&z" (__res)
+ "movco.l %0, @%2 \n\t"
+ "movt %1 \n\t"
+ : "=&z" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "t");
- if (unlikely(__res <= 0))
+ __res |= !__ex_flag;
+ if (unlikely(__res != 0))
fail_fn(count);
}
* Re: [RFC PATCH V2 FIX] Fix deadlock during find
2008-11-20 13:45 [RFC PATCH V2 FIX] Fix deadlock during find Michael Trimarchi
2008-11-25 17:06 ` Paul Mundt
@ 2008-11-26 13:04 ` Michael Trimarchi
2008-12-06 13:01 ` Paul Mundt
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Michael Trimarchi @ 2008-11-26 13:04 UTC (permalink / raw)
To: linux-sh
Hi,
----- Original message -----
> From: Paul Mundt <lethal@linux-sh.org>
> To: Michael Trimarchi <trimarchimichael@yahoo.it>
> Cc: linux-sh@vger.kernel.org
> Sent: Tuesday, 25 November 2008, 18:06:03
> Subject: Re: [RFC PATCH V2 FIX] Fix deadlock during find
>
> On Thu, Nov 20, 2008 at 02:45:02PM +0100, Michael Trimarchi wrote:
> > Fix a deadlock during find.
> >
> > While continuing with testing I found a deadlock, and implemented the errata workaround in the unlock path. This patch fixes the deadlock.
> >
> > Signed-off-by: Michael Trimarchi
> >
> > ---
> > arch/sh/include/asm/mutex_sh.h | 12 ++++++------
> > 1 files changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/sh/include/asm/mutex_sh.h b/arch/sh/include/asm/mutex_sh.h
> > index c98e875..c65f19e 100644
> > --- a/arch/sh/include/asm/mutex_sh.h
> > +++ b/arch/sh/include/asm/mutex_sh.h
> > @@ -48,18 +48,18 @@ __mutex_fastpath_lock_retval(atomic_t *count, int
> (*fail_fn)(atomic_t *))
> > static inline void
> > __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
> > {
> > - int __res, __orig;
> > + int __res;
> >
> > __asm__ __volatile__ (
> > - "movli.l @%2, %0 \n\t"
> > - "mov %0, %1 \n\t"
> > + "1: movli.l @%1, %0 \n\t"
> > "add #1, %0 \n\t"
> > - "movco.l %0, @%2 "
> > - : "=&z" (__res), "=&r" (__orig)
> > + "movco.l %0, @%1 \n\t"
> > + "bf 1b\n\t"
> > + : "=&z" (__res)
> > : "r" (&(count)->counter)
> > : "t" );
> >
> > - if (unlikely(__orig != 0))
> > + if (unlikely(__res <= 0))
> > fail_fn(count);
> > }
> >
>
> Making __mutex_fastpath_unlock() loop seems counter-intuitive. I think
> the initial test on __orig is what was causing you issues rather than the
> need for looping. I do see why ARM did it this way, but we don't have
> precisely the same semantics there.
>
> Does the following patch against current git pass your test cases?
>
This is an old patch; you integrated the correct one, V3.
Regards, Michael
* Re: [RFC PATCH V2 FIX] Fix deadlock during find
2008-11-20 13:45 [RFC PATCH V2 FIX] Fix deadlock during find Michael Trimarchi
2008-11-25 17:06 ` Paul Mundt
2008-11-26 13:04 ` Michael Trimarchi
@ 2008-12-06 13:01 ` Paul Mundt
2008-12-06 14:16 ` Michael Trimarchi
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Paul Mundt @ 2008-12-06 13:01 UTC (permalink / raw)
To: linux-sh
On Wed, Nov 26, 2008 at 01:04:28PM +0000, Michael Trimarchi wrote:
> ----- Original message -----
> > From: Paul Mundt <lethal@linux-sh.org>
> > To: Michael Trimarchi <trimarchimichael@yahoo.it>
> > Cc: linux-sh@vger.kernel.org
> > Sent: Tuesday, 25 November 2008, 18:06:03
> > Subject: Re: [RFC PATCH V2 FIX] Fix deadlock during find
> >
[snip]
> > > @@ -48,18 +48,18 @@ __mutex_fastpath_lock_retval(atomic_t *count, int
> > (*fail_fn)(atomic_t *))
> > > static inline void
> > > __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
> > > {
> > > - int __res, __orig;
> > > + int __res;
> > >
> > > __asm__ __volatile__ (
> > > - "movli.l @%2, %0 \n\t"
> > > - "mov %0, %1 \n\t"
> > > + "1: movli.l @%1, %0 \n\t"
> > > "add #1, %0 \n\t"
> > > - "movco.l %0, @%2 "
> > > - : "=&z" (__res), "=&r" (__orig)
> > > + "movco.l %0, @%1 \n\t"
> > > + "bf 1b\n\t"
> > > + : "=&z" (__res)
> > > : "r" (&(count)->counter)
> > > : "t" );
> > >
> > > - if (unlikely(__orig != 0))
> > > + if (unlikely(__res <= 0))
> > > fail_fn(count);
> > > }
> > >
> >
> > Making __mutex_fastpath_unlock() loop seems counter-intuitive. I think
> > the initial test on __orig is what was causing you issues rather than the
> > need for looping. I do see why ARM did it this way, but we don't have
> > precisely the same semantics there.
> >
> > Does the following patch against current git pass your test cases?
> >
> This is an old patch; you integrated the correct one, V3.
>
The patch in question was against what is in current git. The very
definition of the fast-path is that it is a single-shot that isn't busy
looping, as that is what the slow path does. Unless you see any
particular issues with my patch, I will queue it up, as it brings us back
in line with what the fast-path semantics are supposed to be. So far I
haven't had any issues with the refactored fast-path.
* Re: [RFC PATCH V2 FIX] Fix deadlock during find
2008-11-20 13:45 [RFC PATCH V2 FIX] Fix deadlock during find Michael Trimarchi
` (2 preceding siblings ...)
2008-12-06 13:01 ` Paul Mundt
@ 2008-12-06 14:16 ` Michael Trimarchi
2008-12-06 14:17 ` Michael Trimarchi
2008-12-06 14:17 ` Michael Trimarchi
5 siblings, 0 replies; 7+ messages in thread
From: Michael Trimarchi @ 2008-12-06 14:16 UTC (permalink / raw)
To: linux-sh
Hi,
--- Sat 6/12/08, Paul Mundt <lethal@linux-sh.org> wrote:
> From: Paul Mundt <lethal@linux-sh.org>
> Subject: Re: [RFC PATCH V2 FIX] Fix deadlock during find
> To: "Michael Trimarchi" <trimarchimichael@yahoo.it>
> Cc: linux-sh@vger.kernel.org
> Date: Saturday, 6 December 2008, 14:01
> On Wed, Nov 26, 2008 at 01:04:28PM +0000, Michael Trimarchi wrote:
> > ----- Original message -----
> > > From: Paul Mundt <lethal@linux-sh.org>
> > > To: Michael Trimarchi <trimarchimichael@yahoo.it>
> > > Cc: linux-sh@vger.kernel.org
> > > Sent: Tuesday, 25 November 2008, 18:06:03
> > > Subject: Re: [RFC PATCH V2 FIX] Fix deadlock during find
> > >
> [snip]
>
> > > > @@ -48,18 +48,18 @@ __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
> > > > static inline void
> > > > __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
> > > > {
> > > > - int __res, __orig;
> > > > + int __res;
> > > >
> > > > __asm__ __volatile__ (
> > > > - "movli.l @%2, %0 \n\t"
> > > > - "mov %0, %1 \n\t"
> > > > + "1: movli.l @%1, %0 \n\t"
> > > > "add #1, %0 \n\t"
> > > > - "movco.l %0, @%2 "
> > > > - : "=&z" (__res), "=&r" (__orig)
> > > > + "movco.l %0, @%1 \n\t"
> > > > + "bf 1b\n\t"
> > > > + : "=&z" (__res)
> > > > : "r" (&(count)->counter)
> > > > : "t" );
> > > >
> > > > - if (unlikely(__orig != 0))
> > > > + if (unlikely(__res <= 0))
> > > > fail_fn(count);
> > > > }
> > > >
> > >
> > > Making __mutex_fastpath_unlock() loop seems counter-intuitive. I think
> > > the initial test on __orig is what was causing you issues rather than the
> > > need for looping. I do see why ARM did it this way, but we don't have
> > > precisely the same semantics there.
> > >
> > > Does the following patch against current git pass your test cases?
> > >
> > This is an old patch; you integrated the correct one, V3.
> >
> The patch in question was against what is in current git. The very
> definition of the fast-path is that it is a single-shot that isn't busy
> looping, as that is what the slow path does. Unless you see any
> particular issues with my patch, I will queue it up, as it brings us back
> in line with what the fast-path semantics are supposed to be. So far I
> haven't had any issues with the refactored fast-path.
I agree with you.
Regards, Michael