linux-alpha.vger.kernel.org archive mirror
* [PATCH] alpha: spinlock: don't perform memory access in locked critical section
@ 2013-05-06 20:01 Will Deacon
  2013-05-06 20:19 ` Matt Turner
  2013-05-06 20:26 ` Al Viro
  0 siblings, 2 replies; 7+ messages in thread
From: Will Deacon @ 2013-05-06 20:01 UTC (permalink / raw)
  To: linux-alpha
  Cc: linux-kernel, Will Deacon, Richard Henderson, Ivan Kokshaysky,
	Matt Turner

The Alpha Architecture Reference Manual states that any memory access
performed between an LDx_L and a STx_C instruction may cause the
store-conditional to fail unconditionally and, as such, `no useful
program should do this'.

Linux is a useful program, so fix up the Alpha spinlock implementation
to use logical operations rather than load-address instructions for
generating immediates.

Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/alpha/include/asm/spinlock.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/include/asm/spinlock.h b/arch/alpha/include/asm/spinlock.h
index 3bba21e..0c357cd 100644
--- a/arch/alpha/include/asm/spinlock.h
+++ b/arch/alpha/include/asm/spinlock.h
@@ -29,7 +29,7 @@ static inline void arch_spin_lock(arch_spinlock_t * lock)
 	__asm__ __volatile__(
 	"1:	ldl_l	%0,%1\n"
 	"	bne	%0,2f\n"
-	"	lda	%0,1\n"
+	"	mov	1,%0\n"
 	"	stl_c	%0,%1\n"
 	"	beq	%0,2f\n"
 	"	mb\n"
@@ -86,7 +86,7 @@ static inline void arch_write_lock(arch_rwlock_t *lock)
 	__asm__ __volatile__(
 	"1:	ldl_l	%1,%0\n"
 	"	bne	%1,6f\n"
-	"	lda	%1,1\n"
+	"	mov	1,%1\n"
 	"	stl_c	%1,%0\n"
 	"	beq	%1,6f\n"
 	"	mb\n"
@@ -106,7 +106,7 @@ static inline int arch_read_trylock(arch_rwlock_t * lock)
 
 	__asm__ __volatile__(
 	"1:	ldl_l	%1,%0\n"
-	"	lda	%2,0\n"
+	"	mov	0,%2\n"
 	"	blbs	%1,2f\n"
 	"	subl	%1,2,%2\n"
 	"	stl_c	%2,%0\n"
@@ -128,9 +128,9 @@ static inline int arch_write_trylock(arch_rwlock_t * lock)
 
 	__asm__ __volatile__(
 	"1:	ldl_l	%1,%0\n"
-	"	lda	%2,0\n"
+	"	mov	0,%2\n"
 	"	bne	%1,2f\n"
-	"	lda	%2,1\n"
+	"	mov	1,%2\n"
 	"	stl_c	%2,%0\n"
 	"	beq	%2,6f\n"
 	"2:	mb\n"
-- 
1.8.2.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH] alpha: spinlock: don't perform memory access in locked critical section
  2013-05-06 20:01 [PATCH] alpha: spinlock: don't perform memory access in locked critical section Will Deacon
@ 2013-05-06 20:19 ` Matt Turner
  2013-05-06 20:53   ` Al Viro
  2013-05-06 20:26 ` Al Viro
  1 sibling, 1 reply; 7+ messages in thread
From: Matt Turner @ 2013-05-06 20:19 UTC (permalink / raw)
  To: Will Deacon; +Cc: linux-alpha, linux-kernel, Richard Henderson, Ivan Kokshaysky

On Mon, May 6, 2013 at 1:01 PM, Will Deacon <will.deacon@arm.com> wrote:
> The Alpha Architecture Reference Manual states that any memory access
> performed between an LDx_L and a STx_C instruction may cause the
> store-conditional to fail unconditionally and, as such, `no useful
> program should do this'.
>
> Linux is a useful program, so fix up the Alpha spinlock implementation
> to use logical operations rather than load-address instructions for
> generating immediates.
>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> Cc: Matt Turner <mattst88@gmail.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/alpha/include/asm/spinlock.h | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/alpha/include/asm/spinlock.h b/arch/alpha/include/asm/spinlock.h
> index 3bba21e..0c357cd 100644
> --- a/arch/alpha/include/asm/spinlock.h
> +++ b/arch/alpha/include/asm/spinlock.h
> @@ -29,7 +29,7 @@ static inline void arch_spin_lock(arch_spinlock_t * lock)
>         __asm__ __volatile__(
>         "1:     ldl_l   %0,%1\n"
>         "       bne     %0,2f\n"
> -       "       lda     %0,1\n"
> +       "       mov     1,%0\n"
>         "       stl_c   %0,%1\n"
>         "       beq     %0,2f\n"
>         "       mb\n"
> @@ -86,7 +86,7 @@ static inline void arch_write_lock(arch_rwlock_t *lock)
>         __asm__ __volatile__(
>         "1:     ldl_l   %1,%0\n"
>         "       bne     %1,6f\n"
> -       "       lda     %1,1\n"
> +       "       mov     1,%1\n"
>         "       stl_c   %1,%0\n"
>         "       beq     %1,6f\n"
>         "       mb\n"
> @@ -106,7 +106,7 @@ static inline int arch_read_trylock(arch_rwlock_t * lock)
>
>         __asm__ __volatile__(
>         "1:     ldl_l   %1,%0\n"
> -       "       lda     %2,0\n"
> +       "       mov     0,%2\n"
>         "       blbs    %1,2f\n"
>         "       subl    %1,2,%2\n"
>         "       stl_c   %2,%0\n"
> @@ -128,9 +128,9 @@ static inline int arch_write_trylock(arch_rwlock_t * lock)
>
>         __asm__ __volatile__(
>         "1:     ldl_l   %1,%0\n"
> -       "       lda     %2,0\n"
> +       "       mov     0,%2\n"
>         "       bne     %1,2f\n"
> -       "       lda     %2,1\n"
> +       "       mov     1,%2\n"
>         "       stl_c   %2,%0\n"
>         "       beq     %2,6f\n"
>         "2:     mb\n"
> --
> 1.8.2.2

I'm not sure of the interpretation that LDA counts as a memory access.

The manual says it's Ra <- Rbv + SEXT(disp).

It's not touching memory that I can see.
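Matt's reading can be sketched as a small C model of the quoted semantics — Ra <- Rbv + SEXT(disp) is pure arithmetic, with no memory access. The function name and test values here are illustrative, not from the manual:

```c
#include <stdint.h>

/* Model of Alpha LDA semantics: Ra <- Rbv + SEXT(disp).
 * Purely arithmetic; nothing is loaded from memory. */
static int64_t alpha_lda(int64_t rbv, uint16_t disp)
{
    /* Sign-extend the 16-bit displacement to 64 bits. */
    int64_t sdisp = (int16_t)disp;
    return rbv + sdisp;
}
```

With disp = 0xffff the displacement sign-extends to -1, which is exactly how the spinlock code could have used LDA to generate small immediates.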

Does this fix a known problem or is it just something that you noticed?

Matt


* Re: [PATCH] alpha: spinlock: don't perform memory access in locked critical section
  2013-05-06 20:01 [PATCH] alpha: spinlock: don't perform memory access in locked critical section Will Deacon
  2013-05-06 20:19 ` Matt Turner
@ 2013-05-06 20:26 ` Al Viro
  1 sibling, 0 replies; 7+ messages in thread
From: Al Viro @ 2013-05-06 20:26 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-alpha, linux-kernel, Richard Henderson, Ivan Kokshaysky,
	Matt Turner

On Mon, May 06, 2013 at 09:01:05PM +0100, Will Deacon wrote:
> The Alpha Architecture Reference Manual states that any memory access
> performed between an LD_xL and a STx_C instruction may cause the
> store-conditional to fail unconditionally and, as such, `no useful
> program should do this'.
> 
> Linux is a useful program, so fix up the Alpha spinlock implementation
> to use logical operations rather than load-address instructions for
> generating immediates.

Huh?  Relevant quote is "If any other memory access (ECB, LDx, LDQ_U,
STx_C, STQ_U, WH64x) is executed on the given processor between the
LDx_L and the STx_C, the sequence above may always fail on some
implementations; hence, no useful programs should do this".  Where
do you see LDA in that list and why would it possibly be there?  And
no, LDx does *not* cover it - the same reference manual gives
LD{Q,L,WU,BU} as expansion for LDx, using LDAx for LD{A,AH}; it's
a separate group of instructions and it does *NOT* do any kind of
memory access.


* Re: [PATCH] alpha: spinlock: don't perform memory access in locked critical section
  2013-05-06 20:19 ` Matt Turner
@ 2013-05-06 20:53   ` Al Viro
  2013-05-06 21:12     ` Will Deacon
  0 siblings, 1 reply; 7+ messages in thread
From: Al Viro @ 2013-05-06 20:53 UTC (permalink / raw)
  To: Matt Turner
  Cc: Will Deacon, linux-alpha, linux-kernel, Richard Henderson,
	Ivan Kokshaysky

On Mon, May 06, 2013 at 01:19:51PM -0700, Matt Turner wrote:

> I'm not sure of the interpretation that LDA counts as a memory access.
> 
> The manual says it's Ra <- Rbv + SEXT(disp).
> 
> It's not touching memory that I can see.

More to the point, the same manual gives explicit list of instructions
that shouldn't occur between LDx_L and STx_C, and LDA does not belong to any
of those.  I suspect that Will has misparsed the notations in there - LDx is
present in the list, but it's _not_ "all instructions with mnemonics starting
with LD", just the 4 "load integer from memory" ones.  FWIW, instructions
with that encoding (x01xxx<a:5><b:5><offs:16>) are grouped so:
	LDAx - LDA, LDAH; load address
	LDx -  LDL, LDQ, LDBU, LDWU; load memory data into integer register
	LDQ_U; load unaligned
	LDx_L - LDL_L, LDQ_L; load locked
	STx_C - STL_C, STQ_C; store conditional
	STx - STL, STQ, STB, STW; store
	STQ_U; store unaligned
They all have the same encoding, naturally enough (operation/register/address
representation), but that's it...  See section 4.2 in reference manual for
details; relevant note follows discussion of LDx_L and it spells the list
out.  LDx is present, LDAx isn't (and neither is LDA by itself).
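The shared encoding Al describes — a 6-bit opcode, two 5-bit register fields, and a 16-bit displacement — can be sketched as a field decoder. This is a sketch assuming opcode in the top six bits, as the x01xxx<a:5><b:5><offs:16> notation above suggests; the test word is constructed, not a real opcode lookup:

```c
#include <stdint.h>

/* Fields of an Alpha memory-format instruction word:
 * bits 31..26 opcode, 25..21 Ra, 20..16 Rb, 15..0 displacement. */
struct alpha_mem_insn {
    unsigned opcode;
    unsigned ra;
    unsigned rb;
    int32_t  disp;   /* sign-extended 16-bit displacement */
};

static struct alpha_mem_insn decode_mem(uint32_t word)
{
    struct alpha_mem_insn i;
    i.opcode = word >> 26;
    i.ra     = (word >> 21) & 0x1f;
    i.rb     = (word >> 16) & 0x1f;
    i.disp   = (int16_t)(word & 0xffff);
    return i;
}
```

All the groups in the list above (LDAx, LDx, LDx_L, STx_C, ...) share this layout; only the opcode field distinguishes a load-address from a load-locked.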


* Re: [PATCH] alpha: spinlock: don't perform memory access in locked critical section
  2013-05-06 20:53   ` Al Viro
@ 2013-05-06 21:12     ` Will Deacon
  2013-05-06 21:28       ` Måns Rullgård
  2013-05-06 22:11       ` Will Deacon
  0 siblings, 2 replies; 7+ messages in thread
From: Will Deacon @ 2013-05-06 21:12 UTC (permalink / raw)
  To: Al Viro
  Cc: Matt Turner, linux-alpha@vger.kernel.org,
	linux-kernel@vger.kernel.org, Richard Henderson, Ivan Kokshaysky

Hi Al, Matt,

On Mon, May 06, 2013 at 09:53:30PM +0100, Al Viro wrote:
> On Mon, May 06, 2013 at 01:19:51PM -0700, Matt Turner wrote:
> 
> > I'm not sure of the interpretation that LDA counts as a memory access.
> > 
> > The manual says it's Ra <- Rbv + SEXT(disp).
> > 
> > It's not touching memory that I can see.
> 
> More to the point, the same manual gives explicit list of instructions
> that shouldn't occur between LDx_L and STx_C, and LDA does not belong to any
> of those.  I suspect that Will has misparsed the notations in there - LDx is
> present in the list, but it's _not_ "all instructions with mnemonics starting
> with LD", just the 4 "load integer from memory" ones.  FWIW, instructions
> with that encoding (x01xxx<a:5><b:5><offs:16>) are grouped so:
> 	LDAx - LDA, LDAH; load address
> 	LDx -  LDL, LDQ, LDBU, LDWU; load memory data into integer register
> 	LDQ_U; load unaligned
> 	LDx_L - LDL_L, LDQ_L; load locked
> 	STx_C - STL_C, STQ_C; store conditional
> 	STx - STL, STQ, STB, STW; store
> 	STQ_U; store unaligned

Your suspicions are right! I did assume that LDA fell under the LDx class,
so apologies for the false alarm. I suspect I should try and get out more,
rather than ponder over this reference manual.

The other (hopefully also wrong) worry that I had was when the manual
states that:

`If the virtual and physical addresses for a LDx_L and STx_C sequence are
 not within the same naturally aligned 16-byte sections of virtual and
 physical memory, that sequence may always fail, or may succeed despite
 another processor’s store to the lock range; hence, no useful program
 should do this'

This seems like it might have a curious interaction with CoW paging if
userspace is trying to use these instructions for a lock, since the
physical address for the conditional store might differ from the one which
was passed to the load due to CoW triggered by a different thread. Anyway,
I was still thinking about that one and haven't got as far as TLB
invalidation yet :)
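The alignment condition in the quoted passage reduces to a simple predicate on the two addresses (a sketch, not kernel code; it must hold for both the virtual and the physical addresses of the pair):

```c
#include <stdint.h>
#include <stdbool.h>

/* True if two addresses fall within the same naturally aligned
 * 16-byte section -- the condition the manual requires for the
 * addresses used by an LDx_L/STx_C pair. */
static bool same_lock_section(uint64_t a, uint64_t b)
{
    return (a >> 4) == (b >> 4);
}
```

The CoW worry is exactly that this predicate can hold for the virtual addresses while failing for the physical ones after the page is copied underneath the thread.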

> They all have the same encoding, naturally enough (operation/register/address
> representation), but that's it...  See section 4.2 in reference manual for
> details; relevant note follows discussion of LDx_L and it spells the list
> out.  LDx is present, LDAx isn't (and neither is LDA by itself).

Indeed, and looking at the disassembly, you can see the immediate operand to
LDA encoded into the instruction. I thought that perhaps it might behave
like ldr =<imm> on ARM, which goes and fetches the immediate value from the
literal pool.

Cheers for the explanation,

Will


* Re: [PATCH] alpha: spinlock: don't perform memory access in locked critical section
  2013-05-06 21:12     ` Will Deacon
@ 2013-05-06 21:28       ` Måns Rullgård
  2013-05-06 22:11       ` Will Deacon
  1 sibling, 0 replies; 7+ messages in thread
From: Måns Rullgård @ 2013-05-06 21:28 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-alpha

Will Deacon <will.deacon@arm.com> writes:

> Hi Al, Matt,
>
> On Mon, May 06, 2013 at 09:53:30PM +0100, Al Viro wrote:
>> On Mon, May 06, 2013 at 01:19:51PM -0700, Matt Turner wrote:
>> 
>> > I'm not sure of the interpretation that LDA counts as a memory access.
>> > 
>> > The manual says it's Ra <- Rbv + SEXT(disp).
>> > 
>> > It's not touching memory that I can see.
>> 
>> More to the point, the same manual gives explicit list of instructions
>> that shouldn't occur between LDx_L and STx_C, and LDA does not belong to any
>> of those.  I suspect that Will has misparsed the notations in there - LDx is
>> present in the list, but it's _not_ "all instructions with mnemonics starting
>> with LD", just the 4 "load integer from memory" ones.  FWIW, instructions
>> with that encoding (x01xxx<a:5><b:5><offs:16>) are grouped so:
>> 	LDAx - LDA, LDAH; load address
>> 	LDx -  LDL, LDQ, LDBU, LDWU; load memory data into integer register
>> 	LDQ_U; load unaligned
>> 	LDx_L - LDL_L, LDQ_L; load locked
>> 	STx_C - STL_C, STQ_C; store conditional
>> 	STx - STL, STQ, STB, STW; store
>> 	STQ_U; store unaligned
>
> Your suspicions are right! I did assume that LDA fell under the LDx class,
> so apologies for the false alarm. I suspect I should try and get out more,
> rather than ponder over this reference manual.

LDA uses the address generation circuitry from the load/store unit, but
it does not actually access memory.  It is merely a convenient way of
performing certain arithmetic operations, be it for scheduling reasons
or for the different range of immediate values available.

-- 
Måns Rullgård
mans@mansr.com


* Re: [PATCH] alpha: spinlock: don't perform memory access in locked critical section
  2013-05-06 21:12     ` Will Deacon
  2013-05-06 21:28       ` Måns Rullgård
@ 2013-05-06 22:11       ` Will Deacon
  1 sibling, 0 replies; 7+ messages in thread
From: Will Deacon @ 2013-05-06 22:11 UTC (permalink / raw)
  To: Al Viro
  Cc: Matt Turner, linux-alpha@vger.kernel.org,
	linux-kernel@vger.kernel.org, Richard Henderson, Ivan Kokshaysky

On Mon, May 06, 2013 at 10:12:38PM +0100, Will Deacon wrote:
> The other (hopefully also wrong) worry that I had was when the manual
> states that:
> 
> `If the virtual and physical addresses for a LDx_L and STx_C sequence are
>  not within the same naturally aligned 16-byte sections of virtual and
>  physical memory, that sequence may always fail, or may succeed despite
>  another processor’s store to the lock range; hence, no useful program
>  should do this'
> 
> This seems like it might have a curious interaction with CoW paging if
> userspace is trying to use these instructions for a lock, since the
> physical address for the conditional store might differ from the one which
> was passed to the load due to CoW triggered by a different thread. Anyway,
> I was still thinking about that one and haven't got as far as TLB
> invalidation yet :)

In case anybody is interested, the software broadcasting of TLB maintenance
solves this problem because the PAL_rti on the ret_to_user path will clear
the lock flag.

Will

