* [PATCH] RISC-V: Break load reservations during switch_to
@ 2019-06-05 23:17 Palmer Dabbelt
2019-06-06 8:37 ` Marco Peereboom
2019-06-06 9:05 ` Christoph Hellwig
0 siblings, 2 replies; 6+ messages in thread
From: Palmer Dabbelt @ 2019-06-05 23:17 UTC (permalink / raw)
To: linux-riscv, Paul Walmsley, marco, me, joel; +Cc: linux-kernel, Palmer Dabbelt
The comment describes why in detail. This was found because QEMU never
gives up load reservations; the issue is unlikely to manifest on real
hardware.
Thanks to Carlos Eduardo for finding the bug!
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
---
arch/riscv/kernel/entry.S | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 1c1ecc238cfa..e9fc3480e6b4 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -330,6 +330,24 @@ ENTRY(__switch_to)
add a3, a0, a4
add a4, a1, a4
REG_S ra, TASK_THREAD_RA_RA(a3)
+ /*
+ * The Linux ABI allows programs to depend on load reservations being
+ * broken on context switches, but the ISA doesn't require that the
+ * hardware ever breaks a load reservation. The only way to break a
+ * load reservation is with a store conditional, so we emit one here.
+ * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
+ * know this will always fail, but just to be on the safe side this
+ * writes the same value that was unconditionally written by the
+ * previous instruction.
+ */
+#if (TASK_THREAD_RA_RA != 0)
+# error "The offset between ra and ra is non-zero"
+#endif
+#if (__riscv_xlen == 64)
+ sc.d x0, ra, 0(a3)
+#else
+ sc.w x0, ra, 0(a3)
+#endif
REG_S sp, TASK_THREAD_SP_RA(a3)
REG_S s0, TASK_THREAD_S0_RA(a3)
REG_S s1, TASK_THREAD_S1_RA(a3)
--
2.21.0
^ permalink raw reply related [flat|nested] 6+ messages in thread
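The hazard this patch closes can be sketched with a toy model in C (an illustration only — the names and single-reservation granularity are assumptions, not real microarchitecture): a hart holds at most one reservation, any SC clears it whether it succeeds or fails, and the dummy SC in __switch_to therefore invalidates whatever the outgoing task reserved.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of one hart's reservation set. */
static struct {
    bool valid;
    uintptr_t addr;
} reservation;

/* lr.d: load the value and take a reservation on the address. */
static uint64_t model_lr(uint64_t *p)
{
    reservation.valid = true;
    reservation.addr = (uintptr_t)p;
    return *p;
}

/* sc.d: returns 0 on success.  Per the ISA, an SC invalidates the
 * hart's reservation regardless of whether it succeeds. */
static int model_sc(uint64_t *p, uint64_t v)
{
    bool ok = reservation.valid && reservation.addr == (uintptr_t)p;

    reservation.valid = false;
    if (!ok)
        return 1;
    *p = v;
    return 0;
}

static uint64_t shared;
static uint64_t scratch;  /* stands in for TASK_THREAD_RA_RA(a3) */

/* Without the patch: task A's reservation survives the context switch,
 * so a stray SC issued by task B succeeds with no matching LR. */
static int switch_without_fix(void)
{
    reservation.valid = false;
    model_lr(&shared);               /* task A */
    /* ... context switch; nothing breaks the reservation ... */
    return model_sc(&shared, 42);    /* task B: wrongly succeeds (0) */
}

/* With the patch: the dummy SC in __switch_to clears the reservation,
 * so task B's stray SC fails, as the Linux ABI promises. */
static int switch_with_fix(void)
{
    reservation.valid = false;
    model_lr(&shared);               /* task A */
    model_sc(&scratch, scratch);     /* dummy SC from __switch_to */
    return model_sc(&shared, 42);    /* task B: fails (non-zero) */
}
```

This also shows why the dummy SC may target any address: whether it succeeds is irrelevant, only its side effect of clearing the reservation matters.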
* Re: [PATCH] RISC-V: Break load reservations during switch_to
2019-06-05 23:17 [PATCH] RISC-V: Break load reservations during switch_to Palmer Dabbelt
@ 2019-06-06 8:37 ` Marco Peereboom
2019-06-06 9:05 ` Christoph Hellwig
1 sibling, 0 replies; 6+ messages in thread
From: Marco Peereboom @ 2019-06-06 8:37 UTC (permalink / raw)
To: Palmer Dabbelt; +Cc: linux-riscv, Paul Walmsley, me, joel, linux-kernel
Ah that’s sneaky!!
> On Jun 6, 2019, at 12:17 AM, Palmer Dabbelt <palmer@sifive.com> wrote:
>
> The comment describes why in detail. This was found because QEMU never
> gives up load reservations; the issue is unlikely to manifest on real
> hardware.
>
> Thanks to Carlos Eduardo for finding the bug!
>
> Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
> ---
> arch/riscv/kernel/entry.S | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> index 1c1ecc238cfa..e9fc3480e6b4 100644
> --- a/arch/riscv/kernel/entry.S
> +++ b/arch/riscv/kernel/entry.S
> @@ -330,6 +330,24 @@ ENTRY(__switch_to)
> add a3, a0, a4
> add a4, a1, a4
> REG_S ra, TASK_THREAD_RA_RA(a3)
> + /*
> + * The Linux ABI allows programs to depend on load reservations being
> + * broken on context switches, but the ISA doesn't require that the
> + * hardware ever breaks a load reservation. The only way to break a
> + * load reservation is with a store conditional, so we emit one here.
> + * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
> + * know this will always fail, but just to be on the safe side this
> + * writes the same value that was unconditionally written by the
> + * previous instruction.
> + */
> +#if (TASK_THREAD_RA_RA != 0)
> +# error "The offset between ra and ra is non-zero"
> +#endif
> +#if (__riscv_xlen == 64)
> + sc.d x0, ra, 0(a3)
> +#else
> + sc.w x0, ra, 0(a3)
> +#endif
> REG_S sp, TASK_THREAD_SP_RA(a3)
> REG_S s0, TASK_THREAD_S0_RA(a3)
> REG_S s1, TASK_THREAD_S1_RA(a3)
> --
> 2.21.0
>
* Re: [PATCH] RISC-V: Break load reservations during switch_to
2019-06-05 23:17 [PATCH] RISC-V: Break load reservations during switch_to Palmer Dabbelt
2019-06-06 8:37 ` Marco Peereboom
@ 2019-06-06 9:05 ` Christoph Hellwig
2019-06-06 19:10 ` Palmer Dabbelt
2019-06-06 19:32 ` Andreas Schwab
1 sibling, 2 replies; 6+ messages in thread
From: Christoph Hellwig @ 2019-06-06 9:05 UTC (permalink / raw)
To: Palmer Dabbelt; +Cc: linux-riscv, Paul Walmsley, marco, me, joel, linux-kernel
On Wed, Jun 05, 2019 at 04:17:35PM -0700, Palmer Dabbelt wrote:
> REG_S ra, TASK_THREAD_RA_RA(a3)
> + /*
> + * The Linux ABI allows programs to depend on load reservations being
> + * broken on context switches, but the ISA doesn't require that the
> + * hardware ever breaks a load reservation. The only way to break a
> + * load reservation is with a store conditional, so we emit one here.
> + * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
> + * know this will always fail, but just to be on the safe side this
> + * writes the same value that was unconditionally written by the
> + * previous instruction.
> + */
> +#if (TASK_THREAD_RA_RA != 0)
I don't think this check works as intended. TASK_THREAD_RA_RA is a
parameterized macro, thus the above would never evaluate to 0. The
error message also is rather odd while we're at it.
> +#if (__riscv_xlen == 64)
> + sc.d x0, ra, 0(a3)
> +#else
> + sc.w x0, ra, 0(a3)
> +#endif
I'd rather add a macro à la REG_S to asm.h and distinguish between the
xlen variants there:
#define REG_SC __REG_SEL(sc.d, sc.w)
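A host-compilable sketch of that selector (the __REG_SEL name follows the existing asm.h pattern; the mnemonics are stringized here only so the example can run off-target, and the __riscv_xlen fallback is an assumption for illustration):

```c
#include <assert.h>
#include <string.h>

#ifndef __riscv_xlen
# define __riscv_xlen 64   /* assumed for this host-side sketch */
#endif

/* Pick the first argument on rv64, the second on rv32 — the same
 * trick asm.h already uses for REG_S/REG_L. */
#if __riscv_xlen == 64
# define __REG_SEL(a, b) a
#else
# define __REG_SEL(a, b) b
#endif

#define REG_SC __REG_SEL("sc.d", "sc.w")

static const char *reg_sc_mnemonic(void)
{
    return REG_SC;
}
```

With this in asm.h, the #if/#else pair in entry.S collapses to a single `REG_SC x0, ra, 0(a3)`.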
* Re: [PATCH] RISC-V: Break load reservations during switch_to
2019-06-06 9:05 ` Christoph Hellwig
@ 2019-06-06 19:10 ` Palmer Dabbelt
2019-06-06 19:32 ` Andreas Schwab
1 sibling, 0 replies; 6+ messages in thread
From: Palmer Dabbelt @ 2019-06-06 19:10 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-riscv, Paul Walmsley, marco, me, joel, linux-kernel
On Thu, 06 Jun 2019 02:05:18 PDT (-0700), Christoph Hellwig wrote:
> On Wed, Jun 05, 2019 at 04:17:35PM -0700, Palmer Dabbelt wrote:
>> REG_S ra, TASK_THREAD_RA_RA(a3)
>> + /*
>> + * The Linux ABI allows programs to depend on load reservations being
>> + * broken on context switches, but the ISA doesn't require that the
>> + * hardware ever breaks a load reservation. The only way to break a
>> + * load reservation is with a store conditional, so we emit one here.
>> + * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
>> + * know this will always fail, but just to be on the safe side this
>> + * writes the same value that was unconditionally written by the
>> + * previous instruction.
>> + */
>> +#if (TASK_THREAD_RA_RA != 0)
>
> I don't think this check works as intended. TASK_THREAD_RA_RA is a
> parameterized macro, thus the above would never evaluate to 0. The
> error message also is rather odd while we're at it.
OK, I didn't try it. The resulting number can never be non-zero, which is why
I couldn't come up with an error message that made sense. I'm not opposed to
dropping the check.
>> +#if (__riscv_xlen == 64)
>> + sc.d x0, ra, 0(a3)
>> +#else
>> + sc.w x0, ra, 0(a3)
>> +#endif
>
> I'd rather add an macro ala REG_S to asm.h and distinguish between the
> xlen variants there:
>
> #define REG_SC __REG_SEL(sc.d, sc.w)
Ya, I guess I was just being lazy. I'll put that in a v2.
* Re: [PATCH] RISC-V: Break load reservations during switch_to
2019-06-06 9:05 ` Christoph Hellwig
2019-06-06 19:10 ` Palmer Dabbelt
@ 2019-06-06 19:32 ` Andreas Schwab
2019-06-07 22:12 ` Palmer Dabbelt
1 sibling, 1 reply; 6+ messages in thread
From: Andreas Schwab @ 2019-06-06 19:32 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Palmer Dabbelt, linux-riscv, Paul Walmsley, marco, me, joel,
linux-kernel
On Jun 06 2019, Christoph Hellwig <hch@infradead.org> wrote:
> On Wed, Jun 05, 2019 at 04:17:35PM -0700, Palmer Dabbelt wrote:
>> REG_S ra, TASK_THREAD_RA_RA(a3)
>> + /*
>> + * The Linux ABI allows programs to depend on load reservations being
>> + * broken on context switches, but the ISA doesn't require that the
>> + * hardware ever breaks a load reservation. The only way to break a
>> + * load reservation is with a store conditional, so we emit one here.
>> + * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
>> + * know this will always fail, but just to be on the safe side this
>> + * writes the same value that was unconditionally written by the
>> + * previous instruction.
>> + */
>> +#if (TASK_THREAD_RA_RA != 0)
>
> I don't think this check works as intended. TASK_THREAD_RA_RA is a
> parameterized macro,
Is it? Just because it is used before an open paren doesn't mean that
the macro takes a parameter.
Andreas.
--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
* Re: [PATCH] RISC-V: Break load reservations during switch_to
2019-06-06 19:32 ` Andreas Schwab
@ 2019-06-07 22:12 ` Palmer Dabbelt
0 siblings, 0 replies; 6+ messages in thread
From: Palmer Dabbelt @ 2019-06-07 22:12 UTC (permalink / raw)
To: schwab
Cc: Christoph Hellwig, linux-riscv, Paul Walmsley, marco, me, joel,
linux-kernel
On Thu, 06 Jun 2019 12:32:01 PDT (-0700), schwab@linux-m68k.org wrote:
> On Jun 06 2019, Christoph Hellwig <hch@infradead.org> wrote:
>
>> On Wed, Jun 05, 2019 at 04:17:35PM -0700, Palmer Dabbelt wrote:
>>> REG_S ra, TASK_THREAD_RA_RA(a3)
>>> + /*
>>> + * The Linux ABI allows programs to depend on load reservations being
>>> + * broken on context switches, but the ISA doesn't require that the
>>> + * hardware ever breaks a load reservation. The only way to break a
>>> + * load reservation is with a store conditional, so we emit one here.
>>> + * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA we
>>> + * know this will always fail, but just to be on the safe side this
>>> + * writes the same value that was unconditionally written by the
>>> + * previous instruction.
>>> + */
>>> +#if (TASK_THREAD_RA_RA != 0)
>>
>> I don't think this check works as intended. TASK_THREAD_RA_RA is a
>> parameterized macro,
>
> Is it? Just because it is used before an open paren doesn't mean that
> the macro takes a parameter.
Yes, you're right -- the parens there aren't a macro parameter, they're the
assembly syntax for constant-offset addressing. I guess I'd just assumed Christoph
was referring to some magic in how asm-offsets gets generated, as I've never
actually looked inside that stuff. I went ahead and just tested this
$ git diff | cat
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 578bb5efc085..e3f06495dbf8 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -125,6 +125,7 @@ void asm_offsets(void)
DEFINE(TASK_THREAD_RA_RA,
offsetof(struct task_struct, thread.ra)
- offsetof(struct task_struct, thread.ra)
+ + 1
);
DEFINE(TASK_THREAD_SP_RA,
offsetof(struct task_struct, thread.sp)
and it causes the expected failure
$ make.cross ARCH=riscv -j1
make CROSS_COMPILE=/home/palmer/.local/opt/gcc-8.1.0-nolibc/riscv64-linux/bin/riscv64-linux- ARCH=riscv -j1
CALL scripts/checksyscalls.sh
CALL scripts/atomic/check-atomics.sh
CHK include/generated/compile.h
AS arch/riscv/kernel/entry.o
arch/riscv/kernel/entry.S:344:3: error: #error "The offset between ra and ra is non-zero"
# error "The offset between ra and ra is non-zero"
^~~~~
make[1]: *** [scripts/Makefile.build:369: arch/riscv/kernel/entry.o] Error 1
make: *** [Makefile:1071: arch/riscv/kernel] Error 2
so I'm going to leave it alone. I'll submit a v2 with a better error message
and a cleaner sc.w/sc.d.
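The reason the cpp check is well-formed can be modeled on the host: asm-offsets.c computes each constant with DEFINE(), and the generated header ends up with a plain object-like macro, not a function-like one. A minimal sketch (the struct layout below is an assumption for illustration, not the real task_struct):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct thread_struct_model {
    unsigned long ra;
    unsigned long sp;
    unsigned long s0;
};

struct task_struct_model {
    long state;
    struct thread_struct_model thread;
};

/* asm-offsets.c defines TASK_THREAD_RA_RA as the difference between
 * two offsetof() expressions; for ra relative to itself the generated
 * macro expands to the literal 0, so `#if (TASK_THREAD_RA_RA != 0)`
 * is a valid preprocessor comparison. */
enum {
    TASK_THREAD_RA_RA_MODEL =
        offsetof(struct task_struct_model, thread.ra)
      - offsetof(struct task_struct_model, thread.ra),
    TASK_THREAD_SP_RA_MODEL =
        offsetof(struct task_struct_model, thread.sp)
      - offsetof(struct task_struct_model, thread.ra),
};
```

Perturbing the ra-relative-to-ra value, as the `+ 1` experiment above does, makes the constant non-zero and trips the #error exactly as shown in the build log.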