* [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32
@ 2026-02-03 7:30 Christophe Leroy (CS GROUP)
2026-02-03 20:55 ` Nathan Chancellor
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Christophe Leroy (CS GROUP) @ 2026-02-03 7:30 UTC (permalink / raw)
To: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
Segher Boessenkool
Cc: Christophe Leroy (CS GROUP), linux-kernel, linuxppc-dev, llvm,
kernel test robot
Test robot reports the following error with clang-16.0.6:
In file included from kernel/rseq.c:75:
include/linux/rseq_entry.h:141:3: error: invalid operand for instruction
unsafe_get_user(offset, &ucs->post_commit_offset, efault);
^
include/linux/uaccess.h:608:2: note: expanded from macro 'unsafe_get_user'
arch_unsafe_get_user(x, ptr, local_label); \
^
arch/powerpc/include/asm/uaccess.h:518:2: note: expanded from macro 'arch_unsafe_get_user'
__get_user_size_goto(__gu_val, __gu_addr, sizeof(*(p)), e); \
^
arch/powerpc/include/asm/uaccess.h:284:2: note: expanded from macro '__get_user_size_goto'
__get_user_size_allowed(x, ptr, size, __gus_retval); \
^
arch/powerpc/include/asm/uaccess.h:275:10: note: expanded from macro '__get_user_size_allowed'
case 8: __get_user_asm2(x, (u64 __user *)ptr, retval); break; \
^
arch/powerpc/include/asm/uaccess.h:258:4: note: expanded from macro '__get_user_asm2'
" li %1+1,0\n" \
^
<inline asm>:7:5: note: instantiated into assembly here
li 31+1,0
^
1 error generated.
On PPC32, a pair of registers is used for 64-bit variables. Usually the
lower-numbered register of the pair holds the high part and the
higher-numbered register the low part. GCC uses r3/r4 ... r11/r12 ...
r14/r15 ... r30/r31.
In older kernel code, inline assembly used %1 and %1+1 to represent
64-bit values. However, here it looks like clang uses r31 as the high
part, although r32 doesn't exist, hence the error.
Although %1+1 should work, most places now use %L1 instead of %1+1, so
let's do the same here.
With that change, the build doesn't fail anymore and a disassembly shows
clang using the r17/r18 and r31/r14 pairs where GCC would have used
r16/r17 and r30/r31:
Disassembly of section .fixup:
00000000 <.fixup>:
0: 38 a0 ff f2 li r5,-14
4: 3a 20 00 00 li r17,0
8: 3a 40 00 00 li r18,0
c: 48 00 00 00 b c <.fixup+0xc>
c: R_PPC_REL24 .text+0xbc
10: 38 a0 ff f2 li r5,-14
14: 3b e0 00 00 li r31,0
18: 39 c0 00 00 li r14,0
1c: 48 00 00 00 b 1c <.fixup+0x1c>
1c: R_PPC_REL24 .text+0x144
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202602021825.otcItxGi-lkp@intel.com/
Fixes: c20beffeec3c ("powerpc/uaccess: Use flexible addressing with __put_user()/__get_user()")
Signed-off-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
---
I set the Fixes: tag to the commit that recently replaced %1+1 with %L1 in the main part of the macro, as the fix would be incomplete otherwise; the problem has actually been there since commit 2df5e8bcca53 ("powerpc: merge uaccess.h")
---
arch/powerpc/include/asm/uaccess.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index ba1d878c3f404..570b3d91e2e40 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -255,7 +255,7 @@ __gus_failed: \
".section .fixup,\"ax\"\n" \
"4: li %0,%3\n" \
" li %1,0\n" \
- " li %1+1,0\n" \
+ " li %L1,0\n" \
" b 3b\n" \
".previous\n" \
EX_TABLE(1b, 4b) \
--
2.49.0
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32
2026-02-03 7:30 [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32 Christophe Leroy (CS GROUP)
@ 2026-02-03 20:55 ` Nathan Chancellor
2026-02-03 22:19 ` David Laight
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Nathan Chancellor @ 2026-02-03 20:55 UTC (permalink / raw)
To: Christophe Leroy (CS GROUP)
Cc: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Nick Desaulniers, Bill Wendling, Justin Stitt, Segher Boessenkool,
linux-kernel, linuxppc-dev, llvm, kernel test robot
On Tue, Feb 03, 2026 at 08:30:41AM +0100, Christophe Leroy (CS GROUP) wrote:
> Test robot reports the following error with clang-16.0.6:
>
> In file included from kernel/rseq.c:75:
> include/linux/rseq_entry.h:141:3: error: invalid operand for instruction
> unsafe_get_user(offset, &ucs->post_commit_offset, efault);
> ^
> include/linux/uaccess.h:608:2: note: expanded from macro 'unsafe_get_user'
> arch_unsafe_get_user(x, ptr, local_label); \
> ^
> arch/powerpc/include/asm/uaccess.h:518:2: note: expanded from macro 'arch_unsafe_get_user'
> __get_user_size_goto(__gu_val, __gu_addr, sizeof(*(p)), e); \
> ^
> arch/powerpc/include/asm/uaccess.h:284:2: note: expanded from macro '__get_user_size_goto'
> __get_user_size_allowed(x, ptr, size, __gus_retval); \
> ^
> arch/powerpc/include/asm/uaccess.h:275:10: note: expanded from macro '__get_user_size_allowed'
> case 8: __get_user_asm2(x, (u64 __user *)ptr, retval); break; \
> ^
> arch/powerpc/include/asm/uaccess.h:258:4: note: expanded from macro '__get_user_asm2'
> " li %1+1,0\n" \
> ^
> <inline asm>:7:5: note: instantiated into assembly here
> li 31+1,0
> ^
> 1 error generated.
>
> On PPC32, a pair of registers is used for 64-bit variables. Usually the
> lower-numbered register of the pair holds the high part and the
> higher-numbered register the low part. GCC uses r3/r4 ... r11/r12 ...
> r14/r15 ... r30/r31.
>
> In older kernel code, inline assembly used %1 and %1+1 to represent
> 64-bit values. However, here it looks like clang uses r31 as the high
> part, although r32 doesn't exist, hence the error.
>
> Although %1+1 should work, most places now use %L1 instead of %1+1, so
> let's do the same here.
>
> With that change, the build doesn't fail anymore and a disassembly shows
> clang using the r17/r18 and r31/r14 pairs where GCC would have used
> r16/r17 and r30/r31:
>
> Disassembly of section .fixup:
>
> 00000000 <.fixup>:
> 0: 38 a0 ff f2 li r5,-14
> 4: 3a 20 00 00 li r17,0
> 8: 3a 40 00 00 li r18,0
> c: 48 00 00 00 b c <.fixup+0xc>
> c: R_PPC_REL24 .text+0xbc
> 10: 38 a0 ff f2 li r5,-14
> 14: 3b e0 00 00 li r31,0
> 18: 39 c0 00 00 li r14,0
> 1c: 48 00 00 00 b 1c <.fixup+0x1c>
> 1c: R_PPC_REL24 .text+0x144
>
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202602021825.otcItxGi-lkp@intel.com/
> Fixes: c20beffeec3c ("powerpc/uaccess: Use flexible addressing with __put_user()/__get_user()")
> Signed-off-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
Acked-by: Nathan Chancellor <nathan@kernel.org>
> ---
> I set the Fixes: tag to the commit that recently replaced %1+1 with %L1 in the main part of the macro, as the fix would be incomplete otherwise; the problem has actually been there since commit 2df5e8bcca53 ("powerpc: merge uaccess.h")
> ---
> arch/powerpc/include/asm/uaccess.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index ba1d878c3f404..570b3d91e2e40 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -255,7 +255,7 @@ __gus_failed: \
> ".section .fixup,\"ax\"\n" \
> "4: li %0,%3\n" \
> " li %1,0\n" \
> - " li %1+1,0\n" \
> + " li %L1,0\n" \
> " b 3b\n" \
> ".previous\n" \
> EX_TABLE(1b, 4b) \
> --
> 2.49.0
>
* Re: [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32
2026-02-03 7:30 [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32 Christophe Leroy (CS GROUP)
2026-02-03 20:55 ` Nathan Chancellor
@ 2026-02-03 22:19 ` David Laight
2026-02-04 0:13 ` Segher Boessenkool
2026-02-03 22:28 ` Segher Boessenkool
2026-03-11 2:13 ` Madhavan Srinivasan
3 siblings, 1 reply; 6+ messages in thread
From: David Laight @ 2026-02-03 22:19 UTC (permalink / raw)
To: Christophe Leroy (CS GROUP)
Cc: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
Segher Boessenkool, linux-kernel, linuxppc-dev, llvm,
kernel test robot
On Tue, 3 Feb 2026 08:30:41 +0100
"Christophe Leroy (CS GROUP)" <chleroy@kernel.org> wrote:
> Test robot reports the following error with clang-16.0.6:
>
> In file included from kernel/rseq.c:75:
> include/linux/rseq_entry.h:141:3: error: invalid operand for instruction
> unsafe_get_user(offset, &ucs->post_commit_offset, efault);
> ^
> include/linux/uaccess.h:608:2: note: expanded from macro 'unsafe_get_user'
> arch_unsafe_get_user(x, ptr, local_label); \
> ^
> arch/powerpc/include/asm/uaccess.h:518:2: note: expanded from macro 'arch_unsafe_get_user'
> __get_user_size_goto(__gu_val, __gu_addr, sizeof(*(p)), e); \
> ^
> arch/powerpc/include/asm/uaccess.h:284:2: note: expanded from macro '__get_user_size_goto'
> __get_user_size_allowed(x, ptr, size, __gus_retval); \
> ^
> arch/powerpc/include/asm/uaccess.h:275:10: note: expanded from macro '__get_user_size_allowed'
> case 8: __get_user_asm2(x, (u64 __user *)ptr, retval); break; \
> ^
> arch/powerpc/include/asm/uaccess.h:258:4: note: expanded from macro '__get_user_asm2'
> " li %1+1,0\n" \
> ^
> <inline asm>:7:5: note: instantiated into assembly here
> li 31+1,0
> ^
> 1 error generated.
>
> On PPC32, a pair of registers is used for 64-bit variables. Usually the
> lower-numbered register of the pair holds the high part and the
> higher-numbered register the low part. GCC uses r3/r4 ... r11/r12 ...
> r14/r15 ... r30/r31.
>
> In older kernel code, inline assembly used %1 and %1+1 to represent
> 64-bit values. However, here it looks like clang uses r31 as the high
> part, although r32 doesn't exist, hence the error.
>
> Although %1+1 should work, most places now use %L1 instead of %1+1, so
> let's do the same here.
>
> With that change, the build doesn't fail anymore and a disassembly shows
> clang using the r17/r18 and r31/r14 pairs where GCC would have used
> r16/r17 and r30/r31:
Isn't it all horribly worse than that?
It only failed because clang picked r31, but if it can pick two
non-adjacent registers, might it not pick any pair?
In which case there could easily be a 64-bit get_user() that reads an
incorrect value and corrupts another register.
Find one and you might have a privilege escalation.
David
>
> Disassembly of section .fixup:
>
> 00000000 <.fixup>:
> 0: 38 a0 ff f2 li r5,-14
> 4: 3a 20 00 00 li r17,0
> 8: 3a 40 00 00 li r18,0
> c: 48 00 00 00 b c <.fixup+0xc>
> c: R_PPC_REL24 .text+0xbc
> 10: 38 a0 ff f2 li r5,-14
> 14: 3b e0 00 00 li r31,0
> 18: 39 c0 00 00 li r14,0
> 1c: 48 00 00 00 b 1c <.fixup+0x1c>
> 1c: R_PPC_REL24 .text+0x144
>
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202602021825.otcItxGi-lkp@intel.com/
> Fixes: c20beffeec3c ("powerpc/uaccess: Use flexible addressing with __put_user()/__get_user()")
> Signed-off-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
> ---
> I set the Fixes: tag to the commit that recently replaced %1+1 with %L1 in the main part of the macro, as the fix would be incomplete otherwise; the problem has actually been there since commit 2df5e8bcca53 ("powerpc: merge uaccess.h")
> ---
> arch/powerpc/include/asm/uaccess.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index ba1d878c3f404..570b3d91e2e40 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -255,7 +255,7 @@ __gus_failed: \
> ".section .fixup,\"ax\"\n" \
> "4: li %0,%3\n" \
> " li %1,0\n" \
> - " li %1+1,0\n" \
> + " li %L1,0\n" \
> " b 3b\n" \
> ".previous\n" \
> EX_TABLE(1b, 4b) \
* Re: [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32
2026-02-03 7:30 [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32 Christophe Leroy (CS GROUP)
2026-02-03 20:55 ` Nathan Chancellor
2026-02-03 22:19 ` David Laight
@ 2026-02-03 22:28 ` Segher Boessenkool
2026-03-11 2:13 ` Madhavan Srinivasan
3 siblings, 0 replies; 6+ messages in thread
From: Segher Boessenkool @ 2026-02-03 22:28 UTC (permalink / raw)
To: Christophe Leroy (CS GROUP)
Cc: Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
linux-kernel, linuxppc-dev, llvm, kernel test robot
Hi!
On Tue, Feb 03, 2026 at 08:30:41AM +0100, Christophe Leroy (CS GROUP) wrote:
> Test robot reports the following error with clang-16.0.6:
>
> In file included from kernel/rseq.c:75:
> include/linux/rseq_entry.h:141:3: error: invalid operand for instruction
> unsafe_get_user(offset, &ucs->post_commit_offset, efault);
> ^
> include/linux/uaccess.h:608:2: note: expanded from macro 'unsafe_get_user'
> arch_unsafe_get_user(x, ptr, local_label); \
> ^
> arch/powerpc/include/asm/uaccess.h:518:2: note: expanded from macro 'arch_unsafe_get_user'
> __get_user_size_goto(__gu_val, __gu_addr, sizeof(*(p)), e); \
> ^
> arch/powerpc/include/asm/uaccess.h:284:2: note: expanded from macro '__get_user_size_goto'
> __get_user_size_allowed(x, ptr, size, __gus_retval); \
> ^
> arch/powerpc/include/asm/uaccess.h:275:10: note: expanded from macro '__get_user_size_allowed'
> case 8: __get_user_asm2(x, (u64 __user *)ptr, retval); break; \
> ^
> arch/powerpc/include/asm/uaccess.h:258:4: note: expanded from macro '__get_user_asm2'
> " li %1+1,0\n" \
> ^
> <inline asm>:7:5: note: instantiated into assembly here
> li 31+1,0
> ^
> 1 error generated.
>
> On PPC32, a pair of registers is used for 64-bit variables. Usually the
> lower-numbered register of the pair holds the high part and the
> higher-numbered register the low part. GCC uses r3/r4 ... r11/r12 ...
> r14/r15 ... r30/r31.
>
> In older kernel code, inline assembly used %1 and %1+1 to represent
> 64-bit values. However, here it looks like clang uses r31 as the high
> part, although r32 doesn't exist, hence the error.
>
> Although %1+1 should work, most places now use %L1 instead of %1+1, so
> let's do the same here.
>
> With that change, the build doesn't fail anymore and a disassembly shows
> clang using the r17/r18 and r31/r14 pairs where GCC would have used
> r16/r17 and r30/r31:
This does not fix the problem that somehow LLVM thinks that GPR31/FPR0
is a valid pair for two-register integer things (well, 31+1 in
assembler is not actually valid at all). Quite worrying.
Maybe you can fix this in a more fundamental way? In LLVM itself?
(The kernel patch of course is a nice workaround, if it in fact works
reliably, but a big fat comment here would be useful. Pointing to the
LLVM problem report where this is tracked, etc.)
Segher
* Re: [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32
2026-02-03 22:19 ` David Laight
@ 2026-02-04 0:13 ` Segher Boessenkool
0 siblings, 0 replies; 6+ messages in thread
From: Segher Boessenkool @ 2026-02-04 0:13 UTC (permalink / raw)
To: David Laight
Cc: Christophe Leroy (CS GROUP), Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Nathan Chancellor,
Nick Desaulniers, Bill Wendling, Justin Stitt, linux-kernel,
linuxppc-dev, llvm, kernel test robot
Hi!
On Tue, Feb 03, 2026 at 10:19:39PM +0000, David Laight wrote:
> On Tue, 3 Feb 2026 08:30:41 +0100
> "Christophe Leroy (CS GROUP)" <chleroy@kernel.org> wrote:
>
> > Test robot reports the following error with clang-16.0.6:
> >
> > In file included from kernel/rseq.c:75:
> > include/linux/rseq_entry.h:141:3: error: invalid operand for instruction
> > unsafe_get_user(offset, &ucs->post_commit_offset, efault);
> > ^
> > include/linux/uaccess.h:608:2: note: expanded from macro 'unsafe_get_user'
> > arch_unsafe_get_user(x, ptr, local_label); \
> > ^
> > arch/powerpc/include/asm/uaccess.h:518:2: note: expanded from macro 'arch_unsafe_get_user'
> > __get_user_size_goto(__gu_val, __gu_addr, sizeof(*(p)), e); \
> > ^
> > arch/powerpc/include/asm/uaccess.h:284:2: note: expanded from macro '__get_user_size_goto'
> > __get_user_size_allowed(x, ptr, size, __gus_retval); \
> > ^
> > arch/powerpc/include/asm/uaccess.h:275:10: note: expanded from macro '__get_user_size_allowed'
> > case 8: __get_user_asm2(x, (u64 __user *)ptr, retval); break; \
> > ^
> > arch/powerpc/include/asm/uaccess.h:258:4: note: expanded from macro '__get_user_asm2'
> > " li %1+1,0\n" \
> > ^
> > <inline asm>:7:5: note: instantiated into assembly here
> > li 31+1,0
> > ^
> > 1 error generated.
> >
> > On PPC32, a pair of registers is used for 64-bit variables. Usually the
> > lower-numbered register of the pair holds the high part and the
> > higher-numbered register the low part. GCC uses r3/r4 ... r11/r12 ...
> > r14/r15 ... r30/r31.
> >
> > In older kernel code, inline assembly used %1 and %1+1 to represent
> > 64-bit values. However, here it looks like clang uses r31 as the high
> > part, although r32 doesn't exist, hence the error.
> >
> > Although %1+1 should work, most places now use %L1 instead of %1+1, so
> > let's do the same here.
> >
> > With that change, the build doesn't fail anymore and a disassembly shows
> > clang using the r17/r18 and r31/r14 pairs where GCC would have used
> > r16/r17 and r30/r31:
>
> Isn't it all horribly worse than that?
> It only failed because clang picked r31, but if it can pick two
> non-adjacent registers, might it not pick any pair?
> In which case there could easily be a 64-bit get_user() that reads an
> incorrect value and corrupts another register.
> Find one and you might have a privilege escalation.
I don't think LLVM is that broken; it only has problems for some edge
cases. Yes, I might expect too much. But without proof to the contrary,
let's assume things are okay :-)
And, worrying. But what can we do about it? Other than never ever
use LLVM for anything serious, of course.
Segher
* Re: [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32
2026-02-03 7:30 [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32 Christophe Leroy (CS GROUP)
` (2 preceding siblings ...)
2026-02-03 22:28 ` Segher Boessenkool
@ 2026-03-11 2:13 ` Madhavan Srinivasan
3 siblings, 0 replies; 6+ messages in thread
From: Madhavan Srinivasan @ 2026-03-11 2:13 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Nathan Chancellor,
Nick Desaulniers, Bill Wendling, Justin Stitt, Segher Boessenkool,
Christophe Leroy (CS GROUP)
Cc: linux-kernel, linuxppc-dev, llvm, kernel test robot
On Tue, 03 Feb 2026 08:30:41 +0100, Christophe Leroy (CS GROUP) wrote:
> Test robot reports the following error with clang-16.0.6:
>
> In file included from kernel/rseq.c:75:
> include/linux/rseq_entry.h:141:3: error: invalid operand for instruction
> unsafe_get_user(offset, &ucs->post_commit_offset, efault);
> ^
> include/linux/uaccess.h:608:2: note: expanded from macro 'unsafe_get_user'
> arch_unsafe_get_user(x, ptr, local_label); \
> ^
> arch/powerpc/include/asm/uaccess.h:518:2: note: expanded from macro 'arch_unsafe_get_user'
> __get_user_size_goto(__gu_val, __gu_addr, sizeof(*(p)), e); \
> ^
> arch/powerpc/include/asm/uaccess.h:284:2: note: expanded from macro '__get_user_size_goto'
> __get_user_size_allowed(x, ptr, size, __gus_retval); \
> ^
> arch/powerpc/include/asm/uaccess.h:275:10: note: expanded from macro '__get_user_size_allowed'
> case 8: __get_user_asm2(x, (u64 __user *)ptr, retval); break; \
> ^
> arch/powerpc/include/asm/uaccess.h:258:4: note: expanded from macro '__get_user_asm2'
> " li %1+1,0\n" \
> ^
> <inline asm>:7:5: note: instantiated into assembly here
> li 31+1,0
> ^
> 1 error generated.
>
> [...]
Applied to powerpc/fixes.
[1/1] powerpc/uaccess: Fix inline assembly for clang build on PPC32
https://git.kernel.org/powerpc/c/0ee95a1d458630272d0415d0ffa9424fcb606c90
cheers
end of thread, other threads:[~2026-03-11 2:14 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
2026-02-03 7:30 [PATCH] powerpc/uaccess: Fix inline assembly for clang build on PPC32 Christophe Leroy (CS GROUP)
2026-02-03 20:55 ` Nathan Chancellor
2026-02-03 22:19 ` David Laight
2026-02-04 0:13 ` Segher Boessenkool
2026-02-03 22:28 ` Segher Boessenkool
2026-03-11 2:13 ` Madhavan Srinivasan
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox