* [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2)
@ 2023-06-22 10:54 Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 01/14] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
` (13 more replies)
0 siblings, 14 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
This series adds UACCESS validation for PPC32. It includes half
a dozen changes to the objtool core.
It is almost mature: it performs code analysis for all of PPC32 and
is only missing the marking of UACCESS enable/disable for book3s/32.
Most object files are correctly decoded; only a few
'unreachable instruction' warnings remain, due to more complex
switch table cases:
- Loading of table address after the dynamic jump
- Nested switches
It made it possible to detect some UACCESS mess in a few files;
those have been fixed through other patches.
Christophe Leroy (14):
powerpc/kuap: Avoid unnecessary reads of MD_AP
powerpc/kuap: Avoid useless jump_label on empty function
powerpc/kuap: Refactor static branch for disabling kuap
powerpc/kuap: Make disabling KUAP at boottime impossible except on
book3s/64
powerpc/kuap: KUAP enabling/disabling functions must be
__always_inline
Revert "powerpc/bug: Provide better flexibility to
WARN_ON/__WARN_FLAGS() with asm goto"
objtool: Allow an architecture to disable objtool on ASM files
objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc
objtool: Add INSN_RETURN_CONDITIONAL
objtool: Add support for relative switch tables
objtool: Remove too strict constraint in jump table search
objtool: Add support for more complex UACCESS control
powerpc/bug: Annotate reachable after warning trap
powerpc: Implement UACCESS validation on PPC32
arch/Kconfig | 5 ++
arch/powerpc/Kconfig | 2 +
arch/powerpc/include/asm/book3s/32/kup.h | 39 ++++-----
arch/powerpc/include/asm/book3s/64/kup.h | 36 ++++----
arch/powerpc/include/asm/bug.h | 77 ++++-------------
arch/powerpc/include/asm/kup.h | 66 ++++++++++-----
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 52 +++++-------
arch/powerpc/include/asm/nohash/kup-booke.h | 49 +++++------
arch/powerpc/include/asm/uaccess.h | 6 +-
arch/powerpc/kernel/misc_32.S | 2 +-
arch/powerpc/kernel/traps.c | 9 +-
arch/powerpc/kexec/core_32.c | 4 +-
arch/powerpc/mm/book3s32/kuap.c | 18 ++--
arch/powerpc/mm/init-common.c | 3 +
arch/powerpc/mm/nohash/kup.c | 11 +--
include/linux/objtool.h | 14 ++++
scripts/Makefile.build | 4 +
tools/objtool/arch/powerpc/decode.c | 82 +++++++++++++++++--
.../arch/powerpc/include/arch/special.h | 2 +-
tools/objtool/arch/powerpc/special.c | 44 +++++++++-
tools/objtool/arch/x86/special.c | 3 +-
tools/objtool/check.c | 79 ++++++++++++++----
tools/objtool/include/objtool/arch.h | 1 +
tools/objtool/include/objtool/elf.h | 1 +
tools/objtool/include/objtool/special.h | 2 +-
tools/objtool/special.c | 55 ++++++-------
26 files changed, 395 insertions(+), 271 deletions(-)
--
2.40.1
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH v2 01/14] powerpc/kuap: Avoid unnecessary reads of MD_AP
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 02/14] powerpc/kuap: Avoid useless jump_label on empty function Christophe Leroy
` (12 subsequent siblings)
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
A disassembly of interrupt_exit_kernel_prepare() shows a useless read
of the MD_AP register. This is apparent from r9 being overwritten
immediately, without anything being done with the value read.
c000e0e0: 60 00 00 00 nop
c000e0e4: ===> 7d 3a c2 a6 mfmd_ap r9 <====
c000e0e8: 7d 20 00 a6 mfmsr r9
c000e0ec: 7c 51 13 a6 mtspr 81,r2
c000e0f0: 81 3f 00 84 lwz r9,132(r31)
c000e0f4: 71 29 80 00 andi. r9,r9,32768
kuap_get_and_assert_locked() is paired with kuap_kernel_restore()
and are only used in interrupt_exit_kernel_prepare(). The value
returned by kuap_get_and_assert_locked() is only used by
kuap_kernel_restore().
On 8xx, kuap_kernel_restore() doesn't use the value read by
kuap_get_and_assert_locked(), so modify kuap_get_and_assert_locked()
to not perform the read of MD_AP and to return 0 instead.
The same applies on BOOKE.
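The transformation can be sketched in plain C with a mocked SPR accessor
(all names below are illustrative stand-ins, not the kernel's actual
helpers): with the debug option disabled, the new variant performs no
SPR read at all, which is exactly the instruction the disassembly above
shows being wasted.

```c
#include <assert.h>

/* Illustrative mock of mfspr(SPRN_MD_AP); counts how often the SPR is read. */
static int spr_reads;
static unsigned long mock_mfspr(void)
{
	spr_reads++;
	return 0xde000000UL;	/* pretend this is the MD_APG_KUAP value */
}

#define KUAP_DEBUG 0	/* mimics CONFIG_PPC_KUAP_DEBUG=n */

/* Before the patch: the SPR is read unconditionally, the value then unused. */
static unsigned long get_and_assert_locked_old(void)
{
	unsigned long kuap = mock_mfspr();

	if (KUAP_DEBUG)
		assert(kuap >> 16 == 0xde00);
	return kuap;	/* the 8xx caller never uses this value */
}

/* After the patch: the read only exists inside the debug-only branch. */
static unsigned long get_and_assert_locked_new(void)
{
	if (KUAP_DEBUG)
		assert(mock_mfspr() >> 16 == 0xde00);
	return 0;
}
```

With CONFIG_PPC_KUAP_DEBUG=n the compiler drops the dead branch entirely,
so no mfspr instruction is emitted in the new variant.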
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 8 ++------
arch/powerpc/include/asm/nohash/kup-booke.h | 6 ++----
2 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index c44d97751723..8579210f2a6a 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -41,14 +41,10 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
static inline unsigned long __kuap_get_and_assert_locked(void)
{
- unsigned long kuap;
-
- kuap = mfspr(SPRN_MD_AP);
-
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
- WARN_ON_ONCE(kuap >> 16 != MD_APG_KUAP >> 16);
+ WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
- return kuap;
+ return 0;
}
static inline void __allow_user_access(void __user *to, const void __user *from,
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 49bb41ed0816..823c5a3a96d8 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -58,12 +58,10 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
static inline unsigned long __kuap_get_and_assert_locked(void)
{
- unsigned long kuap = mfspr(SPRN_PID);
-
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
- WARN_ON_ONCE(kuap);
+ WARN_ON_ONCE(mfspr(SPRN_PID));
- return kuap;
+ return 0;
}
static inline void __allow_user_access(void __user *to, const void __user *from,
--
2.40.1
* [PATCH v2 02/14] powerpc/kuap: Avoid useless jump_label on empty function
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 01/14] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 03/14] powerpc/kuap: Refactor static branch for disabling kuap Christophe Leroy
` (11 subsequent siblings)
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
Disassembly of interrupt_enter_prepare() shows a pointless nop
before the mftb:
c000abf0 <interrupt_enter_prepare>:
c000abf0: 81 23 00 84 lwz r9,132(r3)
c000abf4: 71 29 40 00 andi. r9,r9,16384
c000abf8: 41 82 00 28 beq- c000ac20 <interrupt_enter_prepare+0x30>
c000abfc: ===> 60 00 00 00 nop <====
c000ac00: 7d 0c 42 e6 mftb r8
c000ac04: 80 e2 00 08 lwz r7,8(r2)
c000ac08: 81 22 00 28 lwz r9,40(r2)
c000ac0c: 91 02 00 24 stw r8,36(r2)
c000ac10: 7d 29 38 50 subf r9,r9,r7
c000ac14: 7d 29 42 14 add r9,r9,r8
c000ac18: 91 22 00 08 stw r9,8(r2)
c000ac1c: 4e 80 00 20 blr
c000ac20: 60 00 00 00 nop
c000ac24: 7d 5a c2 a6 mfmd_ap r10
c000ac28: 3d 20 de 00 lis r9,-8704
c000ac2c: 91 43 00 b0 stw r10,176(r3)
c000ac30: 7d 3a c3 a6 mtspr 794,r9
c000ac34: 4e 80 00 20 blr
That comes from the call to kuap_lock(), although __kuap_lock() is an
empty function on the 8xx.
To avoid that, only perform the kuap_is_disabled() check when there is
something to do in __kuap_lock().
Do the same with __kuap_save_and_lock() and
__kuap_get_and_assert_locked().
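The mechanism used here is the "define the symbol to its own name" idiom:
an arch header that actually implements a hook also #defines the hook's
name, and the generic wrapper compiles in the call (and the preceding
kuap_is_disabled() check) only under #ifdef. A minimal stand-alone sketch,
with illustrative names rather than the kernel's real ones:

```c
#include <assert.h>

static int lock_calls;

/* "Arch" header side: provide an implementation and advertise it. */
static inline void __arch_lock(void)
{
	lock_calls++;	/* stands in for mtspr()/isync() */
}
#define __arch_lock __arch_lock	/* self-referential define: no recursion */

/* Generic header side: emit the call only when an implementation exists.
 * When the arch defines nothing, this function compiles to an empty body,
 * so no jump_label nop is left behind. */
static inline void generic_lock(void)
{
#ifdef __arch_lock
	__arch_lock();
#endif
}
```

On an arch without the #define, generic_lock() is empty and callers get no
residual nop, which is the whole point of the patch.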
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
---
v2: Add back comment about __kuap_lock() not being needed on 64s
---
arch/powerpc/include/asm/book3s/32/kup.h | 6 ++---
arch/powerpc/include/asm/book3s/64/kup.h | 10 ++------
arch/powerpc/include/asm/kup.h | 25 ++++++++++----------
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 11 ++++-----
arch/powerpc/include/asm/nohash/kup-booke.h | 8 +++++--
5 files changed, 27 insertions(+), 33 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 678f9c9d89b6..466a19cfb4df 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -77,10 +77,6 @@ static inline void kuap_unlock(unsigned long addr, bool ool)
kuap_unlock_all_ool();
}
-static inline void __kuap_lock(void)
-{
-}
-
static inline void __kuap_save_and_lock(struct pt_regs *regs)
{
unsigned long kuap = current->thread.kuap;
@@ -92,6 +88,7 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
current->thread.kuap = KUAP_NONE;
kuap_lock_addr(kuap, false);
}
+#define __kuap_save_and_lock __kuap_save_and_lock
static inline void kuap_user_restore(struct pt_regs *regs)
{
@@ -120,6 +117,7 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
return kuap;
}
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
static __always_inline void __allow_user_access(void __user *to, const void __user *from,
u32 size, unsigned long dir)
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 84c09e546115..2a7bd3ecc556 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -298,15 +298,9 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
return amr;
}
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
-/* Do nothing, book3s/64 does that in ASM */
-static inline void __kuap_lock(void)
-{
-}
-
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
-{
-}
+/* __kuap_lock() not required, book3s/64 does that in ASM */
/*
* We support individually allowing read or write, but we don't support nesting
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index d751ddd08110..0a4e07175612 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -52,16 +52,9 @@ __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
return false;
}
-static inline void __kuap_lock(void) { }
-static inline void __kuap_save_and_lock(struct pt_regs *regs) { }
static inline void kuap_user_restore(struct pt_regs *regs) { }
static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
-static inline unsigned long __kuap_get_and_assert_locked(void)
-{
- return 0;
-}
-
/*
* book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
* the L1D cache after user accesses. Only include the empty stubs for other
@@ -87,27 +80,32 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
static __always_inline void kuap_assert_locked(void)
{
+#if defined(CONFIG_PPC_KUAP_DEBUG) && defined(__kuap_get_and_assert_locked)
if (kuap_is_disabled())
return;
- if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
- __kuap_get_and_assert_locked();
+ __kuap_get_and_assert_locked();
+#endif
}
static __always_inline void kuap_lock(void)
{
+#ifdef __kuap_lock
if (kuap_is_disabled())
return;
__kuap_lock();
+#endif
}
static __always_inline void kuap_save_and_lock(struct pt_regs *regs)
{
+#ifdef __kuap_save_and_lock
if (kuap_is_disabled())
return;
__kuap_save_and_lock(regs);
+#endif
}
static __always_inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
@@ -120,10 +118,11 @@ static __always_inline void kuap_kernel_restore(struct pt_regs *regs, unsigned l
static __always_inline unsigned long kuap_get_and_assert_locked(void)
{
- if (kuap_is_disabled())
- return 0;
-
- return __kuap_get_and_assert_locked();
+#ifdef __kuap_get_and_assert_locked
+ if (!kuap_is_disabled())
+ return __kuap_get_and_assert_locked();
+#endif
+ return 0;
}
#ifndef CONFIG_PPC_BOOK3S_64
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 8579210f2a6a..a372cd822887 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -20,15 +20,12 @@ static __always_inline bool kuap_is_disabled(void)
return static_branch_unlikely(&disable_kuap_key);
}
-static inline void __kuap_lock(void)
-{
-}
-
static inline void __kuap_save_and_lock(struct pt_regs *regs)
{
regs->kuap = mfspr(SPRN_MD_AP);
mtspr(SPRN_MD_AP, MD_APG_KUAP);
}
+#define __kuap_save_and_lock __kuap_save_and_lock
static inline void kuap_user_restore(struct pt_regs *regs)
{
@@ -39,13 +36,15 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
mtspr(SPRN_MD_AP, regs->kuap);
}
+#ifdef CONFIG_PPC_KUAP_DEBUG
static inline unsigned long __kuap_get_and_assert_locked(void)
{
- if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
- WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
+ WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
return 0;
}
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
+#endif
static inline void __allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 823c5a3a96d8..71182cbe20c3 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -30,6 +30,7 @@ static inline void __kuap_lock(void)
mtspr(SPRN_PID, 0);
isync();
}
+#define __kuap_lock __kuap_lock
static inline void __kuap_save_and_lock(struct pt_regs *regs)
{
@@ -37,6 +38,7 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
mtspr(SPRN_PID, 0);
isync();
}
+#define __kuap_save_and_lock __kuap_save_and_lock
static inline void kuap_user_restore(struct pt_regs *regs)
{
@@ -56,13 +58,15 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
/* Context synchronisation is performed by rfi */
}
+#ifdef CONFIG_PPC_KUAP_DEBUG
static inline unsigned long __kuap_get_and_assert_locked(void)
{
- if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
- WARN_ON_ONCE(mfspr(SPRN_PID));
+ WARN_ON_ONCE(mfspr(SPRN_PID));
return 0;
}
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
+#endif
static inline void __allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
--
2.40.1
* [PATCH v2 03/14] powerpc/kuap: Refactor static branch for disabling kuap
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 01/14] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 02/14] powerpc/kuap: Avoid useless jump_label on empty function Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 04/14] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64 Christophe Leroy
` (10 subsequent siblings)
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
All targets but book3s/64 use a static branch key for disabling KUAP;
book3s/64 uses an MMU feature instead.
Refactor the static branch handling so it is shared by all targets
except book3s/64.
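The resulting selection pattern is the same #ifndef fallback seen in the
diff below: book3s/64 supplies its own kuap_is_disabled() and advertises
it with a define, and the generic header provides the static-key default
only when no override was seen. A hedged stand-alone sketch (the real
predicates are mocked out):

```c
#include <assert.h>
#include <stdbool.h>

#define HAVE_ARCH_OVERRIDE 1	/* flip to 0 to exercise the generic default */

#if HAVE_ARCH_OVERRIDE
/* "book3s/64" side: its own predicate, advertised via the define idiom. */
static inline bool kuap_is_disabled(void)
{
	return false;	/* stands in for !mmu_has_feature(MMU_FTR_BOOK3S_KUAP) */
}
#define kuap_is_disabled kuap_is_disabled
#endif

/* Generic header: only supplies the default when no override exists. */
#ifndef kuap_is_disabled
static inline bool kuap_is_disabled(void)
{
	return false;	/* stands in for static_branch_unlikely(&disable_kuap_key) */
}
#endif
```

Either way exactly one definition of kuap_is_disabled() survives
preprocessing, so callers need no per-arch #ifdefs of their own.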
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/include/asm/book3s/32/kup.h | 7 -------
arch/powerpc/include/asm/book3s/64/kup.h | 1 +
arch/powerpc/include/asm/kup.h | 15 +++++++++++++++
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 7 -------
arch/powerpc/include/asm/nohash/kup-booke.h | 7 -------
arch/powerpc/mm/book3s32/kuap.c | 3 ---
arch/powerpc/mm/init-common.c | 3 +++
arch/powerpc/mm/nohash/kup.c | 3 ---
8 files changed, 19 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 466a19cfb4df..8da9997a67ba 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -11,8 +11,6 @@
#include <linux/jump_label.h>
-extern struct static_key_false disable_kuap_key;
-
static __always_inline bool kuep_is_disabled(void)
{
return !IS_ENABLED(CONFIG_PPC_KUEP);
@@ -25,11 +23,6 @@ static __always_inline bool kuep_is_disabled(void)
#define KUAP_NONE (~0UL)
#define KUAP_ALL (~1UL)
-static __always_inline bool kuap_is_disabled(void)
-{
- return static_branch_unlikely(&disable_kuap_key);
-}
-
static inline void kuap_lock_one(unsigned long addr)
{
mtsr(mfsr(addr) | SR_KS, addr);
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 2a7bd3ecc556..d44c4ee2e8c3 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -234,6 +234,7 @@ static __always_inline bool kuap_is_disabled(void)
{
return !mmu_has_feature(MMU_FTR_BOOK3S_KUAP);
}
+#define kuap_is_disabled kuap_is_disabled
static inline void kuap_user_restore(struct pt_regs *regs)
{
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 0a4e07175612..74b7f4cee2ed 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -6,6 +6,12 @@
#define KUAP_WRITE 2
#define KUAP_READ_WRITE (KUAP_READ | KUAP_WRITE)
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+
+static __always_inline bool kuap_is_disabled(void);
+#endif
+
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/kup.h>
#endif
@@ -41,6 +47,15 @@ void setup_kuep(bool disabled);
#ifdef CONFIG_PPC_KUAP
void setup_kuap(bool disabled);
+
+#ifndef kuap_is_disabled
+extern struct static_key_false disable_kuap_key;
+
+static __always_inline bool kuap_is_disabled(void)
+{
+ return static_branch_unlikely(&disable_kuap_key);
+}
+#endif
#else
static inline void setup_kuap(bool disabled) { }
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index a372cd822887..1d53f38c5cd5 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -13,13 +13,6 @@
#include <asm/reg.h>
-extern struct static_key_false disable_kuap_key;
-
-static __always_inline bool kuap_is_disabled(void)
-{
- return static_branch_unlikely(&disable_kuap_key);
-}
-
static inline void __kuap_save_and_lock(struct pt_regs *regs)
{
regs->kuap = mfspr(SPRN_MD_AP);
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 71182cbe20c3..07759ae9117b 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -18,13 +18,6 @@
#include <asm/reg.h>
-extern struct static_key_false disable_kuap_key;
-
-static __always_inline bool kuap_is_disabled(void)
-{
- return static_branch_unlikely(&disable_kuap_key);
-}
-
static inline void __kuap_lock(void)
{
mtspr(SPRN_PID, 0);
diff --git a/arch/powerpc/mm/book3s32/kuap.c b/arch/powerpc/mm/book3s32/kuap.c
index 28676cabb005..c5484729b595 100644
--- a/arch/powerpc/mm/book3s32/kuap.c
+++ b/arch/powerpc/mm/book3s32/kuap.c
@@ -3,9 +3,6 @@
#include <asm/kup.h>
#include <asm/smp.h>
-struct static_key_false disable_kuap_key;
-EXPORT_SYMBOL(disable_kuap_key);
-
void kuap_lock_all_ool(void)
{
kuap_lock_all();
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 119ef491f797..74e140b1efef 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -32,6 +32,9 @@ EXPORT_SYMBOL_GPL(kernstart_virt_addr);
bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
+struct static_key_false disable_kuap_key;
+EXPORT_SYMBOL(disable_kuap_key);
+
static int __init parse_nosmep(char *p)
{
if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
diff --git a/arch/powerpc/mm/nohash/kup.c b/arch/powerpc/mm/nohash/kup.c
index 552becf90e97..4e22adfa2aa8 100644
--- a/arch/powerpc/mm/nohash/kup.c
+++ b/arch/powerpc/mm/nohash/kup.c
@@ -13,9 +13,6 @@
#include <asm/smp.h>
#ifdef CONFIG_PPC_KUAP
-struct static_key_false disable_kuap_key;
-EXPORT_SYMBOL(disable_kuap_key);
-
void setup_kuap(bool disabled)
{
if (disabled) {
--
2.40.1
* [PATCH v2 04/14] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (2 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 03/14] powerpc/kuap: Refactor static branch for disabling kuap Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 05/14] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline Christophe Leroy
` (9 subsequent siblings)
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
It is possible to disable KUAP at boot time with the 'nosmap' parameter.
That is implemented with a jump_label, and hence adds a 'nop' in front
of each open/close of userspace access.
From a security point of view it makes sense to disallow disabling
KUAP. And on processors like the 8xx, where a 'nop' is not seamless,
it saves a few cycles.
In addition, objtool UACCESS validation cannot cope with that
feature, because it visits every static branch independently and
is not able to see that when a UACCESS enable is skipped, the
matching UACCESS disable is skipped as well.
So make KUAP always active when selected in the kernel config.
In the future it may be re-implemented by nopping the instructions
instead of using static branches.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/include/asm/kup.h | 4 +---
arch/powerpc/mm/book3s32/kuap.c | 15 ++++++---------
arch/powerpc/mm/init-common.c | 6 +++---
arch/powerpc/mm/nohash/kup.c | 8 +-------
4 files changed, 11 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 74b7f4cee2ed..a02340535efa 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -49,11 +49,9 @@ void setup_kuep(bool disabled);
void setup_kuap(bool disabled);
#ifndef kuap_is_disabled
-extern struct static_key_false disable_kuap_key;
-
static __always_inline bool kuap_is_disabled(void)
{
- return static_branch_unlikely(&disable_kuap_key);
+ return false;
}
#endif
#else
diff --git a/arch/powerpc/mm/book3s32/kuap.c b/arch/powerpc/mm/book3s32/kuap.c
index c5484729b595..639219d0a821 100644
--- a/arch/powerpc/mm/book3s32/kuap.c
+++ b/arch/powerpc/mm/book3s32/kuap.c
@@ -17,17 +17,14 @@ EXPORT_SYMBOL(kuap_unlock_all_ool);
void setup_kuap(bool disabled)
{
- if (!disabled) {
- kuap_lock_all_ool();
- init_mm.context.sr0 |= SR_KS;
- current->thread.sr0 |= SR_KS;
- }
+ WARN_ON(disabled);
+
+ kuap_lock_all_ool();
+ init_mm.context.sr0 |= SR_KS;
+ current->thread.sr0 |= SR_KS;
if (smp_processor_id() != boot_cpuid)
return;
- if (disabled)
- static_branch_enable(&disable_kuap_key);
- else
- pr_info("Activating Kernel Userspace Access Protection\n");
+ pr_info("Activating Kernel Userspace Access Protection\n");
}
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 74e140b1efef..237086742ec7 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -32,9 +32,6 @@ EXPORT_SYMBOL_GPL(kernstart_virt_addr);
bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
-struct static_key_false disable_kuap_key;
-EXPORT_SYMBOL(disable_kuap_key);
-
static int __init parse_nosmep(char *p)
{
if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
@@ -48,6 +45,9 @@ early_param("nosmep", parse_nosmep);
static int __init parse_nosmap(char *p)
{
+ if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+ return 0;
+
disable_kuap = true;
pr_warn("Disabling Kernel Userspace Access Protection\n");
return 0;
diff --git a/arch/powerpc/mm/nohash/kup.c b/arch/powerpc/mm/nohash/kup.c
index 4e22adfa2aa8..3bb79f89de06 100644
--- a/arch/powerpc/mm/nohash/kup.c
+++ b/arch/powerpc/mm/nohash/kup.c
@@ -15,13 +15,7 @@
#ifdef CONFIG_PPC_KUAP
void setup_kuap(bool disabled)
{
- if (disabled) {
- if (IS_ENABLED(CONFIG_40x))
- disable_kuep = true;
- if (smp_processor_id() == boot_cpuid)
- static_branch_enable(&disable_kuap_key);
- return;
- }
+ WARN_ON(disabled);
pr_info("Activating Kernel Userspace Access Protection\n");
--
2.40.1
* [PATCH v2 05/14] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (3 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 04/14] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64 Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 06/14] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto" Christophe Leroy
` (8 subsequent siblings)
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
Objtool reports the following warnings:
arch/powerpc/kernel/signal_32.o: warning: objtool:
__prevent_user_access.constprop.0+0x4 (.text+0x4):
redundant UACCESS disable
arch/powerpc/kernel/signal_32.o: warning: objtool: user_access_begin+0x2c
(.text+0x4c): return with UACCESS enabled
arch/powerpc/kernel/signal_32.o: warning: objtool: handle_rt_signal32+0x188
(.text+0x360): call to __prevent_user_access.constprop.0() with UACCESS enabled
arch/powerpc/kernel/signal_32.o: warning: objtool: handle_signal32+0x150
(.text+0x4d4): call to __prevent_user_access.constprop.0() with UACCESS enabled
This is due to some KUAP enabling/disabling functions being emitted
out of line although they are marked inline. Use __always_inline instead.
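The underlying issue is that plain `inline` is only a hint: GCC may still
emit an out-of-line copy (e.g. a `.constprop` clone, as in the warnings
above), and objtool then sees a real call crossing a UACCESS-enabled
region. `__always_inline` turns the hint into a requirement via a
function attribute; roughly (the kernel's actual definition lives in
include/linux/compiler_types.h, this is an approximation):

```c
#include <assert.h>

/* Approximation of the kernel's __always_inline. */
#define __always_inline inline __attribute__((__always_inline__))

/* A helper that must never become a real call while UACCESS is open:
 * the body below is a stand-in for the mtspr() that closes the window. */
static __always_inline void prevent_user_access_sketch(unsigned long *state)
{
	*state = 0;	/* "close" the userspace access window */
}
```

Behaviour is unchanged; only the code-generation guarantee differs, so
objtool no longer sees a call instruction inside the UACCESS region.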
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/include/asm/book3s/32/kup.h | 26 ++++++++++----------
arch/powerpc/include/asm/book3s/64/kup.h | 23 ++++++++---------
arch/powerpc/include/asm/kup.h | 16 ++++++------
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 20 +++++++--------
arch/powerpc/include/asm/nohash/kup-booke.h | 22 ++++++++---------
arch/powerpc/include/asm/uaccess.h | 6 ++---
6 files changed, 57 insertions(+), 56 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 8da9997a67ba..41746a22c650 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -23,25 +23,25 @@ static __always_inline bool kuep_is_disabled(void)
#define KUAP_NONE (~0UL)
#define KUAP_ALL (~1UL)
-static inline void kuap_lock_one(unsigned long addr)
+static __always_inline void kuap_lock_one(unsigned long addr)
{
mtsr(mfsr(addr) | SR_KS, addr);
isync(); /* Context sync required after mtsr() */
}
-static inline void kuap_unlock_one(unsigned long addr)
+static __always_inline void kuap_unlock_one(unsigned long addr)
{
mtsr(mfsr(addr) & ~SR_KS, addr);
isync(); /* Context sync required after mtsr() */
}
-static inline void kuap_lock_all(void)
+static __always_inline void kuap_lock_all(void)
{
update_user_segments(mfsr(0) | SR_KS);
isync(); /* Context sync required after mtsr() */
}
-static inline void kuap_unlock_all(void)
+static __always_inline void kuap_unlock_all(void)
{
update_user_segments(mfsr(0) & ~SR_KS);
isync(); /* Context sync required after mtsr() */
@@ -50,7 +50,7 @@ static inline void kuap_unlock_all(void)
void kuap_lock_all_ool(void);
void kuap_unlock_all_ool(void);
-static inline void kuap_lock_addr(unsigned long addr, bool ool)
+static __always_inline void kuap_lock_addr(unsigned long addr, bool ool)
{
if (likely(addr != KUAP_ALL))
kuap_lock_one(addr);
@@ -60,7 +60,7 @@ static inline void kuap_lock_addr(unsigned long addr, bool ool)
kuap_lock_all_ool();
}
-static inline void kuap_unlock(unsigned long addr, bool ool)
+static __always_inline void kuap_unlock(unsigned long addr, bool ool)
{
if (likely(addr != KUAP_ALL))
kuap_unlock_one(addr);
@@ -70,7 +70,7 @@ static inline void kuap_unlock(unsigned long addr, bool ool)
kuap_unlock_all_ool();
}
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
+static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
{
unsigned long kuap = current->thread.kuap;
@@ -83,11 +83,11 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
}
#define __kuap_save_and_lock __kuap_save_and_lock
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
}
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
{
if (unlikely(kuap != KUAP_NONE)) {
current->thread.kuap = KUAP_NONE;
@@ -102,7 +102,7 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
kuap_unlock(regs->kuap, false);
}
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
unsigned long kuap = current->thread.kuap;
@@ -137,7 +137,7 @@ static __always_inline void __prevent_user_access(unsigned long dir)
kuap_lock_addr(kuap, true);
}
-static inline unsigned long __prevent_user_access_return(void)
+static __always_inline unsigned long __prevent_user_access_return(void)
{
unsigned long flags = current->thread.kuap;
@@ -149,7 +149,7 @@ static inline unsigned long __prevent_user_access_return(void)
return flags;
}
-static inline void __restore_user_access(unsigned long flags)
+static __always_inline void __restore_user_access(unsigned long flags)
{
if (flags != KUAP_NONE) {
current->thread.kuap = flags;
@@ -157,7 +157,7 @@ static inline void __restore_user_access(unsigned long flags)
}
}
-static inline bool
+static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
unsigned long kuap = regs->kuap;
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index d44c4ee2e8c3..f33e064b9f5f 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -213,14 +213,14 @@ extern u64 __ro_after_init default_iamr;
* access restrictions. Because of this ignore AMR value when accessing
* userspace via kernel thread.
*/
-static inline u64 current_thread_amr(void)
+static __always_inline u64 current_thread_amr(void)
{
if (current->thread.regs)
return current->thread.regs->amr;
return default_amr;
}
-static inline u64 current_thread_iamr(void)
+static __always_inline u64 current_thread_iamr(void)
{
if (current->thread.regs)
return current->thread.regs->iamr;
@@ -236,7 +236,7 @@ static __always_inline bool kuap_is_disabled(void)
}
#define kuap_is_disabled kuap_is_disabled
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
bool restore_amr = false, restore_iamr = false;
unsigned long amr, iamr;
@@ -275,7 +275,7 @@ static inline void kuap_user_restore(struct pt_regs *regs)
*/
}
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
{
if (likely(regs->amr == amr))
return;
@@ -291,7 +291,7 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr
*/
}
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
unsigned long amr = mfspr(SPRN_AMR);
@@ -308,7 +308,7 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
* because that would require an expensive read/modify write of the AMR.
*/
-static inline unsigned long get_kuap(void)
+static __always_inline unsigned long get_kuap(void)
{
/*
* We return AMR_KUAP_BLOCKED when we don't support KUAP because
@@ -338,7 +338,8 @@ static __always_inline void set_kuap(unsigned long value)
isync();
}
-static inline bool __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static __always_inline bool
+__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
/*
* For radix this will be a storage protection fault (DSISR_PROTFAULT).
@@ -381,12 +382,12 @@ static __always_inline void allow_user_access(void __user *to, const void __user
#else /* CONFIG_PPC_KUAP */
-static inline unsigned long get_kuap(void)
+static __always_inline unsigned long get_kuap(void)
{
return AMR_KUAP_BLOCKED;
}
-static inline void set_kuap(unsigned long value) { }
+static __always_inline void set_kuap(unsigned long value) { }
static __always_inline void allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
@@ -401,7 +402,7 @@ static __always_inline void prevent_user_access(unsigned long dir)
do_uaccess_flush();
}
-static inline unsigned long prevent_user_access_return(void)
+static __always_inline unsigned long prevent_user_access_return(void)
{
unsigned long flags = get_kuap();
@@ -412,7 +413,7 @@ static inline unsigned long prevent_user_access_return(void)
return flags;
}
-static inline void restore_user_access(unsigned long flags)
+static __always_inline void restore_user_access(unsigned long flags)
{
set_kuap(flags);
if (static_branch_unlikely(&uaccess_flush_key) && flags == AMR_KUAP_BLOCKED)
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index a02340535efa..132f1c7e1064 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -59,14 +59,14 @@ static inline void setup_kuap(bool disabled) { }
static __always_inline bool kuap_is_disabled(void) { return true; }
-static inline bool
+static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return false;
}
-static inline void kuap_user_restore(struct pt_regs *regs) { }
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
+static __always_inline void kuap_user_restore(struct pt_regs *regs) { }
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
/*
* book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
@@ -74,11 +74,11 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr
* platforms.
*/
#ifndef CONFIG_PPC_BOOK3S_64
-static inline void __allow_user_access(void __user *to, const void __user *from,
- unsigned long size, unsigned long dir) { }
-static inline void __prevent_user_access(unsigned long dir) { }
-static inline unsigned long __prevent_user_access_return(void) { return 0UL; }
-static inline void __restore_user_access(unsigned long flags) { }
+static __always_inline void __allow_user_access(void __user *to, const void __user *from,
+ unsigned long size, unsigned long dir) { }
+static __always_inline void __prevent_user_access(unsigned long dir) { }
+static __always_inline unsigned long __prevent_user_access_return(void) { return 0UL; }
+static __always_inline void __restore_user_access(unsigned long flags) { }
#endif /* CONFIG_PPC_BOOK3S_64 */
#endif /* CONFIG_PPC_KUAP */
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 1d53f38c5cd5..61067e4c8f22 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -13,24 +13,24 @@
#include <asm/reg.h>
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
+static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
{
regs->kuap = mfspr(SPRN_MD_AP);
mtspr(SPRN_MD_AP, MD_APG_KUAP);
}
#define __kuap_save_and_lock __kuap_save_and_lock
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
}
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
{
mtspr(SPRN_MD_AP, regs->kuap);
}
#ifdef CONFIG_PPC_KUAP_DEBUG
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
@@ -39,18 +39,18 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
#endif
-static inline void __allow_user_access(void __user *to, const void __user *from,
- unsigned long size, unsigned long dir)
+static __always_inline void __allow_user_access(void __user *to, const void __user *from,
+ unsigned long size, unsigned long dir)
{
mtspr(SPRN_MD_AP, MD_APG_INIT);
}
-static inline void __prevent_user_access(unsigned long dir)
+static __always_inline void __prevent_user_access(unsigned long dir)
{
mtspr(SPRN_MD_AP, MD_APG_KUAP);
}
-static inline unsigned long __prevent_user_access_return(void)
+static __always_inline unsigned long __prevent_user_access_return(void)
{
unsigned long flags;
@@ -61,12 +61,12 @@ static inline unsigned long __prevent_user_access_return(void)
return flags;
}
-static inline void __restore_user_access(unsigned long flags)
+static __always_inline void __restore_user_access(unsigned long flags)
{
mtspr(SPRN_MD_AP, flags);
}
-static inline bool
+static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return !((regs->kuap ^ MD_APG_KUAP) & 0xff000000);
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 07759ae9117b..416f3e0897d5 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -18,14 +18,14 @@
#include <asm/reg.h>
-static inline void __kuap_lock(void)
+static __always_inline void __kuap_lock(void)
{
mtspr(SPRN_PID, 0);
isync();
}
#define __kuap_lock __kuap_lock
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
+static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
{
regs->kuap = mfspr(SPRN_PID);
mtspr(SPRN_PID, 0);
@@ -33,7 +33,7 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
}
#define __kuap_save_and_lock __kuap_save_and_lock
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
{
if (kuap_is_disabled())
return;
@@ -43,7 +43,7 @@ static inline void kuap_user_restore(struct pt_regs *regs)
/* Context synchronisation is performed by rfi */
}
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
{
if (regs->kuap)
mtspr(SPRN_PID, current->thread.pid);
@@ -52,7 +52,7 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
}
#ifdef CONFIG_PPC_KUAP_DEBUG
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
{
WARN_ON_ONCE(mfspr(SPRN_PID));
@@ -61,20 +61,20 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
#endif
-static inline void __allow_user_access(void __user *to, const void __user *from,
- unsigned long size, unsigned long dir)
+static __always_inline void __allow_user_access(void __user *to, const void __user *from,
+ unsigned long size, unsigned long dir)
{
mtspr(SPRN_PID, current->thread.pid);
isync();
}
-static inline void __prevent_user_access(unsigned long dir)
+static __always_inline void __prevent_user_access(unsigned long dir)
{
mtspr(SPRN_PID, 0);
isync();
}
-static inline unsigned long __prevent_user_access_return(void)
+static __always_inline unsigned long __prevent_user_access_return(void)
{
unsigned long flags = mfspr(SPRN_PID);
@@ -84,7 +84,7 @@ static inline unsigned long __prevent_user_access_return(void)
return flags;
}
-static inline void __restore_user_access(unsigned long flags)
+static __always_inline void __restore_user_access(unsigned long flags)
{
if (flags) {
mtspr(SPRN_PID, current->thread.pid);
@@ -92,7 +92,7 @@ static inline void __restore_user_access(unsigned long flags)
}
}
-static inline bool
+static __always_inline bool
__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return !regs->kuap;
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index a2d255aa9627..fb725ec77926 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -386,7 +386,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
extern long __copy_from_user_flushcache(void *dst, const void __user *src,
unsigned size);
-static __must_check inline bool user_access_begin(const void __user *ptr, size_t len)
+static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
{
if (unlikely(!access_ok(ptr, len)))
return false;
@@ -401,7 +401,7 @@ static __must_check inline bool user_access_begin(const void __user *ptr, size_t
#define user_access_save prevent_user_access_return
#define user_access_restore restore_user_access
-static __must_check inline bool
+static __must_check __always_inline bool
user_read_access_begin(const void __user *ptr, size_t len)
{
if (unlikely(!access_ok(ptr, len)))
@@ -415,7 +415,7 @@ user_read_access_begin(const void __user *ptr, size_t len)
#define user_read_access_begin user_read_access_begin
#define user_read_access_end prevent_current_read_from_user
-static __must_check inline bool
+static __must_check __always_inline bool
user_write_access_begin(const void __user *ptr, size_t len)
{
if (unlikely(!access_ok(ptr, len)))
--
2.40.1
* [PATCH v2 06/14] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto"
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (4 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 05/14] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 07/14] objtool: Allow an architecture to disable objtool on ASM files Christophe Leroy
` (7 subsequent siblings)
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
This reverts commit 1e688dd2a3d6759d416616ff07afc4bb836c4213.
That commit aimed at optimising the code generated for
WARN_ON/BUG_ON, but it leads to a lot of dead code being erroneously
generated by GCC.
That dead code becomes a problem when we start using objtool validation
because objtool will abort validation with a warning as soon as it
detects unreachable code, since unreachable code may indicate that
objtool doesn't properly decode the object text.
text data bss dec hex filename
9551585 3627834 224376 13403795 cc8693 vmlinux.before
9535281 3628358 224376 13388015 cc48ef vmlinux.after
Once this change is reverted, the text size in a standard
configuration (pmac32 + function tracer) is reduced by 16k, which is
around 1.7%.
Taking into account that other problems are encountered with that
'asm goto' in WARN_ON(), including build failures, keeping that
change is not worth it although it is primarily a compiler bug.
So revert it for now.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Naveen N Rao <naveen@kernel.org>
---
v2: Do not remove tools/testing/selftests/powerpc/primitives/asm/extable.h and leave EX_TABLE in asm/extable.h
---
arch/powerpc/include/asm/book3s/64/kup.h | 2 +-
arch/powerpc/include/asm/bug.h | 67 ++++--------------------
arch/powerpc/kernel/misc_32.S | 2 +-
arch/powerpc/kernel/traps.c | 9 +---
4 files changed, 15 insertions(+), 65 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index f33e064b9f5f..7c8cc0f096b1 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -90,7 +90,7 @@
/* Prevent access to userspace using any key values */
LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
999: tdne \gpr1, \gpr2
- EMIT_WARN_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
+ EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67)
#endif
.endm
diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index ef42adb44aa3..a565995fb742 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -4,14 +4,13 @@
#ifdef __KERNEL__
#include <asm/asm-compat.h>
-#include <asm/extable.h>
#ifdef CONFIG_BUG
#ifdef __ASSEMBLY__
#include <asm/asm-offsets.h>
#ifdef CONFIG_DEBUG_BUGVERBOSE
-.macro __EMIT_BUG_ENTRY addr,file,line,flags
+.macro EMIT_BUG_ENTRY addr,file,line,flags
.section __bug_table,"aw"
5001: .4byte \addr - .
.4byte 5002f - .
@@ -23,7 +22,7 @@
.previous
.endm
#else
-.macro __EMIT_BUG_ENTRY addr,file,line,flags
+.macro EMIT_BUG_ENTRY addr,file,line,flags
.section __bug_table,"aw"
5001: .4byte \addr - .
.short \flags
@@ -32,18 +31,6 @@
.endm
#endif /* verbose */
-.macro EMIT_WARN_ENTRY addr,file,line,flags
- EX_TABLE(\addr,\addr+4)
- __EMIT_BUG_ENTRY \addr,\file,\line,\flags
-.endm
-
-.macro EMIT_BUG_ENTRY addr,file,line,flags
- .if \flags & 1 /* BUGFLAG_WARNING */
- .err /* Use EMIT_WARN_ENTRY for warnings */
- .endif
- __EMIT_BUG_ENTRY \addr,\file,\line,\flags
-.endm
-
#else /* !__ASSEMBLY__ */
/* _EMIT_BUG_ENTRY expects args %0,%1,%2,%3 to be FILE, LINE, flags and
sizeof(struct bug_entry), respectively */
@@ -73,16 +60,6 @@
"i" (sizeof(struct bug_entry)), \
##__VA_ARGS__)
-#define WARN_ENTRY(insn, flags, label, ...) \
- asm_volatile_goto( \
- "1: " insn "\n" \
- EX_TABLE(1b, %l[label]) \
- _EMIT_BUG_ENTRY \
- : : "i" (__FILE__), "i" (__LINE__), \
- "i" (flags), \
- "i" (sizeof(struct bug_entry)), \
- ##__VA_ARGS__ : : label)
-
/*
* BUG_ON() and WARN_ON() do their best to cooperate with compile-time
* optimisations. However depending on the complexity of the condition
@@ -95,16 +72,7 @@
} while (0)
#define HAVE_ARCH_BUG
-#define __WARN_FLAGS(flags) do { \
- __label__ __label_warn_on; \
- \
- WARN_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags), __label_warn_on); \
- barrier_before_unreachable(); \
- __builtin_unreachable(); \
- \
-__label_warn_on: \
- break; \
-} while (0)
+#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags))
#ifdef CONFIG_PPC64
#define BUG_ON(x) do { \
@@ -117,25 +85,15 @@ __label_warn_on: \
} while (0)
#define WARN_ON(x) ({ \
- bool __ret_warn_on = false; \
- do { \
- if (__builtin_constant_p((x))) { \
- if (!(x)) \
- break; \
+ int __ret_warn_on = !!(x); \
+ if (__builtin_constant_p(__ret_warn_on)) { \
+ if (__ret_warn_on) \
__WARN(); \
- __ret_warn_on = true; \
- } else { \
- __label__ __label_warn_on; \
- \
- WARN_ENTRY(PPC_TLNEI " %4, 0", \
- BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN), \
- __label_warn_on, \
- "r" ((__force long)(x))); \
- break; \
-__label_warn_on: \
- __ret_warn_on = true; \
- } \
- } while (0); \
+ } else { \
+ BUG_ENTRY(PPC_TLNEI " %4, 0", \
+ BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN), \
+ "r" (__ret_warn_on)); \
+ } \
unlikely(__ret_warn_on); \
})
@@ -148,11 +106,8 @@ __label_warn_on: \
#ifdef __ASSEMBLY__
.macro EMIT_BUG_ENTRY addr,file,line,flags
.endm
-.macro EMIT_WARN_ENTRY addr,file,line,flags
-.endm
#else /* !__ASSEMBLY__ */
#define _EMIT_BUG_ENTRY
-#define _EMIT_WARN_ENTRY
#endif
#endif /* CONFIG_BUG */
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index daf8f87d2372..fd11ec42df89 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -237,7 +237,7 @@ _GLOBAL(copy_page)
addi r3,r3,-4
0: twnei r5, 0 /* WARN if r3 is not cache aligned */
- EMIT_WARN_ENTRY 0b,__FILE__,__LINE__, BUGFLAG_WARNING
+ EMIT_BUG_ENTRY 0b,__FILE__,__LINE__, BUGFLAG_WARNING
addi r4,r4,-4
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index e59ec6d32d37..7ef147e2a20d 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1508,13 +1508,8 @@ static void do_program_check(struct pt_regs *regs)
if (!(regs->msr & MSR_PR) && /* not user-mode */
report_bug(bugaddr, regs) == BUG_TRAP_TYPE_WARN) {
- const struct exception_table_entry *entry;
-
- entry = search_exception_tables(bugaddr);
- if (entry) {
- regs_set_return_ip(regs, extable_fixup(entry) + regs->nip - bugaddr);
- return;
- }
+ regs_add_return_ip(regs, 4);
+ return;
}
if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE) && user_mode(regs)) {
--
2.40.1
* [PATCH v2 07/14] objtool: Allow an architecture to disable objtool on ASM files
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (5 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 06/14] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto" Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 08/14] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc Christophe Leroy
` (6 subsequent siblings)
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
Supporting objtool on ASM files requires quite an effort.
Features like UACCESS validation don't require validation of ASM files.
In order to allow architectures to enable objtool validation
without spending unnecessary effort on cleaning up ASM files,
provide an option to disable objtool validation on ASM files.
Suggested-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/Kconfig | 5 +++++
scripts/Makefile.build | 4 ++++
2 files changed, 9 insertions(+)
diff --git a/arch/Kconfig b/arch/Kconfig
index 205fd23e0cad..dd66b884bcfb 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1091,6 +1091,11 @@ config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
config HAVE_OBJTOOL
bool
+config ARCH_OBJTOOL_SKIP_ASM
+ bool
+ help
+ Architecture doesn't support objtool on ASM files
+
config HAVE_JUMP_LABEL_HACK
bool
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 9f94fc83f086..878027cf4faf 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -359,7 +359,11 @@ $(obj)/%.s: $(src)/%.S FORCE
$(call if_changed_dep,cpp_s_S)
quiet_cmd_as_o_S = AS $(quiet_modtag) $@
+ifndef CONFIG_ARCH_OBJTOOL_SKIP_ASM
cmd_as_o_S = $(CC) $(a_flags) -c -o $@ $< $(cmd_objtool)
+else
+ cmd_as_o_S = $(CC) $(a_flags) -c -o $@ $<
+endif
ifdef CONFIG_ASM_MODVERSIONS
--
2.40.1
* [PATCH v2 08/14] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (6 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 07/14] objtool: Allow an architecture to disable objtool on ASM files Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 11:44 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 09/14] objtool: Add INSN_RETURN_CONDITIONAL Christophe Leroy
` (5 subsequent siblings)
13 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
The jump_entry structure is defined as:
struct jump_entry {
s32 code;
s32 target;
long key;
};
It means that the size of the third member depends on
whether we are building a 32-bit or a 64-bit kernel.
Therefore JUMP_ENTRY_SIZE must depend on elf_class_addrsize(elf).
To allow that, entries[] table must be initialised at runtime. This is
easily done by moving it into its only user which is special_get_alts().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
.../arch/powerpc/include/arch/special.h | 2 +-
tools/objtool/special.c | 55 +++++++++----------
2 files changed, 28 insertions(+), 29 deletions(-)
diff --git a/tools/objtool/arch/powerpc/include/arch/special.h b/tools/objtool/arch/powerpc/include/arch/special.h
index ffef9ada7133..ede05633c2e4 100644
--- a/tools/objtool/arch/powerpc/include/arch/special.h
+++ b/tools/objtool/arch/powerpc/include/arch/special.h
@@ -6,7 +6,7 @@
#define EX_ORIG_OFFSET 0
#define EX_NEW_OFFSET 4
-#define JUMP_ENTRY_SIZE 16
+#define JUMP_ENTRY_SIZE (8 + elf_class_addrsize(elf)) /* 12 on PPC32, 16 on PPC64 */
#define JUMP_ORIG_OFFSET 0
#define JUMP_NEW_OFFSET 4
#define JUMP_KEY_OFFSET 8
diff --git a/tools/objtool/special.c b/tools/objtool/special.c
index baa85c31526b..4015c1cd0fe1 100644
--- a/tools/objtool/special.c
+++ b/tools/objtool/special.c
@@ -26,34 +26,6 @@ struct special_entry {
unsigned char key; /* jump_label key */
};
-static const struct special_entry entries[] = {
- {
- .sec = ".altinstructions",
- .group = true,
- .size = ALT_ENTRY_SIZE,
- .orig = ALT_ORIG_OFFSET,
- .orig_len = ALT_ORIG_LEN_OFFSET,
- .new = ALT_NEW_OFFSET,
- .new_len = ALT_NEW_LEN_OFFSET,
- .feature = ALT_FEATURE_OFFSET,
- },
- {
- .sec = "__jump_table",
- .jump_or_nop = true,
- .size = JUMP_ENTRY_SIZE,
- .orig = JUMP_ORIG_OFFSET,
- .new = JUMP_NEW_OFFSET,
- .key = JUMP_KEY_OFFSET,
- },
- {
- .sec = "__ex_table",
- .size = EX_ENTRY_SIZE,
- .orig = EX_ORIG_OFFSET,
- .new = EX_NEW_OFFSET,
- },
- {},
-};
-
void __weak arch_handle_alternative(unsigned short feature, struct special_alt *alt)
{
}
@@ -144,6 +116,33 @@ int special_get_alts(struct elf *elf, struct list_head *alts)
unsigned int nr_entries;
struct special_alt *alt;
int idx, ret;
+ const struct special_entry entries[] = {
+ {
+ .sec = ".altinstructions",
+ .group = true,
+ .size = ALT_ENTRY_SIZE,
+ .orig = ALT_ORIG_OFFSET,
+ .orig_len = ALT_ORIG_LEN_OFFSET,
+ .new = ALT_NEW_OFFSET,
+ .new_len = ALT_NEW_LEN_OFFSET,
+ .feature = ALT_FEATURE_OFFSET,
+ },
+ {
+ .sec = "__jump_table",
+ .jump_or_nop = true,
+ .size = JUMP_ENTRY_SIZE,
+ .orig = JUMP_ORIG_OFFSET,
+ .new = JUMP_NEW_OFFSET,
+ .key = JUMP_KEY_OFFSET,
+ },
+ {
+ .sec = "__ex_table",
+ .size = EX_ENTRY_SIZE,
+ .orig = EX_ORIG_OFFSET,
+ .new = EX_NEW_OFFSET,
+ },
+ {},
+ };
INIT_LIST_HEAD(alts);
--
2.40.1
* [PATCH v2 09/14] objtool: Add INSN_RETURN_CONDITIONAL
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (7 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 08/14] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 11:45 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 10/14] objtool: Add support for relative switch tables Christophe Leroy
` (4 subsequent siblings)
13 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
Most functions have an unconditional return at the end, like
this one:
00000000 <is_exec_fault>:
0: 81 22 04 d0 lwz r9,1232(r2)
4: 38 60 00 00 li r3,0
8: 2c 09 00 00 cmpwi r9,0
c: 4d 82 00 20 beqlr <== Conditional return
10: 80 69 00 a0 lwz r3,160(r9)
14: 54 63 00 36 clrrwi r3,r3,4
18: 68 63 04 00 xori r3,r3,1024
1c: 7c 63 00 34 cntlzw r3,r3
20: 54 63 d9 7e srwi r3,r3,5
24: 4e 80 00 20 blr <== Unconditional return
But other functions, like the one below, only have
conditional returns:
00000028 <pte_update.isra.0>:
28: 81 25 00 00 lwz r9,0(r5)
2c: 2c 08 00 00 cmpwi r8,0
30: 7d 29 30 78 andc r9,r9,r6
34: 7d 27 3b 78 or r7,r9,r7
38: 54 84 65 3a rlwinm r4,r4,12,20,29
3c: 81 23 00 18 lwz r9,24(r3)
40: 41 82 00 58 beq 98 <pte_update.isra.0+0x70>
44: 7d 29 20 2e lwzx r9,r9,r4
48: 55 29 07 3a rlwinm r9,r9,0,28,29
4c: 2c 09 00 0c cmpwi r9,12
50: 41 82 00 08 beq 58 <pte_update.isra.0+0x30>
54: 39 00 00 80 li r8,128
58: 2c 08 00 01 cmpwi r8,1
5c: 90 e5 00 00 stw r7,0(r5)
60: 4d a2 00 20 beqlr+ <== Conditional return
64: 7c e9 3b 78 mr r9,r7
68: 39 40 00 00 li r10,0
6c: 39 4a 00 04 addi r10,r10,4
70: 7c 0a 40 00 cmpw r10,r8
74: 91 25 00 04 stw r9,4(r5)
78: 91 25 00 08 stw r9,8(r5)
7c: 38 a5 00 10 addi r5,r5,16
80: 91 25 ff fc stw r9,-4(r5)
84: 4c 80 00 20 bgelr <== Conditional return
88: 55 49 60 26 slwi r9,r10,12
8c: 7d 29 3a 14 add r9,r9,r7
90: 91 25 00 00 stw r9,0(r5)
94: 4b ff ff d8 b 6c <pte_update.isra.0+0x44>
98: 39 00 00 04 li r8,4
9c: 4b ff ff bc b 58 <pte_update.isra.0+0x30>
If conditional returns are decoded as INSN_OTHER, objtool considers
that the second function never returns.
If conditional returns are decoded as INSN_RETURN, objtool considers
that code after that conditional return is dead.
To overcome this situation, introduce INSN_RETURN_CONDITIONAL, which
confirms that a function can return while still treating the code
that follows it as reachable.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
tools/objtool/check.c | 2 +-
tools/objtool/include/objtool/arch.h | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 0fcf99c91400..8977cdf93f54 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -259,7 +259,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
func_for_each_insn(file, func, insn) {
empty = false;
- if (insn->type == INSN_RETURN)
+ if (insn->type == INSN_RETURN || insn->type == INSN_RETURN_CONDITIONAL)
return false;
}
diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
index 2b6d2ce4f9a5..84ba75112934 100644
--- a/tools/objtool/include/objtool/arch.h
+++ b/tools/objtool/include/objtool/arch.h
@@ -19,6 +19,7 @@ enum insn_type {
INSN_CALL,
INSN_CALL_DYNAMIC,
INSN_RETURN,
+ INSN_RETURN_CONDITIONAL,
INSN_CONTEXT_SWITCH,
INSN_BUG,
INSN_NOP,
--
2.40.1
* [PATCH v2 10/14] objtool: Add support for relative switch tables
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (8 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 09/14] objtool: Add INSN_RETURN_CONDITIONAL Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 11:48 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 11/14] objtool: Remove too strict constraint in jump table search Christophe Leroy
` (3 subsequent siblings)
13 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
On powerpc, switch tables are relative, that means the address of the
table is added to the value of the entry in order to get the target
address: (r10 is the table address, r4 the index in the table)
lis r10,0 <== Load r10 with upper part of .rodata address
R_PPC_ADDR16_HA .rodata
addi r10,r10,0 <== Add lower part of .rodata address
R_PPC_ADDR16_LO .rodata
lwzx r8,r10,r4 <== Read table entry at r10 + r4 into r8
add r10,r8,r10 <== Add table address to read value
mtctr r10 <== Save calculated address in CTR
bctr <== Branch to address in CTR
But c_jump_tables are different: they contain the
target address directly:
lis r28,0 <== Load r28 with upper .rodata..c_jump_table
R_PPC_ADDR16_HA .rodata..c_jump_table
addi r28,r28,0 <== Add lower part of .rodata..c_jump_table
R_PPC_ADDR16_LO .rodata..c_jump_table
lwzx r10,r28,r10 <== Read table entry at r10 + r28 into r10
mtctr r10 <== Save read value in CTR
bctr <== Branch to address in CTR
Add support to objtool for relative tables, with a per-table flag
telling whether a given table uses relative or absolute addressing.
Also use the correct size for 'long' instead of hard-coding a size of 8.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
tools/objtool/arch/powerpc/special.c | 2 +-
tools/objtool/arch/x86/special.c | 3 ++-
tools/objtool/check.c | 21 ++++++++++++++-------
tools/objtool/include/objtool/elf.h | 1 +
tools/objtool/include/objtool/special.h | 2 +-
5 files changed, 19 insertions(+), 10 deletions(-)
diff --git a/tools/objtool/arch/powerpc/special.c b/tools/objtool/arch/powerpc/special.c
index d33868147196..979b555b16ea 100644
--- a/tools/objtool/arch/powerpc/special.c
+++ b/tools/objtool/arch/powerpc/special.c
@@ -13,7 +13,7 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
}
struct reloc *arch_find_switch_table(struct objtool_file *file,
- struct instruction *insn)
+ struct instruction *insn, bool *is_rel)
{
exit(-1);
}
diff --git a/tools/objtool/arch/x86/special.c b/tools/objtool/arch/x86/special.c
index 7c97b7391279..e7e1775f313a 100644
--- a/tools/objtool/arch/x86/special.c
+++ b/tools/objtool/arch/x86/special.c
@@ -92,7 +92,7 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
* NOTE: RETPOLINE made it harder still to decode dynamic jumps.
*/
struct reloc *arch_find_switch_table(struct objtool_file *file,
- struct instruction *insn)
+ struct instruction *insn, bool *is_rel)
{
struct reloc *text_reloc, *rodata_reloc;
struct section *table_sec;
@@ -141,5 +141,6 @@ struct reloc *arch_find_switch_table(struct objtool_file *file,
if (text_reloc->type == R_X86_64_PC32)
file->ignore_unreachables = true;
+ *is_rel = false;
return rodata_reloc;
}
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 8977cdf93f54..b810be087d7c 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -2053,7 +2053,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
struct instruction *dest_insn;
struct alternative *alt;
struct symbol *pfunc = insn_func(insn)->pfunc;
- unsigned int prev_offset = 0;
+ unsigned int offset, prev_offset = 0;
/*
* Each @reloc is a switch table relocation which points to the target
@@ -2066,7 +2066,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
break;
/* Make sure the table entries are consecutive: */
- if (prev_offset && reloc->offset != prev_offset + 8)
+ if (prev_offset && reloc->offset != prev_offset + elf_class_addrsize(file->elf))
break;
/* Detect function pointers from contiguous objects: */
@@ -2074,7 +2074,12 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
reloc->addend == pfunc->offset)
break;
- dest_insn = find_insn(file, reloc->sym->sec, reloc->addend);
+ if (table->jump_table_is_rel)
+ offset = reloc->addend + table->offset - reloc->offset;
+ else
+ offset = reloc->addend;
+
+ dest_insn = find_insn(file, reloc->sym->sec, offset);
if (!dest_insn)
break;
@@ -2107,8 +2112,8 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
* associated with it.
*/
static struct reloc *find_jump_table(struct objtool_file *file,
- struct symbol *func,
- struct instruction *insn)
+ struct symbol *func,
+ struct instruction *insn, bool *is_rel)
{
struct reloc *table_reloc;
struct instruction *dest_insn, *orig_insn = insn;
@@ -2132,7 +2137,7 @@ static struct reloc *find_jump_table(struct objtool_file *file,
insn->jump_dest->offset > orig_insn->offset))
break;
- table_reloc = arch_find_switch_table(file, insn);
+ table_reloc = arch_find_switch_table(file, insn, is_rel);
if (!table_reloc)
continue;
dest_insn = find_insn(file, table_reloc->sym->sec, table_reloc->addend);
@@ -2154,6 +2159,7 @@ static void mark_func_jump_tables(struct objtool_file *file,
{
struct instruction *insn, *last = NULL;
struct reloc *reloc;
+ bool is_rel;
func_for_each_insn(file, func, insn) {
if (!last)
@@ -2176,9 +2182,10 @@ static void mark_func_jump_tables(struct objtool_file *file,
if (insn->type != INSN_JUMP_DYNAMIC)
continue;
- reloc = find_jump_table(file, func, insn);
+ reloc = find_jump_table(file, func, insn, &is_rel);
if (reloc) {
reloc->jump_table_start = true;
+ reloc->jump_table_is_rel = is_rel;
insn->_jump_table = reloc;
}
}
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index e1ca588eb69d..64aac87a4825 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -80,6 +80,7 @@ struct reloc {
s64 addend;
int idx;
bool jump_table_start;
+ bool jump_table_is_rel;
};
#define ELF_HASH_BITS 20
diff --git a/tools/objtool/include/objtool/special.h b/tools/objtool/include/objtool/special.h
index 86d4af9c5aa9..5edfc4c3e582 100644
--- a/tools/objtool/include/objtool/special.h
+++ b/tools/objtool/include/objtool/special.h
@@ -38,5 +38,5 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
struct instruction *insn,
struct reloc *reloc);
struct reloc *arch_find_switch_table(struct objtool_file *file,
- struct instruction *insn);
+ struct instruction *insn, bool *is_rel);
#endif /* _SPECIAL_H */
--
2.40.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v2 11/14] objtool: Remove too strict constraint in jump table search
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (9 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 10/14] objtool: Add support for relative switch tables Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 11:48 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 12/14] objtool: Add support for more complex UACCESS control Christophe Leroy
` (2 subsequent siblings)
13 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
Generated code often contains a pattern like:
load jump table address
do some test
conditional jump to label1:
do something
unconditional jump to label2:
label1:
do something else
read jump table
dynamic jump
label2:
do other job here ....
find_jump_table() contains a constraint that stops the backward
search for the table-address load as soon as a jump is found in
between. Remove that constraint.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
tools/objtool/check.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index b810be087d7c..1911de0e1008 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -2130,13 +2130,6 @@ static struct reloc *find_jump_table(struct objtool_file *file,
if (insn != orig_insn && insn->type == INSN_JUMP_DYNAMIC)
break;
- /* allow small jumps within the range */
- if (insn->type == INSN_JUMP_UNCONDITIONAL &&
- insn->jump_dest &&
- (insn->jump_dest->offset <= insn->offset ||
- insn->jump_dest->offset > orig_insn->offset))
- break;
-
table_reloc = arch_find_switch_table(file, insn, is_rel);
if (!table_reloc)
continue;
--
2.40.1
* [PATCH v2 12/14] objtool: Add support for more complex UACCESS control
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (10 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 11/14] objtool: Remove too strict constraint in jump table search Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 11:49 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 13/14] powerpc/bug: Annotate reachable after warning trap Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
13 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
On x86, UACCESS is controlled by two dedicated instructions: STAC
enables UACCESS and CLAC disables it, which makes it simple for
objtool to locate the enable and disable points.
On powerpc it is more complex: the same instruction is used both to
enable and to disable UACCESS, and that instruction is also used for
many other purposes, so relying exclusively on instruction decoding
would be too complex.
To help objtool, record such instructions in .discard.uaccess_begin
and .discard.uaccess_end sections, on the same principle as for
reachable/unreachable instructions, and add ASM_UACCESS_BEGIN and
ASM_UACCESS_END macros for annotating UACCESS enable and disable
instructions in inline assembly code.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
include/linux/objtool.h | 14 +++++++++++++
tools/objtool/check.c | 44 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 58 insertions(+)
diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index 03f82c2c2ebf..d8fde4158a40 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -57,6 +57,18 @@
".long 998b - .\n\t" \
".popsection\n\t"
+#define ASM_UACCESS_BEGIN \
+ "998:\n\t" \
+ ".pushsection .discard.uaccess_begin\n\t" \
+ ".long 998b - .\n\t" \
+ ".popsection\n\t"
+
+#define ASM_UACCESS_END \
+ "998:\n\t" \
+ ".pushsection .discard.uaccess_end\n\t" \
+ ".long 998b - .\n\t" \
+ ".popsection\n\t"
+
#else /* __ASSEMBLY__ */
/*
@@ -156,6 +168,8 @@
#define STACK_FRAME_NON_STANDARD_FP(func)
#define ANNOTATE_NOENDBR
#define ASM_REACHABLE
+#define ASM_UACCESS_BEGIN
+#define ASM_UACCESS_END
#else
#define ANNOTATE_INTRA_FUNCTION_CALL
.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 signal=0
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 1911de0e1008..f850ab892ad5 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1110,6 +1110,49 @@ static void add_ignores(struct objtool_file *file)
}
}
+static void add_uaccess(struct objtool_file *file)
+{
+ struct section *sec;
+ struct reloc *reloc;
+ struct instruction *insn;
+
+ sec = find_section_by_name(file->elf, ".rela.discard.uaccess_begin");
+ if (!sec)
+ return;
+
+ list_for_each_entry(reloc, &sec->reloc_list, list) {
+ if (reloc->sym->type != STT_SECTION) {
+ WARN("unexpected relocation symbol type in %s", sec->name);
+ continue;
+ }
+ insn = find_insn(file, reloc->sym->sec, reloc->addend);
+ if (!insn) {
+ WARN("can't find UACCESS enable insn at %s+0x%" PRIx64,
+ reloc->sym->sec->name, reloc->addend);
+ continue;
+ }
+ insn->type = INSN_STAC;
+ }
+
+ sec = find_section_by_name(file->elf, ".rela.discard.uaccess_end");
+ if (!sec)
+ return;
+
+ list_for_each_entry(reloc, &sec->reloc_list, list) {
+ if (reloc->sym->type != STT_SECTION) {
+ WARN("unexpected relocation symbol type in %s", sec->name);
+ continue;
+ }
+ insn = find_insn(file, reloc->sym->sec, reloc->addend);
+ if (!insn) {
+ WARN("can't find UACCESS disable insn at %s+0x%" PRIx64,
+ reloc->sym->sec->name, reloc->addend);
+ continue;
+ }
+ insn->type = INSN_CLAC;
+ }
+}
+
/*
* This is a whitelist of functions that is allowed to be called with AC set.
* The list is meant to be minimal and only contains compiler instrumentation
@@ -2608,6 +2651,7 @@ static int decode_sections(struct objtool_file *file)
return ret;
add_ignores(file);
+ add_uaccess(file);
add_uaccess_safe(file);
ret = add_ignore_alternatives(file);
--
2.40.1
* [PATCH v2 13/14] powerpc/bug: Annotate reachable after warning trap
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (11 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 12/14] objtool: Add support for more complex UACCESS control Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
13 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
This commit is copied from commit bfb1a7c91fb7 ("x86/bug: Merge
annotate_reachable() into _BUG_FLAGS() asm")
'twi 31,0,0' is a BUG instruction, which is a dead end by default.
But the same instruction is used for WARNINGs, where execution
resumes with the following instruction. Mark it reachable so that
objtool knows it is not a dead end in that case.
Also replace the unreachable() annotation with __builtin_unreachable(),
since objtool already knows that a BUG instruction is a dead end.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/include/asm/bug.h | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index a565995fb742..5550ebffb146 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -4,6 +4,7 @@
#ifdef __KERNEL__
#include <asm/asm-compat.h>
+#include <linux/objtool.h>
#ifdef CONFIG_BUG
@@ -51,10 +52,11 @@
".previous\n"
#endif
-#define BUG_ENTRY(insn, flags, ...) \
+#define BUG_ENTRY(insn, flags, extra, ...) \
__asm__ __volatile__( \
"1: " insn "\n" \
_EMIT_BUG_ENTRY \
+ extra \
: : "i" (__FILE__), "i" (__LINE__), \
"i" (flags), \
"i" (sizeof(struct bug_entry)), \
@@ -67,12 +69,12 @@
*/
#define BUG() do { \
- BUG_ENTRY("twi 31, 0, 0", 0); \
- unreachable(); \
+ BUG_ENTRY("twi 31, 0, 0", 0, ""); \
+ __builtin_unreachable(); \
} while (0)
#define HAVE_ARCH_BUG
-#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags))
+#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags), ASM_REACHABLE)
#ifdef CONFIG_PPC64
#define BUG_ON(x) do { \
@@ -80,7 +82,7 @@
if (x) \
BUG(); \
} else { \
- BUG_ENTRY(PPC_TLNEI " %4, 0", 0, "r" ((__force long)(x))); \
+ BUG_ENTRY(PPC_TLNEI " %4, 0", 0, "", "r" ((__force long)(x))); \
} \
} while (0)
@@ -92,7 +94,7 @@
} else { \
BUG_ENTRY(PPC_TLNEI " %4, 0", \
BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN), \
- "r" (__ret_warn_on)); \
+ "", "r" (__ret_warn_on)); \
} \
unlikely(__ret_warn_on); \
})
--
2.40.1
* [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
` (12 preceding siblings ...)
2023-06-22 10:54 ` [PATCH v2 13/14] powerpc/bug: Annotate reachable after warning trap Christophe Leroy
@ 2023-06-22 10:54 ` Christophe Leroy
2023-06-22 11:56 ` Peter Zijlstra
` (2 more replies)
13 siblings, 3 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-22 10:54 UTC (permalink / raw)
To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
Sathvika Vasireddy, Naveen N Rao
Cc: Christophe Leroy, linux-kernel, linuxppc-dev
In order to implement UACCESS validation, objtool support for
powerpc needs to be enhanced to decode more instructions, and
switch table finding needs to be implemented.
On PPC32 switch tables are anonymous in .rodata, as on x86; the
difference is that each entry holds a value relative to the table's
address rather than an absolute address.
Then come the UACCESS enabling/disabling instructions. On booke and
8xx this is done with an mtspr instruction: to SPRN_MD_AP on the 8xx
and to SPRN_PID on booke. Annotate those instructions.
book3s/32 is a bit more complex and is left aside for the moment.
ASM files are not used for UACCESS, so no work has been done for
them; for the moment just tell objtool to ignore ASM files.
For relocatable code, the .got2 relocation preceding each global
function needs to be marked as ignored, because some versions of GCC
do this:
120: R_PPC_REL32 .got2+0x7ff0
00000124 <tohex>:
124: 94 21 ff f0 stwu r1,-16(r1)
128: 7c 08 02 a6 mflr r0
12c: 42 9f 00 05 bcl 20,4*cr7+so,130 <tohex+0xc>
130: 39 00 00 00 li r8,0
134: 39 20 00 08 li r9,8
138: 93 c1 00 08 stw r30,8(r1)
13c: 7f c8 02 a6 mflr r30
140: 90 01 00 14 stw r0,20(r1)
144: 80 1e ff f0 lwz r0,-16(r30)
148: 7f c0 f2 14 add r30,r0,r30
14c: 81 5e 80 00 lwz r10,-32768(r30)
150: 80 fe 80 04 lwz r7,-32764(r30)
While other versions do this:
00000120 <tohex>:
120: 94 21 ff f0 stwu r1,-16(r1)
124: 7c 08 02 a6 mflr r0
128: 42 9f 00 05 bcl 20,4*cr7+so,12c <tohex+0xc>
12c: 39 00 00 00 li r8,0
130: 39 20 00 08 li r9,8
134: 93 c1 00 08 stw r30,8(r1)
138: 7f c8 02 a6 mflr r30
13c: 3f de 00 00 addis r30,r30,0
13e: R_PPC_REL16_HA .got2+0x8012
140: 90 01 00 14 stw r0,20(r1)
144: 3b de 00 00 addi r30,r30,0
146: R_PPC_REL16_LO .got2+0x801a
148: 81 5e 80 00 lwz r10,-32768(r30)
14c: 80 fe 80 04 lwz r7,-32764(r30)
Also declare longjmp(), start_secondary_resume() and
unrecoverable_exception() as global noreturn functions, and declare
__copy_tofrom_user() and __arch_clear_user() as UACCESS safe.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/Kconfig | 2 +
arch/powerpc/include/asm/kup.h | 12 +++
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 8 +-
arch/powerpc/include/asm/nohash/kup-booke.h | 8 +-
arch/powerpc/kexec/core_32.c | 4 +-
tools/objtool/arch/powerpc/decode.c | 82 ++++++++++++++++++--
tools/objtool/arch/powerpc/special.c | 42 +++++++++-
tools/objtool/check.c | 5 ++
8 files changed, 148 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8b955bc7b59f..8b613e520143 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -160,6 +160,7 @@ config PPC
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
+ select ARCH_OBJTOOL_SKIP_ASM
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_SPLIT_ARG64 if PPC32
@@ -258,6 +259,7 @@ config PPC
select HAVE_OPTPROBES
select HAVE_OBJTOOL if PPC32 || MPROFILE_KERNEL
select HAVE_OBJTOOL_MCOUNT if HAVE_OBJTOOL
+ select HAVE_UACCESS_VALIDATION if HAVE_OBJTOOL && PPC_KUAP && PPC32
select HAVE_PERF_EVENTS
select HAVE_PERF_EVENTS_NMI if PPC64
select HAVE_PERF_REGS
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 132f1c7e1064..9f824cb93d8a 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -10,6 +10,8 @@
#include <linux/types.h>
static __always_inline bool kuap_is_disabled(void);
+static __always_inline void mtspr_uaccess_begin(int rn, unsigned long val);
+static __always_inline void mtspr_uaccess_end(int rn, unsigned long val);
#endif
#ifdef CONFIG_PPC_BOOK3S_64
@@ -222,6 +224,16 @@ static __always_inline void prevent_current_write_to_user(void)
prevent_user_access(KUAP_WRITE);
}
+static __always_inline void mtspr_uaccess_begin(int rn, unsigned long val)
+{
+ asm(ASM_UACCESS_BEGIN "mtspr %0, %1\n\t" : : "i"(rn), "r"(val) : "memory");
+}
+
+static __always_inline void mtspr_uaccess_end(int rn, unsigned long val)
+{
+ asm(ASM_UACCESS_END "mtspr %0, %1\n\t" : : "i"(rn), "r"(val) : "memory");
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_POWERPC_KUAP_H_ */
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 61067e4c8f22..9b59231d87c9 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -42,12 +42,12 @@ static __always_inline unsigned long __kuap_get_and_assert_locked(void)
static __always_inline void __allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
- mtspr(SPRN_MD_AP, MD_APG_INIT);
+ mtspr_uaccess_begin(SPRN_MD_AP, MD_APG_INIT);
}
static __always_inline void __prevent_user_access(unsigned long dir)
{
- mtspr(SPRN_MD_AP, MD_APG_KUAP);
+ mtspr_uaccess_end(SPRN_MD_AP, MD_APG_KUAP);
}
static __always_inline unsigned long __prevent_user_access_return(void)
@@ -56,14 +56,14 @@ static __always_inline unsigned long __prevent_user_access_return(void)
flags = mfspr(SPRN_MD_AP);
- mtspr(SPRN_MD_AP, MD_APG_KUAP);
+ mtspr_uaccess_end(SPRN_MD_AP, MD_APG_KUAP);
return flags;
}
static __always_inline void __restore_user_access(unsigned long flags)
{
- mtspr(SPRN_MD_AP, flags);
+ mtspr_uaccess_begin(SPRN_MD_AP, flags);
}
static __always_inline bool
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 416f3e0897d5..2967501c434e 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -64,13 +64,13 @@ static __always_inline unsigned long __kuap_get_and_assert_locked(void)
static __always_inline void __allow_user_access(void __user *to, const void __user *from,
unsigned long size, unsigned long dir)
{
- mtspr(SPRN_PID, current->thread.pid);
+ mtspr_uaccess_begin(SPRN_PID, current->thread.pid);
isync();
}
static __always_inline void __prevent_user_access(unsigned long dir)
{
- mtspr(SPRN_PID, 0);
+ mtspr_uaccess_end(SPRN_PID, 0);
isync();
}
@@ -78,7 +78,7 @@ static __always_inline unsigned long __prevent_user_access_return(void)
{
unsigned long flags = mfspr(SPRN_PID);
- mtspr(SPRN_PID, 0);
+ mtspr_uaccess_end(SPRN_PID, 0);
isync();
return flags;
@@ -87,7 +87,7 @@ static __always_inline unsigned long __prevent_user_access_return(void)
static __always_inline void __restore_user_access(unsigned long flags)
{
if (flags) {
- mtspr(SPRN_PID, current->thread.pid);
+ mtspr_uaccess_begin(SPRN_PID, current->thread.pid);
isync();
}
}
diff --git a/arch/powerpc/kexec/core_32.c b/arch/powerpc/kexec/core_32.c
index c95f96850c9e..6e955f32e7c3 100644
--- a/arch/powerpc/kexec/core_32.c
+++ b/arch/powerpc/kexec/core_32.c
@@ -17,7 +17,7 @@
typedef void (*relocate_new_kernel_t)(
unsigned long indirection_page,
unsigned long reboot_code_buffer,
- unsigned long start_address) __noreturn;
+ unsigned long start_address);
/*
* This is a generic machine_kexec function suitable at least for
@@ -61,6 +61,8 @@ void default_machine_kexec(struct kimage *image)
/* now call it */
rnk = (relocate_new_kernel_t) reboot_code_buffer;
(*rnk)(page_list, reboot_code_buffer_phys, image->start);
+
+ unreachable(); /* For objtool */
}
int machine_kexec_prepare(struct kimage *image)
diff --git a/tools/objtool/arch/powerpc/decode.c b/tools/objtool/arch/powerpc/decode.c
index 53b55690f320..2ed5241eb7be 100644
--- a/tools/objtool/arch/powerpc/decode.c
+++ b/tools/objtool/arch/powerpc/decode.c
@@ -43,24 +43,94 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
unsigned long offset, unsigned int maxlen,
struct instruction *insn)
{
- unsigned int opcode;
+ unsigned int opcode, xop;
+ unsigned int rs, ra, rb, bo, bi, to, uimm, simm, lk, aa;
enum insn_type typ;
unsigned long imm;
u32 ins;
+ if (file->elf->ehdr.e_flags & EF_PPC_RELOCATABLE_LIB) {
+ struct reloc *reloc;
+
+ reloc = find_reloc_by_dest_range(file->elf, insn->sec, insn->offset, 4);
+
+ if (reloc && reloc->type == R_PPC_REL32 &&
+ !strncmp(reloc->sym->sec->name, ".got2", 5)) {
+ insn->type = INSN_OTHER;
+ insn->ignore = true;
+ insn->len = 4;
+
+ return 0;
+ }
+ }
+
ins = bswap_if_needed(file->elf, *(u32 *)(sec->data->d_buf + offset));
opcode = ins >> 26;
- typ = INSN_OTHER;
- imm = 0;
+ xop = (ins >> 1) & 0x3ff;
+ rs = bo = to = (ins >> 21) & 0x1f;
+ ra = bi = (ins >> 16) & 0x1f;
+ rb = (ins >> 11) & 0x1f;
+ uimm = simm = (ins >> 0) & 0xffff;
+ aa = ins & 2;
+ lk = ins & 1;
switch (opcode) {
+ case 3:
+ if (to == 31 && ra == 0 && simm == 0) /* twi 31, r0, 0 */
+ typ = INSN_BUG;
+ else
+ typ = INSN_OTHER;
+ break;
+ case 16: /* bc[l][a] */
+ if (lk) /* bcl[a] */
+ typ = INSN_OTHER;
+ else /* bc[a] */
+ typ = INSN_JUMP_CONDITIONAL;
+
+ imm = ins & 0xfffc;
+ if (imm & 0x8000)
+ imm -= 0x10000;
+ insn->immediate = imm | aa;
+ break;
case 18: /* b[l][a] */
- if ((ins & 3) == 1) /* bl */
+ if (lk) /* bl[a] */
typ = INSN_CALL;
+ else /* b[a] */
+ typ = INSN_JUMP_UNCONDITIONAL;
imm = ins & 0x3fffffc;
if (imm & 0x2000000)
imm -= 0x4000000;
+ insn->immediate = imm | aa;
+ break;
+ case 19:
+ if (xop == 16 && bo == 20 && bi == 0) /* blr */
+ typ = INSN_RETURN;
+ else if (xop == 16) /* bclr */
+ typ = INSN_RETURN_CONDITIONAL;
+ else if (xop == 50) /* rfi */
+ typ = INSN_JUMP_DYNAMIC;
+ else if (xop == 528 && bo == 20 && bi == 0 && !lk) /* bctr */
+ typ = INSN_JUMP_DYNAMIC;
+ else if (xop == 528 && bo == 20 && bi == 0 && lk) /* bctrl */
+ typ = INSN_CALL_DYNAMIC;
+ else
+ typ = INSN_OTHER;
+ break;
+ case 24:
+ if (rs == 0 && ra == 0 && uimm == 0)
+ typ = INSN_NOP;
+ else
+ typ = INSN_OTHER;
+ break;
+ case 31:
+ if (xop == 4 && to == 31 && ra == 0 && rb == 0) /* trap */
+ typ = INSN_BUG;
+ else
+ typ = INSN_OTHER;
+ break;
+ default:
+ typ = INSN_OTHER;
break;
}
@@ -70,13 +140,15 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
insn->len = 4;
insn->type = typ;
- insn->immediate = imm;
return 0;
}
unsigned long arch_jump_destination(struct instruction *insn)
{
+ if (insn->immediate & 2)
+ return insn->immediate & ~2;
+
return insn->offset + insn->immediate;
}
diff --git a/tools/objtool/arch/powerpc/special.c b/tools/objtool/arch/powerpc/special.c
index 979b555b16ea..be37f4b455dc 100644
--- a/tools/objtool/arch/powerpc/special.c
+++ b/tools/objtool/arch/powerpc/special.c
@@ -15,5 +15,45 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
struct reloc *arch_find_switch_table(struct objtool_file *file,
struct instruction *insn, bool *is_rel)
{
- exit(-1);
+ struct reloc *text_reloc, *rodata_reloc;
+ struct section *table_sec;
+ unsigned long table_offset;
+
+ /* look for a relocation which references .rodata */
+ text_reloc = find_reloc_by_dest_range(file->elf, insn->sec,
+ insn->offset, insn->len);
+ if (!text_reloc || text_reloc->sym->type != STT_SECTION ||
+ !text_reloc->sym->sec->rodata)
+ return NULL;
+
+ table_offset = text_reloc->addend;
+ table_sec = text_reloc->sym->sec;
+
+ /*
+ * Make sure the .rodata address isn't associated with a
+ * symbol. GCC jump tables are anonymous data.
+ *
+ * Also support C jump tables which are in the same format as
+ * switch jump tables. For objtool to recognize them, they
+ * need to be placed in the C_JUMP_TABLE_SECTION section. They
+ * have symbols associated with them.
+ */
+ if (find_symbol_containing(table_sec, table_offset)) {
+ if (strcmp(table_sec->name, C_JUMP_TABLE_SECTION))
+ return NULL;
+ *is_rel = false;
+ } else {
+ *is_rel = true;
+ }
+
+ /*
+ * Each table entry has a rela associated with it. The rela
+ * should reference text in the same function as the original
+ * instruction.
+ */
+ rodata_reloc = find_reloc_by_dest(file->elf, table_sec, table_offset);
+ if (!rodata_reloc)
+ return NULL;
+
+ return rodata_reloc;
}
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index f850ab892ad5..8ac5711a055f 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -218,6 +218,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
"kthread_exit",
"kunit_try_catch_throw",
"lbug_with_loc",
+ "longjmp",
"machine_real_restart",
"make_task_dead",
"mpt_halt_firmware",
@@ -230,7 +231,9 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
"sev_es_terminate",
"snp_abort",
"start_kernel",
+ "start_secondary_resume",
"stop_this_cpu",
+ "unrecoverable_exception",
"usercopy_abort",
"x86_64_start_kernel",
"x86_64_start_reservations",
@@ -1335,6 +1338,8 @@ static const char *uaccess_safe_builtin[] = {
"rep_stos_alternative",
"rep_movs_alternative",
"__copy_user_nocache",
+ "__copy_tofrom_user",
+ "__arch_clear_user",
NULL
};
--
2.40.1
* Re: [PATCH v2 08/14] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc
2023-06-22 10:54 ` [PATCH v2 08/14] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc Christophe Leroy
@ 2023-06-22 11:44 ` Peter Zijlstra
0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2023-06-22 11:44 UTC (permalink / raw)
To: Christophe Leroy
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel, linuxppc-dev
On Thu, Jun 22, 2023 at 12:54:30PM +0200, Christophe Leroy wrote:
> struct jump_entry {
> s32 code;
> s32 target;
> long key;
> };
>
> It means that the size of the third argument depends on
> whether we are building a 32 bits or 64 bits kernel.
>
> Therefore JUMP_ENTRY_SIZE must depend on elf_class_addrsize(elf).
>
> To allow that, entries[] table must be initialised at runtime. This is
> easily done by moving it into its only user which is special_get_alts().
>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
However, it doesn't apply to tip/objtool/core because Josh went and
renamed a bunch of things including elf_class_addrsize(). It's now
elf_addr_size().
> ---
> .../arch/powerpc/include/arch/special.h | 2 +-
> tools/objtool/special.c | 55 +++++++++----------
> 2 files changed, 28 insertions(+), 29 deletions(-)
>
> diff --git a/tools/objtool/arch/powerpc/include/arch/special.h b/tools/objtool/arch/powerpc/include/arch/special.h
> index ffef9ada7133..ede05633c2e4 100644
> --- a/tools/objtool/arch/powerpc/include/arch/special.h
> +++ b/tools/objtool/arch/powerpc/include/arch/special.h
> @@ -6,7 +6,7 @@
> #define EX_ORIG_OFFSET 0
> #define EX_NEW_OFFSET 4
>
> -#define JUMP_ENTRY_SIZE 16
> +#define JUMP_ENTRY_SIZE (8 + elf_class_addrsize(elf)) /* 12 on PPC32, 16 on PPC64 */
> #define JUMP_ORIG_OFFSET 0
> #define JUMP_NEW_OFFSET 4
> #define JUMP_KEY_OFFSET 8
> diff --git a/tools/objtool/special.c b/tools/objtool/special.c
> index baa85c31526b..4015c1cd0fe1 100644
> --- a/tools/objtool/special.c
> +++ b/tools/objtool/special.c
> @@ -26,34 +26,6 @@ struct special_entry {
> unsigned char key; /* jump_label key */
> };
>
> -static const struct special_entry entries[] = {
> - {
> - .sec = ".altinstructions",
> - .group = true,
> - .size = ALT_ENTRY_SIZE,
> - .orig = ALT_ORIG_OFFSET,
> - .orig_len = ALT_ORIG_LEN_OFFSET,
> - .new = ALT_NEW_OFFSET,
> - .new_len = ALT_NEW_LEN_OFFSET,
> - .feature = ALT_FEATURE_OFFSET,
> - },
> - {
> - .sec = "__jump_table",
> - .jump_or_nop = true,
> - .size = JUMP_ENTRY_SIZE,
> - .orig = JUMP_ORIG_OFFSET,
> - .new = JUMP_NEW_OFFSET,
> - .key = JUMP_KEY_OFFSET,
> - },
> - {
> - .sec = "__ex_table",
> - .size = EX_ENTRY_SIZE,
> - .orig = EX_ORIG_OFFSET,
> - .new = EX_NEW_OFFSET,
> - },
> - {},
> -};
> -
> void __weak arch_handle_alternative(unsigned short feature, struct special_alt *alt)
> {
> }
> @@ -144,6 +116,33 @@ int special_get_alts(struct elf *elf, struct list_head *alts)
> unsigned int nr_entries;
> struct special_alt *alt;
> int idx, ret;
> + const struct special_entry entries[] = {
> + {
> + .sec = ".altinstructions",
> + .group = true,
> + .size = ALT_ENTRY_SIZE,
> + .orig = ALT_ORIG_OFFSET,
> + .orig_len = ALT_ORIG_LEN_OFFSET,
> + .new = ALT_NEW_OFFSET,
> + .new_len = ALT_NEW_LEN_OFFSET,
> + .feature = ALT_FEATURE_OFFSET,
> + },
> + {
> + .sec = "__jump_table",
> + .jump_or_nop = true,
> + .size = JUMP_ENTRY_SIZE,
> + .orig = JUMP_ORIG_OFFSET,
> + .new = JUMP_NEW_OFFSET,
> + .key = JUMP_KEY_OFFSET,
> + },
> + {
> + .sec = "__ex_table",
> + .size = EX_ENTRY_SIZE,
> + .orig = EX_ORIG_OFFSET,
> + .new = EX_NEW_OFFSET,
> + },
> + {},
> + };
>
> INIT_LIST_HEAD(alts);
>
> --
> 2.40.1
>
* Re: [PATCH v2 09/14] objtool: Add INSN_RETURN_CONDITIONAL
2023-06-22 10:54 ` [PATCH v2 09/14] objtool: Add INSN_RETURN_CONDITIONAL Christophe Leroy
@ 2023-06-22 11:45 ` Peter Zijlstra
0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2023-06-22 11:45 UTC (permalink / raw)
To: Christophe Leroy
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel, linuxppc-dev
On Thu, Jun 22, 2023 at 12:54:31PM +0200, Christophe Leroy wrote:
> Most functions have an unconditional return at the end, like
> this one:
>
> 00000000 <is_exec_fault>:
> 0: 81 22 04 d0 lwz r9,1232(r2)
> 4: 38 60 00 00 li r3,0
> 8: 2c 09 00 00 cmpwi r9,0
> c: 4d 82 00 20 beqlr <== Conditional return
> 10: 80 69 00 a0 lwz r3,160(r9)
> 14: 54 63 00 36 clrrwi r3,r3,4
> 18: 68 63 04 00 xori r3,r3,1024
> 1c: 7c 63 00 34 cntlzw r3,r3
> 20: 54 63 d9 7e srwi r3,r3,5
> 24: 4e 80 00 20 blr <== Unconditional return
>
> But other functions like this other one below only have
> conditional returns:
>
> 00000028 <pte_update.isra.0>:
> 28: 81 25 00 00 lwz r9,0(r5)
> 2c: 2c 08 00 00 cmpwi r8,0
> 30: 7d 29 30 78 andc r9,r9,r6
> 34: 7d 27 3b 78 or r7,r9,r7
> 38: 54 84 65 3a rlwinm r4,r4,12,20,29
> 3c: 81 23 00 18 lwz r9,24(r3)
> 40: 41 82 00 58 beq 98 <pte_update.isra.0+0x70>
> 44: 7d 29 20 2e lwzx r9,r9,r4
> 48: 55 29 07 3a rlwinm r9,r9,0,28,29
> 4c: 2c 09 00 0c cmpwi r9,12
> 50: 41 82 00 08 beq 58 <pte_update.isra.0+0x30>
> 54: 39 00 00 80 li r8,128
> 58: 2c 08 00 01 cmpwi r8,1
> 5c: 90 e5 00 00 stw r7,0(r5)
> 60: 4d a2 00 20 beqlr+ <== Conditional return
> 64: 7c e9 3b 78 mr r9,r7
> 68: 39 40 00 00 li r10,0
> 6c: 39 4a 00 04 addi r10,r10,4
> 70: 7c 0a 40 00 cmpw r10,r8
> 74: 91 25 00 04 stw r9,4(r5)
> 78: 91 25 00 08 stw r9,8(r5)
> 7c: 38 a5 00 10 addi r5,r5,16
> 80: 91 25 ff fc stw r9,-4(r5)
> 84: 4c 80 00 20 bgelr <== Conditional return
> 88: 55 49 60 26 slwi r9,r10,12
> 8c: 7d 29 3a 14 add r9,r9,r7
> 90: 91 25 00 00 stw r9,0(r5)
> 94: 4b ff ff d8 b 6c <pte_update.isra.0+0x44>
> 98: 39 00 00 04 li r8,4
> 9c: 4b ff ff bc b 58 <pte_update.isra.0+0x30>
>
> If conditional returns are decoded as INSN_OTHER, objtool considers
> that the second function never returns.
>
> If conditional returns are decoded as INSN_RETURN, objtool considers
> that code after that conditional return is dead.
>
> To overcome this situation, introduce INSN_RETURN_CONDITIONAL which
> is taken as a confirmation that a function is not noreturn but still
> sees following code as reachable.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> tools/objtool/check.c | 2 +-
> tools/objtool/include/objtool/arch.h | 1 +
> 2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tools/objtool/check.c b/tools/objtool/check.c
> index 0fcf99c91400..8977cdf93f54 100644
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -259,7 +259,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
> func_for_each_insn(file, func, insn) {
> empty = false;
>
> - if (insn->type == INSN_RETURN)
> + if (insn->type == INSN_RETURN || insn->type == INSN_RETURN_CONDITIONAL)
> return false;
> }
>
> diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
> index 2b6d2ce4f9a5..84ba75112934 100644
> --- a/tools/objtool/include/objtool/arch.h
> +++ b/tools/objtool/include/objtool/arch.h
> @@ -19,6 +19,7 @@ enum insn_type {
> INSN_CALL,
> INSN_CALL_DYNAMIC,
> INSN_RETURN,
> + INSN_RETURN_CONDITIONAL,
> INSN_CONTEXT_SWITCH,
> INSN_BUG,
> INSN_NOP,
> --
> 2.40.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH v2 10/14] objtool: Add support for relative switch tables
2023-06-22 10:54 ` [PATCH v2 10/14] objtool: Add support for relative switch tables Christophe Leroy
@ 2023-06-22 11:48 ` Peter Zijlstra
2023-06-23 15:09 ` Christophe Leroy
0 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2023-06-22 11:48 UTC (permalink / raw)
To: Christophe Leroy
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel, linuxppc-dev
On Thu, Jun 22, 2023 at 12:54:32PM +0200, Christophe Leroy wrote:
> On powerpc, switch tables are relative, that means the address of the
> table is added to the value of the entry in order to get the pointed
> address: (r10 is the table address, r4 the index in the table)
>
> lis r10,0 <== Load r10 with upper part of .rodata address
> R_PPC_ADDR16_HA .rodata
> addi r10,r10,0 <== Add lower part of .rodata address
> R_PPC_ADDR16_LO .rodata
> lwzx r8,r10,r4 <== Read table entry at r10 + r4 into r8
> add r10,r8,r10 <== Add table address to read value
> mtctr r10 <== Save calculated address in CTR
> bctr <== Branch to address in CTR
>
> But for c_jump_tables that is not the case: they contain the
> pointed address directly:
>
> lis r28,0 <== Load r28 with upper .rodata..c_jump_table
> R_PPC_ADDR16_HA .rodata..c_jump_table
> addi r28,r28,0 <== Add lower part of .rodata..c_jump_table
> R_PPC_ADDR16_LO .rodata..c_jump_table
> lwzx r10,r28,r10 <== Read table entry at r10 + r28 into r10
> mtctr r10 <== Save read value in CTR
> bctr <== Branch to address in CTR
>
> Add support to objtool for relative tables, with a flag to indicate,
> table by table, whether the table uses relative or absolute addressing.
>
> And use correct size for 'long' instead of hard coding a size of '8'.
Again, see tip/objtool/core...
This one is going to be a little hard. It would be very good if you can
avoid growing struct reloc; Josh went through a ton of effort to shrink
it.
> Would it work to use reloc->sym->is_rel_jumptable? That still has a few
> spare bits in its bitfield.
> diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
> index e1ca588eb69d..64aac87a4825 100644
> --- a/tools/objtool/include/objtool/elf.h
> +++ b/tools/objtool/include/objtool/elf.h
> @@ -80,6 +80,7 @@ struct reloc {
> s64 addend;
> int idx;
> bool jump_table_start;
> + bool jump_table_is_rel;
> };
* Re: [PATCH v2 11/14] objtool: Remove too strict constraint in jump table search
2023-06-22 10:54 ` [PATCH v2 11/14] objtool: Remove too strict constraint in jump table search Christophe Leroy
@ 2023-06-22 11:48 ` Peter Zijlstra
0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2023-06-22 11:48 UTC (permalink / raw)
To: Christophe Leroy
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel, linuxppc-dev
On Thu, Jun 22, 2023 at 12:54:33PM +0200, Christophe Leroy wrote:
> In code there is often a pattern like:
>
> load jump table address
> do some test
> conditional jump to label1:
> do something
> unconditional jump to label2:
> label1:
> do something else
> read jump table
> dynamic jump
> label2:
> do other job here ....
>
> find_jump_table() contains a constraint that stops the backward search
> for the table address load when a jump is found in between.
>
> Remove that constraint.
Josh, happen to remember why this code exists?
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> ---
> tools/objtool/check.c | 7 -------
> 1 file changed, 7 deletions(-)
>
> diff --git a/tools/objtool/check.c b/tools/objtool/check.c
> index b810be087d7c..1911de0e1008 100644
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -2130,13 +2130,6 @@ static struct reloc *find_jump_table(struct objtool_file *file,
> if (insn != orig_insn && insn->type == INSN_JUMP_DYNAMIC)
> break;
>
> - /* allow small jumps within the range */
> - if (insn->type == INSN_JUMP_UNCONDITIONAL &&
> - insn->jump_dest &&
> - (insn->jump_dest->offset <= insn->offset ||
> - insn->jump_dest->offset > orig_insn->offset))
> - break;
> -
> table_reloc = arch_find_switch_table(file, insn, is_rel);
> if (!table_reloc)
> continue;
> --
> 2.40.1
>
* Re: [PATCH v2 12/14] objtool: Add support for more complex UACCESS control
2023-06-22 10:54 ` [PATCH v2 12/14] objtool: Add support for more complex UACCESS control Christophe Leroy
@ 2023-06-22 11:49 ` Peter Zijlstra
0 siblings, 0 replies; 25+ messages in thread
From: Peter Zijlstra @ 2023-06-22 11:49 UTC (permalink / raw)
To: Christophe Leroy
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel, linuxppc-dev
On Thu, Jun 22, 2023 at 12:54:34PM +0200, Christophe Leroy wrote:
> On x86, UACCESS is controlled by two instructions: STAC and CLAC.
> The STAC instruction enables UACCESS while CLAC disables it.
> This makes it simple for objtool to locate where UACCESS is enabled
> and disabled.
>
> But on powerpc it is a bit more complex: the same instruction is
> used both for enabling and for disabling UACCESS, and that same
> instruction can also be used for many other things. It would be too
> complex to rely exclusively on instruction decoding.
>
> To help objtool, mark such instructions in .discard.uaccess_begin
> and .discard.uaccess_end sections, on the same principle as for
> reachable/unreachable instructions. And add ASM_UACCESS_BEGIN
> and ASM_UACCESS_END macros to be used in inline assembly code to
> annotate UACCESS enable and UACCESS disable instructions.
>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> ---
> include/linux/objtool.h | 14 +++++++++++++
> tools/objtool/check.c | 44 +++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 58 insertions(+)
>
> diff --git a/include/linux/objtool.h b/include/linux/objtool.h
> index 03f82c2c2ebf..d8fde4158a40 100644
> --- a/include/linux/objtool.h
> +++ b/include/linux/objtool.h
> @@ -57,6 +57,18 @@
> ".long 998b - .\n\t" \
> ".popsection\n\t"
>
> +#define ASM_UACCESS_BEGIN \
> + "998:\n\t" \
> + ".pushsection .discard.uaccess_begin\n\t" \
> + ".long 998b - .\n\t" \
> + ".popsection\n\t"
> +
> +#define ASM_UACCESS_END \
> + "998:\n\t" \
> + ".pushsection .discard.uaccess_end\n\t" \
> + ".long 998b - .\n\t" \
> + ".popsection\n\t"
> +
> #else /* __ASSEMBLY__ */
Yeah, this can work. Josh wanted a more generic hints infra, but I'm not
sure we should make you do that work now.
* Re: [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32
2023-06-22 10:54 ` [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
@ 2023-06-22 11:56 ` Peter Zijlstra
2023-06-23 16:03 ` Christophe Leroy
2023-06-22 19:16 ` kernel test robot
2023-06-22 20:07 ` kernel test robot
2 siblings, 1 reply; 25+ messages in thread
From: Peter Zijlstra @ 2023-06-22 11:56 UTC (permalink / raw)
To: Christophe Leroy
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel, linuxppc-dev
On Thu, Jun 22, 2023 at 12:54:36PM +0200, Christophe Leroy wrote:
> diff --git a/tools/objtool/check.c b/tools/objtool/check.c
> index f850ab892ad5..8ac5711a055f 100644
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -218,6 +218,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
> "kthread_exit",
> "kunit_try_catch_throw",
> "lbug_with_loc",
> + "longjmp",
> "machine_real_restart",
> "make_task_dead",
> "mpt_halt_firmware",
> @@ -230,7 +231,9 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
> "sev_es_terminate",
> "snp_abort",
> "start_kernel",
> + "start_secondary_resume",
> "stop_this_cpu",
> + "unrecoverable_exception",
> "usercopy_abort",
> "x86_64_start_kernel",
> "x86_64_start_reservations",
Someone went and changed all that in tip/objtool/core :-)
But perhaps, like the uaccess_safe_builtins[] array below, should we
start marking sections so we can remember where stuff comes from later?
> @@ -1335,6 +1338,8 @@ static const char *uaccess_safe_builtin[] = {
> "rep_stos_alternative",
> "rep_movs_alternative",
> "__copy_user_nocache",
> + "__copy_tofrom_user",
> + "__arch_clear_user",
> NULL
> };
> Do we want to rename the 'misc' section to 'x86' and start a 'ppc32'
section there?
* Re: [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32
2023-06-22 10:54 ` [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
2023-06-22 11:56 ` Peter Zijlstra
@ 2023-06-22 19:16 ` kernel test robot
2023-06-22 20:07 ` kernel test robot
2 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2023-06-22 19:16 UTC (permalink / raw)
To: Christophe Leroy, Michael Ellerman, Nicholas Piggin,
Josh Poimboeuf, Peter Zijlstra, Sathvika Vasireddy, Naveen N Rao
Cc: oe-kbuild-all, Christophe Leroy, linux-kernel, linuxppc-dev
Hi Christophe,
kernel test robot noticed the following build warnings:
[auto build test WARNING on powerpc/next]
[also build test WARNING on powerpc/fixes masahiroy-kbuild/for-next masahiroy-kbuild/fixes linus/master v6.4-rc7]
[cannot apply to next-20230622]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Christophe-Leroy/powerpc-kuap-Avoid-unnecessary-reads-of-MD_AP/20230622-185950
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
patch link: https://lore.kernel.org/r/be282f27ad808418c7475b51a00b4cb035f89a95.1687430631.git.christophe.leroy%40csgroup.eu
patch subject: [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32
config: powerpc-randconfig-r025-20230622 (https://download.01.org/0day-ci/archive/20230623/202306230353.iPqv57lK-lkp@intel.com/config)
compiler: powerpc-linux-gcc (GCC) 12.3.0
reproduce: (https://download.01.org/0day-ci/archive/20230623/202306230353.iPqv57lK-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202306230353.iPqv57lK-lkp@intel.com/
All warnings (new ones prefixed by >>):
lib/ubsan.c:307:6: warning: no previous prototype for '__ubsan_handle_type_mismatch' [-Wmissing-prototypes]
307 | void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> lib/ubsan.o: warning: objtool: ubsan_type_mismatch_common+0x24: UACCESS-safe disables UACCESS
lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch+0x54: call to _restgpr_30_x() with UACCESS enabled
lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch_v1+0x5c: call to _restgpr_30_x() with UACCESS enabled
lib/ubsan.o: warning: objtool: __ubsan_handle_load_invalid_value+0x28: call to memset() with UACCESS enabled
lib/ubsan.o: warning: objtool: __ubsan_handle_shift_out_of_bounds+0x34: call to memset() with UACCESS enabled
--
>> arch/powerpc/kernel/ptrace/ptrace-view.o: warning: objtool: gpr32_set_common+0x2b8: redundant UACCESS disable
objdump-func vmlinux.o ubsan_type_mismatch_common:
0000 00000000 <ubsan_type_mismatch_common>:
0000 0: 94 21 ff e0 stwu r1,-32(r1)
0004 4: 7c 08 02 a6 mflr r0
0008 8: bf 61 00 0c stmw r27,12(r1)
000c c: 7c 7f 1b 78 mr r31,r3
0010 10: 90 01 00 24 stw r0,36(r1)
0014 14: 7c 9e 23 78 mr r30,r4
0018 18: 48 00 00 01 bl 18 <ubsan_type_mismatch_common+0x18> 18: R_PPC_REL24 __sanitizer_cov_trace_pc
001c 1c: 7f 90 0a a6 mfpid r28
0020 20: 39 20 00 00 li r9,0
0024 24: 7d 30 0b a6 mtpid r9
0028 28: 4c 00 01 2c isync
002c 2c: 2c 1e 00 00 cmpwi r30,0
0030 30: 40 e2 00 58 bne+ 88 <ubsan_type_mismatch_common+0x88>
0034 34: 48 00 00 01 bl 34 <ubsan_type_mismatch_common+0x34> 34: R_PPC_REL24 __sanitizer_cov_trace_pc
0038 38: 80 7f 00 00 lwz r3,0(r31)
003c 3c: 48 00 00 01 bl 3c <ubsan_type_mismatch_common+0x3c> 3c: R_PPC_REL24 .text.suppress_report
0040 40: 2c 03 00 00 cmpwi r3,0
0044 44: 40 e2 01 30 bne+ 174 <ubsan_type_mismatch_common+0x174>
0048 48: 48 00 00 01 bl 48 <ubsan_type_mismatch_common+0x48> 48: R_PPC_REL24 __sanitizer_cov_trace_pc
004c 4c: 80 7f 00 00 lwz r3,0(r31)
0050 50: 3c 80 00 00 lis r4,0 52: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1
0054 54: 38 84 00 00 addi r4,r4,0 56: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1
0058 58: 48 00 00 01 bl 58 <ubsan_type_mismatch_common+0x58> 58: R_PPC_REL24 .text.unlikely.ubsan_prologue
005c 5c: 89 3f 00 0c lbz r9,12(r31)
0060 60: 3d 40 00 00 lis r10,0 62: R_PPC_ADDR16_HA .rodata.type_check_kinds
0064 64: 80 bf 00 04 lwz r5,4(r31)
0068 68: 39 4a 00 00 addi r10,r10,0 6a: R_PPC_ADDR16_LO .rodata.type_check_kinds
006c 6c: 55 29 10 3a slwi r9,r9,2
0070 70: 7c 8a 48 2e lwzx r4,r10,r9
0074 74: 3c 60 00 00 lis r3,0 76: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1+0xf
0078 78: 38 a5 00 04 addi r5,r5,4
007c 7c: 38 63 00 00 addi r3,r3,0 7e: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1+0xf
0080 80: 48 00 00 01 bl 80 <ubsan_type_mismatch_common+0x80> 80: R_PPC_REL24 _printk
0084 84: 48 00 00 ec b 170 <ubsan_type_mismatch_common+0x170>
0088 88: 48 00 00 01 bl 88 <ubsan_type_mismatch_common+0x88> 88: R_PPC_REL24 __sanitizer_cov_trace_pc
008c 8c: 83 bf 00 08 lwz r29,8(r31)
0090 90: 83 7f 00 00 lwz r27,0(r31)
0094 94: 2c 1d 00 00 cmpwi r29,0
0098 98: 41 c2 00 78 beq- 110 <ubsan_type_mismatch_common+0x110>
009c 9c: 3b bd ff ff addi r29,r29,-1
00a0 a0: 48 00 00 01 bl a0 <ubsan_type_mismatch_common+0xa0> a0: R_PPC_REL24 __sanitizer_cov_trace_pc
00a4 a4: 7f bd f0 39 and. r29,r29,r30
00a8 a8: 41 e2 00 68 beq+ 110 <ubsan_type_mismatch_common+0x110>
00ac ac: 48 00 00 01 bl ac <ubsan_type_mismatch_common+0xac> ac: R_PPC_REL24 __sanitizer_cov_trace_pc
00b0 b0: 7f 63 db 78 mr r3,r27
00b4 b4: 48 00 00 01 bl b4 <ubsan_type_mismatch_common+0xb4> b4: R_PPC_REL24 .text.suppress_report
00b8 b8: 2c 03 00 00 cmpwi r3,0
00bc bc: 40 e2 00 b8 bne+ 174 <ubsan_type_mismatch_common+0x174>
00c0 c0: 48 00 00 01 bl c0 <ubsan_type_mismatch_common+0xc0> c0: R_PPC_REL24 __sanitizer_cov_trace_pc
00c4 c4: 80 7f 00 00 lwz r3,0(r31)
00c8 c8: 3c 80 00 00 lis r4,0 ca: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1+0x2d
00cc cc: 38 84 00 00 addi r4,r4,0 ce: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1+0x2d
00d0 d0: 48 00 00 01 bl d0 <ubsan_type_mismatch_common+0xd0> d0: R_PPC_REL24 .text.unlikely.ubsan_prologue
00d4 d4: 89 3f 00 0c lbz r9,12(r31)
00d8 d8: 3d 40 00 00 lis r10,0 da: R_PPC_ADDR16_HA .rodata.type_check_kinds
00dc dc: 80 df 00 04 lwz r6,4(r31)
00e0 e0: 39 4a 00 00 addi r10,r10,0 e2: R_PPC_ADDR16_LO .rodata.type_check_kinds
00e4 e4: 55 29 10 3a slwi r9,r9,2
00e8 e8: 7c 8a 48 2e lwzx r4,r10,r9
00ec ec: 3c 60 00 00 lis r3,0 ee: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1+0x3f
00f0 f0: 38 c6 00 04 addi r6,r6,4
00f4 f4: 7f c5 f3 78 mr r5,r30
00f8 f8: 38 63 00 00 addi r3,r3,0 fa: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1+0x3f
00fc fc: 48 00 00 01 bl fc <ubsan_type_mismatch_common+0xfc> fc: R_PPC_REL24 _printk
0100 100: 3c 60 00 00 lis r3,0 102: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1+0x67
0104 104: 80 9f 00 08 lwz r4,8(r31)
0108 108: 38 63 00 00 addi r3,r3,0 10a: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1+0x67
010c 10c: 48 00 00 60 b 16c <ubsan_type_mismatch_common+0x16c>
0110 110: 48 00 00 01 bl 110 <ubsan_type_mismatch_common+0x110> 110: R_PPC_REL24 __sanitizer_cov_trace_pc
0114 114: 7f 63 db 78 mr r3,r27
0118 118: 48 00 00 01 bl 118 <ubsan_type_mismatch_common+0x118> 118: R_PPC_REL24 .text.suppress_report
011c 11c: 2c 03 00 00 cmpwi r3,0
0120 120: 40 e2 00 54 bne+ 174 <ubsan_type_mismatch_common+0x174>
0124 124: 48 00 00 01 bl 124 <ubsan_type_mismatch_common+0x124> 124: R_PPC_REL24 __sanitizer_cov_trace_pc
0128 128: 80 7f 00 00 lwz r3,0(r31)
012c 12c: 3c 80 00 00 lis r4,0 12e: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1+0x8c
0130 130: 38 84 00 00 addi r4,r4,0 132: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1+0x8c
0134 134: 48 00 00 01 bl 134 <ubsan_type_mismatch_common+0x134> 134: R_PPC_REL24 .text.unlikely.ubsan_prologue
0138 138: 89 3f 00 0c lbz r9,12(r31)
013c 13c: 3d 40 00 00 lis r10,0 13e: R_PPC_ADDR16_HA .rodata.type_check_kinds
0140 140: 39 4a 00 00 addi r10,r10,0 142: R_PPC_ADDR16_LO .rodata.type_check_kinds
0144 144: 55 29 10 3a slwi r9,r9,2
0148 148: 3c 60 00 00 lis r3,0 14a: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1+0xa1
014c 14c: 7c 8a 48 2e lwzx r4,r10,r9
0150 150: 7f c5 f3 78 mr r5,r30
0154 154: 38 63 00 00 addi r3,r3,0 156: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1+0xa1
0158 158: 48 00 00 01 bl 158 <ubsan_type_mismatch_common+0x158> 158: R_PPC_REL24 _printk
015c 15c: 80 9f 00 04 lwz r4,4(r31)
0160 160: 3c 60 00 00 lis r3,0 162: R_PPC_ADDR16_HA .rodata.ubsan_type_mismatch_common.str1.1+0xca
0164 164: 38 63 00 00 addi r3,r3,0 166: R_PPC_ADDR16_LO .rodata.ubsan_type_mismatch_common.str1.1+0xca
0168 168: 38 84 00 04 addi r4,r4,4
016c 16c: 48 00 00 01 bl 16c <ubsan_type_mismatch_common+0x16c> 16c: R_PPC_REL24 _printk
0170 170: 48 00 00 01 bl 170 <ubsan_type_mismatch_common+0x170> 170: R_PPC_REL24 .text.unlikely.ubsan_epilogue
0174 174: 48 00 00 01 bl 174 <ubsan_type_mismatch_common+0x174> 174: R_PPC_REL24 __sanitizer_cov_trace_pc
0178 178: 2c 1c 00 00 cmpwi r28,0
017c 17c: 41 e2 00 14 beq+ 190 <ubsan_type_mismatch_common+0x190>
0180 180: 48 00 00 01 bl 180 <ubsan_type_mismatch_common+0x180> 180: R_PPC_REL24 __sanitizer_cov_trace_pc
0184 184: 81 22 0e b4 lwz r9,3764(r2)
0188 188: 7d 30 0b a6 mtpid r9
018c 18c: 4c 00 01 2c isync
0190 190: 80 01 00 24 lwz r0,36(r1)
0194 194: 83 61 00 0c lwz r27,12(r1)
0198 198: 83 81 00 10 lwz r28,16(r1)
019c 19c: 83 a1 00 14 lwz r29,20(r1)
01a0 1a0: 7c 08 03 a6 mtlr r0
01a4 1a4: 83 c1 00 18 lwz r30,24(r1)
01a8 1a8: 83 e1 00 1c lwz r31,28(r1)
01ac 1ac: 38 21 00 20 addi r1,r1,32
01b0 1b0: 48 00 00 00 b 1b0 <ubsan_type_mismatch_common+0x1b0> 1b0: R_PPC_REL24 __sanitizer_cov_trace_pc
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32
2023-06-22 10:54 ` [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
2023-06-22 11:56 ` Peter Zijlstra
2023-06-22 19:16 ` kernel test robot
@ 2023-06-22 20:07 ` kernel test robot
2 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2023-06-22 20:07 UTC (permalink / raw)
To: Christophe Leroy, Michael Ellerman, Nicholas Piggin,
Josh Poimboeuf, Peter Zijlstra, Sathvika Vasireddy, Naveen N Rao
Cc: oe-kbuild-all, Christophe Leroy, linux-kernel, linuxppc-dev
Hi Christophe,
kernel test robot noticed the following build warnings:
[auto build test WARNING on powerpc/next]
[also build test WARNING on powerpc/fixes masahiroy-kbuild/for-next masahiroy-kbuild/fixes linus/master v6.4-rc7]
[cannot apply to next-20230622]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Christophe-Leroy/powerpc-kuap-Avoid-unnecessary-reads-of-MD_AP/20230622-185950
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
patch link: https://lore.kernel.org/r/be282f27ad808418c7475b51a00b4cb035f89a95.1687430631.git.christophe.leroy%40csgroup.eu
patch subject: [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32
config: powerpc-allmodconfig (https://download.01.org/0day-ci/archive/20230623/202306230351.SpwvWyz3-lkp@intel.com/config)
compiler: powerpc-linux-gcc (GCC) 12.3.0
reproduce: (https://download.01.org/0day-ci/archive/20230623/202306230351.SpwvWyz3-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202306230351.SpwvWyz3-lkp@intel.com/
All warnings (new ones prefixed by >>):
arch/powerpc/platforms/powermac/smp.c:416:13: warning: no previous prototype for 'smp_psurge_take_timebase' [-Wmissing-prototypes]
416 | void __init smp_psurge_take_timebase(void)
| ^~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/platforms/powermac/smp.c:432:13: warning: no previous prototype for 'smp_psurge_give_timebase' [-Wmissing-prototypes]
432 | void __init smp_psurge_give_timebase(void)
| ^~~~~~~~~~~~~~~~~~~~~~~~
>> arch/powerpc/platforms/powermac/smp.o: warning: objtool: .text.pmac_cpu_offline_self: unexpected end of section
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v2 10/14] objtool: Add support for relative switch tables
2023-06-22 11:48 ` Peter Zijlstra
@ 2023-06-23 15:09 ` Christophe Leroy
0 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-23 15:09 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org
On 22/06/2023 at 13:48, Peter Zijlstra wrote:
> On Thu, Jun 22, 2023 at 12:54:32PM +0200, Christophe Leroy wrote:
>> On powerpc, switch tables are relative, that means the address of the
>> table is added to the value of the entry in order to get the pointed
>> address: (r10 is the table address, r4 the index in the table)
>>
>> lis r10,0 <== Load r10 with upper part of .rodata address
>> R_PPC_ADDR16_HA .rodata
>> addi r10,r10,0 <== Add lower part of .rodata address
>> R_PPC_ADDR16_LO .rodata
>> lwzx r8,r10,r4 <== Read table entry at r10 + r4 into r8
>> add r10,r8,r10 <== Add table address to read value
>> mtctr r10 <== Save calculated address in CTR
>> bctr <== Branch to address in CTR
>>
>> But for c_jump_tables that is not the case: they contain the
>> pointed address directly:
>>
>> lis r28,0 <== Load r28 with upper .rodata..c_jump_table
>> R_PPC_ADDR16_HA .rodata..c_jump_table
>> addi r28,r28,0 <== Add lower part of .rodata..c_jump_table
>> R_PPC_ADDR16_LO .rodata..c_jump_table
>> lwzx r10,r28,r10 <== Read table entry at r10 + r28 into r10
>> mtctr r10 <== Save read value in CTR
>> bctr <== Branch to address in CTR
>>
>> Add support to objtool for relative tables, with a flag to indicate,
>> table by table, whether the table uses relative or absolute addressing.
>>
>> And use correct size for 'long' instead of hard coding a size of '8'.
>
> Again, see tip/objtool/core...
>
> This one is going to be a little hard. It would be very good if you can
> avoid growing struct reloc; Josh went through a ton of effort to shrink
> it.
>
> Would it work to use reloc->sym->is_rel_jumptable? That still has a few
> spare bits in its bitfield.
>
I noticed that the relocation types are different, so the flag is indeed
not needed. I have now done the following; will it work for x86?
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index ae4a1608d97e..05d789e6d3b6 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1988,7 +1988,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
 	struct symbol *pfunc = insn_func(insn)->pfunc;
 	struct reloc *table = insn_jump_table(insn);
 	struct instruction *dest_insn;
-	unsigned int prev_offset = 0;
+	unsigned int offset, prev_offset = 0;
 	struct reloc *reloc = table;
 	struct alternative *alt;
@@ -2003,7 +2003,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
 			break;
 		/* Make sure the table entries are consecutive: */
-		if (prev_offset && reloc_offset(reloc) != prev_offset + 8)
+		if (prev_offset && reloc_offset(reloc) != prev_offset + elf_addr_size(file->elf))
 			break;
 		/* Detect function pointers from contiguous objects: */
@@ -2011,7 +2011,12 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
 		    reloc_addend(reloc) == pfunc->offset)
 			break;
-		dest_insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
+		if (reloc_type(reloc) == R_ABS32 || reloc_type(reloc) == R_ABS64)
+			offset = reloc_addend(reloc);
+		else
+			offset = reloc_addend(reloc) + reloc_offset(table) - reloc_offset(reloc);
+
+		dest_insn = find_insn(file, reloc->sym->sec, offset);
 		if (!dest_insn)
 			break;
Christophe
* Re: [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32
2023-06-22 11:56 ` Peter Zijlstra
@ 2023-06-23 16:03 ` Christophe Leroy
0 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2023-06-23 16:03 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf,
Sathvika Vasireddy, Naveen N Rao, linux-kernel@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org
On 22/06/2023 at 13:56, Peter Zijlstra wrote:
> On Thu, Jun 22, 2023 at 12:54:36PM +0200, Christophe Leroy wrote:
>
>> diff --git a/tools/objtool/check.c b/tools/objtool/check.c
>> index f850ab892ad5..8ac5711a055f 100644
>> --- a/tools/objtool/check.c
>> +++ b/tools/objtool/check.c
>> @@ -218,6 +218,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
>> "kthread_exit",
>> "kunit_try_catch_throw",
>> "lbug_with_loc",
>> + "longjmp",
>> "machine_real_restart",
>> "make_task_dead",
>> "mpt_halt_firmware",
>> @@ -230,7 +231,9 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
>> "sev_es_terminate",
>> "snp_abort",
>> "start_kernel",
>> + "start_secondary_resume",
>> "stop_this_cpu",
>> + "unrecoverable_exception",
>> "usercopy_abort",
>> "x86_64_start_kernel",
>> "x86_64_start_reservations",
>
> Someone went and changed all that in tip/objtool/core :-)
>
> But perhaps, like the uaccess_safe_builtins[] array below, should we
> start marking sections so we can remember where stuff comes from later?
Or, now that it is a .h file, maybe each arch could have its own .h file
for arch-specific functions? Then we'd get:
diff --git a/tools/objtool/arch/powerpc/include/arch/noreturns.h b/tools/objtool/arch/powerpc/include/arch/noreturns.h
new file mode 100644
index 000000000000..664f17d39026
--- /dev/null
+++ b/tools/objtool/arch/powerpc/include/arch/noreturns.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This is a (sorted!) list of all known __noreturn functions in arch/powerpc.
+ * It's needed for objtool to properly reverse-engineer the control flow graph.
+ *
+ * Yes, this is unfortunate. A better solution is in the works.
+ */
+NORETURN(longjmp)
+NORETURN(start_secondary_resume)
+NORETURN(unrecoverable_exception)
diff --git a/tools/objtool/noreturns.h b/tools/objtool/noreturns.h
index 1514e84d5cc4..f725ed37532d 100644
--- a/tools/objtool/noreturns.h
+++ b/tools/objtool/noreturns.h
@@ -1,5 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include <arch/noreturns.h>
+
 /*
  * This is a (sorted!) list of all known __noreturn functions in the kernel.
  * It's needed for objtool to properly reverse-engineer the control flow graph.
>
>> @@ -1335,6 +1338,8 @@ static const char *uaccess_safe_builtin[] = {
>> "rep_stos_alternative",
>> "rep_movs_alternative",
>> "__copy_user_nocache",
>> + "__copy_tofrom_user",
>> + "__arch_clear_user",
>> NULL
>> };
>
> Do we want to rename the 'misc' section to 'x86' and start a 'ppc32'
> section there?
>
Sure.
Then that would look like:
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 2b61f8180bea..2d564d0e2ae1 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1259,13 +1259,15 @@ static const char *uaccess_safe_builtin[] = {
"stackleak_track_stack",
/* misc */
"csum_partial_copy_generic",
+ "ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
+ /* misc x86 */
"copy_mc_fragile",
"copy_mc_fragile_handle_tail",
"copy_mc_enhanced_fast_string",
- "ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
"rep_stos_alternative",
"rep_movs_alternative",
"__copy_user_nocache",
+ /* misc powerpc */
"__copy_tofrom_user",
"__arch_clear_user",
NULL
Thread overview: 25+ messages (newest: 2023-06-23 16:03 UTC)
2023-06-22 10:54 [PATCH v2 00/14] powerpc/objtool: uaccess validation for PPC32 (v2) Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 01/14] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 02/14] powerpc/kuap: Avoid useless jump_label on empty function Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 03/14] powerpc/kuap: Refactor static branch for disabling kuap Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 04/14] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64 Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 05/14] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 06/14] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto" Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 07/14] objtool: Allow an architecture to disable objtool on ASM files Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 08/14] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc Christophe Leroy
2023-06-22 11:44 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 09/14] objtool: Add INSN_RETURN_CONDITIONAL Christophe Leroy
2023-06-22 11:45 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 10/14] objtool: Add support for relative switch tables Christophe Leroy
2023-06-22 11:48 ` Peter Zijlstra
2023-06-23 15:09 ` Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 11/14] objtool: Remove too strict constraint in jump table search Christophe Leroy
2023-06-22 11:48 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 12/14] objtool: Add support for more complex UACCESS control Christophe Leroy
2023-06-22 11:49 ` Peter Zijlstra
2023-06-22 10:54 ` [PATCH v2 13/14] powerpc/bug: Annotate reachable after warning trap Christophe Leroy
2023-06-22 10:54 ` [PATCH v2 14/14] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
2023-06-22 11:56 ` Peter Zijlstra
2023-06-23 16:03 ` Christophe Leroy
2023-06-22 19:16 ` kernel test robot
2023-06-22 20:07 ` kernel test robot