linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3)
@ 2023-06-23 17:05 Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 01/15] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
                   ` (14 more replies)
  0 siblings, 15 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

This series adds UACCESS validation for PPC32. It includes half
a dozen changes to the objtool core.

It is almost mature: it performs code analysis for all of PPC32 and
is only missing the marking of UACCESS enable/disable for book3s/32.

Most object files are correctly decoded; only a few
'unreachable instruction' warnings remain, due to more complex
switch table cases (illustrated by the sketch below):
- Loading of the table address after the dynamic jump
- Nested switches
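
For illustration, a minimal sketch of the nested-switch case (not taken
from the kernel; whether GCC emits chained jump tables for it depends on
compiler version and options):

	int classify(int a, int b)
	{
		switch (a) {
		case 0: return 10;
		case 1: return 11;
		case 2: return 12;
		case 3:
			/* Nested switch: may be compiled to a second jump
			 * table reached from inside the first one.
			 */
			switch (b) {
			case 0: return 20;
			case 1: return 21;
			case 2: return 22;
			case 3: return 23;
			}
			return -1;
		}
		return -1;
	}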

It made it possible to detect some UACCESS mess in a few files; that
has been fixed through other patches.

Changes in v3:
- Rebased on top of a merge of powerpc tree and tip/objtool/core tree
- Simplified support for relative switch tables based on relocation type
- Addressed comments from Peter

Christophe Leroy (15):
  powerpc/kuap: Avoid unnecessary reads of MD_AP
  powerpc/kuap: Avoid useless jump_label on empty function
  powerpc/kuap: Refactor static branch for disabling kuap
  powerpc/kuap: Make disabling KUAP at boottime impossible except on
    book3s/64
  powerpc/kuap: KUAP enabling/disabling functions must be
    __always_inline
  Revert "powerpc/bug: Provide better flexibility to
    WARN_ON/__WARN_FLAGS() with asm goto"
  objtool: Allow an architecture to disable objtool on ASM files
  objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc
  objtool: Add INSN_RETURN_CONDITIONAL
  objtool: Add support for relative switch tables
  objtool: Remove too strict constraint in jump table search
  objtool: Add support for more complex UACCESS control
  objtool: Prepare noreturns.h for more architectures
  powerpc/bug: Annotate reachable after warning trap
  powerpc: Implement UACCESS validation on PPC32

 arch/Kconfig                                  |  5 ++
 arch/powerpc/Kconfig                          |  2 +
 arch/powerpc/include/asm/book3s/32/kup.h      | 39 ++++-----
 arch/powerpc/include/asm/book3s/64/kup.h      | 36 ++++----
 arch/powerpc/include/asm/bug.h                | 77 ++++-------------
 arch/powerpc/include/asm/kup.h                | 66 ++++++++++-----
 arch/powerpc/include/asm/nohash/32/kup-8xx.h  | 52 +++++-------
 arch/powerpc/include/asm/nohash/kup-booke.h   | 49 +++++------
 arch/powerpc/include/asm/uaccess.h            |  6 +-
 arch/powerpc/kernel/misc_32.S                 |  2 +-
 arch/powerpc/kernel/traps.c                   |  9 +-
 arch/powerpc/kexec/core_32.c                  |  4 +-
 arch/powerpc/mm/book3s32/kuap.c               | 18 ++--
 arch/powerpc/mm/init-common.c                 |  3 +
 arch/powerpc/mm/nohash/kup.c                  | 11 +--
 include/linux/objtool.h                       | 14 ++++
 scripts/Makefile.build                        |  4 +
 tools/objtool/arch/powerpc/decode.c           | 83 +++++++++++++++++--
 .../arch/powerpc/include/arch/noreturns.h     | 11 +++
 .../arch/powerpc/include/arch/special.h       |  2 +-
 tools/objtool/arch/powerpc/special.c          | 34 +++++++-
 .../objtool/arch/x86/include/arch/noreturns.h | 20 +++++
 tools/objtool/check.c                         | 59 ++++++++++---
 tools/objtool/include/objtool/arch.h          |  1 +
 tools/objtool/noreturns.h                     | 14 +---
 tools/objtool/special.c                       | 55 ++++++------
 26 files changed, 399 insertions(+), 277 deletions(-)
 create mode 100644 tools/objtool/arch/powerpc/include/arch/noreturns.h
 create mode 100644 tools/objtool/arch/x86/include/arch/noreturns.h

-- 
2.40.1



* [PATCH v3 01/15] powerpc/kuap: Avoid unnecessary reads of MD_AP
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 02/15] powerpc/kuap: Avoid useless jump_label on empty function Christophe Leroy
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

A disassembly of interrupt_exit_kernel_prepare() shows a useless read
of the MD_AP register: r9 is overwritten immediately afterwards, so
nothing is ever done with the value read.

  c000e0e0:       60 00 00 00     nop
  c000e0e4: ===>  7d 3a c2 a6     mfmd_ap r9	<====
  c000e0e8:       7d 20 00 a6     mfmsr   r9
  c000e0ec:       7c 51 13 a6     mtspr   81,r2
  c000e0f0:       81 3f 00 84     lwz     r9,132(r31)
  c000e0f4:       71 29 80 00     andi.   r9,r9,32768

kuap_get_and_assert_locked() is paired with kuap_kernel_restore(),
and both are only used in interrupt_exit_kernel_prepare(). The value
returned by kuap_get_and_assert_locked() is only used by
kuap_kernel_restore().

On 8xx, kuap_kernel_restore() doesn't use the value read by
kuap_get_and_assert_locked(), so modify kuap_get_and_assert_locked()
to not read MD_AP and to return 0 instead.

The same applies on BOOKE.
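
For reference, a simplified sketch of that pairing, not the exact
interrupt_exit_kernel_prepare() code:

	unsigned long kuap = kuap_get_and_assert_locked();

	/* ... interrupt exit preparation ... */

	kuap_kernel_restore(regs, kuap);	/* value not used on 8xx/BOOKE */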

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/nohash/32/kup-8xx.h | 8 ++------
 arch/powerpc/include/asm/nohash/kup-booke.h  | 6 ++----
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index c44d97751723..8579210f2a6a 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -41,14 +41,10 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
 
 static inline unsigned long __kuap_get_and_assert_locked(void)
 {
-	unsigned long kuap;
-
-	kuap = mfspr(SPRN_MD_AP);
-
 	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
-		WARN_ON_ONCE(kuap >> 16 != MD_APG_KUAP >> 16);
+		WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
 
-	return kuap;
+	return 0;
 }
 
 static inline void __allow_user_access(void __user *to, const void __user *from,
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 49bb41ed0816..823c5a3a96d8 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -58,12 +58,10 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
 
 static inline unsigned long __kuap_get_and_assert_locked(void)
 {
-	unsigned long kuap = mfspr(SPRN_PID);
-
 	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
-		WARN_ON_ONCE(kuap);
+		WARN_ON_ONCE(mfspr(SPRN_PID));
 
-	return kuap;
+	return 0;
 }
 
 static inline void __allow_user_access(void __user *to, const void __user *from,
-- 
2.40.1



* [PATCH v3 02/15] powerpc/kuap: Avoid useless jump_label on empty function
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 01/15] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 03/15] powerpc/kuap: Refactor static branch for disabling kuap Christophe Leroy
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

Disassembly of interrupt_enter_prepare() shows a pointless nop
before the mftb:

  c000abf0 <interrupt_enter_prepare>:
  c000abf0:       81 23 00 84     lwz     r9,132(r3)
  c000abf4:       71 29 40 00     andi.   r9,r9,16384
  c000abf8:       41 82 00 28     beq-    c000ac20 <interrupt_enter_prepare+0x30>
  c000abfc: ===>  60 00 00 00     nop	<====
  c000ac00:       7d 0c 42 e6     mftb    r8
  c000ac04:       80 e2 00 08     lwz     r7,8(r2)
  c000ac08:       81 22 00 28     lwz     r9,40(r2)
  c000ac0c:       91 02 00 24     stw     r8,36(r2)
  c000ac10:       7d 29 38 50     subf    r9,r9,r7
  c000ac14:       7d 29 42 14     add     r9,r9,r8
  c000ac18:       91 22 00 08     stw     r9,8(r2)
  c000ac1c:       4e 80 00 20     blr
  c000ac20:       60 00 00 00     nop
  c000ac24:       7d 5a c2 a6     mfmd_ap r10
  c000ac28:       3d 20 de 00     lis     r9,-8704
  c000ac2c:       91 43 00 b0     stw     r10,176(r3)
  c000ac30:       7d 3a c3 a6     mtspr   794,r9
  c000ac34:       4e 80 00 20     blr

That comes from the call to kuap_lock(), although __kuap_lock() is an
empty function on the 8xx.

To avoid that, only perform the kuap_is_disabled() check when
__kuap_lock() actually has something to do.

Do the same with __kuap_save_and_lock() and
__kuap_get_and_assert_locked().
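
The pattern, with both sides of the change shown together (the full
change is in the diff below): a platform that implements a helper also
defines a macro of the same name, and the generic wrapper in asm/kup.h
only emits the kuap_is_disabled() check when that macro is defined:

	/* platform header, e.g. kup-booke.h */
	static inline void __kuap_lock(void)
	{
		mtspr(SPRN_PID, 0);
		isync();
	}
	#define __kuap_lock __kuap_lock

	/* generic wrapper in asm/kup.h */
	static __always_inline void kuap_lock(void)
	{
	#ifdef __kuap_lock
		if (kuap_is_disabled())
			return;

		__kuap_lock();
	#endif
	}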

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
---
v2: Add back comment about __kuap_lock() not being needed on 64s
---
 arch/powerpc/include/asm/book3s/32/kup.h     |  6 ++---
 arch/powerpc/include/asm/book3s/64/kup.h     | 10 ++------
 arch/powerpc/include/asm/kup.h               | 25 ++++++++++----------
 arch/powerpc/include/asm/nohash/32/kup-8xx.h | 11 ++++-----
 arch/powerpc/include/asm/nohash/kup-booke.h  |  8 +++++--
 5 files changed, 27 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 678f9c9d89b6..466a19cfb4df 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -77,10 +77,6 @@ static inline void kuap_unlock(unsigned long addr, bool ool)
 		kuap_unlock_all_ool();
 }
 
-static inline void __kuap_lock(void)
-{
-}
-
 static inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
 	unsigned long kuap = current->thread.kuap;
@@ -92,6 +88,7 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
 	current->thread.kuap = KUAP_NONE;
 	kuap_lock_addr(kuap, false);
 }
+#define __kuap_save_and_lock __kuap_save_and_lock
 
 static inline void kuap_user_restore(struct pt_regs *regs)
 {
@@ -120,6 +117,7 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
 
 	return kuap;
 }
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
 
 static __always_inline void __allow_user_access(void __user *to, const void __user *from,
 						u32 size, unsigned long dir)
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 84c09e546115..2a7bd3ecc556 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -298,15 +298,9 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
 		WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
 	return amr;
 }
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
 
-/* Do nothing, book3s/64 does that in ASM */
-static inline void __kuap_lock(void)
-{
-}
-
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
-{
-}
+/* __kuap_lock() not required, book3s/64 does that in ASM */
 
 /*
  * We support individually allowing read or write, but we don't support nesting
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index d751ddd08110..0a4e07175612 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -52,16 +52,9 @@ __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 	return false;
 }
 
-static inline void __kuap_lock(void) { }
-static inline void __kuap_save_and_lock(struct pt_regs *regs) { }
 static inline void kuap_user_restore(struct pt_regs *regs) { }
 static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
 
-static inline unsigned long __kuap_get_and_assert_locked(void)
-{
-	return 0;
-}
-
 /*
  * book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
  * the L1D cache after user accesses. Only include the empty stubs for other
@@ -87,27 +80,32 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 
 static __always_inline void kuap_assert_locked(void)
 {
+#if defined(CONFIG_PPC_KUAP_DEBUG) && defined(__kuap_get_and_assert_locked)
 	if (kuap_is_disabled())
 		return;
 
-	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
-		__kuap_get_and_assert_locked();
+	__kuap_get_and_assert_locked();
+#endif
 }
 
 static __always_inline void kuap_lock(void)
 {
+#ifdef __kuap_lock
 	if (kuap_is_disabled())
 		return;
 
 	__kuap_lock();
+#endif
 }
 
 static __always_inline void kuap_save_and_lock(struct pt_regs *regs)
 {
+#ifdef __kuap_save_and_lock
 	if (kuap_is_disabled())
 		return;
 
 	__kuap_save_and_lock(regs);
+#endif
 }
 
 static __always_inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
@@ -120,10 +118,11 @@ static __always_inline void kuap_kernel_restore(struct pt_regs *regs, unsigned l
 
 static __always_inline unsigned long kuap_get_and_assert_locked(void)
 {
-	if (kuap_is_disabled())
-		return 0;
-
-	return __kuap_get_and_assert_locked();
+#ifdef __kuap_get_and_assert_locked
+	if (!kuap_is_disabled())
+		return __kuap_get_and_assert_locked();
+#endif
+	return 0;
 }
 
 #ifndef CONFIG_PPC_BOOK3S_64
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 8579210f2a6a..a372cd822887 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -20,15 +20,12 @@ static __always_inline bool kuap_is_disabled(void)
 	return static_branch_unlikely(&disable_kuap_key);
 }
 
-static inline void __kuap_lock(void)
-{
-}
-
 static inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
 	regs->kuap = mfspr(SPRN_MD_AP);
 	mtspr(SPRN_MD_AP, MD_APG_KUAP);
 }
+#define __kuap_save_and_lock __kuap_save_and_lock
 
 static inline void kuap_user_restore(struct pt_regs *regs)
 {
@@ -39,13 +36,15 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
 	mtspr(SPRN_MD_AP, regs->kuap);
 }
 
+#ifdef CONFIG_PPC_KUAP_DEBUG
 static inline unsigned long __kuap_get_and_assert_locked(void)
 {
-	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
-		WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
+	WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
 
 	return 0;
 }
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
+#endif
 
 static inline void __allow_user_access(void __user *to, const void __user *from,
 				       unsigned long size, unsigned long dir)
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 823c5a3a96d8..71182cbe20c3 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -30,6 +30,7 @@ static inline void __kuap_lock(void)
 	mtspr(SPRN_PID, 0);
 	isync();
 }
+#define __kuap_lock __kuap_lock
 
 static inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
@@ -37,6 +38,7 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
 	mtspr(SPRN_PID, 0);
 	isync();
 }
+#define __kuap_save_and_lock __kuap_save_and_lock
 
 static inline void kuap_user_restore(struct pt_regs *regs)
 {
@@ -56,13 +58,15 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
 	/* Context synchronisation is performed by rfi */
 }
 
+#ifdef CONFIG_PPC_KUAP_DEBUG
 static inline unsigned long __kuap_get_and_assert_locked(void)
 {
-	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG))
-		WARN_ON_ONCE(mfspr(SPRN_PID));
+	WARN_ON_ONCE(mfspr(SPRN_PID));
 
 	return 0;
 }
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
+#endif
 
 static inline void __allow_user_access(void __user *to, const void __user *from,
 				       unsigned long size, unsigned long dir)
-- 
2.40.1



* [PATCH v3 03/15] powerpc/kuap: Refactor static branch for disabling kuap
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 01/15] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 02/15] powerpc/kuap: Avoid useless jump_label on empty function Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 04/15] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64 Christophe Leroy
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

All but book3s/64 use a static branch key for disabling kuap;
book3s/64 uses an MMU feature instead.

Refactor the static branch so that it is defined in a single place,
and use it for all targets except book3s/64.
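
After the refactor the picture is as follows (see the diff for the
full change):

	/* book3s/64 keeps its own implementation and opts out: */
	static __always_inline bool kuap_is_disabled(void)
	{
		return !mmu_has_feature(MMU_FTR_BOOK3S_KUAP);
	}
	#define kuap_is_disabled kuap_is_disabled

	/* all other platforms now share a single copy in asm/kup.h: */
	#ifndef kuap_is_disabled
	extern struct static_key_false disable_kuap_key;

	static __always_inline bool kuap_is_disabled(void)
	{
		return static_branch_unlikely(&disable_kuap_key);
	}
	#endif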

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/book3s/32/kup.h     |  7 -------
 arch/powerpc/include/asm/book3s/64/kup.h     |  1 +
 arch/powerpc/include/asm/kup.h               | 15 +++++++++++++++
 arch/powerpc/include/asm/nohash/32/kup-8xx.h |  7 -------
 arch/powerpc/include/asm/nohash/kup-booke.h  |  7 -------
 arch/powerpc/mm/book3s32/kuap.c              |  3 ---
 arch/powerpc/mm/init-common.c                |  3 +++
 arch/powerpc/mm/nohash/kup.c                 |  3 ---
 8 files changed, 19 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 466a19cfb4df..8da9997a67ba 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -11,8 +11,6 @@
 
 #include <linux/jump_label.h>
 
-extern struct static_key_false disable_kuap_key;
-
 static __always_inline bool kuep_is_disabled(void)
 {
 	return !IS_ENABLED(CONFIG_PPC_KUEP);
@@ -25,11 +23,6 @@ static __always_inline bool kuep_is_disabled(void)
 #define KUAP_NONE	(~0UL)
 #define KUAP_ALL	(~1UL)
 
-static __always_inline bool kuap_is_disabled(void)
-{
-	return static_branch_unlikely(&disable_kuap_key);
-}
-
 static inline void kuap_lock_one(unsigned long addr)
 {
 	mtsr(mfsr(addr) | SR_KS, addr);
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 2a7bd3ecc556..d44c4ee2e8c3 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -234,6 +234,7 @@ static __always_inline bool kuap_is_disabled(void)
 {
 	return !mmu_has_feature(MMU_FTR_BOOK3S_KUAP);
 }
+#define kuap_is_disabled kuap_is_disabled
 
 static inline void kuap_user_restore(struct pt_regs *regs)
 {
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 0a4e07175612..74b7f4cee2ed 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -6,6 +6,12 @@
 #define KUAP_WRITE	2
 #define KUAP_READ_WRITE	(KUAP_READ | KUAP_WRITE)
 
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+
+static __always_inline bool kuap_is_disabled(void);
+#endif
+
 #ifdef CONFIG_PPC_BOOK3S_64
 #include <asm/book3s/64/kup.h>
 #endif
@@ -41,6 +47,15 @@ void setup_kuep(bool disabled);
 
 #ifdef CONFIG_PPC_KUAP
 void setup_kuap(bool disabled);
+
+#ifndef kuap_is_disabled
+extern struct static_key_false disable_kuap_key;
+
+static __always_inline bool kuap_is_disabled(void)
+{
+	return static_branch_unlikely(&disable_kuap_key);
+}
+#endif
 #else
 static inline void setup_kuap(bool disabled) { }
 
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index a372cd822887..1d53f38c5cd5 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -13,13 +13,6 @@
 
 #include <asm/reg.h>
 
-extern struct static_key_false disable_kuap_key;
-
-static __always_inline bool kuap_is_disabled(void)
-{
-	return static_branch_unlikely(&disable_kuap_key);
-}
-
 static inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
 	regs->kuap = mfspr(SPRN_MD_AP);
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 71182cbe20c3..07759ae9117b 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -18,13 +18,6 @@
 
 #include <asm/reg.h>
 
-extern struct static_key_false disable_kuap_key;
-
-static __always_inline bool kuap_is_disabled(void)
-{
-	return static_branch_unlikely(&disable_kuap_key);
-}
-
 static inline void __kuap_lock(void)
 {
 	mtspr(SPRN_PID, 0);
diff --git a/arch/powerpc/mm/book3s32/kuap.c b/arch/powerpc/mm/book3s32/kuap.c
index 28676cabb005..c5484729b595 100644
--- a/arch/powerpc/mm/book3s32/kuap.c
+++ b/arch/powerpc/mm/book3s32/kuap.c
@@ -3,9 +3,6 @@
 #include <asm/kup.h>
 #include <asm/smp.h>
 
-struct static_key_false disable_kuap_key;
-EXPORT_SYMBOL(disable_kuap_key);
-
 void kuap_lock_all_ool(void)
 {
 	kuap_lock_all();
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 119ef491f797..74e140b1efef 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -32,6 +32,9 @@ EXPORT_SYMBOL_GPL(kernstart_virt_addr);
 bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
 bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
 
+struct static_key_false disable_kuap_key;
+EXPORT_SYMBOL(disable_kuap_key);
+
 static int __init parse_nosmep(char *p)
 {
 	if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
diff --git a/arch/powerpc/mm/nohash/kup.c b/arch/powerpc/mm/nohash/kup.c
index 552becf90e97..4e22adfa2aa8 100644
--- a/arch/powerpc/mm/nohash/kup.c
+++ b/arch/powerpc/mm/nohash/kup.c
@@ -13,9 +13,6 @@
 #include <asm/smp.h>
 
 #ifdef CONFIG_PPC_KUAP
-struct static_key_false disable_kuap_key;
-EXPORT_SYMBOL(disable_kuap_key);
-
 void setup_kuap(bool disabled)
 {
 	if (disabled) {
-- 
2.40.1



* [PATCH v3 04/15] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (2 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 03/15] powerpc/kuap: Refactor static branch for disabling kuap Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 05/15] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline Christophe Leroy
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

It is possible to disable KUAP at boot time with the 'nosmap'
parameter.

That is implemented with a jump_label, and hence adds a 'nop' in front
of each opening/closing of userspace access.

From a security point of view it makes sense to disallow disabling
KUAP. And on processors like the 8xx, where a 'nop' is not seamless,
it saves a few cycles.

In addition, objtool UACCESS validation cannot cope with that feature,
because it visits every static branch independently and is not able to
see that when a UACCESS enable is skipped, the matching UACCESS
disable is skipped as well.

So make KUAP always active when it is selected in the kernel config.

In the future it may be re-implemented by nopping out the instructions
instead of using static branches.
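
With that, the generic fallback in asm/kup.h becomes a compile-time
constant and the compiler can drop the guarded paths entirely (see the
kup.h hunk below):

	#ifndef kuap_is_disabled
	static __always_inline bool kuap_is_disabled(void)
	{
		return false;
	}
	#endif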

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/kup.h  |  4 +---
 arch/powerpc/mm/book3s32/kuap.c | 15 ++++++---------
 arch/powerpc/mm/init-common.c   |  6 +++---
 arch/powerpc/mm/nohash/kup.c    |  8 +-------
 4 files changed, 11 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 74b7f4cee2ed..a02340535efa 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -49,11 +49,9 @@ void setup_kuep(bool disabled);
 void setup_kuap(bool disabled);
 
 #ifndef kuap_is_disabled
-extern struct static_key_false disable_kuap_key;
-
 static __always_inline bool kuap_is_disabled(void)
 {
-	return static_branch_unlikely(&disable_kuap_key);
+	return false;
 }
 #endif
 #else
diff --git a/arch/powerpc/mm/book3s32/kuap.c b/arch/powerpc/mm/book3s32/kuap.c
index c5484729b595..639219d0a821 100644
--- a/arch/powerpc/mm/book3s32/kuap.c
+++ b/arch/powerpc/mm/book3s32/kuap.c
@@ -17,17 +17,14 @@ EXPORT_SYMBOL(kuap_unlock_all_ool);
 
 void setup_kuap(bool disabled)
 {
-	if (!disabled) {
-		kuap_lock_all_ool();
-		init_mm.context.sr0 |= SR_KS;
-		current->thread.sr0 |= SR_KS;
-	}
+	WARN_ON(disabled);
+
+	kuap_lock_all_ool();
+	init_mm.context.sr0 |= SR_KS;
+	current->thread.sr0 |= SR_KS;
 
 	if (smp_processor_id() != boot_cpuid)
 		return;
 
-	if (disabled)
-		static_branch_enable(&disable_kuap_key);
-	else
-		pr_info("Activating Kernel Userspace Access Protection\n");
+	pr_info("Activating Kernel Userspace Access Protection\n");
 }
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 74e140b1efef..237086742ec7 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -32,9 +32,6 @@ EXPORT_SYMBOL_GPL(kernstart_virt_addr);
 bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
 bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
 
-struct static_key_false disable_kuap_key;
-EXPORT_SYMBOL(disable_kuap_key);
-
 static int __init parse_nosmep(char *p)
 {
 	if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
@@ -48,6 +45,9 @@ early_param("nosmep", parse_nosmep);
 
 static int __init parse_nosmap(char *p)
 {
+	if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+		return 0;
+
 	disable_kuap = true;
 	pr_warn("Disabling Kernel Userspace Access Protection\n");
 	return 0;
diff --git a/arch/powerpc/mm/nohash/kup.c b/arch/powerpc/mm/nohash/kup.c
index 4e22adfa2aa8..3bb79f89de06 100644
--- a/arch/powerpc/mm/nohash/kup.c
+++ b/arch/powerpc/mm/nohash/kup.c
@@ -15,13 +15,7 @@
 #ifdef CONFIG_PPC_KUAP
 void setup_kuap(bool disabled)
 {
-	if (disabled) {
-		if (IS_ENABLED(CONFIG_40x))
-			disable_kuep = true;
-		if (smp_processor_id() == boot_cpuid)
-			static_branch_enable(&disable_kuap_key);
-		return;
-	}
+	WARN_ON(disabled);
 
 	pr_info("Activating Kernel Userspace Access Protection\n");
 
-- 
2.40.1



* [PATCH v3 05/15] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (3 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 04/15] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64 Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 06/15] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto" Christophe Leroy
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

Objtool reports the following warnings:

  arch/powerpc/kernel/signal_32.o: warning: objtool:
    __prevent_user_access.constprop.0+0x4 (.text+0x4):
    redundant UACCESS disable

  arch/powerpc/kernel/signal_32.o: warning: objtool: user_access_begin+0x2c
    (.text+0x4c): return with UACCESS enabled

  arch/powerpc/kernel/signal_32.o: warning: objtool: handle_rt_signal32+0x188
    (.text+0x360): call to __prevent_user_access.constprop.0() with UACCESS enabled

  arch/powerpc/kernel/signal_32.o: warning: objtool: handle_signal32+0x150
    (.text+0x4d4): call to __prevent_user_access.constprop.0() with UACCESS enabled

This is due to some KUAP enabling/disabling functions being emitted
out of line although they are marked inline. Use __always_inline
instead.
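
As a sketch of why forcing inlining helps, take the 8xx helper below:
if GCC emits it out of line, the UACCESS-closing mtspr() lands in a
separate function and objtool sees a plain call performed while
UACCESS is still enabled, hence the warnings above.

	static __always_inline void __prevent_user_access(unsigned long dir)
	{
		/* must stay inside the caller's user access window */
		mtspr(SPRN_MD_AP, MD_APG_KUAP);
	}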

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/book3s/32/kup.h     | 26 ++++++++++----------
 arch/powerpc/include/asm/book3s/64/kup.h     | 23 ++++++++---------
 arch/powerpc/include/asm/kup.h               | 16 ++++++------
 arch/powerpc/include/asm/nohash/32/kup-8xx.h | 20 +++++++--------
 arch/powerpc/include/asm/nohash/kup-booke.h  | 22 ++++++++---------
 arch/powerpc/include/asm/uaccess.h           |  6 ++---
 6 files changed, 57 insertions(+), 56 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 8da9997a67ba..41746a22c650 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -23,25 +23,25 @@ static __always_inline bool kuep_is_disabled(void)
 #define KUAP_NONE	(~0UL)
 #define KUAP_ALL	(~1UL)
 
-static inline void kuap_lock_one(unsigned long addr)
+static __always_inline void kuap_lock_one(unsigned long addr)
 {
 	mtsr(mfsr(addr) | SR_KS, addr);
 	isync();	/* Context sync required after mtsr() */
 }
 
-static inline void kuap_unlock_one(unsigned long addr)
+static __always_inline void kuap_unlock_one(unsigned long addr)
 {
 	mtsr(mfsr(addr) & ~SR_KS, addr);
 	isync();	/* Context sync required after mtsr() */
 }
 
-static inline void kuap_lock_all(void)
+static __always_inline void kuap_lock_all(void)
 {
 	update_user_segments(mfsr(0) | SR_KS);
 	isync();	/* Context sync required after mtsr() */
 }
 
-static inline void kuap_unlock_all(void)
+static __always_inline void kuap_unlock_all(void)
 {
 	update_user_segments(mfsr(0) & ~SR_KS);
 	isync();	/* Context sync required after mtsr() */
@@ -50,7 +50,7 @@ static inline void kuap_unlock_all(void)
 void kuap_lock_all_ool(void);
 void kuap_unlock_all_ool(void);
 
-static inline void kuap_lock_addr(unsigned long addr, bool ool)
+static __always_inline void kuap_lock_addr(unsigned long addr, bool ool)
 {
 	if (likely(addr != KUAP_ALL))
 		kuap_lock_one(addr);
@@ -60,7 +60,7 @@ static inline void kuap_lock_addr(unsigned long addr, bool ool)
 		kuap_lock_all_ool();
 }
 
-static inline void kuap_unlock(unsigned long addr, bool ool)
+static __always_inline void kuap_unlock(unsigned long addr, bool ool)
 {
 	if (likely(addr != KUAP_ALL))
 		kuap_unlock_one(addr);
@@ -70,7 +70,7 @@ static inline void kuap_unlock(unsigned long addr, bool ool)
 		kuap_unlock_all_ool();
 }
 
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
+static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
 	unsigned long kuap = current->thread.kuap;
 
@@ -83,11 +83,11 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
 }
 #define __kuap_save_and_lock __kuap_save_and_lock
 
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
 {
 }
 
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
 {
 	if (unlikely(kuap != KUAP_NONE)) {
 		current->thread.kuap = KUAP_NONE;
@@ -102,7 +102,7 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
 	kuap_unlock(regs->kuap, false);
 }
 
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 {
 	unsigned long kuap = current->thread.kuap;
 
@@ -137,7 +137,7 @@ static __always_inline void __prevent_user_access(unsigned long dir)
 	kuap_lock_addr(kuap, true);
 }
 
-static inline unsigned long __prevent_user_access_return(void)
+static __always_inline unsigned long __prevent_user_access_return(void)
 {
 	unsigned long flags = current->thread.kuap;
 
@@ -149,7 +149,7 @@ static inline unsigned long __prevent_user_access_return(void)
 	return flags;
 }
 
-static inline void __restore_user_access(unsigned long flags)
+static __always_inline void __restore_user_access(unsigned long flags)
 {
 	if (flags != KUAP_NONE) {
 		current->thread.kuap = flags;
@@ -157,7 +157,7 @@ static inline void __restore_user_access(unsigned long flags)
 	}
 }
 
-static inline bool
+static __always_inline bool
 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
 	unsigned long kuap = regs->kuap;
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index d44c4ee2e8c3..f33e064b9f5f 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -213,14 +213,14 @@ extern u64 __ro_after_init default_iamr;
  * access restrictions. Because of this ignore AMR value when accessing
  * userspace via kernel thread.
  */
-static inline u64 current_thread_amr(void)
+static __always_inline u64 current_thread_amr(void)
 {
 	if (current->thread.regs)
 		return current->thread.regs->amr;
 	return default_amr;
 }
 
-static inline u64 current_thread_iamr(void)
+static __always_inline u64 current_thread_iamr(void)
 {
 	if (current->thread.regs)
 		return current->thread.regs->iamr;
@@ -236,7 +236,7 @@ static __always_inline bool kuap_is_disabled(void)
 }
 #define kuap_is_disabled kuap_is_disabled
 
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
 {
 	bool restore_amr = false, restore_iamr = false;
 	unsigned long amr, iamr;
@@ -275,7 +275,7 @@ static inline void kuap_user_restore(struct pt_regs *regs)
 	 */
 }
 
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr)
 {
 	if (likely(regs->amr == amr))
 		return;
@@ -291,7 +291,7 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr
 	 */
 }
 
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 {
 	unsigned long amr = mfspr(SPRN_AMR);
 
@@ -308,7 +308,7 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
  * because that would require an expensive read/modify write of the AMR.
  */
 
-static inline unsigned long get_kuap(void)
+static __always_inline unsigned long get_kuap(void)
 {
 	/*
 	 * We return AMR_KUAP_BLOCKED when we don't support KUAP because
@@ -338,7 +338,8 @@ static __always_inline void set_kuap(unsigned long value)
 	isync();
 }
 
-static inline bool __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static __always_inline bool
+__bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
 	/*
 	 * For radix this will be a storage protection fault (DSISR_PROTFAULT).
@@ -381,12 +382,12 @@ static __always_inline void allow_user_access(void __user *to, const void __user
 
 #else /* CONFIG_PPC_KUAP */
 
-static inline unsigned long get_kuap(void)
+static __always_inline unsigned long get_kuap(void)
 {
 	return AMR_KUAP_BLOCKED;
 }
 
-static inline void set_kuap(unsigned long value) { }
+static __always_inline void set_kuap(unsigned long value) { }
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
 					      unsigned long size, unsigned long dir)
@@ -401,7 +402,7 @@ static __always_inline void prevent_user_access(unsigned long dir)
 		do_uaccess_flush();
 }
 
-static inline unsigned long prevent_user_access_return(void)
+static __always_inline unsigned long prevent_user_access_return(void)
 {
 	unsigned long flags = get_kuap();
 
@@ -412,7 +413,7 @@ static inline unsigned long prevent_user_access_return(void)
 	return flags;
 }
 
-static inline void restore_user_access(unsigned long flags)
+static __always_inline void restore_user_access(unsigned long flags)
 {
 	set_kuap(flags);
 	if (static_branch_unlikely(&uaccess_flush_key) && flags == AMR_KUAP_BLOCKED)
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index a02340535efa..132f1c7e1064 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -59,14 +59,14 @@ static inline void setup_kuap(bool disabled) { }
 
 static __always_inline bool kuap_is_disabled(void) { return true; }
 
-static inline bool
+static __always_inline bool
 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
 	return false;
 }
 
-static inline void kuap_user_restore(struct pt_regs *regs) { }
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
+static __always_inline void kuap_user_restore(struct pt_regs *regs) { }
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { }
 
 /*
  * book3s/64/kup-radix.h defines these functions for the !KUAP case to flush
@@ -74,11 +74,11 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr
  * platforms.
  */
 #ifndef CONFIG_PPC_BOOK3S_64
-static inline void __allow_user_access(void __user *to, const void __user *from,
-				       unsigned long size, unsigned long dir) { }
-static inline void __prevent_user_access(unsigned long dir) { }
-static inline unsigned long __prevent_user_access_return(void) { return 0UL; }
-static inline void __restore_user_access(unsigned long flags) { }
+static __always_inline void __allow_user_access(void __user *to, const void __user *from,
+						unsigned long size, unsigned long dir) { }
+static __always_inline void __prevent_user_access(unsigned long dir) { }
+static __always_inline unsigned long __prevent_user_access_return(void) { return 0UL; }
+static __always_inline void __restore_user_access(unsigned long flags) { }
 #endif /* CONFIG_PPC_BOOK3S_64 */
 #endif /* CONFIG_PPC_KUAP */
 
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 1d53f38c5cd5..61067e4c8f22 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -13,24 +13,24 @@
 
 #include <asm/reg.h>
 
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
+static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
 	regs->kuap = mfspr(SPRN_MD_AP);
 	mtspr(SPRN_MD_AP, MD_APG_KUAP);
 }
 #define __kuap_save_and_lock __kuap_save_and_lock
 
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
 {
 }
 
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
 {
 	mtspr(SPRN_MD_AP, regs->kuap);
 }
 
 #ifdef CONFIG_PPC_KUAP_DEBUG
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 {
 	WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16);
 
@@ -39,18 +39,18 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
 #define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
 #endif
 
-static inline void __allow_user_access(void __user *to, const void __user *from,
-				       unsigned long size, unsigned long dir)
+static __always_inline void __allow_user_access(void __user *to, const void __user *from,
+						unsigned long size, unsigned long dir)
 {
 	mtspr(SPRN_MD_AP, MD_APG_INIT);
 }
 
-static inline void __prevent_user_access(unsigned long dir)
+static __always_inline void __prevent_user_access(unsigned long dir)
 {
 	mtspr(SPRN_MD_AP, MD_APG_KUAP);
 }
 
-static inline unsigned long __prevent_user_access_return(void)
+static __always_inline unsigned long __prevent_user_access_return(void)
 {
 	unsigned long flags;
 
@@ -61,12 +61,12 @@ static inline unsigned long __prevent_user_access_return(void)
 	return flags;
 }
 
-static inline void __restore_user_access(unsigned long flags)
+static __always_inline void __restore_user_access(unsigned long flags)
 {
 	mtspr(SPRN_MD_AP, flags);
 }
 
-static inline bool
+static __always_inline bool
 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
 	return !((regs->kuap ^ MD_APG_KUAP) & 0xff000000);
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 07759ae9117b..416f3e0897d5 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -18,14 +18,14 @@
 
 #include <asm/reg.h>
 
-static inline void __kuap_lock(void)
+static __always_inline void __kuap_lock(void)
 {
 	mtspr(SPRN_PID, 0);
 	isync();
 }
 #define __kuap_lock __kuap_lock
 
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
+static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
 	regs->kuap = mfspr(SPRN_PID);
 	mtspr(SPRN_PID, 0);
@@ -33,7 +33,7 @@ static inline void __kuap_save_and_lock(struct pt_regs *regs)
 }
 #define __kuap_save_and_lock __kuap_save_and_lock
 
-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
 {
 	if (kuap_is_disabled())
 		return;
@@ -43,7 +43,7 @@ static inline void kuap_user_restore(struct pt_regs *regs)
 	/* Context synchronisation is performed by rfi */
 }
 
-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
 {
 	if (regs->kuap)
 		mtspr(SPRN_PID, current->thread.pid);
@@ -52,7 +52,7 @@ static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kua
 }
 
 #ifdef CONFIG_PPC_KUAP_DEBUG
-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 {
 	WARN_ON_ONCE(mfspr(SPRN_PID));
 
@@ -61,20 +61,20 @@ static inline unsigned long __kuap_get_and_assert_locked(void)
 #define __kuap_get_and_assert_locked __kuap_get_and_assert_locked
 #endif
 
-static inline void __allow_user_access(void __user *to, const void __user *from,
-				       unsigned long size, unsigned long dir)
+static __always_inline void __allow_user_access(void __user *to, const void __user *from,
+						unsigned long size, unsigned long dir)
 {
 	mtspr(SPRN_PID, current->thread.pid);
 	isync();
 }
 
-static inline void __prevent_user_access(unsigned long dir)
+static __always_inline void __prevent_user_access(unsigned long dir)
 {
 	mtspr(SPRN_PID, 0);
 	isync();
 }
 
-static inline unsigned long __prevent_user_access_return(void)
+static __always_inline unsigned long __prevent_user_access_return(void)
 {
 	unsigned long flags = mfspr(SPRN_PID);
 
@@ -84,7 +84,7 @@ static inline unsigned long __prevent_user_access_return(void)
 	return flags;
 }
 
-static inline void __restore_user_access(unsigned long flags)
+static __always_inline void __restore_user_access(unsigned long flags)
 {
 	if (flags) {
 		mtspr(SPRN_PID, current->thread.pid);
@@ -92,7 +92,7 @@ static inline void __restore_user_access(unsigned long flags)
 	}
 }
 
-static inline bool
+static __always_inline bool
 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
 	return !regs->kuap;
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index a2d255aa9627..fb725ec77926 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -386,7 +386,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
 		unsigned size);
 
-static __must_check inline bool user_access_begin(const void __user *ptr, size_t len)
+static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
 {
 	if (unlikely(!access_ok(ptr, len)))
 		return false;
@@ -401,7 +401,7 @@ static __must_check inline bool user_access_begin(const void __user *ptr, size_t
 #define user_access_save	prevent_user_access_return
 #define user_access_restore	restore_user_access
 
-static __must_check inline bool
+static __must_check __always_inline bool
 user_read_access_begin(const void __user *ptr, size_t len)
 {
 	if (unlikely(!access_ok(ptr, len)))
@@ -415,7 +415,7 @@ user_read_access_begin(const void __user *ptr, size_t len)
 #define user_read_access_begin	user_read_access_begin
 #define user_read_access_end		prevent_current_read_from_user
 
-static __must_check inline bool
+static __must_check __always_inline bool
 user_write_access_begin(const void __user *ptr, size_t len)
 {
 	if (unlikely(!access_ok(ptr, len)))
-- 
2.40.1



* [PATCH v3 06/15] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto"
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (4 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 05/15] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 07/15] objtool: Allow an architecture to disable objtool on ASM files Christophe Leroy
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

This reverts commit 1e688dd2a3d6759d416616ff07afc4bb836c4213.

That commit aimed at optimising the code around the generation of
WARN_ON/BUG_ON, but it leads to a lot of dead code being erroneously
generated by GCC.

That dead code becomes a problem when we start using objtool
validation, because objtool aborts validation with a warning as soon
as it detects unreachable code. This is because unreachable code might
be an indication that objtool doesn't properly decode the object text.

     text	   data	    bss	    dec	    hex	filename
  9551585	3627834	 224376	13403795	 cc8693	vmlinux.before
  9535281	3628358	 224376	13388015	 cc48ef	vmlinux.after

Once this change is reverted, in a standard configuration (pmac32 +
function tracer) the text is reduced by 16k, which is around 0.17%.

Taking into account that other problems are encountered with that
'asm goto' in WARN_ON(), including build failures, keeping that change
is not worth it, although it is primarily a compiler bug.

So revert it for now.
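
After the revert, the warning path in do_program_check() goes back to
stepping over the 4-byte trap instruction instead of looking up an
exception table entry (see the traps.c hunk below):

	if (!(regs->msr & MSR_PR) &&  /* not user-mode */
	    report_bug(bugaddr, regs) == BUG_TRAP_TYPE_WARN) {
		regs_add_return_ip(regs, 4);
		return;
	}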

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Naveen N Rao <naveen@kernel.org>
---
v2: Do not remove tools/testing/selftests/powerpc/primitives/asm/extable.h and leave EX_TABLE in asm/extable.h
---
 arch/powerpc/include/asm/book3s/64/kup.h |  2 +-
 arch/powerpc/include/asm/bug.h           | 67 ++++--------------------
 arch/powerpc/kernel/misc_32.S            |  2 +-
 arch/powerpc/kernel/traps.c              |  9 +---
 4 files changed, 15 insertions(+), 65 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index f33e064b9f5f..7c8cc0f096b1 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -90,7 +90,7 @@
 	/* Prevent access to userspace using any key values */
 	LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
 999:	tdne	\gpr1, \gpr2
-	EMIT_WARN_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
+	EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
 	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67)
 #endif
 .endm
diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index ef42adb44aa3..a565995fb742 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -4,14 +4,13 @@
 #ifdef __KERNEL__
 
 #include <asm/asm-compat.h>
-#include <asm/extable.h>
 
 #ifdef CONFIG_BUG
 
 #ifdef __ASSEMBLY__
 #include <asm/asm-offsets.h>
 #ifdef CONFIG_DEBUG_BUGVERBOSE
-.macro __EMIT_BUG_ENTRY addr,file,line,flags
+.macro EMIT_BUG_ENTRY addr,file,line,flags
 	 .section __bug_table,"aw"
 5001:	 .4byte \addr - .
 	 .4byte 5002f - .
@@ -23,7 +22,7 @@
 	 .previous
 .endm
 #else
-.macro __EMIT_BUG_ENTRY addr,file,line,flags
+.macro EMIT_BUG_ENTRY addr,file,line,flags
 	 .section __bug_table,"aw"
 5001:	 .4byte \addr - .
 	 .short \flags
@@ -32,18 +31,6 @@
 .endm
 #endif /* verbose */
 
-.macro EMIT_WARN_ENTRY addr,file,line,flags
-	EX_TABLE(\addr,\addr+4)
-	__EMIT_BUG_ENTRY \addr,\file,\line,\flags
-.endm
-
-.macro EMIT_BUG_ENTRY addr,file,line,flags
-	.if \flags & 1 /* BUGFLAG_WARNING */
-	.err /* Use EMIT_WARN_ENTRY for warnings */
-	.endif
-	__EMIT_BUG_ENTRY \addr,\file,\line,\flags
-.endm
-
 #else /* !__ASSEMBLY__ */
 /* _EMIT_BUG_ENTRY expects args %0,%1,%2,%3 to be FILE, LINE, flags and
    sizeof(struct bug_entry), respectively */
@@ -73,16 +60,6 @@
 		  "i" (sizeof(struct bug_entry)),	\
 		  ##__VA_ARGS__)
 
-#define WARN_ENTRY(insn, flags, label, ...)		\
-	asm_volatile_goto(				\
-		"1:	" insn "\n"			\
-		EX_TABLE(1b, %l[label])			\
-		_EMIT_BUG_ENTRY				\
-		: : "i" (__FILE__), "i" (__LINE__),	\
-		  "i" (flags),				\
-		  "i" (sizeof(struct bug_entry)),	\
-		  ##__VA_ARGS__ : : label)
-
 /*
  * BUG_ON() and WARN_ON() do their best to cooperate with compile-time
  * optimisations. However depending on the complexity of the condition
@@ -95,16 +72,7 @@
 } while (0)
 #define HAVE_ARCH_BUG
 
-#define __WARN_FLAGS(flags) do {				\
-	__label__ __label_warn_on;				\
-								\
-	WARN_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags), __label_warn_on); \
-	barrier_before_unreachable();				\
-	__builtin_unreachable();				\
-								\
-__label_warn_on:						\
-	break;							\
-} while (0)
+#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags))
 
 #ifdef CONFIG_PPC64
 #define BUG_ON(x) do {						\
@@ -117,25 +85,15 @@ __label_warn_on:						\
 } while (0)
 
 #define WARN_ON(x) ({						\
-	bool __ret_warn_on = false;				\
-	do {							\
-		if (__builtin_constant_p((x))) {		\
-			if (!(x)) 				\
-				break; 				\
+	int __ret_warn_on = !!(x);				\
+	if (__builtin_constant_p(__ret_warn_on)) {		\
+		if (__ret_warn_on)				\
 			__WARN();				\
-			__ret_warn_on = true;			\
-		} else {					\
-			__label__ __label_warn_on;		\
-								\
-			WARN_ENTRY(PPC_TLNEI " %4, 0",		\
-				   BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN),	\
-				   __label_warn_on,		\
-				   "r" ((__force long)(x)));	\
-			break;					\
-__label_warn_on:						\
-			__ret_warn_on = true;			\
-		}						\
-	} while (0);						\
+	} else {						\
+		BUG_ENTRY(PPC_TLNEI " %4, 0",			\
+			  BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN),	\
+			  "r" (__ret_warn_on));	\
+	}							\
 	unlikely(__ret_warn_on);				\
 })
 
@@ -148,11 +106,8 @@ __label_warn_on:						\
 #ifdef __ASSEMBLY__
 .macro EMIT_BUG_ENTRY addr,file,line,flags
 .endm
-.macro EMIT_WARN_ENTRY addr,file,line,flags
-.endm
 #else /* !__ASSEMBLY__ */
 #define _EMIT_BUG_ENTRY
-#define _EMIT_WARN_ENTRY
 #endif
 #endif /* CONFIG_BUG */
 
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index daf8f87d2372..fd11ec42df89 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -237,7 +237,7 @@ _GLOBAL(copy_page)
 	addi	r3,r3,-4
 
 0:	twnei	r5, 0	/* WARN if r3 is not cache aligned */
-	EMIT_WARN_ENTRY 0b,__FILE__,__LINE__, BUGFLAG_WARNING
+	EMIT_BUG_ENTRY 0b,__FILE__,__LINE__, BUGFLAG_WARNING
 
 	addi	r4,r4,-4
 
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index e59ec6d32d37..7ef147e2a20d 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1508,13 +1508,8 @@ static void do_program_check(struct pt_regs *regs)
 
 		if (!(regs->msr & MSR_PR) &&  /* not user-mode */
 		    report_bug(bugaddr, regs) == BUG_TRAP_TYPE_WARN) {
-			const struct exception_table_entry *entry;
-
-			entry = search_exception_tables(bugaddr);
-			if (entry) {
-				regs_set_return_ip(regs, extable_fixup(entry) + regs->nip - bugaddr);
-				return;
-			}
+			regs_add_return_ip(regs, 4);
+			return;
 		}
 
 		if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE) && user_mode(regs)) {
-- 
2.40.1



* [PATCH v3 07/15] objtool: Allow an architecture to disable objtool on ASM files
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (5 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 06/15] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto" Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 08/15] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc Christophe Leroy
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

Supporting objtool on ASM files requires quite some effort.

Features like UACCESS validation don't require validation of ASM files.

In order to allow architectures to enable objtool validation
without spending unnecessary effort on cleaning up ASM files,
provide an option to disable objtool validation on ASM files.
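
An architecture can then opt out with a Kconfig select, roughly like
this (hypothetical hunk; for powerpc the actual select is expected to
come later in the series):

	config PPC
		select ARCH_OBJTOOL_SKIP_ASM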

Suggested-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/Kconfig           | 5 +++++
 scripts/Makefile.build | 4 ++++
 2 files changed, 9 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 205fd23e0cad..dd66b884bcfb 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1091,6 +1091,11 @@ config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 config HAVE_OBJTOOL
 	bool
 
+config ARCH_OBJTOOL_SKIP_ASM
+	bool
+	help
+	  Architecture doesn't support objtool on ASM files
+
 config HAVE_JUMP_LABEL_HACK
 	bool
 
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 9f94fc83f086..878027cf4faf 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -359,7 +359,11 @@ $(obj)/%.s: $(src)/%.S FORCE
 	$(call if_changed_dep,cpp_s_S)
 
 quiet_cmd_as_o_S = AS $(quiet_modtag)  $@
+ifndef CONFIG_ARCH_OBJTOOL_SKIP_ASM
       cmd_as_o_S = $(CC) $(a_flags) -c -o $@ $< $(cmd_objtool)
+else
+      cmd_as_o_S = $(CC) $(a_flags) -c -o $@ $<
+endif
 
 ifdef CONFIG_ASM_MODVERSIONS
 
-- 
2.40.1



* [PATCH v3 08/15] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (6 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 07/15] objtool: Allow an architecture to disable objtool on ASM files Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 09/15] objtool: Add INSN_RETURN_CONDITIONAL Christophe Leroy
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

	struct jump_entry {
		s32 code;
		s32 target;
		long key;
	};

It means that the size of the third member depends on whether we are
building a 32-bit or a 64-bit kernel.

Therefore JUMP_ENTRY_SIZE must depend on elf_class_addrsize(elf).

To allow that, the entries[] table must be initialised at runtime. This
is easily done by moving it into its only user, special_get_alts().
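
For the record, the resulting entry sizes:

	/* s32 code + s32 target + native-sized long key:
	 *   PPC32: 4 + 4 + 4 = 12 bytes
	 *   PPC64: 4 + 4 + 8 = 16 bytes
	 * hence the "8 + address size" expression used below.
	 */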

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 .../arch/powerpc/include/arch/special.h       |  2 +-
 tools/objtool/special.c                       | 55 +++++++++----------
 2 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/tools/objtool/arch/powerpc/include/arch/special.h b/tools/objtool/arch/powerpc/include/arch/special.h
index ffef9ada7133..b17802dcf436 100644
--- a/tools/objtool/arch/powerpc/include/arch/special.h
+++ b/tools/objtool/arch/powerpc/include/arch/special.h
@@ -6,7 +6,7 @@
 #define EX_ORIG_OFFSET 0
 #define EX_NEW_OFFSET 4
 
-#define JUMP_ENTRY_SIZE 16
+#define JUMP_ENTRY_SIZE (8 + elf_addr_size(elf)) /* 12 on PPC32, 16 on PPC64 */
 #define JUMP_ORIG_OFFSET 0
 #define JUMP_NEW_OFFSET 4
 #define JUMP_KEY_OFFSET 8
diff --git a/tools/objtool/special.c b/tools/objtool/special.c
index 91b1950f5bd8..b3f07e8beb85 100644
--- a/tools/objtool/special.c
+++ b/tools/objtool/special.c
@@ -26,34 +26,6 @@ struct special_entry {
 	unsigned char key; /* jump_label key */
 };
 
-static const struct special_entry entries[] = {
-	{
-		.sec = ".altinstructions",
-		.group = true,
-		.size = ALT_ENTRY_SIZE,
-		.orig = ALT_ORIG_OFFSET,
-		.orig_len = ALT_ORIG_LEN_OFFSET,
-		.new = ALT_NEW_OFFSET,
-		.new_len = ALT_NEW_LEN_OFFSET,
-		.feature = ALT_FEATURE_OFFSET,
-	},
-	{
-		.sec = "__jump_table",
-		.jump_or_nop = true,
-		.size = JUMP_ENTRY_SIZE,
-		.orig = JUMP_ORIG_OFFSET,
-		.new = JUMP_NEW_OFFSET,
-		.key = JUMP_KEY_OFFSET,
-	},
-	{
-		.sec = "__ex_table",
-		.size = EX_ENTRY_SIZE,
-		.orig = EX_ORIG_OFFSET,
-		.new = EX_NEW_OFFSET,
-	},
-	{},
-};
-
 void __weak arch_handle_alternative(unsigned short feature, struct special_alt *alt)
 {
 }
@@ -144,6 +116,33 @@ int special_get_alts(struct elf *elf, struct list_head *alts)
 	unsigned int nr_entries;
 	struct special_alt *alt;
 	int idx, ret;
+	const struct special_entry entries[] = {
+		{
+			.sec = ".altinstructions",
+			.group = true,
+			.size = ALT_ENTRY_SIZE,
+			.orig = ALT_ORIG_OFFSET,
+			.orig_len = ALT_ORIG_LEN_OFFSET,
+			.new = ALT_NEW_OFFSET,
+			.new_len = ALT_NEW_LEN_OFFSET,
+			.feature = ALT_FEATURE_OFFSET,
+		},
+		{
+			.sec = "__jump_table",
+			.jump_or_nop = true,
+			.size = JUMP_ENTRY_SIZE,
+			.orig = JUMP_ORIG_OFFSET,
+			.new = JUMP_NEW_OFFSET,
+			.key = JUMP_KEY_OFFSET,
+		},
+		{
+			.sec = "__ex_table",
+			.size = EX_ENTRY_SIZE,
+			.orig = EX_ORIG_OFFSET,
+			.new = EX_NEW_OFFSET,
+		},
+		{},
+	};
 
 	INIT_LIST_HEAD(alts);
 
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 09/15] objtool: Add INSN_RETURN_CONDITIONAL
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (7 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 08/15] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 10/15] objtool: Add support for relative switch tables Christophe Leroy
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

Most functions have an unconditional return at the end, like
this one:

	00000000 <is_exec_fault>:
	   0:	81 22 04 d0 	lwz     r9,1232(r2)
	   4:	38 60 00 00 	li      r3,0
	   8:	2c 09 00 00 	cmpwi   r9,0
	   c:	4d 82 00 20 	beqlr		<== Conditional return
	  10:	80 69 00 a0 	lwz     r3,160(r9)
	  14:	54 63 00 36 	clrrwi  r3,r3,4
	  18:	68 63 04 00 	xori    r3,r3,1024
	  1c:	7c 63 00 34 	cntlzw  r3,r3
	  20:	54 63 d9 7e 	srwi    r3,r3,5
	  24:	4e 80 00 20 	blr		<== Unconditional return

But other functions, like the one below, only have
conditional returns:

	00000028 <pte_update.isra.0>:
	  28:	81 25 00 00 	lwz     r9,0(r5)
	  2c:	2c 08 00 00 	cmpwi   r8,0
	  30:	7d 29 30 78 	andc    r9,r9,r6
	  34:	7d 27 3b 78 	or      r7,r9,r7
	  38:	54 84 65 3a 	rlwinm  r4,r4,12,20,29
	  3c:	81 23 00 18 	lwz     r9,24(r3)
	  40:	41 82 00 58 	beq     98 <pte_update.isra.0+0x70>
	  44:	7d 29 20 2e 	lwzx    r9,r9,r4
	  48:	55 29 07 3a 	rlwinm  r9,r9,0,28,29
	  4c:	2c 09 00 0c 	cmpwi   r9,12
	  50:	41 82 00 08 	beq     58 <pte_update.isra.0+0x30>
	  54:	39 00 00 80 	li      r8,128
	  58:	2c 08 00 01 	cmpwi   r8,1
	  5c:	90 e5 00 00 	stw     r7,0(r5)
	  60:	4d a2 00 20 	beqlr+		<== Conditional return
	  64:	7c e9 3b 78 	mr      r9,r7
	  68:	39 40 00 00 	li      r10,0
	  6c:	39 4a 00 04 	addi    r10,r10,4
	  70:	7c 0a 40 00 	cmpw    r10,r8
	  74:	91 25 00 04 	stw     r9,4(r5)
	  78:	91 25 00 08 	stw     r9,8(r5)
	  7c:	38 a5 00 10 	addi    r5,r5,16
	  80:	91 25 ff fc 	stw     r9,-4(r5)
	  84:	4c 80 00 20 	bgelr		<== Conditional return
	  88:	55 49 60 26 	slwi    r9,r10,12
	  8c:	7d 29 3a 14 	add     r9,r9,r7
	  90:	91 25 00 00 	stw     r9,0(r5)
	  94:	4b ff ff d8 	b       6c <pte_update.isra.0+0x44>
	  98:	39 00 00 04 	li      r8,4
	  9c:	4b ff ff bc 	b       58 <pte_update.isra.0+0x30>

If conditional returns are decoded as INSN_OTHER, objtool considers
that the second function never returns.

If conditional returns are decoded as INSN_RETURN, objtool considers
the code after the conditional return to be dead.

To overcome this situation, introduce INSN_RETURN_CONDITIONAL, which
is taken as confirmation that a function is not noreturn, while the
code following the conditional return is still seen as reachable.
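
As an illustration (editor's sketch, simplified from the powerpc decode
added later in this series), the two flavours of return can be
classified as follows:

	/* Editor's sketch: classify a PowerPC branch-to-LR instruction. */
	static enum insn_type classify_blr(unsigned int ins)
	{
		unsigned int opcode = ins >> 26;
		unsigned int xop = (ins >> 1) & 0x3ff;
		unsigned int bo = (ins >> 21) & 0x1f;
		unsigned int bi = (ins >> 16) & 0x1f;

		if (opcode == 19 && xop == 16 && bo == 20 && bi == 0)
			return INSN_RETURN;		/* blr: unconditional return */
		if (opcode == 19 && xop == 16)
			return INSN_RETURN_CONDITIONAL;	/* bclr: e.g. beqlr, bgelr */
		return INSN_OTHER;
	}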

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 tools/objtool/check.c                | 2 +-
 tools/objtool/include/objtool/arch.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 8936a05f0e5a..ae4a1608d97e 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -219,7 +219,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
 	func_for_each_insn(file, func, insn) {
 		empty = false;
 
-		if (insn->type == INSN_RETURN)
+		if (insn->type == INSN_RETURN || insn->type == INSN_RETURN_CONDITIONAL)
 			return false;
 	}
 
diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
index 2b6d2ce4f9a5..84ba75112934 100644
--- a/tools/objtool/include/objtool/arch.h
+++ b/tools/objtool/include/objtool/arch.h
@@ -19,6 +19,7 @@ enum insn_type {
 	INSN_CALL,
 	INSN_CALL_DYNAMIC,
 	INSN_RETURN,
+	INSN_RETURN_CONDITIONAL,
 	INSN_CONTEXT_SWITCH,
 	INSN_BUG,
 	INSN_NOP,
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 10/15] objtool: Add support for relative switch tables
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (8 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 09/15] objtool: Add INSN_RETURN_CONDITIONAL Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 11/15] objtool: Remove too strict constraint in jump table search Christophe Leroy
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

On powerpc, switch tables are relative, that means the address of the
table is added to the value of the entry in order to get the target
address (r10 is the table address, r4 the index in the table):

      lis     r10,0		<== Load r10 with upper part of .rodata address
          R_PPC_ADDR16_HA     .rodata
      addi    r10,r10,0		<== Add lower part of .rodata address
          R_PPC_ADDR16_LO     .rodata
      lwzx    r8,r10,r4		<== Read table entry at r10 + r4 into r8
      add     r10,r8,r10	<== Add table address to read value
      mtctr   r10		<== Save calculated address in CTR
      bctr			<== Branch to address in CTR

      RELOCATION RECORDS FOR [.rodata]:
      OFFSET   TYPE              VALUE
      00000000 R_PPC_REL32       .text+0x0000054c
      00000004 R_PPC_REL32       .text+0x000003d0
	...

But for C jump tables (.rodata..c_jump_table) it is not the case:
they contain the target address directly:

      lis     r28,0		<== Load r28 with upper .rodata..c_jump_table
          R_PPC_ADDR16_HA   .rodata..c_jump_table
      addi    r28,r28,0		<== Add lower part of .rodata..c_jump_table
          R_PPC_ADDR16_LO   .rodata..c_jump_table
      lwzx    r10,r28,r10	<== Read table entry at r10 + r28 into r10
      mtctr   r10		<== Save read value in CTR
      bctr			<== Branch to address in CTR

      RELOCATION RECORDS FOR [.rodata..c_jump_table]:
      OFFSET   TYPE              VALUE
      00000000 R_PPC_ADDR32      .text+0x00000dc8
      00000004 R_PPC_ADDR32      .text+0x00000dc8
	...

Add support to objtool for relative tables, based on the relocation
type, which is R_PPC_REL32 for switch tables and R_PPC_ADDR32 for
C jump tables. Do the comparison using R_ABS32 and R_ABS64, which are
architecture agnostic.

And use the correct size for 'long' instead of hard coding a size of '8'.
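
As an illustration (editor's sketch mirroring the hunk below), the
destination offset of a table entry is derived as follows:

	/* Editor's sketch: absolute entries (C jump tables) carry the target
	 * offset directly in the addend, while relative entries must be
	 * rebased on the table address.
	 */
	static unsigned long table_entry_dest(struct reloc *table, struct reloc *reloc)
	{
		if (reloc_type(reloc) == R_ABS32 || reloc_type(reloc) == R_ABS64)
			return reloc_addend(reloc);

		return reloc_addend(reloc) + reloc_offset(table) - reloc_offset(reloc);
	}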

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 tools/objtool/check.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index ae4a1608d97e..05d789e6d3b6 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1988,7 +1988,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
 	struct symbol *pfunc = insn_func(insn)->pfunc;
 	struct reloc *table = insn_jump_table(insn);
 	struct instruction *dest_insn;
-	unsigned int prev_offset = 0;
+	unsigned int offset, prev_offset = 0;
 	struct reloc *reloc = table;
 	struct alternative *alt;
 
@@ -2003,7 +2003,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
 			break;
 
 		/* Make sure the table entries are consecutive: */
-		if (prev_offset && reloc_offset(reloc) != prev_offset + 8)
+		if (prev_offset && reloc_offset(reloc) != prev_offset + elf_addr_size(file->elf))
 			break;
 
 		/* Detect function pointers from contiguous objects: */
@@ -2011,7 +2011,12 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
 		    reloc_addend(reloc) == pfunc->offset)
 			break;
 
-		dest_insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
+		if (reloc_type(reloc) == R_ABS32 || reloc_type(reloc) == R_ABS64)
+			offset = reloc_addend(reloc);
+		else
+			offset = reloc_addend(reloc) + reloc_offset(table) - reloc_offset(reloc);
+
+		dest_insn = find_insn(file, reloc->sym->sec, offset);
 		if (!dest_insn)
 			break;
 
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 11/15] objtool: Remove too strict constraint in jump table search
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (9 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 10/15] objtool: Add support for relative switch tables Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 12/15] objtool: Add support for more complex UACCESS control Christophe Leroy
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

In code there is often a pattern like:

	load jump table address
	do some test
	conditional jump to label1:
	do something
	unconditional jump to label2:
label1:
	do something else
	read jump table
	dynamic jump
label2:
	do other job here ....

find_jump_table() contains a constraint that stops the backward search
for the table address load when a jump is found in between.

Remove that constraint.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 tools/objtool/check.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 05d789e6d3b6..c9f0fd31d146 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -2067,13 +2067,6 @@ static struct reloc *find_jump_table(struct objtool_file *file,
 		if (insn != orig_insn && insn->type == INSN_JUMP_DYNAMIC)
 			break;
 
-		/* allow small jumps within the range */
-		if (insn->type == INSN_JUMP_UNCONDITIONAL &&
-		    insn->jump_dest &&
-		    (insn->jump_dest->offset <= insn->offset ||
-		     insn->jump_dest->offset > orig_insn->offset))
-		    break;
-
 		table_reloc = arch_find_switch_table(file, insn);
 		if (!table_reloc)
 			continue;
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 12/15] objtool: Add support for more complex UACCESS control
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (10 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 11/15] objtool: Remove too strict constraint in jump table search Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 13/15] objtool: Prepare noreturns.h for more architectures Christophe Leroy
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

On x86, UACCESS is controlled by two instructions: STAC and CLAC.
The STAC instruction enables UACCESS while CLAC disables it.
This makes it simple for objtool to locate UACCESS enables and
disables.

But on powerpc it is a bit more complex: the same instruction is
used both for enabling and for disabling UACCESS, and that
instruction can also be used for many other things. Relying
exclusively on instruction decoding would be too complex.

To help objtool, mark such instructions in .discard.uaccess_begin
and .discard.uaccess_end sections, on the same principle as for
reachable/unreachable instructions. And add ASM_UACCESS_BEGIN
and ASM_UACCESS_END macros to be used in inline assembly code to
annotate UACCESS enable and UACCESS disable instructions.
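
As an illustration (editor's sketch, mirroring how the last patch of
this series uses the macro for the 8xx and booke KUAP code), a UACCESS
enabling mtspr is annotated like this:

	/* Editor's sketch of the expected usage in inline assembly. */
	static __always_inline void mtspr_uaccess_begin(int rn, unsigned long val)
	{
		asm(ASM_UACCESS_BEGIN "mtspr %0, %1\n\t" : : "i"(rn), "r"(val) : "memory");
	}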

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 include/linux/objtool.h | 14 ++++++++++++++
 tools/objtool/check.c   | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+)

diff --git a/include/linux/objtool.h b/include/linux/objtool.h
index 03f82c2c2ebf..d8fde4158a40 100644
--- a/include/linux/objtool.h
+++ b/include/linux/objtool.h
@@ -57,6 +57,18 @@
 	".long 998b - .\n\t"						\
 	".popsection\n\t"
 
+#define ASM_UACCESS_BEGIN						\
+	"998:\n\t"							\
+	".pushsection .discard.uaccess_begin\n\t"			\
+	".long 998b - .\n\t"						\
+	".popsection\n\t"
+
+#define ASM_UACCESS_END							\
+	"998:\n\t"							\
+	".pushsection .discard.uaccess_end\n\t"				\
+	".long 998b - .\n\t"						\
+	".popsection\n\t"
+
 #else /* __ASSEMBLY__ */
 
 /*
@@ -156,6 +168,8 @@
 #define STACK_FRAME_NON_STANDARD_FP(func)
 #define ANNOTATE_NOENDBR
 #define ASM_REACHABLE
+#define ASM_UACCESS_BEGIN
+#define ASM_UACCESS_END
 #else
 #define ANNOTATE_INTRA_FUNCTION_CALL
 .macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 signal=0
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index c9f0fd31d146..6daa13c25183 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1052,6 +1052,38 @@ static void add_ignores(struct objtool_file *file)
 	}
 }
 
+static void __add_uaccess(struct objtool_file *file, const char *name,
+			  int type, const char *action)
+{
+	struct section *rsec;
+	struct reloc *reloc;
+	struct instruction *insn;
+
+	rsec = find_section_by_name(file->elf, name);
+	if (!rsec)
+		return;
+
+	for_each_reloc(rsec, reloc) {
+		if (reloc->sym->type != STT_SECTION) {
+			WARN("unexpected relocation symbol type in %s: ", rsec->name);
+			continue;
+		}
+		insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
+		if (!insn) {
+			WARN("can't find UACCESS %s insn at %s+0x%" PRIx64,
+			     action, reloc->sym->sec->name, reloc_addend(reloc));
+			continue;
+		}
+		insn->type = type;
+	}
+}
+
+static void add_uaccess(struct objtool_file *file)
+{
+	__add_uaccess(file, ".rela.discard.uaccess_begin", INSN_STAC, "enable");
+	__add_uaccess(file, ".rela.discard.uaccess_end", INSN_CLAC, "disable");
+}
+
 /*
  * This is a whitelist of functions that is allowed to be called with AC set.
  * The list is meant to be minimal and only contains compiler instrumentation
@@ -2558,6 +2590,7 @@ static int decode_sections(struct objtool_file *file)
 		return ret;
 
 	add_ignores(file);
+	add_uaccess(file);
 	add_uaccess_safe(file);
 
 	ret = add_ignore_alternatives(file);
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 13/15] objtool: Prepare noreturns.h for more architectures
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (11 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 12/15] objtool: Add support for more complex UACCESS control Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 14/15] powerpc/bug: Annotate reachable after warning trap Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 15/15] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

noreturns.h is a mix of x86-specific functions and more generic
core functions.

In preparation for the inclusion of powerpc, split the x86 functions
out of noreturns.h into arch/noreturns.h.
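
As a reminder (editor's sketch, simplified and assuming the usual
X-macro consumption of noreturns.h by objtool), the list ends up as a
string table used by the dead-end detection:

	/* Editor's sketch: each NORETURN(func) entry becomes a string. */
	static const char * const global_noreturns[] = {
	#define NORETURN(func) #func,
	#include "noreturns.h"
	};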

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 .../objtool/arch/x86/include/arch/noreturns.h | 20 +++++++++++++++++++
 tools/objtool/noreturns.h                     | 14 ++-----------
 2 files changed, 22 insertions(+), 12 deletions(-)
 create mode 100644 tools/objtool/arch/x86/include/arch/noreturns.h

diff --git a/tools/objtool/arch/x86/include/arch/noreturns.h b/tools/objtool/arch/x86/include/arch/noreturns.h
new file mode 100644
index 000000000000..a4262aff3917
--- /dev/null
+++ b/tools/objtool/arch/x86/include/arch/noreturns.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This is a (sorted!) list of all known __noreturn functions in arch/x86.
+ * It's needed for objtool to properly reverse-engineer the control flow graph.
+ *
+ * Yes, this is unfortunate.  A better solution is in the works.
+ */
+NORETURN(cpu_bringup_and_idle)
+NORETURN(ex_handler_msr_mce)
+NORETURN(hlt_play_dead)
+NORETURN(hv_ghcb_terminate)
+NORETURN(machine_real_restart)
+NORETURN(rewind_stack_and_make_dead)
+NORETURN(sev_es_terminate)
+NORETURN(snp_abort)
+NORETURN(x86_64_start_kernel)
+NORETURN(x86_64_start_reservations)
+NORETURN(xen_cpu_bringup_again)
+NORETURN(xen_start_kernel)
diff --git a/tools/objtool/noreturns.h b/tools/objtool/noreturns.h
index 1514e84d5cc4..2df20addd746 100644
--- a/tools/objtool/noreturns.h
+++ b/tools/objtool/noreturns.h
@@ -1,5 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include <arch/noreturns.h>
+
 /*
  * This is a (sorted!) list of all known __noreturn functions in the kernel.
  * It's needed for objtool to properly reverse-engineer the control flow graph.
@@ -15,32 +17,20 @@ NORETURN(__ubsan_handle_builtin_unreachable)
 NORETURN(arch_call_rest_init)
 NORETURN(arch_cpu_idle_dead)
 NORETURN(btrfs_assertfail)
-NORETURN(cpu_bringup_and_idle)
 NORETURN(cpu_startup_entry)
 NORETURN(do_exit)
 NORETURN(do_group_exit)
 NORETURN(do_task_dead)
-NORETURN(ex_handler_msr_mce)
 NORETURN(fortify_panic)
-NORETURN(hlt_play_dead)
-NORETURN(hv_ghcb_terminate)
 NORETURN(kthread_complete_and_exit)
 NORETURN(kthread_exit)
 NORETURN(kunit_try_catch_throw)
-NORETURN(machine_real_restart)
 NORETURN(make_task_dead)
 NORETURN(mpt_halt_firmware)
 NORETURN(nmi_panic_self_stop)
 NORETURN(panic)
 NORETURN(panic_smp_self_stop)
 NORETURN(rest_init)
-NORETURN(rewind_stack_and_make_dead)
-NORETURN(sev_es_terminate)
-NORETURN(snp_abort)
 NORETURN(start_kernel)
 NORETURN(stop_this_cpu)
 NORETURN(usercopy_abort)
-NORETURN(x86_64_start_kernel)
-NORETURN(x86_64_start_reservations)
-NORETURN(xen_cpu_bringup_again)
-NORETURN(xen_start_kernel)
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 14/15] powerpc/bug: Annotate reachable after warning trap
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (12 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 13/15] objtool: Prepare noreturns.h for more architectures Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  2023-06-23 17:05 ` [PATCH v3 15/15] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

This commit is copied from commit bfb1a7c91fb7 ("x86/bug: Merge
annotate_reachable() into _BUG_FLAGS() asm")

'twi 31,0,0' is a BUG instruction, which is by default a dead end.

But the same instruction is used for WARNINGs, where execution
resumes with the following instruction. Mark it reachable so
that objtool knows that it is not a dead end in that case.

Also replace the unreachable() annotation with __builtin_unreachable()
since objtool already knows that a BUG instruction is a dead end.
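
As an illustration (editor's example, not from the patch) of why the
annotation matters:

	/* Editor's example: a warning is not a dead end, execution continues
	 * past the trap, so objtool must keep validating the code after it.
	 */
	static int example_check(int x)
	{
		if (WARN_ON(x < 0))
			return -EINVAL;	/* reachable: the warning does not stop execution */
		return 0;
	}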

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/include/asm/bug.h | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index a565995fb742..5550ebffb146 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -4,6 +4,7 @@
 #ifdef __KERNEL__
 
 #include <asm/asm-compat.h>
+#include <linux/objtool.h>
 
 #ifdef CONFIG_BUG
 
@@ -51,10 +52,11 @@
 	".previous\n"
 #endif
 
-#define BUG_ENTRY(insn, flags, ...)			\
+#define BUG_ENTRY(insn, flags, extra, ...)		\
 	__asm__ __volatile__(				\
 		"1:	" insn "\n"			\
 		_EMIT_BUG_ENTRY				\
+		extra					\
 		: : "i" (__FILE__), "i" (__LINE__),	\
 		  "i" (flags),				\
 		  "i" (sizeof(struct bug_entry)),	\
@@ -67,12 +69,12 @@
  */
 
 #define BUG() do {						\
-	BUG_ENTRY("twi 31, 0, 0", 0);				\
-	unreachable();						\
+	BUG_ENTRY("twi 31, 0, 0", 0, "");			\
+	__builtin_unreachable();				\
 } while (0)
 #define HAVE_ARCH_BUG
 
-#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags))
+#define __WARN_FLAGS(flags) BUG_ENTRY("twi 31, 0, 0", BUGFLAG_WARNING | (flags), ASM_REACHABLE)
 
 #ifdef CONFIG_PPC64
 #define BUG_ON(x) do {						\
@@ -80,7 +82,7 @@
 		if (x)						\
 			BUG();					\
 	} else {						\
-		BUG_ENTRY(PPC_TLNEI " %4, 0", 0, "r" ((__force long)(x)));	\
+		BUG_ENTRY(PPC_TLNEI " %4, 0", 0, "", "r" ((__force long)(x)));	\
 	}							\
 } while (0)
 
@@ -92,7 +94,7 @@
 	} else {						\
 		BUG_ENTRY(PPC_TLNEI " %4, 0",			\
 			  BUGFLAG_WARNING | BUGFLAG_TAINT(TAINT_WARN),	\
-			  "r" (__ret_warn_on));	\
+			  "", "r" (__ret_warn_on));	\
 	}							\
 	unlikely(__ret_warn_on);				\
 })
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v3 15/15] powerpc: Implement UACCESS validation on PPC32
  2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
                   ` (13 preceding siblings ...)
  2023-06-23 17:05 ` [PATCH v3 14/15] powerpc/bug: Annotate reachable after warning trap Christophe Leroy
@ 2023-06-23 17:05 ` Christophe Leroy
  14 siblings, 0 replies; 16+ messages in thread
From: Christophe Leroy @ 2023-06-23 17:05 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Josh Poimboeuf, Peter Zijlstra,
	Sathvika Vasireddy, Naveen N Rao
  Cc: linuxppc-dev, linux-kernel

In order to implement UACCESS validation, objtool support
for powerpc needs to be enhanced to decode more instructions.

It also requires the implementation of switch table finding.
On PPC32 it is similar to x86: switch tables are anonymous in .rodata,
the difference being that each entry holds a value relative to the
table address instead of an absolute address.

Then come the UACCESS enabling/disabling instructions. On booke and
8xx it is done with an mtspr instruction. For 8xx that's in SPRN_MD_AP,
for booke that's in SPRN_PID. Annotate those instructions.

For book3s/32 that's a bit more complex and left aside for the moment.

No work has been done for ASM files; they are not used for UACCESS,
so for the moment just tell objtool to ignore ASM files.

For relocatable code, the .got2 relocation preceding each global
function needs to be marked as ignored because some versions of GCC
do this:

     120:	00 00 00 00	.long 0x0
			120: R_PPC_REL32	.got2+0x7ff0

00000124 <tohex>:
     124:	94 21 ff f0 	stwu    r1,-16(r1)
     128:	7c 08 02 a6 	mflr    r0
     12c:	42 9f 00 05 	bcl     20,4*cr7+so,130 <tohex+0xc>
     130:	39 00 00 00 	li      r8,0
     134:	39 20 00 08 	li      r9,8
     138:	93 c1 00 08 	stw     r30,8(r1)
     13c:	7f c8 02 a6 	mflr    r30
     140:	90 01 00 14 	stw     r0,20(r1)
     144:	80 1e ff f0 	lwz     r0,-16(r30)
     148:	7f c0 f2 14 	add     r30,r0,r30
     14c:	81 5e 80 00 	lwz     r10,-32768(r30)
     150:	80 fe 80 04 	lwz     r7,-32764(r30)

While other versions do that:

00000120 <tohex>:
     120:	94 21 ff f0 	stwu    r1,-16(r1)
     124:	7c 08 02 a6 	mflr    r0
     128:	42 9f 00 05 	bcl     20,4*cr7+so,12c <tohex+0xc>
     12c:	39 00 00 00 	li      r8,0
     130:	39 20 00 08 	li      r9,8
     134:	93 c1 00 08 	stw     r30,8(r1)
     138:	7f c8 02 a6 	mflr    r30
     13c:	3f de 00 00 	addis   r30,r30,0
			13e: R_PPC_REL16_HA	.got2+0x8012
     140:	90 01 00 14 	stw     r0,20(r1)
     144:	3b de 00 00 	addi    r30,r30,0
			146: R_PPC_REL16_LO	.got2+0x801a
     148:	81 5e 80 00 	lwz     r10,-32768(r30)
     14c:	80 fe 80 04 	lwz     r7,-32764(r30)

Also declare longjmp() and start_secondary_resume() as global noreturn
functions, and declare __copy_tofrom_user() and __arch_clear_user()
as UACCESS safe.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/Kconfig                          |  2 +
 arch/powerpc/include/asm/kup.h                | 12 +++
 arch/powerpc/include/asm/nohash/32/kup-8xx.h  |  8 +-
 arch/powerpc/include/asm/nohash/kup-booke.h   |  8 +-
 arch/powerpc/kexec/core_32.c                  |  4 +-
 tools/objtool/arch/powerpc/decode.c           | 83 +++++++++++++++++--
 .../arch/powerpc/include/arch/noreturns.h     | 11 +++
 tools/objtool/arch/powerpc/special.c          | 34 +++++++-
 tools/objtool/check.c                         |  6 +-
 9 files changed, 152 insertions(+), 16 deletions(-)
 create mode 100644 tools/objtool/arch/powerpc/include/arch/noreturns.h

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8b955bc7b59f..8b613e520143 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -160,6 +160,7 @@ config PPC
 	select ARCH_KEEP_MEMBLOCK
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
+	select ARCH_OBJTOOL_SKIP_ASM
 	select ARCH_OPTIONAL_KERNEL_RWX		if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
 	select ARCH_SPLIT_ARG64			if PPC32
@@ -258,6 +259,7 @@ config PPC
 	select HAVE_OPTPROBES
 	select HAVE_OBJTOOL			if PPC32 || MPROFILE_KERNEL
 	select HAVE_OBJTOOL_MCOUNT		if HAVE_OBJTOOL
+	select HAVE_UACCESS_VALIDATION		if HAVE_OBJTOOL && PPC_KUAP && PPC32
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_EVENTS_NMI		if PPC64
 	select HAVE_PERF_REGS
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 132f1c7e1064..9f824cb93d8a 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -10,6 +10,8 @@
 #include <linux/types.h>
 
 static __always_inline bool kuap_is_disabled(void);
+static __always_inline void mtspr_uaccess_begin(int rn, unsigned long val);
+static __always_inline void mtspr_uaccess_end(int rn, unsigned long val);
 #endif
 
 #ifdef CONFIG_PPC_BOOK3S_64
@@ -222,6 +224,16 @@ static __always_inline void prevent_current_write_to_user(void)
 	prevent_user_access(KUAP_WRITE);
 }
 
+static __always_inline void mtspr_uaccess_begin(int rn, unsigned long val)
+{
+	asm(ASM_UACCESS_BEGIN "mtspr %0, %1\n\t" : : "i"(rn), "r"(val) : "memory");
+}
+
+static __always_inline void mtspr_uaccess_end(int rn, unsigned long val)
+{
+	asm(ASM_UACCESS_END "mtspr %0, %1\n\t" : : "i"(rn), "r"(val) : "memory");
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_KUAP_H_ */
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 61067e4c8f22..9b59231d87c9 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -42,12 +42,12 @@ static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 static __always_inline void __allow_user_access(void __user *to, const void __user *from,
 						unsigned long size, unsigned long dir)
 {
-	mtspr(SPRN_MD_AP, MD_APG_INIT);
+	mtspr_uaccess_begin(SPRN_MD_AP, MD_APG_INIT);
 }
 
 static __always_inline void __prevent_user_access(unsigned long dir)
 {
-	mtspr(SPRN_MD_AP, MD_APG_KUAP);
+	mtspr_uaccess_end(SPRN_MD_AP, MD_APG_KUAP);
 }
 
 static __always_inline unsigned long __prevent_user_access_return(void)
@@ -56,14 +56,14 @@ static __always_inline unsigned long __prevent_user_access_return(void)
 
 	flags = mfspr(SPRN_MD_AP);
 
-	mtspr(SPRN_MD_AP, MD_APG_KUAP);
+	mtspr_uaccess_end(SPRN_MD_AP, MD_APG_KUAP);
 
 	return flags;
 }
 
 static __always_inline void __restore_user_access(unsigned long flags)
 {
-	mtspr(SPRN_MD_AP, flags);
+	mtspr_uaccess_begin(SPRN_MD_AP, flags);
 }
 
 static __always_inline bool
diff --git a/arch/powerpc/include/asm/nohash/kup-booke.h b/arch/powerpc/include/asm/nohash/kup-booke.h
index 416f3e0897d5..2967501c434e 100644
--- a/arch/powerpc/include/asm/nohash/kup-booke.h
+++ b/arch/powerpc/include/asm/nohash/kup-booke.h
@@ -64,13 +64,13 @@ static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 static __always_inline void __allow_user_access(void __user *to, const void __user *from,
 						unsigned long size, unsigned long dir)
 {
-	mtspr(SPRN_PID, current->thread.pid);
+	mtspr_uaccess_begin(SPRN_PID, current->thread.pid);
 	isync();
 }
 
 static __always_inline void __prevent_user_access(unsigned long dir)
 {
-	mtspr(SPRN_PID, 0);
+	mtspr_uaccess_end(SPRN_PID, 0);
 	isync();
 }
 
@@ -78,7 +78,7 @@ static __always_inline unsigned long __prevent_user_access_return(void)
 {
 	unsigned long flags = mfspr(SPRN_PID);
 
-	mtspr(SPRN_PID, 0);
+	mtspr_uaccess_end(SPRN_PID, 0);
 	isync();
 
 	return flags;
@@ -87,7 +87,7 @@ static __always_inline unsigned long __prevent_user_access_return(void)
 static __always_inline void __restore_user_access(unsigned long flags)
 {
 	if (flags) {
-		mtspr(SPRN_PID, current->thread.pid);
+		mtspr_uaccess_begin(SPRN_PID, current->thread.pid);
 		isync();
 	}
 }
diff --git a/arch/powerpc/kexec/core_32.c b/arch/powerpc/kexec/core_32.c
index c95f96850c9e..6e955f32e7c3 100644
--- a/arch/powerpc/kexec/core_32.c
+++ b/arch/powerpc/kexec/core_32.c
@@ -17,7 +17,7 @@
 typedef void (*relocate_new_kernel_t)(
 				unsigned long indirection_page,
 				unsigned long reboot_code_buffer,
-				unsigned long start_address) __noreturn;
+				unsigned long start_address);
 
 /*
  * This is a generic machine_kexec function suitable at least for
@@ -61,6 +61,8 @@ void default_machine_kexec(struct kimage *image)
 	/* now call it */
 	rnk = (relocate_new_kernel_t) reboot_code_buffer;
 	(*rnk)(page_list, reboot_code_buffer_phys, image->start);
+
+	unreachable();	/* For objtool */
 }
 
 int machine_kexec_prepare(struct kimage *image)
diff --git a/tools/objtool/arch/powerpc/decode.c b/tools/objtool/arch/powerpc/decode.c
index 53b55690f320..bac861cf1176 100644
--- a/tools/objtool/arch/powerpc/decode.c
+++ b/tools/objtool/arch/powerpc/decode.c
@@ -43,24 +43,95 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
 			    unsigned long offset, unsigned int maxlen,
 			    struct instruction *insn)
 {
-	unsigned int opcode;
+	unsigned int opcode, xop;
+	unsigned int rs, ra, rb, bo, bi, to, uimm, simm, lk, aa;
 	enum insn_type typ;
 	unsigned long imm;
 	u32 ins;
 
 	ins = bswap_if_needed(file->elf, *(u32 *)(sec->data->d_buf + offset));
+
+	if (!ins && file->elf->ehdr.e_flags & EF_PPC_RELOCATABLE_LIB) {
+		struct reloc *reloc;
+
+		reloc = find_reloc_by_dest_range(file->elf, insn->sec, insn->offset, 4);
+
+		if (reloc && reloc_type(reloc) == R_PPC_REL32 &&
+		    !strncmp(reloc->sym->sec->name, ".got2", 5)) {
+			insn->type = INSN_OTHER;
+			insn->ignore = true;
+			insn->len = 4;
+
+			return 0;
+		}
+	}
+
 	opcode = ins >> 26;
-	typ = INSN_OTHER;
-	imm = 0;
+	xop = (ins >> 1) & 0x3ff;
+	rs = bo = to = (ins >> 21) & 0x1f;
+	ra = bi = (ins >> 16) & 0x1f;
+	rb = (ins >> 11) & 0x1f;
+	uimm = simm = (ins >> 0) & 0xffff;
+	aa = ins & 2;
+	lk = ins & 1;
 
 	switch (opcode) {
+	case 3:
+		if (to == 31 && ra == 0 && simm == 0) /* twi 31, r0, 0 */
+			typ = INSN_BUG;
+		else
+			typ = INSN_OTHER;
+		break;
+	case 16: /* bc[l][a] */
+		if (lk)	/* bcl[a] */
+			typ = INSN_OTHER;
+		else		/* bc[a] */
+			typ = INSN_JUMP_CONDITIONAL;
+
+		imm = ins & 0xfffc;
+		if (imm & 0x8000)
+			imm -= 0x10000;
+		insn->immediate = imm | aa;
+		break;
 	case 18: /* b[l][a] */
-		if ((ins & 3) == 1) /* bl */
+		if (lk)	/* bl[a] */
 			typ = INSN_CALL;
+		else		/* b[a] */
+			typ = INSN_JUMP_UNCONDITIONAL;
 
 		imm = ins & 0x3fffffc;
 		if (imm & 0x2000000)
 			imm -= 0x4000000;
+		insn->immediate = imm | aa;
+		break;
+	case 19:
+		if (xop == 16 && bo == 20 && bi == 0)	/* blr */
+			typ = INSN_RETURN;
+		else if (xop == 16)	/* bclr */
+			typ = INSN_RETURN_CONDITIONAL;
+		else if (xop == 50)	/* rfi */
+			typ = INSN_JUMP_DYNAMIC;
+		else if (xop == 528 && bo == 20 && bi == 0 && !lk)	/* bctr */
+			typ = INSN_JUMP_DYNAMIC;
+		else if (xop == 528 && bo == 20 && bi == 0 && lk)	/* bctrl */
+			typ = INSN_CALL_DYNAMIC;
+		else
+			typ = INSN_OTHER;
+		break;
+	case 24:
+		if (rs == 0 && ra == 0 && uimm == 0)
+			typ = INSN_NOP;
+		else
+			typ = INSN_OTHER;
+		break;
+	case 31:
+		if (xop == 4 && to == 31 && ra == 0 && rb == 0) /* trap */
+			typ = INSN_BUG;
+		else
+			typ = INSN_OTHER;
+		break;
+	default:
+		typ = INSN_OTHER;
 		break;
 	}
 
@@ -70,13 +141,15 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
 		insn->len = 4;
 
 	insn->type = typ;
-	insn->immediate = imm;
 
 	return 0;
 }
 
 unsigned long arch_jump_destination(struct instruction *insn)
 {
+	if (insn->immediate & 2)
+		return insn->immediate & ~2;
+
 	return insn->offset + insn->immediate;
 }
 
diff --git a/tools/objtool/arch/powerpc/include/arch/noreturns.h b/tools/objtool/arch/powerpc/include/arch/noreturns.h
new file mode 100644
index 000000000000..664f17d39026
--- /dev/null
+++ b/tools/objtool/arch/powerpc/include/arch/noreturns.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This is a (sorted!) list of all known __noreturn functions in arch/powerpc.
+ * It's needed for objtool to properly reverse-engineer the control flow graph.
+ *
+ * Yes, this is unfortunate.  A better solution is in the works.
+ */
+NORETURN(longjmp)
+NORETURN(start_secondary_resume)
+NORETURN(unrecoverable_exception)
diff --git a/tools/objtool/arch/powerpc/special.c b/tools/objtool/arch/powerpc/special.c
index d33868147196..21940cbfd550 100644
--- a/tools/objtool/arch/powerpc/special.c
+++ b/tools/objtool/arch/powerpc/special.c
@@ -15,5 +15,37 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
 struct reloc *arch_find_switch_table(struct objtool_file *file,
 				    struct instruction *insn)
 {
-	exit(-1);
+	struct reloc *text_reloc;
+	struct section *table_sec;
+	unsigned long table_offset;
+
+	/* look for a relocation which references .rodata */
+	text_reloc = find_reloc_by_dest_range(file->elf, insn->sec,
+					      insn->offset, insn->len);
+	if (!text_reloc || text_reloc->sym->type != STT_SECTION ||
+	    !text_reloc->sym->sec->rodata)
+		return NULL;
+
+	table_offset = reloc_addend(text_reloc);
+	table_sec = text_reloc->sym->sec;
+
+	/*
+	 * Make sure the .rodata address isn't associated with a
+	 * symbol.  GCC jump tables are anonymous data.
+	 *
+	 * Also support C jump tables which are in the same format as
+	 * switch jump tables.  For objtool to recognize them, they
+	 * need to be placed in the C_JUMP_TABLE_SECTION section.  They
+	 * have symbols associated with them.
+	 */
+	if (find_symbol_containing(table_sec, table_offset) &&
+	    strcmp(table_sec->name, C_JUMP_TABLE_SECTION))
+		return NULL;
+
+	/*
+	 * Each table entry has a rela associated with it.  The rela
+	 * should reference text in the same function as the original
+	 * instruction.
+	 */
+	return find_reloc_by_dest(file->elf, table_sec, table_offset);
 }
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 6daa13c25183..2d564d0e2ae1 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1259,13 +1259,17 @@ static const char *uaccess_safe_builtin[] = {
 	"stackleak_track_stack",
 	/* misc */
 	"csum_partial_copy_generic",
+	"ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
+	/* misc x86 */
 	"copy_mc_fragile",
 	"copy_mc_fragile_handle_tail",
 	"copy_mc_enhanced_fast_string",
-	"ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
 	"rep_stos_alternative",
 	"rep_movs_alternative",
 	"__copy_user_nocache",
+	/* misc powerpc */
+	"__copy_tofrom_user",
+	"__arch_clear_user",
 	NULL
 };
 
-- 
2.40.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2023-06-23 17:14 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2023-06-23 17:05 [PATCH v3 00/15] powerpc/objtool: uaccess validation for PPC32 (v3) Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 01/15] powerpc/kuap: Avoid unnecessary reads of MD_AP Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 02/15] powerpc/kuap: Avoid useless jump_label on empty function Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 03/15] powerpc/kuap: Refactor static branch for disabling kuap Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 04/15] powerpc/kuap: Make disabling KUAP at boottime impossible except on book3s/64 Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 05/15] powerpc/kuap: KUAP enabling/disabling functions must be __always_inline Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 06/15] Revert "powerpc/bug: Provide better flexibility to WARN_ON/__WARN_FLAGS() with asm goto" Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 07/15] objtool: Allow an architecture to disable objtool on ASM files Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 08/15] objtool: Fix JUMP_ENTRY_SIZE for bi-arch like powerpc Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 09/15] objtool: Add INSN_RETURN_CONDITIONAL Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 10/15] objtool: Add support for relative switch tables Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 11/15] objtool: Remove too strict constraint in jump table search Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 12/15] objtool: Add support for more complex UACCESS control Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 13/15] objtool: Prepare noreturns.h for more architectures Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 14/15] powerpc/bug: Annotate reachable after warning trap Christophe Leroy
2023-06-23 17:05 ` [PATCH v3 15/15] powerpc: Implement UACCESS validation on PPC32 Christophe Leroy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).