* [PATCH 1/3] asm-generic/mmiowb: Mark accesses to fix KCSAN warnings
@ 2023-11-27 5:46 Rohan McLure
From: Rohan McLure @ 2023-11-27 5:46 UTC
To: linuxppc-dev; +Cc: elver, arnd, gautam, Rohan McLure, npiggin, will
Prior to this patch, KCSAN detects data races of the following forms:
[1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
or otherwise outside of a critical section
[2] Interrupted critical sections, where the interrupt will itself
acquire a lock
In case [1], the calling context does not need an mmiowb() call to be
issued on its behalf; if it did, it would issue one itself. Such calls
to mmiowb_set_pending() are therefore either idempotent or no-ops.
In case [2], irrespective of when the interrupt occurs, the interrupt
will acquire and release its locks prior to returning, so nesting_count
remains balanced. In the worst case, a critical section executing
mmiowb_spin_unlock() observes an mmiowb to be pending and is then
interrupted, leading to one extraneous call to mmiowb() once the
interrupt returns. This data race is clearly innocuous.
Resolve KCSAN warnings of type [1] by means of READ_ONCE() and
WRITE_ONCE(). As increments and decrements of nesting_count are
balanced across interrupt contexts, resolve type [2] warnings by simply
revoking instrumentation with data_race() rather than READ_ONCE() and
WRITE_ONCE(); the memory-consistency semantics of the plain accesses
still lead to correct behaviour.
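For illustration, a minimal userspace sketch of how the marked accesses
compose; READ_ONCE()/WRITE_ONCE() are approximated here with volatile
casts, data_race() degenerates to a plain access since KCSAN is absent,
and the stub mmiowb() plus the single static state are assumptions made
so the sketch is self-contained and compiles:

#include <stdio.h>

#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))
#define data_race(expr)  (expr)

struct mmiowb_state {
        unsigned short nesting_count;
        unsigned short mmiowb_pending;
};

static struct mmiowb_state ms;

static void mmiowb(void)
{
        puts("mmiowb()");               /* stand-in for the barrier */
}

static void mmiowb_spin_lock(void)
{
        data_race(ms.nesting_count++);  /* balanced over interrupts */
}

static void mmiowb_set_pending(void)
{
        if (ms.nesting_count)
                WRITE_ONCE(ms.mmiowb_pending, ms.nesting_count);
}

static void mmiowb_spin_unlock(void)
{
        unsigned short pending = READ_ONCE(ms.mmiowb_pending);

        WRITE_ONCE(ms.mmiowb_pending, 0);
        if (pending)
                mmiowb();       /* at worst issued once too often */
        data_race(ms.nesting_count--);
}

int main(void)
{
        mmiowb_spin_lock();
        mmiowb_set_pending();           /* as after an MMIO write */
        mmiowb_spin_unlock();           /* prints "mmiowb()" once */
        return 0;
}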
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Reported-by: Gautam Menghani <gautam@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
Previously discussed here:
https://lore.kernel.org/linuxppc-dev/20230510033117.1395895-4-rmclure@linux.ibm.com/
but was pushed back due to affecting other architectures. Reissuing to
linuxppc-dev, as it enacts no functional change.
---
include/asm-generic/mmiowb.h | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
index 5698fca3bf56..f8c7c8a84e9e 100644
--- a/include/asm-generic/mmiowb.h
+++ b/include/asm-generic/mmiowb.h
@@ -37,25 +37,28 @@ static inline void mmiowb_set_pending(void)
struct mmiowb_state *ms = __mmiowb_state();
if (likely(ms->nesting_count))
- ms->mmiowb_pending = ms->nesting_count;
+ WRITE_ONCE(ms->mmiowb_pending, ms->nesting_count);
}
static inline void mmiowb_spin_lock(void)
{
struct mmiowb_state *ms = __mmiowb_state();
- ms->nesting_count++;
+
+ /* Increment need not be atomic. Nestedness is balanced over interrupts. */
+ data_race(ms->nesting_count++);
}
static inline void mmiowb_spin_unlock(void)
{
struct mmiowb_state *ms = __mmiowb_state();
+ u16 pending = READ_ONCE(ms->mmiowb_pending);
- if (unlikely(ms->mmiowb_pending)) {
- ms->mmiowb_pending = 0;
+ WRITE_ONCE(ms->mmiowb_pending, 0);
+ if (unlikely(pending))
mmiowb();
- }
- ms->nesting_count--;
+ /* Decrement need not be atomic. Nestedness is balanced over interrupts. */
+ data_race(ms->nesting_count--);
}
#else
#define mmiowb_set_pending() do { } while (0)
--
2.43.0
* [PATCH 2/3] powerpc: Apply __always_inline to interrupt_{enter,exit}_prepare()
@ 2023-11-27 5:46 Rohan McLure
From: Rohan McLure @ 2023-11-27 5:46 UTC
To: linuxppc-dev; +Cc: elver, arnd, gautam, Rohan McLure, npiggin, will
In keeping with the advice given by Documentation/core-api/entry.rst,
entry and exit handlers for interrupts should not be instrumented.
Guarantee that the interrupt_{enter,exit}_prepare() routines are inlined
so that they inherit instrumentation from their caller.
KCSAN kernels were observed to compile without inlining these routines,
which leads to grief in NMI handlers.
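As a compile-only sketch of the concern (illustrative names, not the
kernel's routines): a plain "static inline" is only a hint, so an
instrumented build may still emit an out-of-line, instrumentable copy,
whereas __always_inline forces the body into its caller:

#define __always_inline inline __attribute__((__always_inline__))

static inline int hinted_prepare(int regs)
{
        return regs + 1;        /* compiler may emit this out of line */
}

static __always_inline int forced_prepare(int regs)
{
        return regs + 1;        /* guaranteed to land in the caller */
}

int handler(int regs)
{
        /* forced_prepare() inherits handler()'s instrumentation. */
        return hinted_prepare(regs) + forced_prepare(regs);
}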
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
arch/powerpc/include/asm/interrupt.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index a4196ab1d016..5f9be87c01ca 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -150,7 +150,7 @@ static inline void booke_restore_dbcr0(void)
#endif
}
-static inline void interrupt_enter_prepare(struct pt_regs *regs)
+static __always_inline void interrupt_enter_prepare(struct pt_regs *regs)
{
#ifdef CONFIG_PPC64
irq_soft_mask_set(IRQS_ALL_DISABLED);
@@ -215,11 +215,11 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs)
* However interrupt_nmi_exit_prepare does return directly to regs, because
* NMIs do not do "exit work" or replay soft-masked interrupts.
*/
-static inline void interrupt_exit_prepare(struct pt_regs *regs)
+static __always_inline void interrupt_exit_prepare(struct pt_regs *regs)
{
}
-static inline void interrupt_async_enter_prepare(struct pt_regs *regs)
+static __always_inline void interrupt_async_enter_prepare(struct pt_regs *regs)
{
#ifdef CONFIG_PPC64
/* Ensure interrupt_enter_prepare does not enable MSR[EE] */
@@ -238,7 +238,7 @@ static inline void interrupt_async_enter_prepare(struct pt_regs *regs)
irq_enter();
}
-static inline void interrupt_async_exit_prepare(struct pt_regs *regs)
+static __always_inline void interrupt_async_exit_prepare(struct pt_regs *regs)
{
/*
* Adjust at exit so the main handler sees the true NIA. This must
@@ -278,7 +278,7 @@ static inline bool nmi_disables_ftrace(struct pt_regs *regs)
return true;
}
-static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
+static __always_inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
{
#ifdef CONFIG_PPC64
state->irq_soft_mask = local_paca->irq_soft_mask;
@@ -340,7 +340,7 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
nmi_enter();
}
-static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
+static __always_inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
{
if (mfmsr() & MSR_DR) {
// nmi_exit if relocations are on
--
2.43.0
* [PATCH 3/3] powerpc/64: Only warn for kuap locked when KCSAN not present
@ 2023-11-27 5:46 Rohan McLure
From: Rohan McLure @ 2023-11-27 5:46 UTC
To: linuxppc-dev; +Cc: elver, arnd, gautam, Rohan McLure, npiggin, will
Arbitrary instrumented locations, including syscall handlers, can
transitively call arch_local_irq_restore() when KCSAN is enabled, and
in turn also replay_soft_interrupts_irqrestore(). The precondition
checked on entry to this routine is that KUAP is enabled (user access
prohibited). Failure to meet this condition only triggers a warning,
however, and KUAP is enabled afterwards anyway. That is, KUAP being
disabled on entry is in fact permissible, but not possible on an
uninstrumented kernel.
Disable this assertion only when KCSAN is enabled.
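A userspace sketch of the gate, for illustration only: IS_ENABLED() is
simplified here to a -DCONFIG_KCSAN build switch rather than the
kernel's kconfig.h token-pasting, and the KUAP state tracking is an
assumption made so the example runs:

#include <assert.h>
#include <stdbool.h>

#ifdef CONFIG_KCSAN
#define IS_ENABLED(cfg) 1
#else
#define IS_ENABLED(cfg) 0
#endif

static bool kuap_locked = true; /* assumed: user access prohibited */

static void kuap_assert_locked(void)
{
        assert(kuap_locked);
}

static void replay_soft_interrupts_irqrestore(void)
{
        /*
         * On a KCSAN kernel, instrumented callers may legitimately
         * arrive here with KUAP unlocked, so skip the assertion.
         */
        if (!IS_ENABLED(CONFIG_KCSAN))
                kuap_assert_locked();
        /* ... then block user access and replay interrupts ... */
}

int main(void)
{
        replay_soft_interrupts_irqrestore();
        return 0;
}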
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
---
arch/powerpc/kernel/irq_64.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
index 938e66829eae..1b7e8ebb052a 100644
--- a/arch/powerpc/kernel/irq_64.c
+++ b/arch/powerpc/kernel/irq_64.c
@@ -189,7 +189,8 @@ static inline __no_kcsan void replay_soft_interrupts_irqrestore(void)
* and re-locking AMR but we shouldn't get here in the first place,
* hence the warning.
*/
- kuap_assert_locked();
+ if (!IS_ENABLED(CONFIG_KCSAN))
+ kuap_assert_locked();
if (kuap_state != AMR_KUAP_BLOCKED)
set_kuap(AMR_KUAP_BLOCKED);
--
2.43.0