public inbox for linux-kernel@vger.kernel.org
* [PATCH v3 0/3] x86/asm: Pin sensitive CR4 and CR0 bits
@ 2019-06-18  4:55 Kees Cook
  2019-06-18  4:55 ` [PATCH v3 1/3] lkdtm: Check for SMEP clearing protections Kees Cook
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Kees Cook @ 2019-06-18  4:55 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Kees Cook, Linus Torvalds, x86, Peter Zijlstra, Dave Hansen,
	linux-kernel, kernel-hardening

Hi,

This updates the cr pinning series so that pinning bugs are no longer
silently papered over (thanks to tglx for pointing out where I needed to
poke cr4 harder). I've tested under normal boot and hibernation.

-Kees

Kees Cook (3):
  lkdtm: Check for SMEP clearing protections
  x86/asm: Pin sensitive CR4 bits
  x86/asm: Pin sensitive CR0 bits

 arch/x86/include/asm/special_insns.h | 37 +++++++++++++++-
 arch/x86/kernel/cpu/common.c         | 20 +++++++++
 arch/x86/kernel/smpboot.c            |  8 +++-
 drivers/misc/lkdtm/bugs.c            | 66 ++++++++++++++++++++++++++++
 drivers/misc/lkdtm/core.c            |  1 +
 drivers/misc/lkdtm/lkdtm.h           |  1 +
 6 files changed, 130 insertions(+), 3 deletions(-)

--
2.17.1


* [PATCH v2 1/3] x86/asm: Pin sensitive CR0 bits
@ 2019-02-27 20:01 Kees Cook
  2019-03-06  9:55 ` [tip:x86/asm] " tip-bot for Kees Cook
  2019-03-06 13:31 ` tip-bot for Kees Cook
  0 siblings, 2 replies; 13+ messages in thread
From: Kees Cook @ 2019-02-27 20:01 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Kees Cook, Peter Zijlstra, Solar Designer, Greg KH, Jann Horn,
	Sean Christopherson, Dominik Brodowski, linux-kernel,
	Kernel Hardening

With sensitive CR4 bits now pinned, it's possible that the WP bit for CR0
might become a target as well. Following the same reasoning as for the CR4
pinning, this pins CR0's WP bit (but this can be done with a static value).

As before, to convince the compiler not to optimize away the check for the
WP bit after the set, this marks "val" as an output from the asm() block.
This protects against just jumping into the function past where the masking
happens; we must check that the mask was applied after we do the set. Due
to how this function can be built by the compiler (especially due to the
removal of frame pointers), jumping into the middle of the function
frequently doesn't require stack manipulation to construct a stack frame
(there may be only a retq without pops, which is sufficient for use with
exploits like timer overwrites).

Additionally, this avoids WARN()ing before resetting the bit, just to
minimize any race conditions with leaving the bit unset.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/special_insns.h | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index fabda1400137..1f01dc3f6c64 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -25,7 +25,28 @@ static inline unsigned long native_read_cr0(void)
 
 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	bool warn = false;
+
+again:
+	val |= X86_CR0_WP;
+	/*
+	 * In order to have the compiler not optimize away the check
+	 * after the cr0 write, mark "val" as being also an output ("+r")
+	 * by this asm() block so it will perform an explicit check, as
+	 * if it were "volatile".
+	 */
+	asm volatile("mov %0,%%cr0" : "+r" (val) : "m" (__force_order));
+	/*
+	 * If the MOV above was used directly as a ROP gadget we can
+	 * notice the lack of pinned bits in "val" and start the function
+	 * from the beginning to gain the WP bit for sure. And do it
+	 * without first taking the exception for a WARN().
+	 */
+	if ((val & X86_CR0_WP) != X86_CR0_WP) {
+		warn = true;
+		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
 }
 
 static inline unsigned long native_read_cr2(void)
-- 
2.17.1



end of thread, other threads:[~2019-06-22  9:59 UTC | newest]

Thread overview: 13+ messages
2019-06-18  4:55 [PATCH v3 0/3] x86/asm: Pin sensitive CR4 and CR0 bits Kees Cook
2019-06-18  4:55 ` [PATCH v3 1/3] lkdtm: Check for SMEP clearing protections Kees Cook
2019-06-18  7:10   ` Rasmus Villemoes
2019-06-18  7:23     ` Kees Cook
2019-06-18  4:55 ` [PATCH v3 2/3] x86/asm: Pin sensitive CR4 bits Kees Cook
2019-06-22  9:58   ` [tip:x86/asm] " tip-bot for Kees Cook
2019-06-18  4:55 ` [PATCH v3 3/3] x86/asm: Pin sensitive CR0 bits Kees Cook
2019-06-18  9:38   ` Jann Horn
2019-06-18 12:24     ` Peter Zijlstra
2019-06-18 17:12       ` Kees Cook
2019-06-22  9:58   ` [tip:x86/asm] " tip-bot for Kees Cook
  -- strict thread matches above, loose matches on Subject: below --
2019-02-27 20:01 [PATCH v2 1/3] " Kees Cook
2019-03-06  9:55 ` [tip:x86/asm] " tip-bot for Kees Cook
2019-03-06 13:31 ` tip-bot for Kees Cook
