public inbox for stable@vger.kernel.org
 help / color / mirror / Atom feed
* [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
       [not found] <20260318075654.1792916-3-nikunj@amd.com>
@ 2026-03-18 18:51 ` tip-bot2 for Dave Hansen
  2026-03-18 20:47   ` Peter Zijlstra
  0 siblings, 1 reply; 10+ messages in thread
From: tip-bot2 for Dave Hansen @ 2026-03-18 18:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Nikunj A Dadhania, Dave Hansen, Borislav Petkov (AMD),
	Sohil Mehta, stable, #, 6.9+, x86, linux-kernel

The following commit has been merged into the x86/urgent branch of tip:

Commit-ID:     cccc0c8ff0a9849378dcbc1d2ee6ca8018740aab
Gitweb:        https://git.kernel.org/tip/cccc0c8ff0a9849378dcbc1d2ee6ca8018740aab
Author:        Dave Hansen <dave.hansen@linux.intel.com>
AuthorDate:    Wed, 18 Mar 2026 07:56:53 
Committer:     Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Wed, 18 Mar 2026 16:40:54 +01:00

x86/cpu: Disable CR pinning during CPU bringup

== CR Pinning Background ==

Modern CPU hardening features like SMAP/SMEP are enabled by flipping control
register (CR) bits. Attackers find these features inconvenient and often try
to disable them.

CR-pinning is a kernel hardening feature that detects when security-sensitive
control bits are flipped off, complains about it, then turns them back on. The
CR-pinning checks are performed in the CR manipulation helpers.

X86_CR4_FRED controls FRED enabling and is pinned. There is a single,
system-wide static key that controls CR-pinning behavior. The static key is
enabled by the boot CPU after it has established its CR configuration.

The end result is that CR-pinning is not active while initializing the boot
CPU but it is active while bringing up secondary CPUs.
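The arrangement described above can be sketched as a small userspace model.
This is illustrative only: the names (boot_cpu_finish(), write_cr4(), the
PIN_* macros) are stand-ins, not the kernel's helpers, and a plain bool
stands in for the __ro_after_init static key.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative bit positions; the real values live in the kernel headers. */
#define PIN_SMEP (1UL << 20)
#define PIN_SMAP (1UL << 21)

static const unsigned long pinned_mask = PIN_SMEP | PIN_SMAP;
static unsigned long pinned_bits;   /* captured once by the "boot CPU"   */
static bool pinning_active;         /* stands in for the static key      */
static unsigned long cr4_shadow;    /* stands in for the real register   */

/* Boot CPU: establish the CR configuration, then arm pinning. */
static void boot_cpu_finish(unsigned long cr4)
{
	cr4_shadow = cr4;
	pinned_bits = cr4 & pinned_mask;
	pinning_active = true;
}

/* Write helper: if pinning is armed and a pinned bit was flipped off,
 * complain and immediately turn it back on. */
static void write_cr4(unsigned long val)
{
	cr4_shadow = val;
	if (pinning_active && (val & pinned_mask) != pinned_bits) {
		fprintf(stderr, "pinned CR4 bits changed: %#lx\n",
			(val & pinned_mask) ^ pinned_bits);
		cr4_shadow = (val & ~pinned_mask) | pinned_bits;
	}
}
```

Before boot_cpu_finish() arms pinning, write_cr4() accepts any value; after
it, clears of pinned bits are detected and undone — which is exactly the
boot-CPU/secondary-CPU asymmetry the commit message goes on to describe.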

== FRED Background ==

FRED is a new hardware entry/exit feature for the kernel. It is not on by
default and started out as Intel-only. AMD is just adding support now.

FRED has MSRs for configuration and is enabled by the pinned X86_CR4_FRED
bit. It should not be enabled until after MSRs are properly initialized.

== SEV Background ==

AMD SEV-ES and SEV-SNP use #VC (VMM Communication) exceptions to
handle operations that require hypervisor assistance. These exceptions
occur during various operations including MMIO access, CPUID instructions,
and certain memory accesses.

Writes to the console can generate #VC.

== Problem ==

CR-pinning implicitly enables FRED on secondary CPUs at a different point
than the boot CPU. This point is *before* the CPU has done an explicit
cr4_set_bits(X86_CR4_FRED) and before the MSRs are initialized. This means
that there is a window where no exceptions can be handled.

For SEV-ES/SNP and TDX guests, any console output during this window
triggers #VC or #VE exceptions that result in triple faults because the
exception handlers rely on FRED MSRs that aren't yet configured.

== Fix ==

Defer CR-pinning enforcement during secondary CPU bringup. This avoids any
implicit CR changes during CPU bringup, ensuring that FRED is not enabled
before it is configured and able to handle a #VC or #VE.

Drop the CR4 pinning logic from cr4_init() as it runs only during early
secondary CPU bringup while the CPU is still offline, so CR4 pinning is
never in effect there. Remove the redundant pinned-mask application and
add a WARN_ON_ONCE() to detect any future changes that might violate
this assumption.

This also aligns boot and secondary CPU bringup.

Note: FRED is not enabled by default anywhere, so this is unlikely to be
causing many problems. It was only noticed because AMD started enabling
FRED.

  [ Nikunj: Updated SEV background section wording ]

Fixes: 14619d912b65 ("x86/fred: FRED entry/exit and dispatch code")
Reported-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Sohil Mehta <sohil.mehta@intel.com>
Cc: stable@vger.kernel.org # 6.9+
Link: https://patch.msgid.link/20260318075654.1792916-3-nikunj@amd.com
---
 arch/x86/kernel/cpu/common.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 7840b22..dbd7bce 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -437,6 +437,21 @@ static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_C
 static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
 static unsigned long cr4_pinned_bits __ro_after_init;
 
+static bool cr_pinning_enabled(void)
+{
+	if (!static_branch_likely(&cr_pinning))
+		return false;
+
+	/*
+	 * Do not enforce pinning during CPU bringup. It might
+	 * turn on features that are not set up yet, like FRED.
+	 */
+	if (!cpu_online(smp_processor_id()))
+		return false;
+
+	return true;
+}
+
 void native_write_cr0(unsigned long val)
 {
 	unsigned long bits_missing = 0;
@@ -444,7 +459,7 @@ void native_write_cr0(unsigned long val)
 set_register:
 	asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
 
-	if (static_branch_likely(&cr_pinning)) {
+	if (cr_pinning_enabled()) {
 		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
 			bits_missing = X86_CR0_WP;
 			val |= bits_missing;
@@ -463,7 +478,7 @@ void __no_profile native_write_cr4(unsigned long val)
 set_register:
 	asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
 
-	if (static_branch_likely(&cr_pinning)) {
+	if (cr_pinning_enabled()) {
 		if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
 			bits_changed = (val & cr4_pinned_mask) ^ cr4_pinned_bits;
 			val = (val & ~cr4_pinned_mask) | cr4_pinned_bits;
@@ -505,8 +520,8 @@ void cr4_init(void)
 
 	if (boot_cpu_has(X86_FEATURE_PCID))
 		cr4 |= X86_CR4_PCIDE;
-	if (static_branch_likely(&cr_pinning))
-		cr4 = (cr4 & ~cr4_pinned_mask) | cr4_pinned_bits;
+
+	WARN_ON_ONCE(cr_pinning_enabled());
 
 	__write_cr4(cr4);
 

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
  2026-03-18 18:51 ` [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup tip-bot2 for Dave Hansen
@ 2026-03-18 20:47   ` Peter Zijlstra
  2026-03-18 21:08     ` Borislav Petkov
                       ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Peter Zijlstra @ 2026-03-18 20:47 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Borislav Petkov (AMD), Sohil Mehta, stable, #, 6.9+, x86

On Wed, Mar 18, 2026 at 06:51:10PM -0000, tip-bot2 for Dave Hansen wrote:
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -437,6 +437,21 @@ static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_C
>  static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
>  static unsigned long cr4_pinned_bits __ro_after_init;
>  
> +static bool cr_pinning_enabled(void)
> +{
> +	if (!static_branch_likely(&cr_pinning))
> +		return false;
> +
> +	/*
> +	 * Do not enforce pinning during CPU bringup. It might
> +	 * turn on features that are not set up yet, like FRED.
> +	 */
> +	if (!cpu_online(smp_processor_id()))
> +		return false;
> +
> +	return true;
> +}

Urgh, so this means all an attacker needs to do is disable the online bit
and it gets to poke CR4 bits.

This seems unfortunate.

And sure, randomly clearing the online bit will eventually cause havoc,
but I suspect you still get plenty of time until the system goes wobbly.



^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
  2026-03-18 20:47   ` Peter Zijlstra
@ 2026-03-18 21:08     ` Borislav Petkov
  2026-03-18 21:30       ` Peter Zijlstra
  2026-03-18 21:09     ` Peter Zijlstra
  2026-03-18 22:09     ` Peter Zijlstra
  2 siblings, 1 reply; 10+ messages in thread
From: Borislav Petkov @ 2026-03-18 21:08 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Sohil Mehta, stable, #, 6.9+, x86

On Wed, Mar 18, 2026 at 09:47:22PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 18, 2026 at 06:51:10PM -0000, tip-bot2 for Dave Hansen wrote:
> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -437,6 +437,21 @@ static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_C
> >  static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
> >  static unsigned long cr4_pinned_bits __ro_after_init;
> >  
> > +static bool cr_pinning_enabled(void)
> > +{
> > +	if (!static_branch_likely(&cr_pinning))
> > +		return false;
> > +
> > +	/*
> > +	 * Do not enforce pinning during CPU bringup. It might
> > +	 * turn on features that are not set up yet, like FRED.
> > +	 */
> > +	if (!cpu_online(smp_processor_id()))
> > +		return false;
> > +
> > +	return true;
> > +}
> 
> Urgh, so this means all an attacker needs to do is disable the online bit
> and it gets to poke CR4 bits.
> 
> This seems unfortunate.
> 
> but I suspect you still get plenty time until the system goes wobbly.

My idea was that this is only temporary and then, on top, we'll do something
like this:

https://lore.kernel.org/r/cb492a37-3517-4738-b435-73311402e820@intel.com

I.e., you figure out all the CR4 pinned bits on the BSP *once*, cast them in
stone and then replicate them on the APs when they come up.

I.e., you figure everything out as early as possible and then there is no
more switching.

Then all that gunk will disappear, hopefully.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
  2026-03-18 20:47   ` Peter Zijlstra
  2026-03-18 21:08     ` Borislav Petkov
@ 2026-03-18 21:09     ` Peter Zijlstra
  2026-03-18 21:30       ` Dave Hansen
  2026-03-18 22:09     ` Peter Zijlstra
  2 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2026-03-18 21:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Borislav Petkov (AMD), Sohil Mehta, stable, #, 6.9+, x86

On Wed, Mar 18, 2026 at 09:47:22PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 18, 2026 at 06:51:10PM -0000, tip-bot2 for Dave Hansen wrote:
> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -437,6 +437,21 @@ static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_C
> >  static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
> >  static unsigned long cr4_pinned_bits __ro_after_init;
> >  
> > +static bool cr_pinning_enabled(void)
> > +{
> > +	if (!static_branch_likely(&cr_pinning))
> > +		return false;
> > +
> > +	/*
> > +	 * Do not enforce pinning during CPU bringup. It might
> > +	 * turn on features that are not set up yet, like FRED.
> > +	 */
> > +	if (!cpu_online(smp_processor_id()))
> > +		return false;
> > +
> > +	return true;
> > +}
> 
> Urgh, so this means all an attacker needs to do is disable the online bit
> and it gets to poke CR4 bits.
> 
> This seems unfortunate.
> 
> And sure, randomly clearing the online bit will eventually cause havoc,
> but I suspect you still get plenty of time until the system goes wobbly.

So what is the problem with removing FRED from cr4_pinned_mask?
Specifically, set it up such that if you 'accidentally' clear that, the
machine insta-dies a horrible death.

So currently we set up an IDT and everything, then set up the FRED MSRs,
flip CR4_FRED and call it a day. But we could just explicitly poison all
the IDT stuff to cause triple faults.

Fixing that up is a much bigger ask of an attacker, no?

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
  2026-03-18 21:09     ` Peter Zijlstra
@ 2026-03-18 21:30       ` Dave Hansen
  0 siblings, 0 replies; 10+ messages in thread
From: Dave Hansen @ 2026-03-18 21:30 UTC (permalink / raw)
  To: Peter Zijlstra, linux-kernel
  Cc: linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Borislav Petkov (AMD), Sohil Mehta, stable, #, 6.9+, x86

On 3/18/26 14:09, Peter Zijlstra wrote:
> So currently we set up an IDT and everything, then set up the FRED MSRs,
> flip CR4_FRED and call it a day. But we could just explicitly poison all
> the IDT stuff to cause triple faults.

We already have:

        /* Enable FRED */
        cr4_set_bits(X86_CR4_FRED);
        /* Any further IDT use is a bug */
        idt_invalidate();

which I think means that if you clear X86_CR4_FRED, you triple-fault on
the next reference to the IDT. That's a fate far worse than having the
CR-pinning code silently fix up X86_CR4_FRED.

It's arguable that having X86_CR4_FRED pinned in the first place makes
things less secure if an attacker is thwacking CR4 bits.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
  2026-03-18 21:08     ` Borislav Petkov
@ 2026-03-18 21:30       ` Peter Zijlstra
  2026-03-18 22:01         ` Borislav Petkov
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2026-03-18 21:30 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Sohil Mehta, stable, #, 6.9+, x86

On Wed, Mar 18, 2026 at 10:08:13PM +0100, Borislav Petkov wrote:
> On Wed, Mar 18, 2026 at 09:47:22PM +0100, Peter Zijlstra wrote:
> > On Wed, Mar 18, 2026 at 06:51:10PM -0000, tip-bot2 for Dave Hansen wrote:
> > > --- a/arch/x86/kernel/cpu/common.c
> > > +++ b/arch/x86/kernel/cpu/common.c
> > > @@ -437,6 +437,21 @@ static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_C
> > >  static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
> > >  static unsigned long cr4_pinned_bits __ro_after_init;
> > >  
> > > +static bool cr_pinning_enabled(void)
> > > +{
> > > +	if (!static_branch_likely(&cr_pinning))
> > > +		return false;
> > > +
> > > +	/*
> > > +	 * Do not enforce pinning during CPU bringup. It might
> > > +	 * turn on features that are not set up yet, like FRED.
> > > +	 */
> > > +	if (!cpu_online(smp_processor_id()))
> > > +		return false;
> > > +
> > > +	return true;
> > > +}
> > 
> > Urgh, so this means all an attacker needs to do is disable the online bit
> > and it gets to poke CR4 bits.
> > 
> > This seems unfortunate.
> > 
> > And sure, randomly clearing the online bit will eventually cause havoc,
> > but I suspect you still get plenty of time until the system goes wobbly.
> 
> My idea was that this is only temporary and then, ontop, we'll do something

This isn't temporary, this is marked for infinite backports :/ And it is
really really bad.

> like this:
> 
> https://lore.kernel.org/r/cb492a37-3517-4738-b435-73311402e820@intel.com

I'm not understanding.

> I.e., you figure out all the CR4 pinned bits on the BSP *once*, cast them in
> stone and then replicate them on the APs when they come up.

That's what we do now. It's just that the AP bringup code doesn't seem
capable of dealing with this.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
  2026-03-18 21:30       ` Peter Zijlstra
@ 2026-03-18 22:01         ` Borislav Petkov
  0 siblings, 0 replies; 10+ messages in thread
From: Borislav Petkov @ 2026-03-18 22:01 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Sohil Mehta, stable, #, 6.9+, x86

On Wed, Mar 18, 2026 at 10:30:29PM +0100, Peter Zijlstra wrote:
> This isn't temporary, this is marked for infinite backports :/ And it is
> really really bad.

Ok, zapping all three. I'll redo the whole thing tomorrow on a clear head and
then we can talk.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup
  2026-03-18 20:47   ` Peter Zijlstra
  2026-03-18 21:08     ` Borislav Petkov
  2026-03-18 21:09     ` Peter Zijlstra
@ 2026-03-18 22:09     ` Peter Zijlstra
  2026-03-20  9:25       ` [PATCH] x86/cpu: Add comment clarifying CRn pinning Peter Zijlstra
  2 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2026-03-18 22:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Borislav Petkov (AMD), Sohil Mehta, stable, x86, Kees Cook

On Wed, Mar 18, 2026 at 09:47:22PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 18, 2026 at 06:51:10PM -0000, tip-bot2 for Dave Hansen wrote:
> > --- a/arch/x86/kernel/cpu/common.c
> > +++ b/arch/x86/kernel/cpu/common.c
> > @@ -437,6 +437,21 @@ static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_C
> >  static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
> >  static unsigned long cr4_pinned_bits __ro_after_init;
> >  
> > +static bool cr_pinning_enabled(void)
> > +{
> > +	if (!static_branch_likely(&cr_pinning))
> > +		return false;
> > +
> > +	/*
> > +	 * Do not enforce pinning during CPU bringup. It might
> > +	 * turn on features that are not set up yet, like FRED.
> > +	 */
> > +	if (!cpu_online(smp_processor_id()))
> > +		return false;
> > +
> > +	return true;
> > +}
> 
> Urgh, so this means all an attacker needs to do is disable the online bit
> and it gets to poke CR4 bits.
> 
> This seems unfortunate.
> 
> And sure, randomly clearing the online bit will eventually cause havoc,
> but I suspect you still get plenty of time until the system goes wobbly.

The below tries to explain the CR pinning, and shows how the above
effectively disables the entire scheme since the online bit lives in RW
memory.

That is, the sequence:

  clear online bit
  ROP into 'mov %reg, %CR4'
  (re)set online bit

is fairly trivial, all things considered.
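A toy model of that bypass (hypothetical names, not the kernel code): when
enforcement is gated on a flag in ordinary writable memory, overwriting the
flag switches the checks off entirely, unlike the __ro_after_init static key.

```c
#include <stdbool.h>

/* All names are illustrative stand-ins for the kernel's. */
static const unsigned long pinned_mask = (1UL << 20) | (1UL << 21); /* e.g. SMEP|SMAP */
static unsigned long pinned_bits      = (1UL << 20) | (1UL << 21);
static bool cpu_is_online = true;   /* lives in RW memory, unlike the static key */
static unsigned long cr4_shadow   = (1UL << 20) | (1UL << 21);

static void write_cr4(unsigned long val)
{
	cr4_shadow = val;
	/* The escape hatch: no enforcement while "offline". */
	if (!cpu_is_online)
		return;
	/* Normal pinning: restore any pinned bit that was cleared. */
	if ((val & pinned_mask) != pinned_bits)
		cr4_shadow = (val & ~pinned_mask) | pinned_bits;
}
```

With the flag set, clearing pinned bits is undone; flip the flag first and
the same write sticks — the three-step sequence above in miniature.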

---
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bb937bc4b00f..994e09d8c2fb 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -450,6 +450,19 @@ late_initcall(cpu_finalize_pre_userspace);
 /* These bits should not change their value after CPU init is finished. */
 static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP |
 					     X86_CR4_FSGSBASE | X86_CR4_CET | X86_CR4_FRED;
+
+/*
+ * The CR pinning protects against ROP on the 'mov %reg, %CRn' instruction(s).
+ * Since you can ROP directly to these instructions (barring shadow stack),
+ * any protection must follow immediately and unconditionally after that.
+ *
+ * Specifically, the CR[04] write functions below will have the value
+ * validation controlled by the @cr_pinning static_branch which is
+ * __ro_after_init, just like the cr4_pinned_bits value.
+ *
+ * Once set, an attacker will have to defeat page-tables to get around these
+ * restrictions. Which is a much bigger ask than 'simple' ROP.
+ */
 static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
 static unsigned long cr4_pinned_bits __ro_after_init;
 

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH] x86/cpu: Add comment clarifying CRn pinning
  2026-03-18 22:09     ` Peter Zijlstra
@ 2026-03-20  9:25       ` Peter Zijlstra
  2026-03-20 11:34         ` Borislav Petkov
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2026-03-20  9:25 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Borislav Petkov (AMD), Sohil Mehta, stable, x86, Kees Cook


Since Boris wanted a nice patch to just press 'apply' on, here goes :-)


---
Subject: x86/cpu: Add comment clarifying CRn pinning
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 18 Mar 2026 23:09:39 +0100

To avoid future confusion on the purpose and design of the CRn pinning
code.

Also note that if the attacker controls page-tables, the CRn bits
lose much of their attraction anyway.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/kernel/cpu/common.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -434,6 +434,19 @@ static __always_inline void setup_lass(s
 /* These bits should not change their value after CPU init is finished. */
 static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP |
 					     X86_CR4_FSGSBASE | X86_CR4_CET | X86_CR4_FRED;
+
+/*
+ * The CR pinning protects against ROP on the 'mov %reg, %CRn' instruction(s).
+ * Since you can ROP directly to these instructions (barring shadow stack),
+ * any protection must follow immediately and unconditionally after that.
+ *
+ * Specifically, the CR[04] write functions below will have the value
+ * validation controlled by the @cr_pinning static_branch which is
+ * __ro_after_init, just like the cr4_pinned_bits value.
+ *
+ * Once set, an attacker will have to defeat page-tables to get around these
+ * restrictions. Which is a much bigger ask than 'simple' ROP.
+ */
 static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
 static unsigned long cr4_pinned_bits __ro_after_init;
 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] x86/cpu: Add comment clarifying CRn pinning
  2026-03-20  9:25       ` [PATCH] x86/cpu: Add comment clarifying CRn pinning Peter Zijlstra
@ 2026-03-20 11:34         ` Borislav Petkov
  0 siblings, 0 replies; 10+ messages in thread
From: Borislav Petkov @ 2026-03-20 11:34 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-tip-commits, Nikunj A Dadhania, Dave Hansen,
	Sohil Mehta, stable, x86, Kees Cook

On Fri, Mar 20, 2026 at 10:25:21AM +0100, Peter Zijlstra wrote:
> 
> Since Boris wanted a nice patch to just press 'apply' on, here goes :-)

/me presses that key!

Thanks man!

:-P

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2026-03-20 11:34 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20260318075654.1792916-3-nikunj@amd.com>
2026-03-18 18:51 ` [tip: x86/urgent] x86/cpu: Disable CR pinning during CPU bringup tip-bot2 for Dave Hansen
2026-03-18 20:47   ` Peter Zijlstra
2026-03-18 21:08     ` Borislav Petkov
2026-03-18 21:30       ` Peter Zijlstra
2026-03-18 22:01         ` Borislav Petkov
2026-03-18 21:09     ` Peter Zijlstra
2026-03-18 21:30       ` Dave Hansen
2026-03-18 22:09     ` Peter Zijlstra
2026-03-20  9:25       ` [PATCH] x86/cpu: Add comment clarifying CRn pinning Peter Zijlstra
2026-03-20 11:34         ` Borislav Petkov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox