From: Thomas Gleixner <tglx@linutronix.de>
To: Rick Edgecombe <rick.p.edgecombe@intel.com>,
x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>,
Ingo Molnar <mingo@redhat.com>,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
linux-mm@kvack.org, linux-arch@vger.kernel.org,
linux-api@vger.kernel.org, Arnd Bergmann <arnd@arndb.de>,
Andy Lutomirski <luto@kernel.org>,
Balbir Singh <bsingharora@gmail.com>,
Borislav Petkov <bp@alien8.de>,
Cyrill Gorcunov <gorcunov@gmail.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Eugene Syromiatnikov <esyr@redhat.com>,
Florian Weimer <fweimer@redhat.com>,
"H . J . Lu" <hjl.tools@gmail.com>, Jann Horn <jannh@google.com>,
Jonathan Corbet <corbet@lwn.net>,
Kees Cook <keescook@chromium.org>,
Mike Kravetz <mike.kravetz@oracle.com>,
Nadav Amit <nadav.amit@gmail.com>,
Oleg Nesterov <oleg@redhat.com>, Pavel Machek <pavel@ucw.cz>,
Peter Zijlstra <peterz@infradead.org>,
Randy Dunlap <rdunlap@infradead.org>,
"Ravi V . Shankar" <ravi.v.shankar@intel.com>,
Dave Martin <Dave.Martin@arm.com>,
Weijiang Yang <weijiang.yang@intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
joao.moreira@intel.com, John Allen <john.allen@amd.com>,
kcc@google.com, eranian@google.com
Cc: rick.p.edgecombe@intel.com
Subject: Re: [PATCH 23/35] x86/fpu: Add helpers for modifying supervisor xstate
Date: Tue, 08 Feb 2022 09:51:50 +0100
Message-ID: <87pmnxvizd.ffs@tglx>
In-Reply-To: <20220130211838.8382-24-rick.p.edgecombe@intel.com>
On Sun, Jan 30 2022 at 13:18, Rick Edgecombe wrote:
> In addition, now that get_xsave_addr() is not available outside of the
> core fpu code, there isn't even a way for these supervisor features to
> modify the in-memory state.
>
> To resolve these problems, add some helpers that encapsulate the correct
> logic to operate on the correct copy of the state. Map the MSRs to their
> struct field locations in a case statement in __get_xsave_member().
I like the approach in principle, but you still expose the xstate
internals via the void pointer. It's only a question of time until this
is type-cast and abused in interesting ways.

Something like the untested patch below (on top of the whole series)
preserves the encapsulation and reduces the code at the call sites.
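As a rough illustration of how a call site then looks (a sketch only,
mirroring the shstk_setup() hunk further down; example_enable_shstk()
and its ssp argument are made-up names, the helpers and the struct come
from the patch below):

static int example_enable_shstk(unsigned long ssp)
{
	struct xstate_msr xmsr[2];

	/* Describe all MSR updates up front ... */
	xmsr[0] = (struct xstate_msr) { .msr = MSR_IA32_PL3_SSP, .val = ssp };
	xmsr[1] = (struct xstate_msr) { .msr = MSR_IA32_U_CET, .val = CET_SHSTK_EN };

	/* ... and let the FPU core decide registers vs. in-memory xstate. */
	return xsave_wrmsrs(XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr));
}

The bitop/set/clear fields cover the read-modify-write cases, as in the
wrss_control() and shstk_disable() hunks.
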
Thanks,
tglx
---
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -165,12 +165,7 @@ static inline bool fpstate_is_confidenti
struct task_struct;
extern long fpu_xstate_prctl(struct task_struct *tsk, int option, unsigned long arg2);
-void *start_update_xsave_msrs(int xfeature_nr);
-void end_update_xsave_msrs(void);
-int xsave_rdmsrl(void *xstate, unsigned int msr, unsigned long long *p);
-int xsave_wrmsrl(void *xstate, u32 msr, u64 val);
-int xsave_set_clear_bits_msrl(void *xstate, u32 msr, u64 set, u64 clear);
-
-void *get_xsave_buffer_unsafe(struct fpu *fpu, int xfeature_nr);
-int xsave_wrmsrl_unsafe(void *xstate, u32 msr, u64 val);
+int xsave_rdmsrs(int xfeature_nr, struct xstate_msr *xmsr, int num_msrs);
+int xsave_wrmsrs(int xfeature_nr, struct xstate_msr *xmsr, int num_msrs);
+int xsave_wrmsrs_on_task(struct task_struct *tsk, int xfeature_nr, struct xstate_msr *xmsr, int num_msrs);
#endif /* _ASM_X86_FPU_API_H */
--- a/arch/x86/include/asm/fpu/types.h
+++ b/arch/x86/include/asm/fpu/types.h
@@ -601,4 +601,12 @@ struct fpu_state_config {
/* FPU state configuration information */
extern struct fpu_state_config fpu_kernel_cfg, fpu_user_cfg;
+struct xstate_msr {
+ unsigned int msr;
+ unsigned int bitop;
+ u64 val;
+ u64 set;
+ u64 clear;
+};
+
#endif /* _ASM_X86_FPU_H */
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1868,7 +1868,7 @@ int proc_pid_arch_status(struct seq_file
}
#endif /* CONFIG_PROC_PID_ARCH_STATUS */
-static u64 *__get_xsave_member(void *xstate, u32 msr)
+static u64 *xstate_get_member(void *xstate, u32 msr)
{
switch (msr) {
case MSR_IA32_PL3_SSP:
@@ -1882,22 +1882,11 @@ static u64 *__get_xsave_member(void *xst
}
/*
- * Operate on the xsave buffer directly. It makes no gaurantees that the
- * buffer will stay valid now or in the futre. This function is pretty
- * much only useful when the caller knows the fpu's thread can't be
- * scheduled or otherwise operated on concurrently.
- */
-void *get_xsave_buffer_unsafe(struct fpu *fpu, int xfeature_nr)
-{
- return get_xsave_addr(&fpu->fpstate->regs.xsave, xfeature_nr);
-}
-
-/*
* Return a pointer to the xstate for the feature if it should be used, or NULL
* if the MSRs should be written to directly. To do this safely, using the
* associated read/write helpers is required.
*/
-void *start_update_xsave_msrs(int xfeature_nr)
+static void *xsave_msrs_op_start(int xfeature_nr)
{
void *xstate;
@@ -1938,7 +1927,7 @@ void *start_update_xsave_msrs(int xfeatu
return xstate;
}
-void end_update_xsave_msrs(void)
+static void xsave_msrs_op_end(void)
{
fpregs_unlock();
}
@@ -1951,7 +1940,7 @@ void end_update_xsave_msrs(void)
*
* But if this correspondence is broken by either a write to the in-memory
* buffer or the registers, the kernel needs to be notified so it doesn't miss
- * an xsave or restore. __xsave_msrl_prepare_write() peforms this check and
+ * an xsave or restore. xsave_msrs_prepare_write() performs this check and
* notifies the kernel if needed. Use before writes only, to not take away
* the kernel's options when not required.
*
@@ -1959,65 +1948,107 @@ void end_update_xsave_msrs(void)
* must have resulted in targeting the in-memory state, so invaliding the
* registers is the right thing to do.
*/
-static void __xsave_msrl_prepare_write(void)
+static void xsave_msrs_prepare_write(void)
{
if (test_thread_flag(TIF_NEED_FPU_LOAD) &&
fpregs_state_valid(&current->thread.fpu, smp_processor_id()))
__fpu_invalidate_fpregs_state(&current->thread.fpu);
}
-int xsave_rdmsrl(void *xstate, unsigned int msr, unsigned long long *p)
+static int read_xstate_or_msr(struct xstate_msr *xmsr, void *xstate)
{
u64 *member_ptr;
if (!xstate)
- return rdmsrl_safe(msr, p);
+ return rdmsrl_safe(xmsr->msr, &xmsr->val);
- member_ptr = __get_xsave_member(xstate, msr);
+ member_ptr = xstate_get_member(xstate, xmsr->msr);
if (!member_ptr)
return 1;
- *p = *member_ptr;
-
+ xmsr->val = *member_ptr;
return 0;
}
+int xsave_rdmsrs(int xfeature_nr, struct xstate_msr *xmsr, int num_msrs)
+{
+ void *xstate = xsave_msrs_op_start(xfeature_nr);
+ int i, ret;
+
+ for (i = 0, ret = 0; !ret && i < num_msrs; i++, xmsr++)
+ ret = read_xstate_or_msr(xmsr, xstate);
+
+ xsave_msrs_op_end();
+ return ret;
+}
-int xsave_wrmsrl_unsafe(void *xstate, u32 msr, u64 val)
+static int write_xstate(struct xstate_msr *xmsr, void *xstate)
{
- u64 *member_ptr;
+ u64 *member_ptr = xstate_get_member(xstate, xmsr->msr);
- member_ptr = __get_xsave_member(xstate, msr);
if (!member_ptr)
return 1;
- *member_ptr = val;
-
+ *member_ptr = xmsr->val;
return 0;
}
-int xsave_wrmsrl(void *xstate, u32 msr, u64 val)
+static int write_xstate_or_msr(struct xstate_msr *xmsr, void *xstate)
{
- __xsave_msrl_prepare_write();
if (!xstate)
- return wrmsrl_safe(msr, val);
-
- return xsave_wrmsrl_unsafe(xstate, msr, val);
+ return wrmsrl_safe(xmsr->msr, xmsr->val);
+ return write_xstate(xmsr, xstate);
}
-int xsave_set_clear_bits_msrl(void *xstate, u32 msr, u64 set, u64 clear)
+static int mod_xstate_or_msr_bits(struct xstate_msr *xmsr, void *xstate)
{
- u64 val, new_val;
+ u64 val;
int ret;
- ret = xsave_rdmsrl(xstate, msr, &val);
+ ret = read_xstate_or_msr(xmsr, xstate);
if (ret)
return ret;
- new_val = (val & ~clear) | set;
+ val = xmsr->val;
+ xmsr->val = (val & ~xmsr->clear) | xmsr->set;
- if (new_val != val)
- return xsave_wrmsrl(xstate, msr, new_val);
+ if (val != xmsr->val)
+ return write_xstate_or_msr(xmsr, xstate);
return 0;
}
+
+static int __xsave_wrmsrs(void *xstate, struct xstate_msr *xmsr, int num_msrs)
+{
+ int i, ret;
+
+ for (i = 0, ret = 0; !ret && i < num_msrs; i++, xmsr++) {
+ if (!xmsr->bitop)
+ ret = write_xstate_or_msr(xmsr, xstate);
+ else
+ ret = mod_xstate_or_msr_bits(xmsr, xstate);
+ }
+
+ return ret;
+}
+
+int xsave_wrmsrs(int xfeature_nr, struct xstate_msr *xmsr, int num_msrs)
+{
+ void *xstate = xsave_msrs_op_start(xfeature_nr);
+ int ret;
+
+ xsave_msrs_prepare_write();
+ ret = __xsave_wrmsrs(xstate, xmsr, num_msrs);
+ xsave_msrs_op_end();
+ return ret;
+}
+
+int xsave_wrmsrs_on_task(struct task_struct *tsk, int xfeature_nr, struct xstate_msr *xmsr,
+ int num_msrs)
+{
+ void *xstate = get_xsave_addr(&tsk->thread.fpu.fpstate->regs.xsave, xfeature_nr);
+
+ if (WARN_ON_ONCE(!xstate))
+ return -EINVAL;
+ return __xsave_wrmsrs(xstate, xmsr, num_msrs);
+}
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -106,8 +106,7 @@ int shstk_setup(void)
{
struct thread_shstk *shstk = &current->thread.shstk;
unsigned long addr, size;
- void *xstate;
- int err;
+ struct xstate_msr xmsr[2];
if (!cpu_feature_enabled(X86_FEATURE_SHSTK) ||
shstk->size ||
@@ -119,13 +118,10 @@ int shstk_setup(void)
if (IS_ERR_VALUE(addr))
return 1;
- xstate = start_update_xsave_msrs(XFEATURE_CET_USER);
- err = xsave_wrmsrl(xstate, MSR_IA32_PL3_SSP, addr + size);
- if (!err)
- err = xsave_wrmsrl(xstate, MSR_IA32_U_CET, CET_SHSTK_EN);
- end_update_xsave_msrs();
+ xmsr[0] = (struct xstate_msr) { .msr = MSR_IA32_PL3_SSP, .val = addr + size };
+ xmsr[1] = (struct xstate_msr) { .msr = MSR_IA32_U_CET, .val = CET_SHSTK_EN };
- if (err) {
+ if (xsave_wrmsrs(XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr))) {
/*
* Don't leak shadow stack if something went wrong with writing the
* msrs. Warn about it because things may be in a weird state.
@@ -150,8 +146,8 @@ int shstk_alloc_thread_stack(struct task
unsigned long stack_size)
{
struct thread_shstk *shstk = &tsk->thread.shstk;
+ struct xstate_msr xmsr[1];
unsigned long addr;
- void *xstate;
/*
* If shadow stack is not enabled on the new thread, skip any
@@ -183,15 +179,6 @@ int shstk_alloc_thread_stack(struct task
if (in_compat_syscall())
stack_size /= 4;
- /*
- * 'tsk' is configured with a shadow stack and the fpu.state is
- * up to date since it was just copied from the parent. There
- * must be a valid non-init CET state location in the buffer.
- */
- xstate = get_xsave_buffer_unsafe(&tsk->thread.fpu, XFEATURE_CET_USER);
- if (WARN_ON_ONCE(!xstate))
- return -EINVAL;
-
stack_size = PAGE_ALIGN(stack_size);
addr = alloc_shstk(stack_size, stack_size, false);
if (IS_ERR_VALUE(addr)) {
@@ -200,7 +187,11 @@ int shstk_alloc_thread_stack(struct task
return PTR_ERR((void *)addr);
}
- xsave_wrmsrl_unsafe(xstate, MSR_IA32_PL3_SSP, (u64)(addr + stack_size));
+ xmsr[0] = (struct xstate_msr) { .msr = MSR_IA32_PL3_SSP, .val = addr + stack_size };
+ if (xsave_wrmsrs_on_task(tsk, XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr))) {
+ unmap_shadow_stack(addr, stack_size);
+ return 1;
+ }
shstk->base = addr;
shstk->size = stack_size;
return 0;
@@ -232,8 +223,8 @@ void shstk_free(struct task_struct *tsk)
int wrss_control(bool enable)
{
+ struct xstate_msr xmsr[1] = {[0] = { .msr = MSR_IA32_U_CET, .bitop = 1,}, };
struct thread_shstk *shstk = &current->thread.shstk;
- void *xstate;
int err;
if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
@@ -246,13 +237,11 @@ int wrss_control(bool enable)
if (!shstk->size || shstk->wrss == enable)
return 1;
- xstate = start_update_xsave_msrs(XFEATURE_CET_USER);
if (enable)
- err = xsave_set_clear_bits_msrl(xstate, MSR_IA32_U_CET, CET_WRSS_EN, 0);
+ xmsr[0].set = CET_WRSS_EN;
else
- err = xsave_set_clear_bits_msrl(xstate, MSR_IA32_U_CET, 0, CET_WRSS_EN);
- end_update_xsave_msrs();
-
+ xmsr[0].clear = CET_WRSS_EN;
+ err = xsave_wrmsrs(XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr));
if (err)
return 1;
@@ -263,7 +252,7 @@ int wrss_control(bool enable)
int shstk_disable(void)
{
struct thread_shstk *shstk = &current->thread.shstk;
- void *xstate;
+ struct xstate_msr xmsr[2];
int err;
if (!cpu_feature_enabled(X86_FEATURE_SHSTK) ||
@@ -271,14 +260,11 @@ int shstk_disable(void)
!shstk->base)
return 1;
- xstate = start_update_xsave_msrs(XFEATURE_CET_USER);
- /* Disable WRSS too when disabling shadow stack */
- err = xsave_set_clear_bits_msrl(xstate, MSR_IA32_U_CET, 0,
- CET_SHSTK_EN | CET_WRSS_EN);
- if (!err)
- err = xsave_wrmsrl(xstate, MSR_IA32_PL3_SSP, 0);
- end_update_xsave_msrs();
+ xmsr[0] = (struct xstate_msr) { .msr = MSR_IA32_U_CET, .bitop = 1,
+ .set = 0, .clear = CET_SHSTK_EN | CET_WRSS_EN };
+ xmsr[1] = (struct xstate_msr) { .msr = MSR_IA32_PL3_SSP, .val = 0 };
+ err = xsave_wrmsrs(XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr));
if (err)
return 1;
@@ -289,16 +275,10 @@ int shstk_disable(void)
static unsigned long get_user_shstk_addr(void)
{
- void *xstate;
- unsigned long long ssp;
-
- xstate = start_update_xsave_msrs(XFEATURE_CET_USER);
-
- xsave_rdmsrl(xstate, MSR_IA32_PL3_SSP, &ssp);
-
- end_update_xsave_msrs();
+ struct xstate_msr xmsr[1] = { [0] = {.msr = MSR_IA32_PL3_SSP, }, };
- return ssp;
+ xsave_rdmsrs(XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr));
+ return xmsr[0].val;
}
/*
@@ -385,8 +365,8 @@ int shstk_check_rstor_token(bool proc32,
int setup_signal_shadow_stack(int proc32, void __user *restorer)
{
struct thread_shstk *shstk = &current->thread.shstk;
+ struct xstate_msr xmsr[1];
unsigned long new_ssp;
- void *xstate;
int err;
if (!cpu_feature_enabled(X86_FEATURE_SHSTK) || !shstk->size)
@@ -397,18 +377,15 @@ int setup_signal_shadow_stack(int proc32
if (err)
return err;
- xstate = start_update_xsave_msrs(XFEATURE_CET_USER);
- err = xsave_wrmsrl(xstate, MSR_IA32_PL3_SSP, new_ssp);
- end_update_xsave_msrs();
-
- return err;
+ xmsr[0] = (struct xstate_msr) { .msr = MSR_IA32_PL3_SSP, .val = new_ssp };
+ return xsave_wrmsrs(XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr));
}
int restore_signal_shadow_stack(void)
{
struct thread_shstk *shstk = &current->thread.shstk;
- void *xstate;
int proc32 = in_ia32_syscall();
+ struct xstate_msr xmsr[1];
unsigned long new_ssp;
int err;
@@ -419,11 +396,8 @@ int restore_signal_shadow_stack(void)
if (err)
return err;
- xstate = start_update_xsave_msrs(XFEATURE_CET_USER);
- err = xsave_wrmsrl(xstate, MSR_IA32_PL3_SSP, new_ssp);
- end_update_xsave_msrs();
-
- return err;
+ xmsr[0] = (struct xstate_msr) { .msr = MSR_IA32_PL3_SSP, .val = new_ssp };
+ return xsave_wrmsrs(XFEATURE_CET_USER, xmsr, ARRAY_SIZE(xmsr));
}
SYSCALL_DEFINE2(map_shadow_stack, unsigned long, size, unsigned int, flags)