* [PATCH v7 0/3] Fix dosemu vm86() fault
@ 2024-09-25 22:25 Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 1/3] x86/entry_32: Do not clobber user EFLAGS.ZF Pawan Gupta
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: Pawan Gupta @ 2024-09-25 22:25 UTC (permalink / raw)
To: Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
Changes in v7:
- Using %ss for VERW fails kselftest ldt_gdt.c in 32-bit mode; use the safer %cs instead (Dave).
v6: https://lore.kernel.org/r/20240905-fix-dosemu-vm86-v6-0-7aff8e53cbbf@linux.intel.com
- Use %ss in 64-bit mode as well for all VERW calls. This avoids having a
separate macro for 32-bit (Dave).
- Split 32-bit mode fixes into separate patches.
v5: https://lore.kernel.org/r/20240711-fix-dosemu-vm86-v5-1-e87dcd7368aa@linux.intel.com
- Simplify the use of ALTERNATIVE construct (Uros/Jiri/Peter).
v4: https://lore.kernel.org/r/20240710-fix-dosemu-vm86-v4-1-aa6464e1de6f@linux.intel.com
- Further simplify the patch by using %ss for all VERW calls in 32-bit mode (Brian).
- In NMI exit path move VERW after RESTORE_ALL_NMI that touches GPRs (Dave).
v3: https://lore.kernel.org/r/20240701-fix-dosemu-vm86-v3-1-b1969532c75a@linux.intel.com
- Simplify CLEAR_CPU_BUFFERS_SAFE by using %ss instead of %ds (Brian).
- Do verw before popf in SYSEXIT path (Jari).
v2: https://lore.kernel.org/r/20240627-fix-dosemu-vm86-v2-1-d5579f698e77@linux.intel.com
- Safeguard against any other system calls like vm86() that might change %ds (Dave).
v1: https://lore.kernel.org/r/20240426-fix-dosemu-vm86-v1-1-88c826a3f378@linux.intel.com
Hi,
This series fixes a #GP in 32-bit kernels when executing the vm86() system call
in dosemu software. In 32-bit mode, there are cases where the user can set an
arbitrary %ds that causes a #GP when executing the VERW instruction. The
fix is to use the %cs selector for referencing the VERW operand.
Patches 1-2: Fix the VERW callsites in the 32-bit entry path.
Patch 3: Uses %cs for the VERW operand in 32-bit and 64-bit mode.
The fix is tested with the kselftest below on a 32-bit kernel:
./tools/testing/selftests/x86/entry_from_vm86.c
The 64-bit kernel was boot tested. On a Rocket Lake, measuring the CPU cycles
for VERW with and without the segment override shows no significant difference.
This indicates that the scrubbing behavior of VERW is intact.
Thanks,
Pawan
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
Pawan Gupta (3):
x86/entry_32: Do not clobber user EFLAGS.ZF
x86/entry_32: Clear CPU buffers after register restore in NMI return
x86/bugs: Use code segment selector for VERW operand
arch/x86/entry/entry_32.S | 6 ++++--
arch/x86/include/asm/nospec-branch.h | 6 ++++--
2 files changed, 8 insertions(+), 4 deletions(-)
---
base-commit: 431c1646e1f86b949fa3685efc50b660a364c2b6
change-id: 20240426-fix-dosemu-vm86-dd111a01737e
Best regards,
--
Thanks,
Pawan
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v7 1/3] x86/entry_32: Do not clobber user EFLAGS.ZF
2024-09-25 22:25 [PATCH v7 0/3] Fix dosemu vm86() fault Pawan Gupta
@ 2024-09-25 22:25 ` Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 2/3] x86/entry_32: Clear CPU buffers after register restore in NMI return Pawan Gupta
` (2 subsequent siblings)
3 siblings, 0 replies; 17+ messages in thread
From: Pawan Gupta @ 2024-09-25 22:25 UTC (permalink / raw)
To: Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
Opportunistic SYSEXIT executes VERW to clear CPU buffers after user EFLAGS
are restored. This can clobber user EFLAGS.ZF.
Move CLEAR_CPU_BUFFERS before the user EFLAGS are restored. This ensures
that the user EFLAGS.ZF is not clobbered.
Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
Reported-by: Jari Ruusu <jariruusu@protonmail.com>
Closes: https://lore.kernel.org/lkml/yVXwe8gvgmPADpRB6lXlicS2fcHoV5OHHxyuFbB_MEleRPD7-KhGe5VtORejtPe-KCkT8Uhcg5d7-IBw4Ojb4H7z5LQxoZylSmJ8KNL3A8o=@protonmail.com/
Cc: stable@vger.kernel.org # 5.10+
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/entry/entry_32.S | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index d3a814efbff6..9ad6cd89b7ac 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -871,6 +871,8 @@ SYM_FUNC_START(entry_SYSENTER_32)
/* Now ready to switch the cr3 */
SWITCH_TO_USER_CR3 scratch_reg=%eax
+ /* Clobbers ZF */
+ CLEAR_CPU_BUFFERS
/*
* Restore all flags except IF. (We restore IF separately because
@@ -881,7 +883,6 @@ SYM_FUNC_START(entry_SYSENTER_32)
BUG_IF_WRONG_CR3 no_user_check=1
popfl
popl %eax
- CLEAR_CPU_BUFFERS
/*
* Return back to the vDSO, which will pop ecx and edx.
--
2.34.1
* [PATCH v7 2/3] x86/entry_32: Clear CPU buffers after register restore in NMI return
2024-09-25 22:25 [PATCH v7 0/3] Fix dosemu vm86() fault Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 1/3] x86/entry_32: Do not clobber user EFLAGS.ZF Pawan Gupta
@ 2024-09-25 22:25 ` Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand Pawan Gupta
2024-10-08 13:52 ` [PATCH v7 0/3] Fix dosemu vm86() fault Thorsten Leemhuis
3 siblings, 0 replies; 17+ messages in thread
From: Pawan Gupta @ 2024-09-25 22:25 UTC (permalink / raw)
To: Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
CPU buffers are currently cleared after the call to exc_nmi, but before
register state is restored. This may be okay for the MDS mitigation, but not
for RFDS, because the RFDS mitigation requires CPU buffers to be cleared when
registers don't hold any sensitive data.
Move CLEAR_CPU_BUFFERS after RESTORE_ALL_NMI.
Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
Cc: stable@vger.kernel.org # 5.10+
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/entry/entry_32.S | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 9ad6cd89b7ac..20be5758c2d2 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1145,7 +1145,6 @@ SYM_CODE_START(asm_exc_nmi)
/* Not on SYSENTER stack. */
call exc_nmi
- CLEAR_CPU_BUFFERS
jmp .Lnmi_return
.Lnmi_from_sysenter_stack:
@@ -1166,6 +1165,7 @@ SYM_CODE_START(asm_exc_nmi)
CHECK_AND_APPLY_ESPFIX
RESTORE_ALL_NMI cr3_reg=%edi pop=4
+ CLEAR_CPU_BUFFERS
jmp .Lirq_return
#ifdef CONFIG_X86_ESPFIX32
@@ -1207,6 +1207,7 @@ SYM_CODE_START(asm_exc_nmi)
* 1 - orig_ax
*/
lss (1+5+6)*4(%esp), %esp # back to espfix stack
+ CLEAR_CPU_BUFFERS
jmp .Lirq_return
#endif
SYM_CODE_END(asm_exc_nmi)
--
2.34.1
* [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-25 22:25 [PATCH v7 0/3] Fix dosemu vm86() fault Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 1/3] x86/entry_32: Do not clobber user EFLAGS.ZF Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 2/3] x86/entry_32: Clear CPU buffers after register restore in NMI return Pawan Gupta
@ 2024-09-25 22:25 ` Pawan Gupta
2024-09-25 23:29 ` Andrew Cooper
2024-10-08 13:52 ` [PATCH v7 0/3] Fix dosemu vm86() fault Thorsten Leemhuis
3 siblings, 1 reply; 17+ messages in thread
From: Pawan Gupta @ 2024-09-25 22:25 UTC (permalink / raw)
To: Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
Robert Gill reported below #GP in 32-bit mode when dosemu software was
executing vm86() system call:
general protection fault: 0000 [#1] PREEMPT SMP
CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
EIP: restore_all_switch_stack+0xbe/0xcf
EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
Call Trace:
show_regs+0x70/0x78
die_addr+0x29/0x70
exc_general_protection+0x13c/0x348
exc_bounds+0x98/0x98
handle_exception+0x14d/0x14d
exc_bounds+0x98/0x98
restore_all_switch_stack+0xbe/0xcf
exc_bounds+0x98/0x98
restore_all_switch_stack+0xbe/0xcf
This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
are enabled. This is because segment registers with an arbitrary user value
can result in #GP when executing VERW. Intel SDM vol. 2C documents the
following behavior for VERW instruction:
#GP(0) - If a memory operand effective address is outside the CS, DS, ES,
FS, or GS segment limit.
CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
space. Use %cs selector to reference VERW operand. This ensures VERW will
not #GP for an arbitrary user %ds.
Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
Cc: stable@vger.kernel.org # 5.10+
Reported-by: Robert Gill <rtgill82@gmail.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Suggested-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/include/asm/nospec-branch.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index ff5f1ecc7d1e..e18a6aaf414c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -318,12 +318,14 @@
/*
* Macro to execute VERW instruction that mitigate transient data sampling
* attacks such as MDS. On affected systems a microcode update overloaded VERW
- * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
+ * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
+ * 32-bit mode.
*
* Note: Only the memory operand variant of VERW clears the CPU buffers.
*/
.macro CLEAR_CPU_BUFFERS
- ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
+ ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
.endm
#ifdef CONFIG_X86_64
--
2.34.1
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-25 22:25 ` [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand Pawan Gupta
@ 2024-09-25 23:29 ` Andrew Cooper
2024-09-25 23:46 ` Pawan Gupta
0 siblings, 1 reply; 17+ messages in thread
From: Andrew Cooper @ 2024-09-25 23:29 UTC (permalink / raw)
To: Pawan Gupta, Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On 25/09/2024 11:25 pm, Pawan Gupta wrote:
> Robert Gill reported below #GP in 32-bit mode when dosemu software was
> executing vm86() system call:
>
> general protection fault: 0000 [#1] PREEMPT SMP
> CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
> Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
> EIP: restore_all_switch_stack+0xbe/0xcf
> EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
> ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
> DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
> CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
> Call Trace:
> show_regs+0x70/0x78
> die_addr+0x29/0x70
> exc_general_protection+0x13c/0x348
> exc_bounds+0x98/0x98
> handle_exception+0x14d/0x14d
> exc_bounds+0x98/0x98
> restore_all_switch_stack+0xbe/0xcf
> exc_bounds+0x98/0x98
> restore_all_switch_stack+0xbe/0xcf
>
> This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
> are enabled. This is because segment registers with an arbitrary user value
> can result in #GP when executing VERW. Intel SDM vol. 2C documents the
> following behavior for VERW instruction:
>
> #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
> FS, or GS segment limit.
>
> CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
> space. Use %cs selector to reference VERW operand. This ensures VERW will
> not #GP for an arbitrary user %ds.
>
> Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
> Cc: stable@vger.kernel.org # 5.10+
> Reported-by: Robert Gill <rtgill82@gmail.com>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
> Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Suggested-by: Brian Gerst <brgerst@gmail.com>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
> arch/x86/include/asm/nospec-branch.h | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index ff5f1ecc7d1e..e18a6aaf414c 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,12 +318,14 @@
> /*
> * Macro to execute VERW instruction that mitigate transient data sampling
> * attacks such as MDS. On affected systems a microcode update overloaded VERW
> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> + * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> + * 32-bit mode.
> *
> * Note: Only the memory operand variant of VERW clears the CPU buffers.
> */
> .macro CLEAR_CPU_BUFFERS
> - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> + ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> .endm
People ought rightly to double-take at this using %cs and not %ss.
There is a good reason, but it needs describing explicitly. May I
suggest the following:
*...
* In 32bit mode, the memory operand must be a %cs reference. The data
segments may not be usable (vm86 mode), and the stack segment may not be
flat (espfix32).
*...
.macro CLEAR_CPU_BUFFERS
#ifdef __x86_64__
ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
#else
ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
#endif
.endm
This also lets you drop _ASM_RIP(). It's a cute idea, but is more
confusion than it's worth, because there's no such thing in 32bit mode.
"%cs:_ASM_RIP(mds_verw_sel)" reads as if it does nothing, because it
really doesn't in 64bit mode.
~Andrew
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-25 23:29 ` Andrew Cooper
@ 2024-09-25 23:46 ` Pawan Gupta
2024-09-26 0:17 ` Pawan Gupta
0 siblings, 1 reply; 17+ messages in thread
From: Pawan Gupta @ 2024-09-25 23:46 UTC (permalink / raw)
To: Andrew Cooper
Cc: Borislav Petkov, Dave Hansen, linux-kernel, x86, Robert Gill,
Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On Thu, Sep 26, 2024 at 12:29:00AM +0100, Andrew Cooper wrote:
> On 25/09/2024 11:25 pm, Pawan Gupta wrote:
> > Robert Gill reported below #GP in 32-bit mode when dosemu software was
> > executing vm86() system call:
> >
> > general protection fault: 0000 [#1] PREEMPT SMP
> > CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
> > Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
> > EIP: restore_all_switch_stack+0xbe/0xcf
> > EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
> > ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
> > DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
> > CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
> > Call Trace:
> > show_regs+0x70/0x78
> > die_addr+0x29/0x70
> > exc_general_protection+0x13c/0x348
> > exc_bounds+0x98/0x98
> > handle_exception+0x14d/0x14d
> > exc_bounds+0x98/0x98
> > restore_all_switch_stack+0xbe/0xcf
> > exc_bounds+0x98/0x98
> > restore_all_switch_stack+0xbe/0xcf
> >
> > This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
> > are enabled. This is because segment registers with an arbitrary user value
> > can result in #GP when executing VERW. Intel SDM vol. 2C documents the
> > following behavior for VERW instruction:
> >
> > #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
> > FS, or GS segment limit.
> >
> > CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
> > space. Use %cs selector to reference VERW operand. This ensures VERW will
> > not #GP for an arbitrary user %ds.
> >
> > Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
> > Cc: stable@vger.kernel.org # 5.10+
> > Reported-by: Robert Gill <rtgill82@gmail.com>
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
> > Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
> > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > Suggested-by: Brian Gerst <brgerst@gmail.com>
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> > arch/x86/include/asm/nospec-branch.h | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > index ff5f1ecc7d1e..e18a6aaf414c 100644
> > --- a/arch/x86/include/asm/nospec-branch.h
> > +++ b/arch/x86/include/asm/nospec-branch.h
> > @@ -318,12 +318,14 @@
> > /*
> > * Macro to execute VERW instruction that mitigate transient data sampling
> > * attacks such as MDS. On affected systems a microcode update overloaded VERW
> > - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> > + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> > + * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> > + * 32-bit mode.
> > *
> > * Note: Only the memory operand variant of VERW clears the CPU buffers.
> > */
> > .macro CLEAR_CPU_BUFFERS
> > - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > + ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > .endm
>
> People ought rightly to double-take at this using %cs and not %ss.
> There is a good reason, but it needs describing explicitly. May I
> suggest the following:
>
> *...
> * In 32bit mode, the memory operand must be a %cs reference. The data
> segments may not be usable (vm86 mode), and the stack segment may not be
> flat (espfix32).
> *...
Thanks for the suggestion. I will include this.
> .macro CLEAR_CPU_BUFFERS
> #ifdef __x86_64__
> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
> #else
> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
> #endif
> .endm
>
> This also lets you drop _ASM_RIP(). It's a cute idea, but is more
> confusion than it's worth, because there's no such thing in 32bit mode.
>
> "%cs:_ASM_RIP(mds_verw_sel)" reads as if it does nothing, because it
> really doesn't in 64bit mode.
Right, will drop _ASM_RIP() in 32-bit mode and %cs in 64-bit mode.
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-25 23:46 ` Pawan Gupta
@ 2024-09-26 0:17 ` Pawan Gupta
2024-09-26 0:32 ` Andrew Cooper
2024-09-26 14:52 ` Uros Bizjak
0 siblings, 2 replies; 17+ messages in thread
From: Pawan Gupta @ 2024-09-26 0:17 UTC (permalink / raw)
To: Andrew Cooper
Cc: Borislav Petkov, Dave Hansen, linux-kernel, x86, Robert Gill,
Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On Wed, Sep 25, 2024 at 04:46:23PM -0700, Pawan Gupta wrote:
> On Thu, Sep 26, 2024 at 12:29:00AM +0100, Andrew Cooper wrote:
> > On 25/09/2024 11:25 pm, Pawan Gupta wrote:
> > > Robert Gill reported below #GP in 32-bit mode when dosemu software was
> > > executing vm86() system call:
> > >
> > > general protection fault: 0000 [#1] PREEMPT SMP
> > > CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
> > > Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
> > > EIP: restore_all_switch_stack+0xbe/0xcf
> > > EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
> > > ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
> > > DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
> > > CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
> > > Call Trace:
> > > show_regs+0x70/0x78
> > > die_addr+0x29/0x70
> > > exc_general_protection+0x13c/0x348
> > > exc_bounds+0x98/0x98
> > > handle_exception+0x14d/0x14d
> > > exc_bounds+0x98/0x98
> > > restore_all_switch_stack+0xbe/0xcf
> > > exc_bounds+0x98/0x98
> > > restore_all_switch_stack+0xbe/0xcf
> > >
> > > This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
> > > are enabled. This is because segment registers with an arbitrary user value
> > > can result in #GP when executing VERW. Intel SDM vol. 2C documents the
> > > following behavior for VERW instruction:
> > >
> > > #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
> > > FS, or GS segment limit.
> > >
> > > CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
> > > space. Use %cs selector to reference VERW operand. This ensures VERW will
> > > not #GP for an arbitrary user %ds.
> > >
> > > Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
> > > Cc: stable@vger.kernel.org # 5.10+
> > > Reported-by: Robert Gill <rtgill82@gmail.com>
> > > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
> > > Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
> > > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > > Suggested-by: Brian Gerst <brgerst@gmail.com>
> > > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > ---
> > > arch/x86/include/asm/nospec-branch.h | 6 ++++--
> > > 1 file changed, 4 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > > index ff5f1ecc7d1e..e18a6aaf414c 100644
> > > --- a/arch/x86/include/asm/nospec-branch.h
> > > +++ b/arch/x86/include/asm/nospec-branch.h
> > > @@ -318,12 +318,14 @@
> > > /*
> > > * Macro to execute VERW instruction that mitigate transient data sampling
> > > * attacks such as MDS. On affected systems a microcode update overloaded VERW
> > > - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> > > + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> > > + * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> > > + * 32-bit mode.
> > > *
> > > * Note: Only the memory operand variant of VERW clears the CPU buffers.
> > > */
> > > .macro CLEAR_CPU_BUFFERS
> > > - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > > + ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > > .endm
> >
> > People ought rightly to double-take at this using %cs and not %ss.
> > There is a good reason, but it needs describing explicitly. May I
> > suggest the following:
> >
> > *...
> > * In 32bit mode, the memory operand must be a %cs reference. The data
> > segments may not be usable (vm86 mode), and the stack segment may not be
> > flat (espfix32).
> > *...
>
> Thanks for the suggestion. I will include this.
>
> > .macro CLEAR_CPU_BUFFERS
> > #ifdef __x86_64__
> > ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
> > #else
> > ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
> > #endif
> > .endm
> >
> > This also lets you drop _ASM_RIP(). It's a cute idea, but is more
> > confusion than it's worth, because there's no such thing in 32bit mode.
> >
> > "%cs:_ASM_RIP(mds_verw_sel)" reads as if it does nothing, because it
> > really doesn't in 64bit mode.
>
> Right, will drop _ASM_RIP() in 32-bit mode and %cs in 64-bit mode.
It's probably too soon for the next version; pasting the patch here:
---8<---
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index e18a6aaf414c..4228a1fd2c2e 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -318,14 +318,21 @@
/*
* Macro to execute VERW instruction that mitigate transient data sampling
* attacks such as MDS. On affected systems a microcode update overloaded VERW
- * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
- * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
- * 32-bit mode.
+ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
*
* Note: Only the memory operand variant of VERW clears the CPU buffers.
*/
.macro CLEAR_CPU_BUFFERS
- ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
+#ifdef CONFIG_X86_64
+ ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
+#else
+ /*
+ * In 32bit mode, the memory operand must be a %cs reference. The data
+ * segments may not be usable (vm86 mode), and the stack segment may not
+ * be flat (ESPFIX32).
+ */
+ ALTERNATIVE "", __stringify(verw %cs:mds_verw_sel), X86_FEATURE_CLEAR_CPU_BUF
+#endif
.endm
#ifdef CONFIG_X86_64
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-26 0:17 ` Pawan Gupta
@ 2024-09-26 0:32 ` Andrew Cooper
2024-09-26 1:04 ` Pawan Gupta
2024-09-26 14:52 ` Uros Bizjak
1 sibling, 1 reply; 17+ messages in thread
From: Andrew Cooper @ 2024-09-26 0:32 UTC (permalink / raw)
To: Pawan Gupta
Cc: Borislav Petkov, Dave Hansen, linux-kernel, x86, Robert Gill,
Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On 26/09/2024 1:17 am, Pawan Gupta wrote:
> On Wed, Sep 25, 2024 at 04:46:23PM -0700, Pawan Gupta wrote:
>> On Thu, Sep 26, 2024 at 12:29:00AM +0100, Andrew Cooper wrote:
>>> On 25/09/2024 11:25 pm, Pawan Gupta wrote:
>>>> Robert Gill reported below #GP in 32-bit mode when dosemu software was
>>>> executing vm86() system call:
>>>>
>>>> general protection fault: 0000 [#1] PREEMPT SMP
>>>> CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
>>>> Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
>>>> EIP: restore_all_switch_stack+0xbe/0xcf
>>>> EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
>>>> ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
>>>> DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
>>>> CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
>>>> Call Trace:
>>>> show_regs+0x70/0x78
>>>> die_addr+0x29/0x70
>>>> exc_general_protection+0x13c/0x348
>>>> exc_bounds+0x98/0x98
>>>> handle_exception+0x14d/0x14d
>>>> exc_bounds+0x98/0x98
>>>> restore_all_switch_stack+0xbe/0xcf
>>>> exc_bounds+0x98/0x98
>>>> restore_all_switch_stack+0xbe/0xcf
>>>>
>>>> This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
>>>> are enabled. This is because segment registers with an arbitrary user value
>>>> can result in #GP when executing VERW. Intel SDM vol. 2C documents the
>>>> following behavior for VERW instruction:
>>>>
>>>> #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
>>>> FS, or GS segment limit.
>>>>
>>>> CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
>>>> space. Use %cs selector to reference VERW operand. This ensures VERW will
>>>> not #GP for an arbitrary user %ds.
>>>>
>>>> Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
>>>> Cc: stable@vger.kernel.org # 5.10+
>>>> Reported-by: Robert Gill <rtgill82@gmail.com>
>>>> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
>>>> Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
>>>> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
>>>> Suggested-by: Brian Gerst <brgerst@gmail.com>
>>>> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>>>> ---
>>>> arch/x86/include/asm/nospec-branch.h | 6 ++++--
>>>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>>>> index ff5f1ecc7d1e..e18a6aaf414c 100644
>>>> --- a/arch/x86/include/asm/nospec-branch.h
>>>> +++ b/arch/x86/include/asm/nospec-branch.h
>>>> @@ -318,12 +318,14 @@
>>>> /*
>>>> * Macro to execute VERW instruction that mitigate transient data sampling
>>>> * attacks such as MDS. On affected systems a microcode update overloaded VERW
>>>> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
>>>> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
>>>> + * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
>>>> + * 32-bit mode.
>>>> *
>>>> * Note: Only the memory operand variant of VERW clears the CPU buffers.
>>>> */
>>>> .macro CLEAR_CPU_BUFFERS
>>>> - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>>> + ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>>> .endm
>>> People ought rightly to double-take at this using %cs and not %ss.
>>> There is a good reason, but it needs describing explicitly. May I
>>> suggest the following:
>>>
>>> *...
>>> * In 32bit mode, the memory operand must be a %cs reference. The data
>>> segments may not be usable (vm86 mode), and the stack segment may not be
>>> flat (espfix32).
>>> *...
>> Thanks for the suggestion. I will include this.
>>
>>> .macro CLEAR_CPU_BUFFERS
>>> #ifdef __x86_64__
>>> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
>>> #else
>>> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
>>> #endif
>>> .endm
>>>
>>> This also lets you drop _ASM_RIP(). It's a cute idea, but is more
>>> confusion than it's worth, because there's no such thing in 32bit mode.
>>>
>>> "%cs:_ASM_RIP(mds_verw_sel)" reads as if it does nothing, because it
>>> really doesn't in 64bit mode.
>> Right, will drop _ASM_RIP() in 32-bit mode and %cs in 64-bit mode.
> Its probably too soon for next version, pasting the patch here:
>
> ---8<---
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index e18a6aaf414c..4228a1fd2c2e 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,14 +318,21 @@
> /*
> * Macro to execute VERW instruction that mitigate transient data sampling
> * attacks such as MDS. On affected systems a microcode update overloaded VERW
> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> - * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> - * 32-bit mode.
> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> *
> * Note: Only the memory operand variant of VERW clears the CPU buffers.
> */
> .macro CLEAR_CPU_BUFFERS
> - ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> +#ifdef CONFIG_X86_64
> + ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> +#else
> + /*
> + * In 32bit mode, the memory operand must be a %cs reference. The data
> + * segments may not be usable (vm86 mode), and the stack segment may not
> + * be flat (ESPFIX32).
> + */
I was intending for this to replace the "Using %cs" sentence, as a new
paragraph in that main comment block.
Otherwise, yes, this is half of what I had in mind.
~Andrew
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-26 0:32 ` Andrew Cooper
@ 2024-09-26 1:04 ` Pawan Gupta
0 siblings, 0 replies; 17+ messages in thread
From: Pawan Gupta @ 2024-09-26 1:04 UTC (permalink / raw)
To: Andrew Cooper
Cc: Borislav Petkov, Dave Hansen, linux-kernel, x86, Robert Gill,
Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On Thu, Sep 26, 2024 at 01:32:19AM +0100, Andrew Cooper wrote:
> On 26/09/2024 1:17 am, Pawan Gupta wrote:
> > On Wed, Sep 25, 2024 at 04:46:23PM -0700, Pawan Gupta wrote:
> >> On Thu, Sep 26, 2024 at 12:29:00AM +0100, Andrew Cooper wrote:
> >>> On 25/09/2024 11:25 pm, Pawan Gupta wrote:
> >>>> Robert Gill reported below #GP in 32-bit mode when dosemu software was
> >>>> executing vm86() system call:
> >>>>
> >>>> general protection fault: 0000 [#1] PREEMPT SMP
> >>>> CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
> >>>> Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
> >>>> EIP: restore_all_switch_stack+0xbe/0xcf
> >>>> EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
> >>>> ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
> >>>> DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
> >>>> CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
> >>>> Call Trace:
> >>>> show_regs+0x70/0x78
> >>>> die_addr+0x29/0x70
> >>>> exc_general_protection+0x13c/0x348
> >>>> exc_bounds+0x98/0x98
> >>>> handle_exception+0x14d/0x14d
> >>>> exc_bounds+0x98/0x98
> >>>> restore_all_switch_stack+0xbe/0xcf
> >>>> exc_bounds+0x98/0x98
> >>>> restore_all_switch_stack+0xbe/0xcf
> >>>>
> >>>> This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
> >>>> are enabled. This is because segment registers with an arbitrary user value
> >>>> can result in #GP when executing VERW. Intel SDM vol. 2C documents the
> >>>> following behavior for VERW instruction:
> >>>>
> >>>> #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
> >>>> FS, or GS segment limit.
> >>>>
> >>>> CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
> >>>> space. Use %cs selector to reference VERW operand. This ensures VERW will
> >>>> not #GP for an arbitrary user %ds.
> >>>>
> >>>> Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
> >>>> Cc: stable@vger.kernel.org # 5.10+
> >>>> Reported-by: Robert Gill <rtgill82@gmail.com>
> >>>> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
> >>>> Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
> >>>> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> >>>> Suggested-by: Brian Gerst <brgerst@gmail.com>
> >>>> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> >>>> ---
> >>>> arch/x86/include/asm/nospec-branch.h | 6 ++++--
> >>>> 1 file changed, 4 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> >>>> index ff5f1ecc7d1e..e18a6aaf414c 100644
> >>>> --- a/arch/x86/include/asm/nospec-branch.h
> >>>> +++ b/arch/x86/include/asm/nospec-branch.h
> >>>> @@ -318,12 +318,14 @@
> >>>> /*
> >>>> * Macro to execute VERW instruction that mitigate transient data sampling
> >>>> * attacks such as MDS. On affected systems a microcode update overloaded VERW
> >>>> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> >>>> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> >>>> + * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> >>>> + * 32-bit mode.
> >>>> *
> >>>> * Note: Only the memory operand variant of VERW clears the CPU buffers.
> >>>> */
> >>>> .macro CLEAR_CPU_BUFFERS
> >>>> - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> >>>> + ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> >>>> .endm
> >>> People ought rightly to double-take at this using %cs and not %ss.
> >>> There is a good reason, but it needs describing explicitly. May I
> >>> suggest the following:
> >>>
> >>> *...
> >>> * In 32bit mode, the memory operand must be a %cs reference. The data
> >>> segments may not be usable (vm86 mode), and the stack segment may not be
> >>> flat (espfix32).
> >>> *...
> >> Thanks for the suggestion. I will include this.
> >>
> >>> .macro CLEAR_CPU_BUFFERS
> >>> #ifdef __x86_64__
> >>> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
> >>> #else
> >>> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
> >>> #endif
> >>> .endm
> >>>
> >>> This also lets you drop _ASM_RIP(). It's a cute idea, but is more
> >>> confusion than it's worth, because there's no such thing in 32bit mode.
> >>>
> >>> "%cs:_ASM_RIP(mds_verw_sel)" reads as if it does nothing, because it
> >>> really doesn't in 64bit mode.
> >> Right, will drop _ASM_RIP() in 32-bit mode and %cs in 64-bit mode.
> > It's probably too soon for the next version; pasting the patch here:
> >
> > ---8<---
> > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > index e18a6aaf414c..4228a1fd2c2e 100644
> > --- a/arch/x86/include/asm/nospec-branch.h
> > +++ b/arch/x86/include/asm/nospec-branch.h
> > @@ -318,14 +318,21 @@
> > /*
> > * Macro to execute VERW instruction that mitigate transient data sampling
> > * attacks such as MDS. On affected systems a microcode update overloaded VERW
> > - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> > - * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> > - * 32-bit mode.
> > + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> > *
> > * Note: Only the memory operand variant of VERW clears the CPU buffers.
> > */
> > .macro CLEAR_CPU_BUFFERS
> > - ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > +#ifdef CONFIG_X86_64
> > + ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > +#else
> > + /*
> > + * In 32bit mode, the memory operand must be a %cs reference. The data
> > + * segments may not be usable (vm86 mode), and the stack segment may not
> > + * be flat (ESPFIX32).
> > + */
>
> I was intending for this to replace the "Using %cs" sentence, as a new
> paragraph in that main comment block.
The reason I added the comment to the 32-bit leg is that most readers will
not care about 32-bit mode; there the comment would mostly be a distraction
for the majority. People who do care about 32-bit mode will read it in the
32-bit leg. I can move the comment to the main block if you still want.
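As an aside for readers following the thread, the SDM rule quoted earlier in
this message (#GP(0) when the memory operand's effective address is outside
the segment limit) can be sketched as a toy model. The helper below is
hypothetical and illustrative only; real segmentation has more fault cases
than shown here:

```python
def verw_would_fault(selector, seg_limit, effective_addr, operand_bytes=2):
    """Toy model of when 'verw m16' raises #GP(0) on its memory operand.

    A null selector leaves the segment unusable, so any access through
    it faults -- the dosemu case, where vm86() left %ds = 0. Otherwise
    the access faults if the operand extends past the segment limit.
    """
    if selector & ~0x3 == 0:  # null selector (RPL bits ignored)
        return True
    return effective_addr + operand_bytes - 1 > seg_limit

# Referencing the operand through a flat segment (limit 0xFFFFFFFF) never
# faults; the user-controlled %ds = 0 left behind by vm86 mode does.
```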
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-26 0:17 ` Pawan Gupta
2024-09-26 0:32 ` Andrew Cooper
@ 2024-09-26 14:52 ` Uros Bizjak
2024-09-26 16:10 ` Pawan Gupta
1 sibling, 1 reply; 17+ messages in thread
From: Uros Bizjak @ 2024-09-26 14:52 UTC (permalink / raw)
To: Pawan Gupta, Andrew Cooper
Cc: Borislav Petkov, Dave Hansen, linux-kernel, x86, Robert Gill,
Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On 26. 09. 24 02:17, Pawan Gupta wrote:
> On Wed, Sep 25, 2024 at 04:46:23PM -0700, Pawan Gupta wrote:
>> On Thu, Sep 26, 2024 at 12:29:00AM +0100, Andrew Cooper wrote:
>>> On 25/09/2024 11:25 pm, Pawan Gupta wrote:
>>>> Robert Gill reported below #GP in 32-bit mode when dosemu software was
>>>> executing vm86() system call:
>>>>
>>>> general protection fault: 0000 [#1] PREEMPT SMP
>>>> CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
>>>> Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
>>>> EIP: restore_all_switch_stack+0xbe/0xcf
>>>> EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
>>>> ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
>>>> DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
>>>> CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
>>>> Call Trace:
>>>> show_regs+0x70/0x78
>>>> die_addr+0x29/0x70
>>>> exc_general_protection+0x13c/0x348
>>>> exc_bounds+0x98/0x98
>>>> handle_exception+0x14d/0x14d
>>>> exc_bounds+0x98/0x98
>>>> restore_all_switch_stack+0xbe/0xcf
>>>> exc_bounds+0x98/0x98
>>>> restore_all_switch_stack+0xbe/0xcf
>>>>
>>>> This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
>>>> are enabled. This is because segment registers with an arbitrary user value
>>>> can result in #GP when executing VERW. Intel SDM vol. 2C documents the
>>>> following behavior for VERW instruction:
>>>>
>>>> #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
>>>> FS, or GS segment limit.
>>>>
>>>> CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
>>>> space. Use %cs selector to reference VERW operand. This ensures VERW will
>>>> not #GP for an arbitrary user %ds.
>>>>
>>>> Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
>>>> Cc: stable@vger.kernel.org # 5.10+
>>>> Reported-by: Robert Gill <rtgill82@gmail.com>
>>>> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
>>>> Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
>>>> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
>>>> Suggested-by: Brian Gerst <brgerst@gmail.com>
>>>> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>>>> ---
>>>> arch/x86/include/asm/nospec-branch.h | 6 ++++--
>>>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>>>> index ff5f1ecc7d1e..e18a6aaf414c 100644
>>>> --- a/arch/x86/include/asm/nospec-branch.h
>>>> +++ b/arch/x86/include/asm/nospec-branch.h
>>>> @@ -318,12 +318,14 @@
>>>> /*
>>>> * Macro to execute VERW instruction that mitigate transient data sampling
>>>> * attacks such as MDS. On affected systems a microcode update overloaded VERW
>>>> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
>>>> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
>>>> + * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
>>>> + * 32-bit mode.
>>>> *
>>>> * Note: Only the memory operand variant of VERW clears the CPU buffers.
>>>> */
>>>> .macro CLEAR_CPU_BUFFERS
>>>> - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>>> + ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>>> .endm
>>>
>>> People ought rightly to double-take at this using %cs and not %ss.
>>> There is a good reason, but it needs describing explicitly. May I
>>> suggest the following:
>>>
>>> *...
>>> * In 32bit mode, the memory operand must be a %cs reference. The data
>>> segments may not be usable (vm86 mode), and the stack segment may not be
>>> flat (espfix32).
>>> *...
>>
>> Thanks for the suggestion. I will include this.
>>
>>> .macro CLEAR_CPU_BUFFERS
>>> #ifdef __x86_64__
>>> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
>>> #else
>>> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
>>> #endif
>>> .endm
>>>
>>> This also lets you drop _ASM_RIP(). It's a cute idea, but is more
>>> confusion than it's worth, because there's no such thing in 32bit mode.
>>>
>>> "%cs:_ASM_RIP(mds_verw_sel)" reads as if it does nothing, because it
>>> really doesn't in 64bit mode.
>>
>> Right, will drop _ASM_RIP() in 32-bit mode and %cs in 64-bit mode.
>
> It's probably too soon for the next version; pasting the patch here:
>
> ---8<---
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index e18a6aaf414c..4228a1fd2c2e 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -318,14 +318,21 @@
> /*
> * Macro to execute VERW instruction that mitigate transient data sampling
> * attacks such as MDS. On affected systems a microcode update overloaded VERW
> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> - * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> - * 32-bit mode.
> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> *
> * Note: Only the memory operand variant of VERW clears the CPU buffers.
> */
> .macro CLEAR_CPU_BUFFERS
> - ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> +#ifdef CONFIG_X86_64
> + ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
You should drop _ASM_RIP here and directly use (%rip). This way, you
also won't need __stringify:
ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
> +#else
> + /*
> + * In 32bit mode, the memory operand must be a %cs reference. The data
> + * segments may not be usable (vm86 mode), and the stack segment may not
> + * be flat (ESPFIX32).
> + */
> + ALTERNATIVE "", __stringify(verw %cs:mds_verw_sel), X86_FEATURE_CLEAR_CPU_BUF
Also here, no need for __stringify:
ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
This is in fact what Andrew proposed in his review.
Uros.
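As background on why the ALTERNATIVE lines above can take plain strings:
ALTERNATIVE patches the call site at boot based on a CPU feature flag. The
patching idea, reduced to a toy Python sketch (not the kernel's actual
implementation, which patches instructions in place and uses smarter NOP
padding):

```python
NOP = 0x90  # single-byte x86 NOP

def apply_alternative(orig, alt, feature_present, site_len=None):
    """Toy model of alternatives patching: the call site is sized for
    the longer sequence; the chosen bytes are kept and the remainder
    is padded out with NOPs."""
    if site_len is None:
        site_len = max(len(orig), len(alt))
    chosen = alt if feature_present else orig
    return bytes(chosen) + bytes([NOP]) * (site_len - len(chosen))

# ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF:
# without the feature the whole site stays NOPs; with it, the verw runs.
```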
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-26 14:52 ` Uros Bizjak
@ 2024-09-26 16:10 ` Pawan Gupta
2024-09-26 16:28 ` Andrew Cooper
0 siblings, 1 reply; 17+ messages in thread
From: Pawan Gupta @ 2024-09-26 16:10 UTC (permalink / raw)
To: Uros Bizjak
Cc: Andrew Cooper, Borislav Petkov, Dave Hansen, linux-kernel, x86,
Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On Thu, Sep 26, 2024 at 04:52:53PM +0200, Uros Bizjak wrote:
> > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > index e18a6aaf414c..4228a1fd2c2e 100644
> > --- a/arch/x86/include/asm/nospec-branch.h
> > +++ b/arch/x86/include/asm/nospec-branch.h
> > @@ -318,14 +318,21 @@
> > /*
> > * Macro to execute VERW instruction that mitigate transient data sampling
> > * attacks such as MDS. On affected systems a microcode update overloaded VERW
> > - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> > - * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> > - * 32-bit mode.
> > + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> > *
> > * Note: Only the memory operand variant of VERW clears the CPU buffers.
> > */
> > .macro CLEAR_CPU_BUFFERS
> > - ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > +#ifdef CONFIG_X86_64
> > + ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>
> > You should drop _ASM_RIP here and directly use (%rip). This way, you also
> won't need __stringify:
>
> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
>
> > +#else
> > + /*
> > + * In 32bit mode, the memory operand must be a %cs reference. The data
> > + * segments may not be usable (vm86 mode), and the stack segment may not
> > + * be flat (ESPFIX32).
> > + */
> > + ALTERNATIVE "", __stringify(verw %cs:mds_verw_sel), X86_FEATURE_CLEAR_CPU_BUF
>
> Also here, no need for __stringify:
>
> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
>
> This is in fact what Andrew proposed in his review.
Thanks for pointing that out; I completely missed that part. Below is how
it looks with __stringify gone:
--- >8 ---
Subject: [PATCH] x86/bugs: Use code segment selector for VERW operand
Robert Gill reported below #GP in 32-bit mode when dosemu software was
executing vm86() system call:
general protection fault: 0000 [#1] PREEMPT SMP
CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
EIP: restore_all_switch_stack+0xbe/0xcf
EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
Call Trace:
show_regs+0x70/0x78
die_addr+0x29/0x70
exc_general_protection+0x13c/0x348
exc_bounds+0x98/0x98
handle_exception+0x14d/0x14d
exc_bounds+0x98/0x98
restore_all_switch_stack+0xbe/0xcf
exc_bounds+0x98/0x98
restore_all_switch_stack+0xbe/0xcf
This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
are enabled. This is because segment registers with an arbitrary user value
can result in #GP when executing VERW. Intel SDM vol. 2C documents the
following behavior for VERW instruction:
#GP(0) - If a memory operand effective address is outside the CS, DS, ES,
FS, or GS segment limit.
CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
space. Use %cs selector to reference VERW operand. This ensures VERW will
not #GP for an arbitrary user %ds.
Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
Cc: stable@vger.kernel.org # 5.10+
Reported-by: Robert Gill <rtgill82@gmail.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Suggested-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/include/asm/nospec-branch.h | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index ff5f1ecc7d1e..96b410b1d4e8 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -323,7 +323,16 @@
* Note: Only the memory operand variant of VERW clears the CPU buffers.
*/
.macro CLEAR_CPU_BUFFERS
- ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
+#ifdef CONFIG_X86_64
+ ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
+#else
+ /*
+ * In 32bit mode, the memory operand must be a %cs reference. The data
+ * segments may not be usable (vm86 mode), and the stack segment may not
+ * be flat (ESPFIX32).
+ */
+ ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
+#endif
.endm
#ifdef CONFIG_X86_64
--
2.34.1
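To summarize what the patch above leaves at each call site once the
alternative is applied, here is a toy model (illustrative only; the function
name and boolean flags are hypothetical, not kernel identifiers):

```python
def clear_cpu_buffers_expansion(config_x86_64, feature_clear_cpu_buf):
    """Toy model of the final CLEAR_CPU_BUFFERS behavior from the patch
    above, per build mode and CPU feature flag."""
    if not feature_clear_cpu_buf:
        return None  # site patched out: effectively NOPs
    if config_x86_64:
        # 64-bit: segments are flat, so a plain RIP-relative reference works.
        return "verw mds_verw_sel(%rip)"
    # 32-bit: %cs override, since %ds may be unusable (vm86 mode) and
    # %ss may not be flat (ESPFIX32).
    return "verw %cs:mds_verw_sel"
```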
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-26 16:10 ` Pawan Gupta
@ 2024-09-26 16:28 ` Andrew Cooper
2024-09-26 16:56 ` Pawan Gupta
0 siblings, 1 reply; 17+ messages in thread
From: Andrew Cooper @ 2024-09-26 16:28 UTC (permalink / raw)
To: Pawan Gupta, Uros Bizjak
Cc: Borislav Petkov, Dave Hansen, linux-kernel, x86, Robert Gill,
Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On 26/09/2024 5:10 pm, Pawan Gupta wrote:
> On Thu, Sep 26, 2024 at 04:52:53PM +0200, Uros Bizjak wrote:
>>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>>> index e18a6aaf414c..4228a1fd2c2e 100644
>>> --- a/arch/x86/include/asm/nospec-branch.h
>>> +++ b/arch/x86/include/asm/nospec-branch.h
>>> @@ -318,14 +318,21 @@
>>> /*
>>> * Macro to execute VERW instruction that mitigate transient data sampling
>>> * attacks such as MDS. On affected systems a microcode update overloaded VERW
>>> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
>>> - * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
>>> - * 32-bit mode.
>>> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
>>> *
>>> * Note: Only the memory operand variant of VERW clears the CPU buffers.
>>> */
>>> .macro CLEAR_CPU_BUFFERS
>>> - ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>> +#ifdef CONFIG_X86_64
>>> + ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>> You should drop _ASM_RIP here and directly use (%rip). This way, you also
>> won't need __stringify:
>>
>> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
>>
>>> +#else
>>> + /*
>>> + * In 32bit mode, the memory operand must be a %cs reference. The data
>>> + * segments may not be usable (vm86 mode), and the stack segment may not
>>> + * be flat (ESPFIX32).
>>> + */
>>> + ALTERNATIVE "", __stringify(verw %cs:mds_verw_sel), X86_FEATURE_CLEAR_CPU_BUF
>> Also here, no need for __stringify:
>>
>> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
>>
>> This is in fact what Andrew proposed in his review.
> Thanks for pointing that out; I completely missed that part. Below is how
> it looks with __stringify gone:
>
> --- >8 ---
> Subject: [PATCH] x86/bugs: Use code segment selector for VERW operand
>
> Robert Gill reported below #GP in 32-bit mode when dosemu software was
> executing vm86() system call:
>
> general protection fault: 0000 [#1] PREEMPT SMP
> CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
> Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
> EIP: restore_all_switch_stack+0xbe/0xcf
> EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
> ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
> DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
> CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
> Call Trace:
> show_regs+0x70/0x78
> die_addr+0x29/0x70
> exc_general_protection+0x13c/0x348
> exc_bounds+0x98/0x98
> handle_exception+0x14d/0x14d
> exc_bounds+0x98/0x98
> restore_all_switch_stack+0xbe/0xcf
> exc_bounds+0x98/0x98
> restore_all_switch_stack+0xbe/0xcf
>
> This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
> are enabled. This is because segment registers with an arbitrary user value
> can result in #GP when executing VERW. Intel SDM vol. 2C documents the
> following behavior for VERW instruction:
>
> #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
> FS, or GS segment limit.
>
> CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
> space. Use %cs selector to reference VERW operand. This ensures VERW will
> not #GP for an arbitrary user %ds.
>
> Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
> Cc: stable@vger.kernel.org # 5.10+
> Reported-by: Robert Gill <rtgill82@gmail.com>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
> Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Suggested-by: Brian Gerst <brgerst@gmail.com>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
> arch/x86/include/asm/nospec-branch.h | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index ff5f1ecc7d1e..96b410b1d4e8 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -323,7 +323,16 @@
> * Note: Only the memory operand variant of VERW clears the CPU buffers.
> */
> .macro CLEAR_CPU_BUFFERS
> - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> +#ifdef CONFIG_X86_64
> + ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
> +#else
> + /*
> + * In 32bit mode, the memory operand must be a %cs reference. The data
> + * segments may not be usable (vm86 mode), and the stack segment may not
> + * be flat (ESPFIX32).
> + */
> + ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
> +#endif
You should also delete _ASM_RIP() as you're removing the only user of it.
But yes, with that, Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com> FWIW.
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-26 16:28 ` Andrew Cooper
@ 2024-09-26 16:56 ` Pawan Gupta
2024-09-26 17:01 ` Andrew Cooper
0 siblings, 1 reply; 17+ messages in thread
From: Pawan Gupta @ 2024-09-26 16:56 UTC (permalink / raw)
To: Andrew Cooper
Cc: Uros Bizjak, Borislav Petkov, Dave Hansen, linux-kernel, x86,
Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On Thu, Sep 26, 2024 at 05:28:05PM +0100, Andrew Cooper wrote:
> On 26/09/2024 5:10 pm, Pawan Gupta wrote:
> > On Thu, Sep 26, 2024 at 04:52:53PM +0200, Uros Bizjak wrote:
> >>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> >>> index e18a6aaf414c..4228a1fd2c2e 100644
> >>> --- a/arch/x86/include/asm/nospec-branch.h
> >>> +++ b/arch/x86/include/asm/nospec-branch.h
> >>> @@ -318,14 +318,21 @@
> >>> /*
> >>> * Macro to execute VERW instruction that mitigate transient data sampling
> >>> * attacks such as MDS. On affected systems a microcode update overloaded VERW
> >>> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
> >>> - * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
> >>> - * 32-bit mode.
> >>> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> >>> *
> >>> * Note: Only the memory operand variant of VERW clears the CPU buffers.
> >>> */
> >>> .macro CLEAR_CPU_BUFFERS
> >>> - ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> >>> +#ifdef CONFIG_X86_64
> >>> + ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> >> You should drop _ASM_RIP here and directly use (%rip). This way, you also
> >> won't need __stringify:
> >>
> >> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
> >>
> >>> +#else
> >>> + /*
> >>> + * In 32bit mode, the memory operand must be a %cs reference. The data
> >>> + * segments may not be usable (vm86 mode), and the stack segment may not
> >>> + * be flat (ESPFIX32).
> >>> + */
> >>> + ALTERNATIVE "", __stringify(verw %cs:mds_verw_sel), X86_FEATURE_CLEAR_CPU_BUF
> >> Also here, no need for __stringify:
> >>
> >> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
> >>
> >> This is in fact what Andrew proposed in his review.
> > Thanks for pointing that out; I completely missed that part. Below is how
> > it looks with __stringify gone:
> >
> > --- >8 ---
> > Subject: [PATCH] x86/bugs: Use code segment selector for VERW operand
> >
> > Robert Gill reported below #GP in 32-bit mode when dosemu software was
> > executing vm86() system call:
> >
> > general protection fault: 0000 [#1] PREEMPT SMP
> > CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
> > Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
> > EIP: restore_all_switch_stack+0xbe/0xcf
> > EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
> > ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
> > DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
> > CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
> > Call Trace:
> > show_regs+0x70/0x78
> > die_addr+0x29/0x70
> > exc_general_protection+0x13c/0x348
> > exc_bounds+0x98/0x98
> > handle_exception+0x14d/0x14d
> > exc_bounds+0x98/0x98
> > restore_all_switch_stack+0xbe/0xcf
> > exc_bounds+0x98/0x98
> > restore_all_switch_stack+0xbe/0xcf
> >
> > This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
> > are enabled. This is because segment registers with an arbitrary user value
> > can result in #GP when executing VERW. Intel SDM vol. 2C documents the
> > following behavior for VERW instruction:
> >
> > #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
> > FS, or GS segment limit.
> >
> > CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
> > space. Use %cs selector to reference VERW operand. This ensures VERW will
> > not #GP for an arbitrary user %ds.
> >
> > Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
> > Cc: stable@vger.kernel.org # 5.10+
> > Reported-by: Robert Gill <rtgill82@gmail.com>
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
> > Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
> > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > Suggested-by: Brian Gerst <brgerst@gmail.com>
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> > arch/x86/include/asm/nospec-branch.h | 11 ++++++++++-
> > 1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> > index ff5f1ecc7d1e..96b410b1d4e8 100644
> > --- a/arch/x86/include/asm/nospec-branch.h
> > +++ b/arch/x86/include/asm/nospec-branch.h
> > @@ -323,7 +323,16 @@
> > * Note: Only the memory operand variant of VERW clears the CPU buffers.
> > */
> > .macro CLEAR_CPU_BUFFERS
> > - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
> > +#ifdef CONFIG_X86_64
> > + ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
> > +#else
> > + /*
> > + * In 32bit mode, the memory operand must be a %cs reference. The data
> > + * segments may not be usable (vm86 mode), and the stack segment may not
> > + * be flat (ESPFIX32).
> > + */
> > + ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
> > +#endif
>
> You should also delete _ASM_RIP() as you're removing the only user of it.
Can we? I see that __svm_vcpu_run() and __vmx_vcpu_run() are using _ASM_RIP().
> But yes, with that, Reviewed-by: Andrew Cooper
> <andrew.cooper3@citrix.com> FWIW.
Thanks.
* Re: [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand
2024-09-26 16:56 ` Pawan Gupta
@ 2024-09-26 17:01 ` Andrew Cooper
0 siblings, 0 replies; 17+ messages in thread
From: Andrew Cooper @ 2024-09-26 17:01 UTC (permalink / raw)
To: Pawan Gupta
Cc: Uros Bizjak, Borislav Petkov, Dave Hansen, linux-kernel, x86,
Robert Gill, Jari Ruusu, Brian Gerst,
Linux regression tracking (Thorsten Leemhuis),
antonio.gomez.iglesias, daniel.sneddon, stable
On 26/09/2024 5:56 pm, Pawan Gupta wrote:
> On Thu, Sep 26, 2024 at 05:28:05PM +0100, Andrew Cooper wrote:
>> On 26/09/2024 5:10 pm, Pawan Gupta wrote:
>>> On Thu, Sep 26, 2024 at 04:52:53PM +0200, Uros Bizjak wrote:
>>>>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>>>>> index e18a6aaf414c..4228a1fd2c2e 100644
>>>>> --- a/arch/x86/include/asm/nospec-branch.h
>>>>> +++ b/arch/x86/include/asm/nospec-branch.h
>>>>> @@ -318,14 +318,21 @@
>>>>> /*
>>>>> * Macro to execute VERW instruction that mitigate transient data sampling
>>>>> * attacks such as MDS. On affected systems a microcode update overloaded VERW
>>>>> - * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF. Using %cs
>>>>> - * to reference VERW operand avoids a #GP fault for an arbitrary user %ds in
>>>>> - * 32-bit mode.
>>>>> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
>>>>> *
>>>>> * Note: Only the memory operand variant of VERW clears the CPU buffers.
>>>>> */
>>>>> .macro CLEAR_CPU_BUFFERS
>>>>> - ALTERNATIVE "", __stringify(verw %cs:_ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>>>> +#ifdef CONFIG_X86_64
>>>>> + ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>>> You should drop _ASM_RIP() here and directly use (%rip). This way, you also
>>>> won't need __stringify:
>>>>
>>>> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
>>>>
>>>>> +#else
>>>>> + /*
>>>>> + * In 32bit mode, the memory operand must be a %cs reference. The data
>>>>> + * segments may not be usable (vm86 mode), and the stack segment may not
>>>>> + * be flat (ESPFIX32).
>>>>> + */
>>>>> + ALTERNATIVE "", __stringify(verw %cs:mds_verw_sel), X86_FEATURE_CLEAR_CPU_BUF
>>>> Also here, no need for __stringify:
>>>>
>>>> ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
>>>>
>>>> This is in fact what Andrew proposed in his review.
>>> Thanks for pointing that out, I completely missed that part. Below is how it
>>> looks with __stringify gone:
>>>
>>> --- >8 ---
>>> Subject: [PATCH] x86/bugs: Use code segment selector for VERW operand
>>>
>>> Robert Gill reported below #GP in 32-bit mode when dosemu software was
>>> executing vm86() system call:
>>>
>>> general protection fault: 0000 [#1] PREEMPT SMP
>>> CPU: 4 PID: 4610 Comm: dosemu.bin Not tainted 6.6.21-gentoo-x86 #1
>>> Hardware name: Dell Inc. PowerEdge 1950/0H723K, BIOS 2.7.0 10/30/2010
>>> EIP: restore_all_switch_stack+0xbe/0xcf
>>> EAX: 00000000 EBX: 00000000 ECX: 00000000 EDX: 00000000
>>> ESI: 00000000 EDI: 00000000 EBP: 00000000 ESP: ff8affdc
>>> DS: 0000 ES: 0000 FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010046
>>> CR0: 80050033 CR2: 00c2101c CR3: 04b6d000 CR4: 000406d0
>>> Call Trace:
>>> show_regs+0x70/0x78
>>> die_addr+0x29/0x70
>>> exc_general_protection+0x13c/0x348
>>> exc_bounds+0x98/0x98
>>> handle_exception+0x14d/0x14d
>>> exc_bounds+0x98/0x98
>>> restore_all_switch_stack+0xbe/0xcf
>>> exc_bounds+0x98/0x98
>>> restore_all_switch_stack+0xbe/0xcf
>>>
>>> This only happens in 32-bit mode when VERW based mitigations like MDS/RFDS
>>> are enabled. This is because segment registers with an arbitrary user value
>>> can result in #GP when executing VERW. Intel SDM vol. 2C documents the
>>> following behavior for VERW instruction:
>>>
>>> #GP(0) - If a memory operand effective address is outside the CS, DS, ES,
>>> FS, or GS segment limit.
>>>
>>> CLEAR_CPU_BUFFERS macro executes VERW instruction before returning to user
>>> space. Use %cs selector to reference VERW operand. This ensures VERW will
>>> not #GP for an arbitrary user %ds.
>>>
>>> Fixes: a0e2dab44d22 ("x86/entry_32: Add VERW just before userspace transition")
>>> Cc: stable@vger.kernel.org # 5.10+
>>> Reported-by: Robert Gill <rtgill82@gmail.com>
>>> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218707
>>> Closes: https://lore.kernel.org/all/8c77ccfd-d561-45a1-8ed5-6b75212c7a58@leemhuis.info/
>>> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
>>> Suggested-by: Brian Gerst <brgerst@gmail.com>
>>> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>>> ---
>>> arch/x86/include/asm/nospec-branch.h | 11 ++++++++++-
>>> 1 file changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
>>> index ff5f1ecc7d1e..96b410b1d4e8 100644
>>> --- a/arch/x86/include/asm/nospec-branch.h
>>> +++ b/arch/x86/include/asm/nospec-branch.h
>>> @@ -323,7 +323,16 @@
>>> * Note: Only the memory operand variant of VERW clears the CPU buffers.
>>> */
>>> .macro CLEAR_CPU_BUFFERS
>>> - ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
>>> +#ifdef CONFIG_X86_64
>>> + ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
>>> +#else
>>> + /*
>>> + * In 32bit mode, the memory operand must be a %cs reference. The data
>>> + * segments may not be usable (vm86 mode), and the stack segment may not
>>> + * be flat (ESPFIX32).
>>> + */
>>> + ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
>>> +#endif
>> You should also delete _ASM_RIP() as you're removing the only user of it.
> Can we? I see that __svm_vcpu_run() and __vmx_vcpu_run() are using _ASM_RIP().
Oh - so it is when I'm on the right branch. Sorry for the noise.
~Andrew
* Re: [PATCH v7 0/3] Fix dosemu vm86() fault
2024-09-25 22:25 [PATCH v7 0/3] Fix dosemu vm86() fault Pawan Gupta
` (2 preceding siblings ...)
2024-09-25 22:25 ` [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand Pawan Gupta
@ 2024-10-08 13:52 ` Thorsten Leemhuis
2024-10-08 22:48 ` Dave Hansen
3 siblings, 1 reply; 17+ messages in thread
From: Thorsten Leemhuis @ 2024-10-08 13:52 UTC (permalink / raw)
To: Pawan Gupta, Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
antonio.gomez.iglesias, daniel.sneddon, stable,
Linux kernel regressions list
Hi, Thorsten here, the Linux kernel's regression tracker. Top-posting
for once, to make this easily accessible to everyone.
Is there hope that patches like these make it to mainline any time
soon? I fully understand that this is a hard problem, but in the end
what triggered this were at least two regression reports afaics:
https://bugzilla.kernel.org/show_bug.cgi?id=218707
https://lore.kernel.org/lkml/IdYcxU6x6xuUqUg8cliJUnucfwfTO29TrKIlLGCCYbbIr1EQnP0ZAtTxdAM2hp5e5Gny_acIN3OFDS6v0sazocnZZ1UBaINEJ0HoDnbasSI=@protonmail.com/
Sure, the older one was in April, so one week more or less now won't
make much of a difference. But I think it still would be great to get
this fixed rather sooner than later. Or were those issues meanwhile
fixed through other patches without me noticing, and am I making a fool of
myself here?
This yet again makes me wonder if some "[regression fix]" in the subject
or "CC: regressions@lists.linux.dev" in the patches would help to make
the regression aspect obvious to everyone involved. But it would create
yet another small bit of overhead... :-/
Pawan Gupta, btw: many thx for working on this and sticking to it!
Ciao, Thorsten
On 26.09.24 00:25, Pawan Gupta wrote:
> Changes in v7:
> - Using %ss for verw fails kselftest ldt_gdt.c in 32-bit mode, use safer %cs instead (Dave).
>
> v6: https://lore.kernel.org/r/20240905-fix-dosemu-vm86-v6-0-7aff8e53cbbf@linux.intel.com
> - Use %ss in 64-bit mode as well for all VERW calls. This avoids having
> a separate macro for 32-bit (Dave).
> - Split 32-bit mode fixes into separate patches.
>
> v5: https://lore.kernel.org/r/20240711-fix-dosemu-vm86-v5-1-e87dcd7368aa@linux.intel.com
> - Simplify the use of ALTERNATIVE construct (Uros/Jiri/Peter).
>
> v4: https://lore.kernel.org/r/20240710-fix-dosemu-vm86-v4-1-aa6464e1de6f@linux.intel.com
> - Further simplify the patch by using %ss for all VERW calls in 32-bit mode (Brian).
> - In NMI exit path move VERW after RESTORE_ALL_NMI that touches GPRs (Dave).
>
> v3: https://lore.kernel.org/r/20240701-fix-dosemu-vm86-v3-1-b1969532c75a@linux.intel.com
> - Simplify CLEAR_CPU_BUFFERS_SAFE by using %ss instead of %ds (Brian).
> - Do verw before popf in SYSEXIT path (Jari).
>
> v2: https://lore.kernel.org/r/20240627-fix-dosemu-vm86-v2-1-d5579f698e77@linux.intel.com
> - Safeguard against other system calls like vm86() that might change %ds (Dave).
>
> v1: https://lore.kernel.org/r/20240426-fix-dosemu-vm86-v1-1-88c826a3f378@linux.intel.com
>
> Hi,
>
> This series fixes a #GP in 32-bit kernels when executing the vm86() system
> call in dosemu software. In 32-bit mode, there are cases where the user can
> set an arbitrary %ds that causes a #GP when executing the VERW instruction.
> The fix is to use the %cs selector for referencing the VERW operand.
>
> Patches 1-2: Fix the VERW callsites in the 32-bit entry path.
> Patch 3: Use %cs for the VERW operand in both 32-bit and 64-bit mode.
>
> The fix is tested with below kselftest on 32-bit kernel:
>
> ./tools/testing/selftests/x86/entry_from_vm86.c
>
> The 64-bit kernel was boot-tested. On a Rocket Lake system, measuring the
> CPU cycles for VERW with and without the segment override prefix shows no
> significant difference. This indicates that the scrubbing behavior of VERW
> is intact.
>
> Thanks,
> Pawan
>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
> Pawan Gupta (3):
> x86/entry_32: Do not clobber user EFLAGS.ZF
> x86/entry_32: Clear CPU buffers after register restore in NMI return
> x86/bugs: Use code segment selector for VERW operand
>
> arch/x86/entry/entry_32.S | 6 ++++--
> arch/x86/include/asm/nospec-branch.h | 6 ++++--
> 2 files changed, 8 insertions(+), 4 deletions(-)
> ---
> base-commit: 431c1646e1f86b949fa3685efc50b660a364c2b6
> change-id: 20240426-fix-dosemu-vm86-dd111a01737e
>
> Best regards,
#regzbot poke
* Re: [PATCH v7 0/3] Fix dosemu vm86() fault
2024-10-08 13:52 ` [PATCH v7 0/3] Fix dosemu vm86() fault Thorsten Leemhuis
@ 2024-10-08 22:48 ` Dave Hansen
2024-10-09 8:50 ` Linux regression tracking (Thorsten Leemhuis)
0 siblings, 1 reply; 17+ messages in thread
From: Dave Hansen @ 2024-10-08 22:48 UTC (permalink / raw)
To: Thorsten Leemhuis, Pawan Gupta, Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
antonio.gomez.iglesias, daniel.sneddon, stable,
Linux kernel regressions list
On 10/8/24 06:52, Thorsten Leemhuis wrote:
> Hi, Thorsten here, the Linux kernel's regression tracker. Top-posting
> for once, to make this easily accessible to everyone.
>
> Is there hope that patches like these make it to mainline any time
> soon?
Unless it breaks something again:
> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?h=x86/urgent&id=785bf1ab58aa1f89a5dfcb17b682b7089d69c34f
;)
> This yet again makes me wonder if some "[regression fix]" in the subject
> or "CC: regressions@lists.linux.dev" in the patches would help to make
> the regression aspect obvious to everyone involved. But it would create
> yet another small bit of overhead
In this case, not really. This was a typical email screwup where I
didn't pick up that there was an updated patch that got appended to a
reply among the normal email noise.
We've been poking at this pretty regularly since getting back from Plumbers.
* Re: [PATCH v7 0/3] Fix dosemu vm86() fault
2024-10-08 22:48 ` Dave Hansen
@ 2024-10-09 8:50 ` Linux regression tracking (Thorsten Leemhuis)
0 siblings, 0 replies; 17+ messages in thread
From: Linux regression tracking (Thorsten Leemhuis) @ 2024-10-09 8:50 UTC (permalink / raw)
To: Dave Hansen, Pawan Gupta, Borislav Petkov, Dave Hansen
Cc: linux-kernel, x86, Robert Gill, Jari Ruusu, Brian Gerst,
antonio.gomez.iglesias, daniel.sneddon, stable,
Linux kernel regressions list
On 09.10.24 00:48, Dave Hansen wrote:
> On 10/8/24 06:52, Thorsten Leemhuis wrote:
>> Hi, Thorsten here, the Linux kernel's regression tracker. Top-posting
>> for once, to make this easily accessible to everyone.
>>
>> Is there hope that patches like these make it to mainline any time
>> soon?
>
> Unless it breaks something again:
>
>> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?h=x86/urgent&id=785bf1ab58aa1f89a5dfcb17b682b7089d69c34f
>
> ;)
:-D
Great, thx!
>> This yet again makes me wonder if some "[regression fix]" in the subject
>> or "CC: regressions@lists.linux.dev" in the patches would help to make
>> the regression aspect obvious to everyone involved. But it would create
>> yet another small bit of overhead
>
> In this case, not really. This was a typical email screwup where I
> didn't pick up that there was an updated patch that got appended to a
> reply among the normal email noise.
>
> We've been poking at this pretty regularly since getting back from Plumbers.
Many thx!
Ciao, Thorsten
end of thread, other threads:[~2024-10-09 8:50 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
2024-09-25 22:25 [PATCH v7 0/3] Fix dosemu vm86() fault Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 1/3] x86/entry_32: Do not clobber user EFLAGS.ZF Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 2/3] x86/entry_32: Clear CPU buffers after register restore in NMI return Pawan Gupta
2024-09-25 22:25 ` [PATCH v7 3/3] x86/bugs: Use code segment selector for VERW operand Pawan Gupta
2024-09-25 23:29 ` Andrew Cooper
2024-09-25 23:46 ` Pawan Gupta
2024-09-26 0:17 ` Pawan Gupta
2024-09-26 0:32 ` Andrew Cooper
2024-09-26 1:04 ` Pawan Gupta
2024-09-26 14:52 ` Uros Bizjak
2024-09-26 16:10 ` Pawan Gupta
2024-09-26 16:28 ` Andrew Cooper
2024-09-26 16:56 ` Pawan Gupta
2024-09-26 17:01 ` Andrew Cooper
2024-10-08 13:52 ` [PATCH v7 0/3] Fix dosemu vm86() fault Thorsten Leemhuis
2024-10-08 22:48 ` Dave Hansen
2024-10-09 8:50 ` Linux regression tracking (Thorsten Leemhuis)
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox