* [PATCHv7 00/16] x86: Enable Linear Address Space Separation support
@ 2025-06-25 12:50 Kirill A. Shutemov
From: Kirill A. Shutemov @ 2025-06-25 12:50 UTC
  To: Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra, Ard Biesheuvel,
	Paul E. McKenney, Josh Poimboeuf, Xiongwei Song, Xin Li,
	Mike Rapoport (IBM), Brijesh Singh, Michael Roth, Tony Luck,
	Alexey Kardashevskiy, Alexander Shishkin
  Cc: Jonathan Corbet, Sohil Mehta, Ingo Molnar, Pawan Gupta,
	Daniel Sneddon, Kai Huang, Sandipan Das, Breno Leitao,
	Rick Edgecombe, Alexei Starovoitov, Hou Tao, Juergen Gross,
	Vegard Nossum, Kees Cook, Eric Biggers, Jason Gunthorpe,
	Masami Hiramatsu (Google), Andrew Morton, Luis Chamberlain,
	Yuntao Wang, Rasmus Villemoes, Christophe Leroy, Tejun Heo,
	Changbin Du, Huang Shijie, Geert Uytterhoeven, Namhyung Kim,
	Arnaldo Carvalho de Melo, linux-doc, linux-kernel, linux-efi,
	linux-mm

Linear Address Space Separation (LASS) is a security feature that prevents
malicious virtual address space accesses across the user/kernel mode boundary.

Such mode-based access protection already exists today with paging and
features such as SMEP and SMAP. However, to enforce these protections, the
processor must traverse the paging structures in memory. Malicious software
can use timing information resulting from this traversal to determine details
about the paging structures, and these details may also be used to determine
the layout of kernel memory.

The LASS mechanism provides the same mode-based protections as paging but
without traversing the paging structures. Because the protections enforced by
LASS are applied before paging, software will not be able to derive
paging-based timing information from the various caching structures such as the
TLBs, mid-level caches, page walker, data caches, etc. LASS thereby defeats
probing techniques based on double page faults, TLB flush and reload, and
software prefetch instructions.
See [2], [3] and [4] for some research on the related attack vectors.

Had it been available, LASS alone would have mitigated Meltdown. (Hindsight is
20/20 :)

In addition, LASS prevents an attack vector described in a Spectre LAM (SLAM)
whitepaper [7].

LASS enforcement relies on the typical kernel implementation to divide the
64-bit virtual address space into two halves:
  Addr[63]=0 -> User address space
  Addr[63]=1 -> Kernel address space
Any data access or code execution across address spaces typically results in a
#GP fault.
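
For illustration only (a minimal sketch, not a helper from this series),
the classification LASS enforces boils down to a check on bit 63 of the
linear address, with no page table walk involved:

    /* Sketch: LASS decides user vs. kernel half from bit 63 alone. */
    static inline bool lass_addr_is_kernel(unsigned long addr)
    {
            return addr & (1UL << 63);
    }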

Kernel accesses usually only touch the kernel address space. However, there
are valid reasons for the kernel to access memory in the user half. For these
cases (such as text poking and EFI runtime accesses), the kernel can
temporarily suspend LASS enforcement by toggling SMAP (Supervisor Mode Access
Prevention) with the stac()/clac() instructions; in one instance, the
set_virtual_address_map() EFI call, LASS is disabled outright.
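
As a rough sketch of that pattern (using the lass_disable_enforcement()/
lass_enable_enforcement() helpers this series introduces, which are NOPs on
CPUs without LASS; exact call sites in the alternatives-patching code
differ), an access to a user-half alias looks like:

    lass_disable_enforcement();
    __inline_memcpy(poking_addr, insn, len); /* poking_addr is in the user half */
    lass_enable_enforcement();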

User space cannot access any kernel address while LASS is enabled.
Unfortunately, the legacy vsyscall functions are located in the address range
0xffffffffff600000 - 0xffffffffff601000 and are emulated in the kernel.  To
avoid breaking user applications when LASS is enabled, extend the vsyscall
emulation in execute (XONLY) mode to the #GP fault handler.
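
Conceptually (a hypothetical sketch; the actual hook added by this series
may be structured differently), the #GP handler only needs to recognize a
fetch from the fixed vsyscall page and hand it to the existing emulation:

    #define VSYSCALL_PAGE 0xffffffffff600000UL /* matches VSYSCALL_ADDR */

    static bool try_emulate_vsyscall_gp(struct pt_regs *regs)
    {
            if ((regs->ip & PAGE_MASK) != VSYSCALL_PAGE)
                    return false;
            /* Reuse the existing #PF-side emulation; assumed signature. */
            return emulate_vsyscall(0, regs, regs->ip);
    }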

In contrast, the vsyscall EMULATE mode is deprecated and not expected to be
used by anyone.  Supporting EMULATE mode with LASS would require complex
instruction decoding in the #GP fault handler and is probably not worth the
hassle.  Disable LASS in the rare case that someone absolutely needs EMULATE
mode and enables it with vsyscall=emulate on the command line.
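
In effect (a sketch, assuming the feature flag is named X86_FEATURE_LASS as
in patch 01; the exact call site may differ), requesting EMULATE mode simply
masks the feature so LASS is never enabled:

    if (vsyscall_mode == EMULATE)
            setup_clear_cpu_cap(X86_FEATURE_LASS);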

Changes from v6[10]:
- Rework the #SS handler to work properly on FRED;
- Do not require X86_PF_INSTR to emulate vsyscall;
- Move the lass_clac()/lass_stac() definitions to the patch where they are used;
- Rename lass_clac()/lass_stac() to
  lass_disable_enforcement()/lass_enable_enforcement();
- Fix several build issues around inline memcpy and memset;
- Fix sparse warning;
- Adjust comments and commit messages;
- Drop "x86/efi: Move runtime service initialization to arch/x86" patch
  as it got applied;

Changes from v5[9]:
- Report LASS violation as NULL pointer dereference if the address is in the
  first page frame;
- Provide helpful error message on #SS due to LASS violation;
- Fold patch for vsyscall=emulate documentation into patch
  that disables LASS with vsyscall=emulate;
- Rewrite __inline_memset() and __inline_memcpy();
- Adjust comments and commit messages;

Changes from v4[8]:
- Added PeterZ's Originally-by and SoB to 2/16
- Added lass_clac()/lass_stac() to differentiate from SMAP-necessitated
  clac()/stac() and to be NOPs on CPUs that don't support LASS
- Moved the LASS enabling patch to the end to avoid rendering machines
  unbootable before the patch that disables LASS around EFI
  initialization is in place
- Reverted Pawan's LAM disabling commit

Changes from v3[6]:
- Made LAM dependent on LASS
- Moved EFI runtime initialization to x86 side of things
- Suspended LASS validation around EFI set_virtual_address_map call
- Added a message for the case of kernel side LASS violation
- Moved inline memset/memcpy versions to the common string.h

Changes from v2[5]:
- Added myself to the SoB chain

Changes from v1[1]:
- Emulate vsyscall violations in execute mode in the #GP fault handler
- Use inline memcpy and memset while patching alternatives
- Remove CONFIG_X86_LASS
- Make LASS depend on SMAP
- Dropped the minimal KVM enabling patch


[1] https://lore.kernel.org/lkml/20230110055204.3227669-1-yian.chen@intel.com/
[2] “Practical Timing Side Channel Attacks against Kernel Space ASLR”, https://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf
[3] “Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR”, http://doi.acm.org/10.1145/2976749.2978356
[4] “Harmful prefetch on Intel”, https://ioactive.com/harmful-prefetch-on-intel/ (H/T Anders)
[5] https://lore.kernel.org/all/20230530114247.21821-1-alexander.shishkin@linux.intel.com/
[6] https://lore.kernel.org/all/20230609183632.48706-1-alexander.shishkin@linux.intel.com/
[7] https://download.vusec.net/papers/slam_sp24.pdf
[8] https://lore.kernel.org/all/20240710160655.3402786-1-alexander.shishkin@linux.intel.com/
[9] https://lore.kernel.org/all/20241028160917.1380714-1-alexander.shishkin@linux.intel.com
[10] https://lore.kernel.org/all/20250620135325.3300848-1-kirill.shutemov@linux.intel.com/

Alexander Shishkin (4):
  x86/cpu: Defer CR pinning setup until after EFI initialization
  efi: Disable LASS around set_virtual_address_map() EFI call
  x86/traps: Communicate a LASS violation in #GP message
  x86/cpu: Make LAM depend on LASS

Kirill A. Shutemov (4):
  x86/asm: Introduce inline memcpy and memset
  x86/vsyscall: Do not require X86_PF_INSTR to emulate vsyscall
  x86/traps: Handle LASS thrown #SS
  x86: Re-enable Linear Address Masking

Sohil Mehta (7):
  x86/cpu: Enumerate the LASS feature bits
  x86/alternatives: Disable LASS when patching kernel alternatives
  x86/vsyscall: Reorganize the #PF emulation code
  x86/traps: Consolidate user fixups in exc_general_protection()
  x86/vsyscall: Add vsyscall emulation for #GP
  x86/vsyscall: Disable LASS if vsyscall mode is set to EMULATE
  x86/cpu: Enable LASS during CPU initialization

Yian Chen (1):
  x86/cpu: Set LASS CR4 bit as pinning sensitive

 .../admin-guide/kernel-parameters.txt         |  4 +-
 arch/x86/Kconfig                              |  1 -
 arch/x86/Kconfig.cpufeatures                  |  4 +
 arch/x86/entry/vsyscall/vsyscall_64.c         | 69 +++++++++++------
 arch/x86/include/asm/cpufeatures.h            |  1 +
 arch/x86/include/asm/smap.h                   | 27 ++++++-
 arch/x86/include/asm/string.h                 | 46 ++++++++++++
 arch/x86/include/asm/uaccess_64.h             | 38 +++-------
 arch/x86/include/asm/vsyscall.h               | 14 +++-
 arch/x86/include/uapi/asm/processor-flags.h   |  2 +
 arch/x86/kernel/alternative.c                 | 14 +++-
 arch/x86/kernel/cpu/common.c                  | 21 ++++--
 arch/x86/kernel/cpu/cpuid-deps.c              |  2 +
 arch/x86/kernel/traps.c                       | 75 +++++++++++++++----
 arch/x86/kernel/umip.c                        |  3 +
 arch/x86/lib/clear_page_64.S                  | 10 ++-
 arch/x86/mm/fault.c                           |  2 +-
 arch/x86/platform/efi/efi.c                   | 15 ++++
 tools/arch/x86/include/asm/cpufeatures.h      |  1 +
 19 files changed, 264 insertions(+), 85 deletions(-)

-- 
2.47.2

