* [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86
@ 2025-08-25 20:24 Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
                   ` (18 more replies)
  0 siblings, 19 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

======= Introduction
The patchset aims to add a KASAN tag-based mode for the x86 architecture
with the help of the new CPU feature called Linear Address Masking
(LAM). The main improvement introduced by the series is 2x lower memory
usage compared to KASAN's generic mode, the only mode currently
available on x86. The tag-based mode may also find errors that the
generic mode can't because of differences in how the two modes operate.

======= How does KASAN's tag-based mode work?
When enabled, memory accesses and allocations are augmented by the
compiler during kernel compilation. Instrumentation functions are added
to each memory allocation and each pointer dereference.

The allocation-related functions generate a random tag and save it in
two places: in the shadow memory that maps to the allocated memory, and
in the top bits of the pointer that points to the allocated memory.
Storing the tag in the top bits of the pointer is possible because of
Top-Byte Ignore (TBI) on the arm64 architecture and LAM on x86.
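
As a rough sketch of the allocation side (kasan_random_tag(),
kasan_poison() and set_tag() are KASAN-internal helpers in mm/kasan on
current kernels; this is an illustration, not the exact code added by
this series):

	/* Sketch: tag a freshly allocated object. */
	static void *tag_new_object(void *ptr, size_t size)
	{
		u8 tag = kasan_random_tag();		/* random tag for this allocation */

		kasan_poison(ptr, size, tag, false);	/* save the tag in shadow memory */
		return set_tag(ptr, tag);		/* save the tag in the pointer's top bits */
	}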

The access-related functions compare the tag stored in the pointer with
the one stored in shadow memory. If the tags don't match, an invalid
access such as an out-of-bounds one must have occurred, and an error
report is generated.
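
The check itself can be sketched as follows (illustrative, not the exact
kernel code; get_tag() is a KASAN-internal helper, and the early return
for the native kernel tag is described in patch 05 of this series):

	/* Sketch: compare the pointer tag with the shadow memory tag. */
	static bool tags_match(const void *ptr)
	{
		u8 tag = get_tag(ptr);				/* tag from the pointer's top bits */
		const u8 *shadow = kasan_mem_to_shadow(ptr);	/* shadow byte for this granule */

		return tag == KASAN_TAG_KERNEL || tag == *shadow;
	}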

The general idea for the tag-based mode is very well explained in the
series with the original implementation [1].

[1] https://lore.kernel.org/all/cover.1544099024.git.andreyknvl@google.com/

======= Differences summary compared to the arm64 tag-based mode
- Tag width:
	- The tag width influences the chance of a missed bug due to
	  two tags from different allocations having the same value.
	  The bigger the possible range of tag values, the lower the
	  chance of that happening.
	- Shortening the tag width from 8 bits to 4 helps with memory
	  usage but also increases the chance of not reporting an
	  error: with two of the 16 values reserved, a random 4-bit tag
	  matches a given other tag with probability ~1/14, hence the
	  ~7% chance of missing a tag mismatch.

- Address masking mechanism
	- TBI on arm64 allows storing metadata in the top 8 bits of
	  the virtual address.
	- LAM on x86 allows storing tags in bits [62:57] of the pointer.
	  To maximize memory savings the tag width is reduced to bits
	  [60:57] - a sketch of the tag packing follows this list.

- Inline mode mismatch reporting
	- Arm64 inserts a BRK instruction to pass metadata about a tag
	  mismatch to the KASAN report.
	- On x86 the INT3 instruction is used for the same purpose.
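
A sketch of the x86 tag packing described above; it mirrors the
__tag_shifted()/__tag_reset()/__tag_get() macros added in patch 04 of
this series:

	#define __tag_shifted(tag)	FIELD_PREP(GENMASK_ULL(60, 57), tag)
	#define __tag_reset(addr)	(sign_extend64((u64)(addr), 56))
	#define __tag_get(addr)		((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))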

======= Testing
All KUnit tests were checked for both software tag-based and generic
KASAN after making the changes.

In generic mode the results were:

kasan: pass:59 fail:0 skip:13 total:72
Totals: pass:59 fail:0 skip:13 total:72
ok 1 kasan

and for software tags:

kasan: pass:63 fail:0 skip:9 total:72
Totals: pass:63 fail:0 skip:9 total:72
ok 1 kasan

======= Benchmarks [1]
All tests were run on a Sierra Forest server platform. The only
differences between the tests were the kernel options:
	- CONFIG_KASAN
	- CONFIG_KASAN_GENERIC
	- CONFIG_KASAN_SW_TAGS
	- CONFIG_KASAN_INLINE [1]
	- CONFIG_KASAN_OUTLINE

Boot time (until login prompt):
* 02:55 for clean kernel
* 05:42 / 06:32 for generic KASAN (inline/outline)
* 05:58 for tag-based KASAN (outline) [2]

Total memory usage (512GB present on the system - MemAvailable just
after boot):
* 12.56 GB for clean kernel
* 81.74 GB for generic KASAN
* 44.39 GB for tag-based KASAN

Kernel size:
* 14 MB for clean kernel
* 24.7 MB / 19.5 MB for generic KASAN (inline/outline)
* 27.1 MB / 18.1 MB for tag-based KASAN (inline/outline)

Work-under-load time comparison (compiling the mainline kernel, 200 cores):
*  62s for clean kernel
* 171s / 125s for generic KASAN (outline/inline)
* 145s for tag-based KASAN (outline) [2]

[1] Currently inline mode doesn't work on x86 due to missing compiler
support. I have written a patch for clang that seems to fix the
inline mode and I was able to boot and check that all patches regarding
the inline mode work as expected. My hope is to post the patch to LLVM
once this series is completed, and then make inline mode available in
the kernel config.

[2] While I was able to boot the inline tag-based kernel with my
compiler changes in a simulated environment, due to toolchain
difficulties I couldn't get it to boot on the machine I had access to.
Also, the boot time results from the simulation seem too good to be
true, while the generic-case results are much too slow to be believable.
Therefore I'm posting only results from the physical server platform.

======= Compilation
Clang was used to compile the series (make LLVM=1) since gcc doesn't
seem to have support for KASAN tag-based compiler instrumentation on
x86.

======= Dependencies
The base branch for the series is the mainline kernel, tag 6.17-rc3.

======= Enabling LAM for testing
Since LASS is needed for LAM, and LAM can't be compiled without it, I
applied the LASS series [1] first, then applied my patches.

[1] https://lore.kernel.org/all/20250707080317.3791624-1-kirill.shutemov@linux.intel.com/

Changes v5:
- Fix a bunch of arm64 compilation errors I didn't catch earlier.
  Thank you, Ada, for testing the series!
- Simplify the usage of the tag handling x86 functions (virt_to_page,
  phys_addr etc.).
- Remove within() and within_range() from the EXECMEM_ROX patch.
- Measure the time it takes to compile a kernel when running kernels
  with generic KASAN, tag-based KASAN, and a clean kernel. Put the data
  in the cover letter's benchmark section.

Changes v4:
- Revert the x86 kasan_mem_to_shadow() scheme to the same one used in
  generic KASAN. Keep the arithmetic shift idea for KASAN in general
  since it makes more sense on arm64 and RISC-V.
- Fix inline mode but leave it unavailable until a complementary
  compiler patch can be merged.
- Apply Dave Hansen's comments on series formatting, patch style and
  code simplifications.

Changes v3:
- Remove the runtime_const patch and set up a unified offset for both
  5 and 4 paging levels.
- Add a fix for inline mode on x86 tag-based KASAN. Add a handler for
  int3 that is generated on inline tag mismatches.
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
  reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
  account.
- Made changes to the kasan_non_canonical_hook() according to upstream
  discussion.
- Remove patches 2 and 3 since they relate to RISC-V and this series
  adds only x86 related things.
- Reorder __tag_*() functions so they're before arch_kasan_*(). Remove
  CONFIG_KASAN condition from __tag_set().

Changes v2:
- Split the series into one adding KASAN tag-based mode (this one) and
  another one that adds the dense mode to KASAN (will post later).
- Removed exporting kasan_poison() and used a wrapper instead in
  kasan_init_64.c
- Prepended the series with 4 patches from the RISC-V series and
  applied review comments to the first patch, as the rest are already
  reviewed.

Maciej Wieczor-Retman (17):
  kasan: Fix inline mode for x86 tag-based mode
  x86: Add arch specific kasan functions
  kasan: arm64: x86: Make special tags arch specific
  x86: Reset tag for virtual to physical address conversions
  mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic
  x86: Physical address comparisons in fill_p*d/pte
  x86: KASAN raw shadow memory PTE init
  x86: LAM compatible non-canonical definition
  x86: LAM initialization
  x86: Minimal SLAB alignment
  kasan: x86: Handle int3 for inline KASAN reports
  arm64: Unify software tag-based KASAN inline recovery path
  kasan: x86: Apply multishot to the inline report handler
  kasan: x86: Logical bit shift for kasan_mem_to_shadow
  mm: Unpoison pcpu chunks with base address tag
  mm: Unpoison vms[area] addresses with a common tag
  x86: Make software tag-based kasan available

Samuel Holland (2):
  kasan: sw_tags: Use arithmetic shift for shadow computation
  kasan: sw_tags: Support tag widths less than 8 bits

 Documentation/arch/arm64/kasan-offsets.sh |  8 ++-
 Documentation/arch/x86/x86_64/mm.rst      |  6 +-
 MAINTAINERS                               |  4 +-
 arch/arm64/Kconfig                        | 10 ++--
 arch/arm64/include/asm/kasan-tags.h       | 13 +++++
 arch/arm64/include/asm/kasan.h            |  2 -
 arch/arm64/include/asm/memory.h           | 14 ++++-
 arch/arm64/include/asm/uaccess.h          |  1 +
 arch/arm64/kernel/traps.c                 | 17 +-----
 arch/arm64/mm/kasan_init.c                |  7 ++-
 arch/x86/Kconfig                          |  4 +-
 arch/x86/boot/compressed/misc.h           |  1 +
 arch/x86/include/asm/cache.h              |  4 ++
 arch/x86/include/asm/kasan-tags.h         |  9 +++
 arch/x86/include/asm/kasan.h              | 71 ++++++++++++++++++++++-
 arch/x86/include/asm/page.h               | 18 ++++++
 arch/x86/include/asm/page_64.h            |  1 +
 arch/x86/kernel/alternative.c             |  4 +-
 arch/x86/kernel/head_64.S                 |  3 +
 arch/x86/kernel/setup.c                   |  2 +
 arch/x86/kernel/traps.c                   |  4 ++
 arch/x86/mm/Makefile                      |  2 +
 arch/x86/mm/init.c                        |  3 +
 arch/x86/mm/init_64.c                     | 11 ++--
 arch/x86/mm/kasan_init_64.c               | 19 +++++-
 arch/x86/mm/kasan_inline.c                | 26 +++++++++
 arch/x86/mm/physaddr.c                    |  2 +
 include/linux/kasan-tags.h                | 21 +++++--
 include/linux/kasan.h                     | 51 +++++++++++++++-
 include/linux/mm.h                        |  6 +-
 include/linux/mmzone.h                    |  1 -
 include/linux/page-flags-layout.h         |  9 +--
 lib/Kconfig.kasan                         |  3 +-
 mm/execmem.c                              |  2 +-
 mm/kasan/hw_tags.c                        | 11 ++++
 mm/kasan/report.c                         | 45 ++++++++++++--
 mm/kasan/shadow.c                         | 18 ++++++
 mm/vmalloc.c                              |  6 +-
 scripts/Makefile.kasan                    |  3 +
 scripts/gdb/linux/kasan.py                |  5 +-
 scripts/gdb/linux/mm.py                   |  5 +-
 41 files changed, 375 insertions(+), 77 deletions(-)
 mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh
 create mode 100644 arch/arm64/include/asm/kasan-tags.h
 create mode 100644 arch/x86/include/asm/kasan-tags.h
 create mode 100644 arch/x86/mm/kasan_inline.c

-- 
2.50.1




* [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-26 19:35   ` Catalin Marinas
  2025-08-25 20:24 ` [PATCH v5 02/19] kasan: sw_tags: Support tag widths less than 8 bits Maciej Wieczor-Retman
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

From: Samuel Holland <samuel.holland@sifive.com>

Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
canonical kernel addresses into non-canonical addresses by clearing the
high KASAN_SHADOW_SCALE_SHIFT bits. The value of KASAN_SHADOW_OFFSET is
then chosen so that the addition results in a canonical address for the
shadow memory.

For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
checks[1], which must only attempt to dereference canonical addresses.

However, for KASAN_SW_TAGS we have some freedom to change the algorithm
without breaking the ABI. Because TBI is enabled for kernel addresses,
the top bits of shadow memory addresses computed during tag checks are
irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
This is demonstrated by the fact that LLVM uses a logical right shift
in the tag check fast path[2] but a sbfx (signed bitfield extract)
instruction in the slow path[3] without causing any issues.

Using an arithmetic shift in kasan_mem_to_shadow() provides a number of
benefits:

1) The memory layout doesn't change but is easier to understand.
KASAN_SHADOW_OFFSET becomes a canonical memory address, and the shifted
pointer becomes a negative offset, so KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_END regardless of the shift amount or the size of the
virtual address space.

2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
instruction to load instead of two. Since it must be loaded in each
function with a tag check, this decreases kernel text size by 0.5%.

3) This shift and the sign extension from kasan_reset_tag() can be
combined into a single sbfx instruction. When this same algorithm change
is applied to the compiler, it removes an instruction from each inline
tag check, further reducing kernel text size by an additional 4.6%.

These benefits extend to other architectures as well. On RISC-V, where
the baseline ISA has neither shifted addition nor an equivalent to the
sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
combines two consecutive right shifts.
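
To make benefit 1) concrete, a worked example (assuming arm64 sw-tags,
KASAN_SHADOW_SCALE_SHIFT == 4, 48-bit VA, and the VA_BITS_48 offset of
0xffff800000000000 chosen below):

	addr           = 0xffff000000000000	(lowest kernel address, tag reset)
	(s64)addr >> 4 = 0xfffff00000000000	(a negative offset)
	shadow         = 0xffff700000000000	(KASAN_SHADOW_OFFSET + shifted addr)

The upper bound UL(1) << 64 shifts to 0, so the shadow region ends
exactly at KASAN_SHADOW_OFFSET == KASAN_SHADOW_END.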

Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v5: (Maciej)
- (u64) -> (unsigned long) in report.c

Changelog v4: (Maciej)
- Revert x86 to signed mem_to_shadow mapping.
- Remove last two paragraphs since they were just poorer duplication of
  the comments in kasan_non_canonical_hook().

Changelog v3: (Maciej)
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
  reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
  account.
- Made changes to the kasan_non_canonical_hook() according to upstream
  discussion. Settled on overflow on both ranges and separate checks for
  x86 and arm.

Changelog v2: (Maciej)
- Correct address range that's checked in kasan_non_canonical_hook().
  Adjust the comment inside.
- Remove part of comment from arch/arm64/include/asm/memory.h.
- Append patch message paragraph about the overflow in
  kasan_non_canonical_hook().

 Documentation/arch/arm64/kasan-offsets.sh |  8 +++--
 arch/arm64/Kconfig                        | 10 +++----
 arch/arm64/include/asm/memory.h           | 14 ++++++++-
 arch/arm64/mm/kasan_init.c                |  7 +++--
 include/linux/kasan.h                     | 10 +++++--
 mm/kasan/report.c                         | 36 ++++++++++++++++++++---
 scripts/gdb/linux/kasan.py                |  3 ++
 scripts/gdb/linux/mm.py                   |  5 ++--
 8 files changed, 75 insertions(+), 18 deletions(-)
 mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh

diff --git a/Documentation/arch/arm64/kasan-offsets.sh b/Documentation/arch/arm64/kasan-offsets.sh
old mode 100644
new mode 100755
index 2dc5f9e18039..ce777c7c7804
--- a/Documentation/arch/arm64/kasan-offsets.sh
+++ b/Documentation/arch/arm64/kasan-offsets.sh
@@ -5,8 +5,12 @@
 
 print_kasan_offset () {
 	printf "%02d\t" $1
-	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
-			- (1 << (64 - 32 - $2)) ))
+	if [[ $2 -ne 4 ]]; then
+		printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+				- (1 << (64 - 32 - $2)) ))
+	else
+		printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) ))
+	fi
 }
 
 echo KASAN_SHADOW_SCALE_SHIFT = 3
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e9bbfacc35a6..82cbfc7d1233 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -431,11 +431,11 @@ config KASAN_SHADOW_OFFSET
 	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
 	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
 	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
-	default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
-	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
-	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
-	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+	default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
+	default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
 	default 0xffffffffffffffff
 
 config UNWIND_TABLES
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 5213248e081b..277d56ceeb01 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -89,7 +89,15 @@
  *
  * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
  * the upper bound of possible virtual kernel memory addresses UL(1) << 64
- * according to the mapping formula.
+ * according to the mapping formula. For Generic KASAN, the address in the
+ * mapping formula is treated as unsigned (part of the compiler's ABI), so the
+ * end of the shadow memory region is at a large positive offset from
+ * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
+ * formula is treated as signed. Since all kernel addresses are negative, they
+ * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
+ * itself the end of the shadow memory region. (User pointers are positive and
+ * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
+ * not allocated for them.)
  *
  * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
  * memory start must map to the lowest possible kernel virtual memory address
@@ -100,7 +108,11 @@
  */
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#else
+#define KASAN_SHADOW_END	KASAN_SHADOW_OFFSET
+#endif
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
 #define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
 #define PAGE_END		KASAN_SHADOW_START
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d541ce45daeb..dc2de12c4f26 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
+			KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+	else
+		BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
 	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
 	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2b..b396feca714f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -61,8 +61,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
 #ifndef kasan_mem_to_shadow
 static inline void *kasan_mem_to_shadow(const void *addr)
 {
-	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
-		+ KASAN_SHADOW_OFFSET;
+	void *scaled;
+
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+	else
+		scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+	return KASAN_SHADOW_OFFSET + scaled;
 }
 #endif
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 62c01b4527eb..50d487a0687a 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr)
 	const char *bug_type;
 
 	/*
-	 * All addresses that came as a result of the memory-to-shadow mapping
-	 * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+	 * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+	 * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
+	 * both x86 and arm64). Thus, the possible shadow addresses (even for
+	 * bogus pointers) belong to a single contiguous region that is the
+	 * result of kasan_mem_to_shadow() applied to the whole address space.
 	 */
-	if (addr < KASAN_SHADOW_OFFSET)
-		return;
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+		if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0UL)) ||
+		    addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
+			return;
+	}
+
+	/*
+	 * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
+	 * arithmetic shift. Normally, this would make checking for a possible
+	 * shadow address complicated, as the shadow address computation
+	 * operation would overflow only for some memory addresses. However, due
+	 * to the chosen KASAN_SHADOW_OFFSET values and the fact the
+	 * kasan_mem_to_shadow() only operates on pointers with the tag reset,
+	 * the overflow always happens.
+	 *
+	 * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
+	 * possible shadow addresses belong to a region that is the result of
+	 * kasan_mem_to_shadow() applied to the memory range
+	 * [0xFF000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
+	 * resulting possible shadow region is contiguous, as the overflow
+	 * happens for both 0xFF000000000000 and 0xFFFFFFFFFFFFFFFF.
+	 */
+	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
+		if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFUL << 56)) ||
+		    addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
+			return;
+	}
 
 	orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
 
diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index 56730b3fde0b..fca39968d308 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -8,6 +8,7 @@
 
 import gdb
 from linux import constants, mm
+from ctypes import c_int64 as s64
 
 def help():
     t = """Usage: lx-kasan_mem_to_shadow [Hex memory addr]
@@ -39,6 +40,8 @@ class KasanMemToShadow(gdb.Command):
         else:
             help()
     def kasan_mem_to_shadow(self, addr):
+        if constants.LX_CONFIG_KASAN_SW_TAGS:
+            addr = s64(addr).value
         return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET
 
 KasanMemToShadow()
diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
index 7571aebbe650..2e63f3dedd53 100644
--- a/scripts/gdb/linux/mm.py
+++ b/scripts/gdb/linux/mm.py
@@ -110,12 +110,13 @@ class aarch64_page_ops():
         self.KERNEL_END = gdb.parse_and_eval("_end")
 
         if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
+            self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
             if constants.LX_CONFIG_KASAN_GENERIC:
                 self.KASAN_SHADOW_SCALE_SHIFT = 3
+                self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
             else:
                 self.KASAN_SHADOW_SCALE_SHIFT = 4
-            self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
-            self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
+                self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
             self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
         else:
             self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
-- 
2.50.1




* [PATCH v5 02/19] kasan: sw_tags: Support tag widths less than 8 bits
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 03/19] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

From: Samuel Holland <samuel.holland@sifive.com>

Allow architectures to override KASAN_TAG_KERNEL in asm/kasan.h. This
is needed on RISC-V, which supports 57-bit virtual addresses and 7-bit
pointer tags. For consistency, move the arm64 MTE definition of
KASAN_TAG_MIN to asm/kasan.h, since it is also architecture-dependent;
RISC-V's equivalent extension is expected to support 7-bit hardware
memory tags.
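
As an illustration, an architecture with 7-bit tags could override the
default before linux/kasan-tags.h falls back to 0xFF; the fragment below
is an assumption for illustration, not part of this patch:

	/* Hypothetical asm/kasan.h fragment for a 7-bit tag architecture: */
	#define KASAN_TAG_KERNEL	0x7f	/* native kernel pointers tag */

linux/kasan-tags.h then derives KASAN_TAG_INVALID (0x7e) and
KASAN_TAG_MAX (0x7d) from it.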

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
 arch/arm64/include/asm/kasan.h   |  6 ++++--
 arch/arm64/include/asm/uaccess.h |  1 +
 include/linux/kasan-tags.h       | 13 ++++++++-----
 3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index e1b57c13f8a4..4ab419df8b93 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -6,8 +6,10 @@
 
 #include <linux/linkage.h>
 #include <asm/memory.h>
-#include <asm/mte-kasan.h>
-#include <asm/pgtable-types.h>
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN			0xF0 /* minimum value for random tags */
+#endif
 
 #define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
 #define arch_kasan_reset_tag(addr)	__tag_reset(addr)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..f890dadc7b4e 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -22,6 +22,7 @@
 #include <asm/cpufeature.h>
 #include <asm/mmu.h>
 #include <asm/mte.h>
+#include <asm/mte-kasan.h>
 #include <asm/ptrace.h>
 #include <asm/memory.h>
 #include <asm/extable.h>
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index 4f85f562512c..e07c896f95d3 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,13 +2,16 @@
 #ifndef _LINUX_KASAN_TAGS_H
 #define _LINUX_KASAN_TAGS_H
 
+#include <asm/kasan.h>
+
+#ifndef KASAN_TAG_KERNEL
 #define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
-#define KASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
-#define KASAN_TAG_MAX		0xFD /* maximum value for random tags */
+#endif
+
+#define KASAN_TAG_INVALID	(KASAN_TAG_KERNEL - 1) /* inaccessible memory tag */
+#define KASAN_TAG_MAX		(KASAN_TAG_KERNEL - 2) /* maximum value for random tags */
 
-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN		0xF0 /* minimum value for random tags */
-#else
+#ifndef KASAN_TAG_MIN
 #define KASAN_TAG_MIN		0x00 /* minimum value for random tags */
 #endif
 
-- 
2.50.1




* [PATCH v5 03/19] kasan: Fix inline mode for x86 tag-based mode
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 02/19] kasan: sw_tags: Support tag widths less than 8 bits Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 04/19] x86: Add arch specific kasan functions Maciej Wieczor-Retman
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

The LLVM compiler uses the hwasan-instrument-with-calls parameter to
select inline or outline mode in tag-based KASAN. If set to zero, the
instrumentation is emitted directly at each relevant location, along
with KASAN-related constants, during compilation. If set to one, all
instrumentation is done with function calls instead.

The compiler's default hwasan-instrument-with-calls value is "1" for the
x86 architecture, unlike for other architectures. Because of this,
enabling inline mode in software tag-based KASAN doesn't work on x86:
scripts/Makefile.kasan doesn't zero out the parameter and so always sets
up outline mode.

Explicitly zero out hwasan-instrument-with-calls when enabling inline
mode in tag-based KASAN.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v3:
- Add this patch to the series.

 scripts/Makefile.kasan | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 693dbbebebba..2c7be96727ac 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -76,8 +76,11 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress
 RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \
 		   -Zsanitizer-recover=kernel-hwaddress
 
+# LLVM sets hwasan-instrument-with-calls to 1 on x86 by default. Set it to 0
+# when inline mode is enabled.
 ifdef CONFIG_KASAN_INLINE
 	kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+	kasan_params += hwasan-instrument-with-calls=0
 else
 	kasan_params += hwasan-instrument-with-calls=1
 endif
-- 
2.50.1




* [PATCH v5 04/19] x86: Add arch specific kasan functions
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (2 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 03/19] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 05/19] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

KASAN's software tag-based mode needs multiple macros/functions to
handle tag and pointer interactions - to set, retrieve and reset tags
in the top bits of a pointer.

Mimic the functions currently used by arm64 but change the tag's
position to bits [60:57] of the pointer.
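
A hedged usage sketch of the helpers added below (the 0xA tag value is
illustrative):

	void *tagged = __tag_set(ptr, 0xA);		/* 0xA placed in bits [60:57] */
	u8 tag = __tag_get(tagged);			/* reads back 0xA */
	void *plain = (void *)__tag_reset(tagged);	/* tag sign-extended away */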

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Rewrite __tag_set() without pointless casts and make it more readable.

Changelog v3:
- Reorder functions so that __tag_*() etc are above the
  arch_kasan_*() ones.
- Remove CONFIG_KASAN condition from __tag_set()

 arch/x86/include/asm/kasan.h | 36 ++++++++++++++++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index d7e33c7f096b..1963eb2fcff3 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -3,6 +3,8 @@
 #define _ASM_X86_KASAN_H
 
 #include <linux/const.h>
+#include <linux/kasan-tags.h>
+#include <linux/types.h>
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 #define KASAN_SHADOW_SCALE_SHIFT 3
 
@@ -24,8 +26,37 @@
 						  KASAN_SHADOW_SCALE_SHIFT)))
 
 #ifndef __ASSEMBLER__
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+
+#ifdef CONFIG_KASAN_SW_TAGS
+
+#define __tag_shifted(tag)		FIELD_PREP(GENMASK_ULL(60, 57), tag)
+#define __tag_reset(addr)		(sign_extend64((u64)(addr), 56))
+#define __tag_get(addr)			((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+#else
+#define __tag_shifted(tag)		0UL
+#define __tag_reset(addr)		(addr)
+#define __tag_get(addr)			0
+#endif /* CONFIG_KASAN_SW_TAGS */
+
+static inline void *__tag_set(const void *__addr, u8 tag)
+{
+	u64 addr = (u64)__addr;
+
+	addr &= ~__tag_shifted(KASAN_TAG_MASK);
+	addr |= __tag_shifted(tag);
+
+	return (void *)addr;
+}
+
+#define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
+#define arch_kasan_reset_tag(addr)	__tag_reset(addr)
+#define arch_kasan_get_tag(addr)	__tag_get(addr)
 
 #ifdef CONFIG_KASAN
+
 void __init kasan_early_init(void);
 void __init kasan_init(void);
 void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
@@ -34,8 +65,9 @@ static inline void kasan_early_init(void) { }
 static inline void kasan_init(void) { }
 static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
 						   int nid) { }
-#endif
 
-#endif
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASSEMBLER__ */
 
 #endif
-- 
2.50.1




* [PATCH v5 05/19] kasan: arm64: x86: Make special tags arch specific
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (3 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 04/19] x86: Add arch specific kasan functions Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 06/19] x86: Reset tag for virtual to physical address conversions Maciej Wieczor-Retman
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

KASAN's tag-based mode defines multiple special tag values. They're
reserved for:
- The native kernel value. On arm64 it's 0xFF and it causes an early
  return in the tag checking function.
- The invalid value. 0xFE marks an area as freed / unallocated. It's
  also the value used to initialize regions of shadow memory.
- The max value. 0xFD is the highest value that can be randomly
  generated for a new tag.

A metadata macro is also defined:
- The tag width, equal to 8.

Tag-based mode on x86 is going to use 4-bit wide tags, so all the above
values need to be changed accordingly.

Make the native kernel tag arch specific for x86 and arm64.

Replace the hardcoded kernel tag value and tag width with macros in
KASAN's non-arch specific code.
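
With KASAN_TAG_KERNEL set to 0xF for x86, the values derived by
linux/kasan-tags.h in this patch become:

	KASAN_TAG_INVALID	0xE	/* KASAN_TAG_KERNEL - 1 */
	KASAN_TAG_MAX		0xD	/* KASAN_TAG_KERNEL - 2 */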

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v5:
- Move KASAN_TAG_MIN to the arm64 kasan-tags.h for the hardware KASAN
  mode case.

Changelog v4:
- Move KASAN_TAG_MASK to kasan-tags.h.

Changelog v2:
- Remove risc-v from the patch.

 MAINTAINERS                         |  2 +-
 arch/arm64/include/asm/kasan-tags.h | 13 +++++++++++++
 arch/arm64/include/asm/kasan.h      |  4 ----
 arch/x86/include/asm/kasan-tags.h   |  9 +++++++++
 include/linux/kasan-tags.h          | 10 +++++++++-
 include/linux/kasan.h               |  4 +++-
 include/linux/mm.h                  |  6 +++---
 include/linux/mmzone.h              |  1 -
 include/linux/page-flags-layout.h   |  9 +--------
 9 files changed, 39 insertions(+), 19 deletions(-)
 create mode 100644 arch/arm64/include/asm/kasan-tags.h
 create mode 100644 arch/x86/include/asm/kasan-tags.h

diff --git a/MAINTAINERS b/MAINTAINERS
index fed6cd812d79..788532771832 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13176,7 +13176,7 @@ L:	kasan-dev@googlegroups.com
 S:	Maintained
 B:	https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F:	Documentation/dev-tools/kasan.rst
-F:	arch/*/include/asm/*kasan.h
+F:	arch/*/include/asm/*kasan*.h
 F:	arch/*/mm/kasan_init*
 F:	include/linux/kasan*.h
 F:	lib/Kconfig.kasan
diff --git a/arch/arm64/include/asm/kasan-tags.h b/arch/arm64/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..152465d03508
--- /dev/null
+++ b/arch/arm64/include/asm/kasan-tags.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH		8
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN			0xF0 /* minimum value for random tags */
+#endif
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 4ab419df8b93..d2841e0fb908 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -7,10 +7,6 @@
 #include <linux/linkage.h>
 #include <asm/memory.h>
 
-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN			0xF0 /* minimum value for random tags */
-#endif
-
 #define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
 #define arch_kasan_reset_tag(addr)	__tag_reset(addr)
 #define arch_kasan_get_tag(addr)	__tag_get(addr)
diff --git a/arch/x86/include/asm/kasan-tags.h b/arch/x86/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..68ba385bc75c
--- /dev/null
+++ b/arch/x86/include/asm/kasan-tags.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL	0xF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH		4
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index e07c896f95d3..fe80fa8f3315 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,7 +2,15 @@
 #ifndef _LINUX_KASAN_TAGS_H
 #define _LINUX_KASAN_TAGS_H
 
-#include <asm/kasan.h>
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+#include <asm/kasan-tags.h>
+#endif
+
+#ifndef KASAN_TAG_WIDTH
+#define KASAN_TAG_WIDTH		0
+#endif
+
+#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 
 #ifndef KASAN_TAG_KERNEL
 #define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b396feca714f..54481f8c30c5 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -40,7 +40,9 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
 
 #ifdef CONFIG_KASAN_SW_TAGS
 /* This matches KASAN_TAG_INVALID. */
-#define KASAN_SHADOW_INIT 0xFE
+#ifndef KASAN_SHADOW_INIT
+#define KASAN_SHADOW_INIT KASAN_TAG_INVALID
+#endif
 #else
 #define KASAN_SHADOW_INIT 0
 #endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ae97a0b8ec7..bb494cb1d5af 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1692,7 +1692,7 @@ static inline u8 page_kasan_tag(const struct page *page)
 
 	if (kasan_enabled()) {
 		tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
-		tag ^= 0xff;
+		tag ^= KASAN_TAG_KERNEL;
 	}
 
 	return tag;
@@ -1705,7 +1705,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
 	if (!kasan_enabled())
 		return;
 
-	tag ^= 0xff;
+	tag ^= KASAN_TAG_KERNEL;
 	old_flags = READ_ONCE(page->flags);
 	do {
 		flags = old_flags;
@@ -1724,7 +1724,7 @@ static inline void page_kasan_tag_reset(struct page *page)
 
 static inline u8 page_kasan_tag(const struct page *page)
 {
-	return 0xff;
+	return KASAN_TAG_KERNEL;
 }
 
 static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0c5da9141983..c139fb3d862d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1166,7 +1166,6 @@ static inline bool zone_is_empty(struct zone *zone)
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
-#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)
 
 static inline enum zone_type page_zonenum(const struct page *page)
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 760006b1c480..b2cc4cb870e0 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -3,6 +3,7 @@
 #define PAGE_FLAGS_LAYOUT_H
 
 #include <linux/numa.h>
+#include <linux/kasan-tags.h>
 #include <generated/bounds.h>
 
 /*
@@ -72,14 +73,6 @@
 #define NODE_NOT_IN_PAGE_FLAGS	1
 #endif
 
-#if defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_TAG_WIDTH 8
-#elif defined(CONFIG_KASAN_HW_TAGS)
-#define KASAN_TAG_WIDTH 4
-#else
-#define KASAN_TAG_WIDTH 0
-#endif
-
 #ifdef CONFIG_NUMA_BALANCING
 #define LAST__PID_SHIFT 8
 #define LAST__PID_MASK  ((1 << LAST__PID_SHIFT)-1)
-- 
2.50.1




* [PATCH v5 06/19] x86: Reset tag for virtual to physical address conversions
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (4 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 05/19] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 07/19] mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

Any place where pointer arithmetic is used to convert a virtual address
into a physical one can produce a bogus result if the virtual address is
tagged.

Reset the pointer's tag by sign extending the tag bits in macros that do
pointer arithmetic in address conversions. There will be no change in
compiled code with KASAN disabled since the compiler will optimize the
__tag_reset() out.
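
A numeric illustration (the untagged address is an arbitrary direct-map
example; tag placement follows the [60:57] scheme from earlier in this
series):

	untagged address        = 0xffff888000000000
	with tag 0xA in [60:57] = 0xf5ff888000000000	(non-canonical)
	__tag_reset(tagged)     = 0xffff888000000000	(bit 56 sign-extended up)

Without the reset, "x - __START_KERNEL_map" in __phys_addr_nodebug()
would be computed from the 0xf5ff... value and yield a bogus physical
address.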

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v5:
- Move __tag_reset() calls into __phys_addr_nodebug() and
  __virt_addr_valid() instead of calling it on the arguments of higher
  level functions.

Changelog v4:
- Simplify page_to_virt() by removing pointless casts.
- Remove change in __is_canonical_address() because it's taken care of
  in a later patch due to a LAM compatible definition of canonical.

 arch/x86/include/asm/page.h    | 8 ++++++++
 arch/x86/include/asm/page_64.h | 1 +
 arch/x86/mm/physaddr.c         | 2 ++
 3 files changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 9265f2fca99a..bcf5cad3da36 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -7,6 +7,7 @@
 #ifdef __KERNEL__
 
 #include <asm/page_types.h>
+#include <asm/kasan.h>
 
 #ifdef CONFIG_X86_64
 #include <asm/page_64.h>
@@ -65,6 +66,13 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
  * virt_to_page(kaddr) returns a valid pointer if and only if
  * virt_addr_valid(kaddr) returns true.
  */
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define page_to_virt(x) ({							\
+	void *__addr = __va(page_to_pfn((struct page *)x) << PAGE_SHIFT);	\
+	__tag_set(__addr, page_kasan_tag(x));					\
+})
+#endif
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 extern bool __virt_addr_valid(unsigned long kaddr);
 #define virt_addr_valid(kaddr)	__virt_addr_valid((unsigned long) (kaddr))
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 015d23f3e01f..b18fef43dd34 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -21,6 +21,7 @@ extern unsigned long direct_map_physmem_end;
 
 static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
 {
+	x = __tag_reset(x);
 	unsigned long y = x - __START_KERNEL_map;
 
 	/* use the carry flag to determine if x was < __START_KERNEL_map */
diff --git a/arch/x86/mm/physaddr.c b/arch/x86/mm/physaddr.c
index fc3f3d3e2ef2..d6aa3589c798 100644
--- a/arch/x86/mm/physaddr.c
+++ b/arch/x86/mm/physaddr.c
@@ -14,6 +14,7 @@
 #ifdef CONFIG_DEBUG_VIRTUAL
 unsigned long __phys_addr(unsigned long x)
 {
+	x = __tag_reset(x);
 	unsigned long y = x - __START_KERNEL_map;
 
 	/* use the carry flag to determine if x was < __START_KERNEL_map */
@@ -46,6 +47,7 @@ EXPORT_SYMBOL(__phys_addr_symbol);
 
 bool __virt_addr_valid(unsigned long x)
 {
+	x = __tag_reset(x);
 	unsigned long y = x - __START_KERNEL_map;
 
 	/* use the carry flag to determine if x was < __START_KERNEL_map */
-- 
2.50.1




* [PATCH v5 07/19] mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (5 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 06/19] x86: Reset tag for virtual to physical address conversions Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-28  9:50   ` Mike Rapoport
  2025-08-25 20:24 ` [PATCH v5 08/19] x86: Physical address comparisons in fill_p*d/pte Maciej Wieczor-Retman
                   ` (11 subsequent siblings)
  18 siblings, 1 reply; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
The related code has multiple spots where page virtual addresses end up
used as arguments in arithmetic operations. Combined with tag-based
KASAN enabled, this can result in pointers that don't point where they
should or logical operations not giving expected results.

vm_reset_perms() calculates range's start and end addresses using min()
and max() functions. To do that it compares pointers but some are not
tagged - addr variable is, start and end variables aren't.

within() and within_range() can receive tagged addresses which get
compared to untagged start and end variables.

Reset tags in addresses used as function arguments in min(), max(),
within().

execmem_cache_add() adds tagged pointers to a maple tree structure,
which are then incorrectly compared when walking the tree. That results
in different pointers being returned later and page permission violation
errors panicking the kernel.

Reset tag of the address range inserted into the maple tree inside
execmem_cache_add().

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v5:
- Remove the within_range() change.
- arch_kasan_reset_tag -> kasan_reset_tag.

Changelog v4:
- Add patch to the series.

 mm/execmem.c | 2 +-
 mm/vmalloc.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/execmem.c b/mm/execmem.c
index 0822305413ec..f7b7bdacaec5 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -186,7 +186,7 @@ static DECLARE_WORK(execmem_cache_clean_work, execmem_cache_clean);
 static int execmem_cache_add_locked(void *ptr, size_t size, gfp_t gfp_mask)
 {
 	struct maple_tree *free_areas = &execmem_cache.free_areas;
-	unsigned long addr = (unsigned long)ptr;
+	unsigned long addr = (unsigned long)kasan_reset_tag(ptr);
 	MA_STATE(mas, free_areas, addr - 1, addr + 1);
 	unsigned long lower, upper;
 	void *area = NULL;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..c93893fb8dd4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3322,7 +3322,7 @@ static void vm_reset_perms(struct vm_struct *area)
 	 * the vm_unmap_aliases() flush includes the direct map.
 	 */
 	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
-		unsigned long addr = (unsigned long)page_address(area->pages[i]);
+		unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));
 
 		if (addr) {
 			unsigned long page_size;
-- 
2.50.1




* [PATCH v5 08/19] x86: Physical address comparisons in fill_p*d/pte
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (6 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 07/19] mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 09/19] x86: KASAN raw shadow memory PTE init Maciej Wieczor-Retman
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

Calculating a page table offset returns a pointer without a tag. When
the calculated offset is compared to a tagged page table pointer, a
false error is reported because the two are never equal.

Change the pointer comparisons to physical address comparisons to avoid
the issues that pointer arithmetic on tagged pointers would create. Open
code pte_offset_kernel(), pmd_offset(), pud_offset() and p4d_offset():
because the offset parameter is always zero and the rest of each
function's body is wrapped in __va(), removing that layer lowers the
complexity of the final assembly.
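
For reference, a sketch of what pud_offset(p4d, 0) effectively expands
to (helper names simplified; the other levels are analogous):

	(pud_t *)__va(p4d_val(*p4d) & p4d_pfn_mask(*p4d)) + pud_index(0)

Since pud_index(0) is zero, comparing __pa(pud) directly against
p4d_val(*p4d) & p4d_pfn_mask(*p4d) drops the __va() layer - and with it
any pointer tag - from the check.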

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v2:
- Open code *_offset() to avoid its internal __va().

 arch/x86/mm/init_64.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 76e33bd7c556..51a247e258b1 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -251,7 +251,10 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
 	if (pgd_none(*pgd)) {
 		p4d_t *p4d = (p4d_t *)spp_getpage();
 		pgd_populate(&init_mm, pgd, p4d);
-		if (p4d != p4d_offset(pgd, 0))
+
+		if (__pa(p4d) != (pgtable_l5_enabled() ?
+				  __pa(pgd) :
+				  (unsigned long)pgd_val(*pgd) & PTE_PFN_MASK))
 			printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
 			       p4d, p4d_offset(pgd, 0));
 	}
@@ -263,7 +266,7 @@ static pud_t *fill_pud(p4d_t *p4d, unsigned long vaddr)
 	if (p4d_none(*p4d)) {
 		pud_t *pud = (pud_t *)spp_getpage();
 		p4d_populate(&init_mm, p4d, pud);
-		if (pud != pud_offset(p4d, 0))
+		if (__pa(pud) != (p4d_val(*p4d) & p4d_pfn_mask(*p4d)))
 			printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
 			       pud, pud_offset(p4d, 0));
 	}
@@ -275,7 +278,7 @@ static pmd_t *fill_pmd(pud_t *pud, unsigned long vaddr)
 	if (pud_none(*pud)) {
 		pmd_t *pmd = (pmd_t *) spp_getpage();
 		pud_populate(&init_mm, pud, pmd);
-		if (pmd != pmd_offset(pud, 0))
+		if (__pa(pmd) != (pud_val(*pud) & pud_pfn_mask(*pud)))
 			printk(KERN_ERR "PAGETABLE BUG #02! %p <-> %p\n",
 			       pmd, pmd_offset(pud, 0));
 	}
@@ -287,7 +290,7 @@ static pte_t *fill_pte(pmd_t *pmd, unsigned long vaddr)
 	if (pmd_none(*pmd)) {
 		pte_t *pte = (pte_t *) spp_getpage();
 		pmd_populate_kernel(&init_mm, pmd, pte);
-		if (pte != pte_offset_kernel(pmd, 0))
+		if (__pa(pte) != (pmd_val(*pmd) & pmd_pfn_mask(*pmd)))
 			printk(KERN_ERR "PAGETABLE BUG #03!\n");
 	}
 	return pte_offset_kernel(pmd, vaddr);
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 09/19] x86: KASAN raw shadow memory PTE init
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (7 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 08/19] x86: Physical address comparisons in fill_p*d/pte Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 10/19] x86: LAM compatible non-canonical definition Maciej Wieczor-Retman
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

In KASAN's generic mode the default value in shadow memory is zero.
During initialization of shadow memory pages they are allocated and
zeroed.

In KASAN's tag-based mode the default tag for the arm64 architecture is
0xFE, which corresponds to any memory that should not be accessed. On
x86 (where tags are 4 bits wide instead of 8) that tag is 0xE, so during
initialization all the bytes in shadow memory pages should be filled
with it.

Use memblock_alloc_try_nid_raw() instead of memblock_alloc_try_nid() to
avoid zeroing out the memory so it can be set with the KASAN invalid
tag.
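
A condensed sketch of the resulting allocate-then-tag pattern (mirroring
the hunk below, with memset()'s (ptr, value, size) argument order):

	p = early_raw_alloc(PAGE_SIZE, nid, true);
	/* Fill the raw, non-zeroed page with the 0xE invalid tag. */
	memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);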

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v2:
- Remove dense mode references, use memset() instead of kasan_poison().

 arch/x86/mm/kasan_init_64.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d216..e8a451cafc8c 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -34,6 +34,18 @@ static __init void *early_alloc(size_t size, int nid, bool should_panic)
 	return ptr;
 }
 
+static __init void *early_raw_alloc(size_t size, int nid, bool should_panic)
+{
+	void *ptr = memblock_alloc_try_nid_raw(size, size,
+			__pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+
+	if (!ptr && should_panic)
+		panic("%pS: Failed to allocate page, nid=%d from=%lx\n",
+		      (void *)_RET_IP_, nid, __pa(MAX_DMA_ADDRESS));
+
+	return ptr;
+}
+
 static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
 				      unsigned long end, int nid)
 {
@@ -63,8 +75,9 @@ static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
 		if (!pte_none(*pte))
 			continue;
 
-		p = early_alloc(PAGE_SIZE, nid, true);
-		entry = pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL);
+		p = early_raw_alloc(PAGE_SIZE, nid, true);
+		memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
+		entry = pfn_pte(PFN_DOWN(__pa_nodebug(p)), PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 }
@@ -436,7 +449,7 @@ void __init kasan_init(void)
 	 * it may contain some garbage. Now we can clear and write protect it,
 	 * since after the TLB flush no one should write to it.
 	 */
-	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+	memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
 		pte_t pte;
 		pgprot_t prot;
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 10/19] x86: LAM compatible non-canonical definition
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (8 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 09/19] x86: KASAN raw shadow memory PTE init Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:59   ` Samuel Holland
  2025-08-25 21:36   ` Dave Hansen
  2025-08-25 20:24 ` [PATCH v5 11/19] x86: LAM initialization Maciej Wieczor-Retman
                   ` (8 subsequent siblings)
  18 siblings, 2 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

For an address to be canonical it has to have its top bits equal to
each other. The number of such bits depends on the paging level, and
whether they're supposed to be ones or zeroes depends on whether the
address points to kernel or user space.

With Linear Address Masking (LAM) enabled, the definition of linear
address canonicality is modified. Not all of the previously required
bits need to be equal anymore - only the first and the last bit of the
previously equal bitmask. So, for example, a 5-level paging kernel
address needs to have bits [63] and [56] set.
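
A minimal sketch of the kernel-address rule described above (the helper
name is hypothetical; vaddr_bits is 57 for 5-level paging):

	static __always_inline bool lam_kernel_canonical(u64 vaddr, u8 vaddr_bits)
	{
		u64 mask = BIT_ULL(63) | BIT_ULL(vaddr_bits - 1);

		/* Only the first and the last of the previously equal
		 * bits still have to be set for a kernel address. */
		return (vaddr & mask) == mask;
	}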

Add separate __canonical_address() implementation for
CONFIG_KASAN_SW_TAGS since it's the only thing right now that enables
LAM for kernel addresses (LAM_SUP bit in CR4).

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Add patch to the series.

 arch/x86/include/asm/page.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index bcf5cad3da36..a83f23a71f35 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -82,10 +82,20 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
 	return __va(pfn << PAGE_SHIFT);
 }
 
+/*
+ * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
+ */
+#ifdef CONFIG_KASAN_SW_TAGS
+static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
+{
+	return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
+}
+#else
 static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
 {
 	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
 }
+#endif
 
 static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
 {
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 11/19] x86: LAM initialization
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (9 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 10/19] x86: LAM compatible non-canonical definition Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 12/19] x86: Minimal SLAB alignment Maciej Wieczor-Retman
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

To make use of KASAN's tag-based mode on x86, Linear Address Masking
(LAM) needs to be enabled. To do that, bit 28 (LAM_SUP) in CR4 has to be
set.

Set the bit in early memory initialization.

When secondary CPUs are launched, the LAM bit gets lost. To avoid this,
add it to a mask in head_64.S. The bitmask permits some bits of CR4 to
pass from the primary CPU to the secondary CPUs without being cleared.
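
In C terms, the intent of the head_64.S change is roughly the following
(a simplified sketch, not the actual assembly):

	unsigned long allowed = X86_CR4_PAE | X86_CR4_LA57;

	if (IS_ENABLED(CONFIG_ADDRESS_MASKING))
		allowed |= X86_CR4_LAM_SUP; /* keep LAM set on secondary CPUs */

	cr4 &= allowed;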

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
 arch/x86/kernel/head_64.S | 3 +++
 arch/x86/mm/init.c        | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 3e9b3a3bd039..18ca77daa481 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -209,6 +209,9 @@ SYM_INNER_LABEL(common_startup_64, SYM_L_LOCAL)
 	 *  there will be no global TLB entries after the execution."
 	 */
 	movl	$(X86_CR4_PAE | X86_CR4_LA57), %edx
+#ifdef CONFIG_ADDRESS_MASKING
+	orl	$X86_CR4_LAM_SUP, %edx
+#endif
 #ifdef CONFIG_X86_MCE
 	/*
 	 * Preserve CR4.MCE if the kernel will enable #MC support.
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bb57e93b4caf..756bd96c3b8b 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -763,6 +763,9 @@ void __init init_mem_mapping(void)
 	probe_page_size_mask();
 	setup_pcid();
 
+	if (boot_cpu_has(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+		cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);
+
 #ifdef CONFIG_X86_64
 	end = max_pfn << PAGE_SHIFT;
 #else
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 12/19] x86: Minimal SLAB alignment
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (10 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 11/19] x86: LAM initialization Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 13/19] kasan: x86: Handle int3 for inline KASAN reports Maciej Wieczor-Retman
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

The 8 byte minimal SLAB alignment interferes with KASAN's granularity of
16 bytes. It causes a lot of out-of-bounds errors for 8 byte allocations
that are not 16 byte aligned.

Compared to a kernel with KASAN disabled, the memory footprint increases
because all kmalloc-8 allocations are now realized as kmalloc-16, which
has twice the object size. But more meaningfully, when compared to a
kernel with generic KASAN enabled, there is no difference: because of
the redzones in generic KASAN, the kmalloc-8 and kmalloc-16 object sizes
are the same (48 bytes). So changing the minimal SLAB alignment for the
tag-based mode doesn't have any negative impact compared to the other
software KASAN mode.

Adjust x86 minimal SLAB alignment to match KASAN granularity size.
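
A sketch of the failure mode being avoided (hypothetical layout): with 8
byte alignment, two 8 byte objects can share one 16 byte KASAN granule,
and the granule's single shadow byte can only hold one of their tags:

	granule: [ object A: bytes 0-7 | object B: bytes 8-15 ]
	shadow:  one byte, one tag -> accesses to the other object mismatch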

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Extend the patch message with some more context and impact
  information.

Changelog v3:
- Fix typo in patch message 4 -> 16.
- Change define location to arch/x86/include/asm/cache.c.

 arch/x86/include/asm/cache.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 69404eae9983..3232583b5487 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -21,4 +21,8 @@
 #endif
 #endif
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
 #endif /* _ASM_X86_CACHE_H */
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 13/19] kasan: x86: Handle int3 for inline KASAN reports
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (11 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 12/19] x86: Minimal SLAB alignment Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 14/19] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

Inline KASAN on x86 reports tag mismatches by passing the faulty address
and access metadata through the INT3 instruction - a scheme that's set
up in LLVM's compiler code (specifically HWAddressSanitizer.cpp).

Add a KASAN hook to the INT3 handling function.

Disable KASAN in an INT3 core kernel selftest function since it can raise
a false tag mismatch report and potentially panic the kernel.

Make the part of that hook which decides whether to die or recover from
a tag mismatch arch-independent, to avoid duplicating a long comment on
both the x86 and arm64 architectures.
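
An example decode of the RAX metadata (using the ABI constants added
below; the value is chosen purely for illustration):

	rax = 0x23:
		recover = rax & KASAN_RAX_RECOVER;          /* 0x20 -> set   */
		write   = rax & KASAN_RAX_WRITE;            /* 0x10 -> clear */
		size    = 1 << (rax & KASAN_RAX_SIZE_MASK); /* 1 << 3 = 8    */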

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v5:
- Add die to argument list of kasan_inline_recover() in
  arch/arm64/kernel/traps.c.

Changelog v4:
- Make kasan_handler() a stub in a header file. Remove #ifdef from
  traps.c.
- Consolidate the "recover" comment into one place.
- Make small changes to the patch message.

 MAINTAINERS                   |  2 +-
 arch/x86/include/asm/kasan.h  | 26 ++++++++++++++++++++++++++
 arch/x86/kernel/alternative.c |  4 +++-
 arch/x86/kernel/traps.c       |  4 ++++
 arch/x86/mm/Makefile          |  2 ++
 arch/x86/mm/kasan_inline.c    | 23 +++++++++++++++++++++++
 include/linux/kasan.h         | 24 ++++++++++++++++++++++++
 7 files changed, 83 insertions(+), 2 deletions(-)
 create mode 100644 arch/x86/mm/kasan_inline.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 788532771832..f5b1ce242002 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13177,7 +13177,7 @@ S:	Maintained
 B:	https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
 F:	Documentation/dev-tools/kasan.rst
 F:	arch/*/include/asm/*kasan*.h
-F:	arch/*/mm/kasan_init*
+F:	arch/*/mm/kasan_*
 F:	include/linux/kasan*.h
 F:	lib/Kconfig.kasan
 F:	mm/kasan/
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 1963eb2fcff3..5bf38bb836e1 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -6,7 +6,28 @@
 #include <linux/kasan-tags.h>
 #include <linux/types.h>
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_SW_TAGS
+
+/*
+ * LLVM ABI for reporting tag mismatches in inline KASAN mode.
+ * On x86 the INT3 instruction is used to carry metadata in RAX
+ * to the KASAN report.
+ *
+ * SIZE refers to how many bytes the faulty memory access
+ * requested.
+ * WRITE bit, when set, indicates the access was a write, otherwise
+ * it was a read.
+ * RECOVER bit, when set, should allow the kernel to carry on after
+ * a tag mismatch. Otherwise die() is called.
+ */
+#define KASAN_RAX_RECOVER	0x20
+#define KASAN_RAX_WRITE		0x10
+#define KASAN_RAX_SIZE_MASK	0x0f
+#define KASAN_RAX_SIZE(rax)	(1 << ((rax) & KASAN_RAX_SIZE_MASK))
+
+#else
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
 
 /*
  * Compiler uses shadow offset assuming that addresses start
@@ -35,10 +56,15 @@
 #define __tag_shifted(tag)		FIELD_PREP(GENMASK_ULL(60, 57), tag)
 #define __tag_reset(addr)		(sign_extend64((u64)(addr), 56))
 #define __tag_get(addr)			((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+bool kasan_inline_handler(struct pt_regs *regs);
 #else
 #define __tag_shifted(tag)		0UL
 #define __tag_reset(addr)		(addr)
 #define __tag_get(addr)			0
+static inline bool kasan_inline_handler(struct pt_regs *regs)
+{
+	return false;
+}
 #endif /* CONFIG_KASAN_SW_TAGS */
 
 static inline void *__tag_set(const void *__addr, u8 tag)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 2a330566e62b..4cb085daad31 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -2228,7 +2228,7 @@ int3_exception_notify(struct notifier_block *self, unsigned long val, void *data
 }
 
 /* Must be noinline to ensure uniqueness of int3_selftest_ip. */
-static noinline void __init int3_selftest(void)
+static noinline __no_sanitize_address void __init int3_selftest(void)
 {
 	static __initdata struct notifier_block int3_exception_nb = {
 		.notifier_call	= int3_exception_notify,
@@ -2236,6 +2236,7 @@ static noinline void __init int3_selftest(void)
 	};
 	unsigned int val = 0;
 
+	kasan_disable_current();
 	BUG_ON(register_die_notifier(&int3_exception_nb));
 
 	/*
@@ -2253,6 +2254,7 @@ static noinline void __init int3_selftest(void)
 
 	BUG_ON(val != 1);
 
+	kasan_enable_current();
 	unregister_die_notifier(&int3_exception_nb);
 }
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 0f6f187b1a9e..2a119279980f 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -912,6 +912,10 @@ static bool do_int3(struct pt_regs *regs)
 	if (kprobe_int3_handler(regs))
 		return true;
 #endif
+
+	if (kasan_inline_handler(regs))
+		return true;
+
 	res = notify_die(DIE_INT3, "int3", regs, 0, X86_TRAP_BP, SIGTRAP);
 
 	return res == NOTIFY_STOP;
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5b9908f13dcf..1dc18090cbe7 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -36,7 +36,9 @@ obj-$(CONFIG_PTDUMP)		+= dump_pagetables.o
 obj-$(CONFIG_PTDUMP_DEBUGFS)	+= debug_pagetables.o
 
 KASAN_SANITIZE_kasan_init_$(BITS).o := n
+KASAN_SANITIZE_kasan_inline.o := n
 obj-$(CONFIG_KASAN)		+= kasan_init_$(BITS).o
+obj-$(CONFIG_KASAN_SW_TAGS)	+= kasan_inline.o
 
 KMSAN_SANITIZE_kmsan_shadow.o	:= n
 obj-$(CONFIG_KMSAN)		+= kmsan_shadow.o
diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
new file mode 100644
index 000000000000..9f85dfd1c38b
--- /dev/null
+++ b/arch/x86/mm/kasan_inline.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+
+bool kasan_inline_handler(struct pt_regs *regs)
+{
+	int metadata = regs->ax;
+	u64 addr = regs->di;
+	u64 pc = regs->ip;
+	bool recover = metadata & KASAN_RAX_RECOVER;
+	bool write = metadata & KASAN_RAX_WRITE;
+	size_t size = KASAN_RAX_SIZE(metadata);
+
+	if (user_mode(regs))
+		return false;
+
+	if (!kasan_report((void *)addr, size, write, pc))
+		return false;
+
+	kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);
+
+	return true;
+}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 54481f8c30c5..8691ad870f3b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -663,4 +663,28 @@ void kasan_non_canonical_hook(unsigned long addr);
 static inline void kasan_non_canonical_hook(unsigned long addr) { }
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
+#ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * The instrumentation allows to control whether we can proceed after
+ * a crash was detected. This is done by passing the -recover flag to
+ * the compiler. Disabling recovery allows to generate more compact
+ * code.
+ *
+ * Unfortunately disabling recovery doesn't work for the kernel right
+ * now. KASAN reporting is disabled in some contexts (for example when
+ * the allocator accesses slab object metadata; this is controlled by
+ * current->kasan_depth). All these accesses are detected by the tool,
+ * even though the reports for them are not printed.
+ *
+ * This is something that might be fixed at some point in the future.
+ */
+static inline void kasan_inline_recover(
+	bool recover, char *msg, struct pt_regs *regs, unsigned long err,
+	void die_fn(const char *str, struct pt_regs *regs, long err))
+{
+	if (!recover)
+		die_fn(msg, regs, err);
+}
+#endif
+
 #endif /* LINUX_KASAN_H */
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 14/19] arm64: Unify software tag-based KASAN inline recovery path
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (12 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 13/19] kasan: x86: Handle int3 for inline KASAN reports Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-26 19:35   ` Catalin Marinas
  2025-08-25 20:24 ` [PATCH v5 15/19] kasan: x86: Apply multishot to the inline report handler Maciej Wieczor-Retman
                   ` (4 subsequent siblings)
  18 siblings, 1 reply; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

To avoid having a copy of a long comment - explaining the intricacies
and issues of the inline KASAN recovery system - in every architecture
that uses the software tag-based mode, a unified kasan_inline_recover()
function was added.

Use kasan_inline_recover() in the KASAN brk handler to clean up the long
comment, which is now kept in the arch-independent KASAN code.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v5:
- Split arm64 portion of patch 13/18 into this one. (Peter Zijlstra)

 arch/arm64/kernel/traps.c | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index f528b6041f6a..fe3c0104fe31 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -1068,22 +1068,7 @@ int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)
 
 	kasan_report(addr, size, write, pc);
 
-	/*
-	 * The instrumentation allows to control whether we can proceed after
-	 * a crash was detected. This is done by passing the -recover flag to
-	 * the compiler. Disabling recovery allows to generate more compact
-	 * code.
-	 *
-	 * Unfortunately disabling recovery doesn't work for the kernel right
-	 * now. KASAN reporting is disabled in some contexts (for example when
-	 * the allocator accesses slab object metadata; this is controlled by
-	 * current->kasan_depth). All these accesses are detected by the tool,
-	 * even though the reports for them are not printed.
-	 *
-	 * This is something that might be fixed at some point in the future.
-	 */
-	if (!recover)
-		die("Oops - KASAN", regs, esr);
+	kasan_inline_recover(recover, "Oops - KASAN", regs, esr, die);
 
 	/* If thread survives, skip over the brk instruction and continue: */
 	arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 15/19] kasan: x86: Apply multishot to the inline report handler
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (13 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 14/19] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 16/19] kasan: x86: Logical bit shift for kasan_mem_to_shadow Maciej Wieczor-Retman
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

KASAN by default reports only one tag mismatch and, based on other
command line parameters, either keeps going or panics. The multishot
mechanism - enabled either through a command line parameter or by
in-kernel enable/disable function calls - lifts that restriction and
allows an unlimited number of tag mismatch reports to be shown.

Inline KASAN uses the INT3 instruction to pass metadata to the report
handling function. Currently the "recover" field in that metadata is
broken in the compiler layer and causes every inline tag mismatch to
panic the kernel.

Check the multishot state in the KASAN hook called inside the INT3
handling function.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Add this patch to the series.

 arch/x86/mm/kasan_inline.c | 3 +++
 include/linux/kasan.h      | 3 +++
 mm/kasan/report.c          | 8 +++++++-
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
index 9f85dfd1c38b..f837caf32e6c 100644
--- a/arch/x86/mm/kasan_inline.c
+++ b/arch/x86/mm/kasan_inline.c
@@ -17,6 +17,9 @@ bool kasan_inline_handler(struct pt_regs *regs)
 	if (!kasan_report((void *)addr, size, write, pc))
 		return false;
 
+	if (kasan_multi_shot_enabled())
+		return true;
+
 	kasan_inline_recover(recover, "Oops - KASAN", regs, metadata, die);
 
 	return true;
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 8691ad870f3b..7a2527794549 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -663,7 +663,10 @@ void kasan_non_canonical_hook(unsigned long addr);
 static inline void kasan_non_canonical_hook(unsigned long addr) { }
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
+bool kasan_multi_shot_enabled(void);
+
 #ifdef CONFIG_KASAN_SW_TAGS
+
 /*
  * The instrumentation allows to control whether we can proceed after
  * a crash was detected. This is done by passing the -recover flag to
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 50d487a0687a..9e830639e1b2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -121,6 +121,12 @@ static void report_suppress_stop(void)
 #endif
 }
 
+bool kasan_multi_shot_enabled(void)
+{
+	return test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags);
+}
+EXPORT_SYMBOL(kasan_multi_shot_enabled);
+
 /*
  * Used to avoid reporting more than one KASAN bug unless kasan_multi_shot
  * is enabled. Note that KASAN tests effectively enable kasan_multi_shot
@@ -128,7 +134,7 @@ static void report_suppress_stop(void)
  */
 static bool report_enabled(void)
 {
-	if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+	if (kasan_multi_shot_enabled())
 		return true;
 	return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
 }
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 16/19] kasan: x86: Logical bit shift for kasan_mem_to_shadow
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (14 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 15/19] kasan: x86: Apply multishot to the inline report handler Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 17/19] mm: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

While tag-based KASAN generally uses an arithmetic bit shift to convert
a memory address to a shadow memory address, that doesn't work for all
cases on x86. Testing different shadow memory offsets proved that either
4 or 5 level paging didn't work correctly or inline mode ran into
issues. Thus the best working scheme is the logical bit shift and the
non-canonical shadow offset that x86 uses for generic KASAN, adjusted
for the increased granularity from 8 to 16 bytes.

Add an arch specific implementation of kasan_mem_to_shadow() that uses
the logical bit shift.
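
For illustration, a userspace sketch contrasting the two shifts for one
kernel address (assuming the scale shift of 4 and the shadow offset
chosen later in this series):

	#include <stdint.h>
	#include <stdio.h>

	#define SHIFT  4
	#define OFFSET 0xeffffc0000000000ULL /* x86 SW_TAGS KASAN_SHADOW_OFFSET */

	int main(void)
	{
		uint64_t addr = 0xffffc90000000000ULL; /* a vmalloc-range address */

		uint64_t logical    = (addr >> SHIFT) + OFFSET;
		uint64_t arithmetic = ((uint64_t)((int64_t)addr >> SHIFT)) + OFFSET;

		printf("logical:    0x%016llx\n", (unsigned long long)logical);
		printf("arithmetic: 0x%016llx\n", (unsigned long long)arithmetic);
		return 0;
	}

The logical variant lands at 0xfffff89000000000, inside the tag-based
shadow region documented in the last patch of the series; the arithmetic
variant does not.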

The non-canonical hook tries to determine whether an address came from
kasan_mem_to_shadow(). First it checks whether the address fits into the
legal range of values that the mem-to-shadow function can output.

Tie both generic and tag-based x86 KASAN modes to the address range
check associated with generic KASAN.
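
With the values used later in this series (shadow offset
0xeffffc0000000000, scale shift 4), the legal output range the hook
checks works out to:

	kasan_mem_to_shadow((void *)0UL)  = 0xeffffc0000000000
	kasan_mem_to_shadow((void *)~0UL) = (0xffffffffffffffff >> 4)
	                                    + 0xeffffc0000000000
	                                  = 0xfffffbffffffffff

which matches the end of the shadow region documented for x86 in the
last patch of the series.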

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Add this patch to the series.

 arch/x86/include/asm/kasan.h | 8 ++++++++
 mm/kasan/report.c            | 5 +++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 5bf38bb836e1..f3e34a9754d2 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -53,6 +53,14 @@
 
 #ifdef CONFIG_KASAN_SW_TAGS
 
+static inline void *__kasan_mem_to_shadow(const void *addr)
+{
+	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+		+ KASAN_SHADOW_OFFSET;
+}
+
+#define kasan_mem_to_shadow(addr)	__kasan_mem_to_shadow(addr)
+
 #define __tag_shifted(tag)		FIELD_PREP(GENMASK_ULL(60, 57), tag)
 #define __tag_reset(addr)		(sign_extend64((u64)(addr), 56))
 #define __tag_get(addr)			((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 9e830639e1b2..ee440ed1ecd3 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -648,13 +648,14 @@ void kasan_non_canonical_hook(unsigned long addr)
 	const char *bug_type;
 
 	/*
-	 * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+	 * For Generic KASAN and Software Tag-Based mode on the x86
+	 * architecture, kasan_mem_to_shadow() uses the logical right shift
 	 * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
 	 * both x86 and arm64). Thus, the possible shadow addresses (even for
 	 * bogus pointers) belong to a single contiguous region that is the
 	 * result of kasan_mem_to_shadow() applied to the whole address space.
 	 */
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
 		if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0UL)) ||
 		    addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
 			return;
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 17/19] mm: Unpoison pcpu chunks with base address tag
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (15 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 16/19] kasan: x86: Logical bit shift for kasan_mem_to_shadow Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 18/19] mm: Unpoison vms[area] addresses with a common tag Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 19/19] x86: Make software tag-based kasan available Maciej Wieczor-Retman
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

The problem presented here is related to NUMA systems and tag-based
KASAN mode. It can be explained in the following points:

	1. There can be more than one virtual memory chunk.
	2. Each chunk's base address has a tag.
	3. The base address points at the first chunk and thus inherits
	   the tag of the first chunk.
	4. The subsequent chunks will be accessed with the tag from the
	   first chunk.
	5. Thus, the subsequent chunks need to have their tags set to
	   match that of the first chunk.

Refactor code by moving it into a helper in preparation for the actual
fix.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Redo the patch message numbered list.
- Do the refactoring in this patch and move additions to the next new
  one.

Changelog v3:
- Remove last version of this patch that just resets the tag on
  base_addr and add this patch that unpoisons all areas with the same
  tag instead.

 include/linux/kasan.h | 10 ++++++++++
 mm/kasan/hw_tags.c    | 11 +++++++++++
 mm/kasan/shadow.c     | 10 ++++++++++
 mm/vmalloc.c          |  4 +---
 4 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7a2527794549..3ec432d7df9a 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -613,6 +613,13 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
 		__kasan_poison_vmalloc(start, size);
 }
 
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);
+static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+	if (kasan_enabled())
+		__kasan_unpoison_vmap_areas(vms, nr_vms);
+}
+
 #else /* CONFIG_KASAN_VMALLOC */
 
 static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -637,6 +644,9 @@ static inline void *kasan_unpoison_vmalloc(const void *start,
 static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
 { }
 
+static inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{ }
+
 #endif /* CONFIG_KASAN_VMALLOC */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b54..1f569df313c3 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -382,6 +382,17 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
 	 */
 }
 
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+	int area;
+
+	for (area = 0 ; area < nr_vms ; area++) {
+		vms[area]->addr = __kasan_unpoison_vmalloc(
+			vms[area]->addr, vms[area]->size,
+			KASAN_VMALLOC_PROT_NORMAL);
+	}
+}
+
 #endif
 
 void kasan_enable_hw_tags(void)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb1..b41f74d68916 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -646,6 +646,16 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
 	kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
 }
 
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+	int area;
+
+	for (area = 0 ; area < nr_vms ; area++) {
+		kasan_poison(vms[area]->addr, vms[area]->size,
+			     arch_kasan_get_tag(vms[area]->addr), false);
+	}
+}
+
 #else /* CONFIG_KASAN_VMALLOC */
 
 int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c93893fb8dd4..00be0abcaf60 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4847,9 +4847,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	 * With hardware tag-based KASAN, marking is skipped for
 	 * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
 	 */
-	for (area = 0; area < nr_vms; area++)
-		vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
-				vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
+	kasan_unpoison_vmap_areas(vms, nr_vms);
 
 	kfree(vas);
 	return vms;
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 18/19] mm: Unpoison vms[area] addresses with a common tag
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (16 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 17/19] mm: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  2025-08-25 20:24 ` [PATCH v5 19/19] x86: Make software tag-based kasan available Maciej Wieczor-Retman
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

The problem presented here is related to NUMA systems and tag-based
KASAN mode. It can be explained in the following points:

	1. There can be more than one virtual memory chunk.
	2. Each chunk's base address has a tag.
	3. The base address points at the first chunk and thus inherits
	   the tag of the first chunk.
	4. The subsequent chunks will be accessed with the tag from the
	   first chunk.
	5. Thus, the subsequent chunks need to have their tags set to
	   match that of the first chunk.

Unpoison all vms[]->addr memory and pointers with the same tag to
resolve the mismatch.
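
A sketch of the access pattern that goes wrong (names hypothetical;
per-cpu chunk addresses are all derived from the base address):

	void *base = vms[0]->addr;       /* carries vms[0]'s tag */
	void *p    = base + area_offset; /* still vms[0]'s tag   */

	/*
	 * p may land inside vms[1]'s range, which was unpoisoned with
	 * vms[1]'s own tag - the shadow check then mismatches even
	 * though the untagged address is correct.
	 */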

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Move tagging the vms[]->addr to this new patch and leave refactoring
  there.
- Comment the fix to provide some context.

 mm/kasan/shadow.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index b41f74d68916..ee2488371784 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -646,13 +646,21 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
 	kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
 }
 
+/*
+ * A tag mismatch happens when calculating per-cpu chunk addresses, because
+ * they all inherit the tag from vms[0]->addr, even when nr_vms is bigger
+ * than 1. This is a problem because all the vms[]->addr come from separate
+ * allocations and have different tags so while the calculated address is
+ * correct the tag isn't.
+ */
 void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
 {
 	int area;
 
 	for (area = 0 ; area < nr_vms ; area++) {
 		kasan_poison(vms[area]->addr, vms[area]->size,
-			     arch_kasan_get_tag(vms[area]->addr), false);
+			     arch_kasan_get_tag(vms[0]->addr), false);
+		arch_kasan_set_tag(vms[area]->addr, arch_kasan_get_tag(vms[0]->addr));
 	}
 }
 
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v5 19/19] x86: Make software tag-based kasan available
  2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
                   ` (17 preceding siblings ...)
  2025-08-25 20:24 ` [PATCH v5 18/19] mm: Unpoison vms[area] addresses with a common tag Maciej Wieczor-Retman
@ 2025-08-25 20:24 ` Maciej Wieczor-Retman
  18 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-25 20:24 UTC (permalink / raw)
  To: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	maciej.wieczor-retman, leitao, peterz, wangkefeng.wang, surenb,
	ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie, akpm,
	guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier, will,
	andreyknvl, jhubbard, bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

Make CONFIG_KASAN_SW_TAGS available for x86 machines if they have
ADDRESS_MASKING (LAM) enabled, as LAM works similarly to the Top-Byte
Ignore (TBI) feature that enables the software tag-based mode on the
arm64 platform.

Set the scale macro based on the KASAN mode: in software tag-based mode
16 bytes of memory map to one shadow byte, while in generic mode 8 bytes
do.

Disable CONFIG_KASAN_INLINE and CONFIG_KASAN_STACK when
CONFIG_KASAN_SW_TAGS is enabled on x86 until the appropriate compiler
support is available.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Add x86 specific kasan_mem_to_shadow().
- Revert x86 to the older unsigned KASAN_SHADOW_OFFSET. Do the same to
  KASAN_SHADOW_START/END.
- Modify scripts/gdb/linux/kasan.py to keep x86 using unsigned offset.
- Disable inline and stack support when software tags are enabled on
  x86.

Changelog v3:
- Remove runtime_const from previous patch and merge the rest here.
- Move scale shift definition back to header file.
- Add new kasan offset for software tag based mode.
- Fix patch message typo 32 -> 16, and 16 -> 8.
- Update lib/Kconfig.kasan with x86 now having software tag-based
  support.

Changelog v2:
- Remove KASAN dense code.

 Documentation/arch/x86/x86_64/mm.rst | 6 ++++--
 arch/x86/Kconfig                     | 4 +++-
 arch/x86/boot/compressed/misc.h      | 1 +
 arch/x86/include/asm/kasan.h         | 1 +
 arch/x86/kernel/setup.c              | 2 ++
 lib/Kconfig.kasan                    | 3 ++-
 scripts/gdb/linux/kasan.py           | 4 ++--
 7 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/Documentation/arch/x86/x86_64/mm.rst b/Documentation/arch/x86/x86_64/mm.rst
index a6cf05d51bd8..ccbdbb4cda36 100644
--- a/Documentation/arch/x86/x86_64/mm.rst
+++ b/Documentation/arch/x86/x86_64/mm.rst
@@ -60,7 +60,8 @@ Complete virtual memory map with 4-level page tables
    ffffe90000000000 |  -23    TB | ffffe9ffffffffff |    1 TB | ... unused hole
    ffffea0000000000 |  -22    TB | ffffeaffffffffff |    1 TB | virtual memory map (vmemmap_base)
    ffffeb0000000000 |  -21    TB | ffffebffffffffff |    1 TB | ... unused hole
-   ffffec0000000000 |  -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory
+   ffffec0000000000 |  -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory (generic mode)
+   fffff40000000000 |   -8    TB | fffffbffffffffff |    8 TB | KASAN shadow memory (software tag-based mode)
   __________________|____________|__________________|_________|____________________________________________________________
                                                               |
                                                               | Identical layout to the 56-bit one from here on:
@@ -130,7 +131,8 @@ Complete virtual memory map with 5-level page tables
    ffd2000000000000 |  -11.5  PB | ffd3ffffffffffff |  0.5 PB | ... unused hole
    ffd4000000000000 |  -11    PB | ffd5ffffffffffff |  0.5 PB | virtual memory map (vmemmap_base)
    ffd6000000000000 |  -10.5  PB | ffdeffffffffffff | 2.25 PB | ... unused hole
-   ffdf000000000000 |   -8.25 PB | fffffbffffffffff |   ~8 PB | KASAN shadow memory
+   ffdf000000000000 |   -8.25 PB | fffffbffffffffff |   ~8 PB | KASAN shadow memory (generic mode)
+   ffeffc0000000000 |   -6    PB | fffffbffffffffff |    4 PB | KASAN shadow memory (software tag-based mode)
   __________________|____________|__________________|_________|____________________________________________________________
                                                               |
                                                               | Identical layout to the 47-bit one from here on:
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b8df57ac0f28..f44fec1190b6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -69,6 +69,7 @@ config X86
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_CONFIGURES_CPU_MITIGATIONS
 	select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
+	select ARCH_DISABLE_KASAN_INLINE	if X86_64 && KASAN_SW_TAGS
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
 	select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
 	select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
@@ -199,6 +200,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
 	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
+	select HAVE_ARCH_KASAN_SW_TAGS		if ADDRESS_MASKING
 	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KMSAN			if X86_64
 	select HAVE_ARCH_KGDB
@@ -403,7 +405,7 @@ config AUDIT_ARCH
 
 config KASAN_SHADOW_OFFSET
 	hex
-	depends on KASAN
+	default 0xeffffc0000000000 if KASAN_SW_TAGS
 	default 0xdffffc0000000000
 
 config HAVE_INTEL_TXT
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index db1048621ea2..ded92b439ada 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -13,6 +13,7 @@
 #undef CONFIG_PARAVIRT_SPINLOCKS
 #undef CONFIG_KASAN
 #undef CONFIG_KASAN_GENERIC
+#undef CONFIG_KASAN_SW_TAGS
 
 #define __NO_FORTIFY
 
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index f3e34a9754d2..385f4e9daab3 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -7,6 +7,7 @@
 #include <linux/types.h>
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 #ifdef CONFIG_KASAN_SW_TAGS
+#define KASAN_SHADOW_SCALE_SHIFT 4
 
 /*
  * LLVM ABI for reporting tag mismatches in inline KASAN mode.
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1b2edd07a3e1..5b819f84f6db 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1207,6 +1207,8 @@ void __init setup_arch(char **cmdline_p)
 
 	kasan_init();
 
+	kasan_init_sw_tags();
+
 	/*
 	 * Sync back kernel address range.
 	 *
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f82889a830fa..9ddbc6aeb5d5 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -100,7 +100,8 @@ config KASAN_SW_TAGS
 
 	  Requires GCC 11+ or Clang.
 
-	  Supported only on arm64 CPUs and relies on Top Byte Ignore.
+	  Supported on arm64 CPUs that support Top Byte Ignore and on x86 CPUs
+	  that support Linear Address Masking.
 
 	  Consumes about 1/16th of available memory at kernel start and
 	  add an overhead of ~20% for dynamic allocations.
diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index fca39968d308..4b86202b155f 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -7,7 +7,7 @@
 #
 
 import gdb
-from linux import constants, mm
+from linux import constants, utils, mm
 from ctypes import c_int64 as s64
 
 def help():
@@ -40,7 +40,7 @@ class KasanMemToShadow(gdb.Command):
         else:
             help()
     def kasan_mem_to_shadow(self, addr):
-        if constants.CONFIG_KASAN_SW_TAGS:
+        if constants.CONFIG_KASAN_SW_TAGS and not utils.is_target_arch('x86'):
             addr = s64(addr)
         return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET
 
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v5 10/19] x86: LAM compatible non-canonical definition
  2025-08-25 20:24 ` [PATCH v5 10/19] x86: LAM compatible non-canonical definition Maciej Wieczor-Retman
@ 2025-08-25 20:59   ` Samuel Holland
  2025-08-27  6:32     ` Maciej Wieczor-Retman
  2025-08-25 21:36   ` Dave Hansen
  1 sibling, 1 reply; 31+ messages in thread
From: Samuel Holland @ 2025-08-25 20:59 UTC (permalink / raw)
  To: Maciej Wieczor-Retman
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel, sohil.mehta, baohua, david,
	kbingham, weixugc, Liam.Howlett, alexandre.chartre, kas,
	mark.rutland, trintaeoitogc, axelrasmussen, yuanchu, joey.gouly,
	samitolvanen, joel.granados, graf, vincenzo.frascino, kees, ardb,
	thiago.bauermann, glider, thuth, kuan-ying.lee, pasha.tatashin,
	nick.desaulniers+lkml, vbabka, kaleshsingh, justinstitt,
	catalin.marinas, alexander.shishkin, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	leitao, peterz, wangkefeng.wang, surenb, ziy, smostafa,
	ryabinin.a.a, ubizjak, jbohac, broonie, akpm, guoweikang.kernel,
	rppt, pcc, jan.kiszka, nicolas.schier, will, andreyknvl, jhubbard,
	bp

Hi Maciej,

On 2025-08-25 3:24 PM, Maciej Wieczor-Retman wrote:
> For an address to be canonical it has to have its top bits equal to each
> other. The number of bits depends on the paging level, and whether they're
> supposed to be ones or zeroes depends on whether the address points to
> kernel or user space.
> 
> With Linear Address Masking (LAM) enabled, the definition of linear
> address canonicality is modified. Not all of the previously required bits
> need to be equal any more, only the first and last bits of the previously
> checked range. So, for example, a 5-level paging kernel address needs to
> have bits [63] and [56] set.
> 
> Add separate __canonical_address() implementation for
> CONFIG_KASAN_SW_TAGS since it's the only thing right now that enables
> LAM for kernel addresses (LAM_SUP bit in CR4).
> 
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> ---
> Changelog v4:
> - Add patch to the series.
> 
>  arch/x86/include/asm/page.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> index bcf5cad3da36..a83f23a71f35 100644
> --- a/arch/x86/include/asm/page.h
> +++ b/arch/x86/include/asm/page.h
> @@ -82,10 +82,20 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
>  	return __va(pfn << PAGE_SHIFT);
>  }
>  
> +/*
> + * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
> + */
> +#ifdef CONFIG_KASAN_SW_TAGS
> +static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
> +{
> +	return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
> +}
> +#else
>  static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>  {
>  	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
>  }
> +#endif

These two implementations have different semantics. The new function works only
on kernel addresses, whereas the existing one works on user addresses as well.
It looks like at least KVM's use of __is_canonical_address() expects the
function to work with user addresses.
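
To make the difference concrete, here is a minimal user-space sketch of the
two variants (the address value is invented for illustration):

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n) (1ULL << (n))

/* existing variant: sign-extends, so user and kernel addresses both work */
static uint64_t canonical_sext(uint64_t vaddr, uint8_t vaddr_bits)
{
	return (uint64_t)(((int64_t)vaddr << (64 - vaddr_bits)) >>
			  (64 - vaddr_bits));
}

/* new variant: forces bits 63 and (vaddr_bits - 1), so it can only ever
 * produce kernel-half addresses */
static uint64_t canonical_lam(uint64_t vaddr, uint8_t vaddr_bits)
{
	return vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1);
}

int main(void)
{
	uint64_t user = 0x00007f1234567000ULL;	/* canonical user address */

	printf("sext: %#llx\n", (unsigned long long)canonical_sext(user, 48));
	printf("lam:  %#llx\n", (unsigned long long)canonical_lam(user, 48));
	return 0;
}

The first call returns the user address unchanged, while the second turns it
into 0x8000ff1234567000, so an __is_canonical_address() comparison on a user
pointer can never succeed with the new variant.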

Regards,
Samuel

>  
>  static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
>  {




* Re: [PATCH v5 10/19] x86: LAM compatible non-canonical definition
  2025-08-25 20:24 ` [PATCH v5 10/19] x86: LAM compatible non-canonical definition Maciej Wieczor-Retman
  2025-08-25 20:59   ` Samuel Holland
@ 2025-08-25 21:36   ` Dave Hansen
  2025-08-26  8:08     ` Maciej Wieczor-Retman
  1 sibling, 1 reply; 31+ messages in thread
From: Dave Hansen @ 2025-08-25 21:36 UTC (permalink / raw)
  To: Maciej Wieczor-Retman, sohil.mehta, baohua, david, kbingham,
	weixugc, Liam.Howlett, alexandre.chartre, kas, mark.rutland,
	trintaeoitogc, axelrasmussen, yuanchu, joey.gouly, samitolvanen,
	joel.granados, graf, vincenzo.frascino, kees, ardb,
	thiago.bauermann, glider, thuth, kuan-ying.lee, pasha.tatashin,
	nick.desaulniers+lkml, vbabka, kaleshsingh, justinstitt,
	catalin.marinas, alexander.shishkin, samuel.holland, dave.hansen,
	corbet, xin, dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	leitao, peterz, wangkefeng.wang, surenb, ziy, smostafa,
	ryabinin.a.a, ubizjak, jbohac, broonie, akpm, guoweikang.kernel,
	rppt, pcc, jan.kiszka, nicolas.schier, will, andreyknvl, jhubbard,
	bp
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

On 8/25/25 13:24, Maciej Wieczor-Retman wrote:
> +/*
> + * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
> + */
> +#ifdef CONFIG_KASAN_SW_TAGS
> +static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
> +{
> +	return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
> +}
> +#else
>  static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>  {
>  	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
>  }
> +#endif

This is the kind of thing that's bound to break. Could we distill it
down to something simpler, perhaps?

In the end, the canonical enforcement mask is the thing that's changing.
So perhaps it should be all common code except for the mask definition:

#ifdef CONFIG_KASAN_SW_TAGS
#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL(vaddr_bits-1))
#else
#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
#endif

(modulo off-by-one bugs ;)

Then the canonical check itself becomes something like:

	unsigned long cmask = CANONICAL_MASK(vaddr_bits);
	return (vaddr & cmask) == cmask;

That, to me, is the most straightforward way to do it.
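
Worked values, as a quick user-space check (5-level paging, and a kernel
pointer carrying an invented tag 0x5 in bits 60:57):

#include <stdint.h>
#include <stdio.h>

#define BIT_ULL(n)	  (1ULL << (n))
#define GENMASK_ULL(h, l) ((~0ULL << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
	uint64_t lam  = BIT_ULL(63) | BIT_ULL(56);	/* KASAN_SW_TAGS mask */
	uint64_t full = GENMASK_ULL(63, 57);		/* classic mask */
	uint64_t tagged = 0xebff888000001000ULL;	/* tag 0x5 in 60:57 */

	printf("lam check:  %d\n", (tagged & lam) == lam);	/* 1: passes */
	printf("full check: %d\n", (tagged & full) == full);	/* 0: fails */
	return 0;
}

The tagged pointer satisfies the relaxed LAM mask but fails the classic one,
which is exactly why the mask has to change under CONFIG_KASAN_SW_TAGS.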

I don't see it addressed in the cover letter, but what happens when a
CONFIG_KASAN_SW_TAGS=y kernel is booted on non-LAM hardware?



* Re: [PATCH v5 10/19] x86: LAM compatible non-canonical definition
  2025-08-25 21:36   ` Dave Hansen
@ 2025-08-26  8:08     ` Maciej Wieczor-Retman
  2025-08-27  0:46       ` Samuel Holland
  0 siblings, 1 reply; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-26  8:08 UTC (permalink / raw)
  To: Dave Hansen
  Cc: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	leitao, peterz, wangkefeng.wang, surenb, ziy, smostafa,
	ryabinin.a.a, ubizjak, jbohac, broonie, akpm, guoweikang.kernel,
	rppt, pcc, jan.kiszka, nicolas.schier, will, andreyknvl, jhubbard,
	bp, x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

On 2025-08-25 at 14:36:35 -0700, Dave Hansen wrote:
>On 8/25/25 13:24, Maciej Wieczor-Retman wrote:
>> +/*
>> + * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
>> + */
>> +#ifdef CONFIG_KASAN_SW_TAGS
>> +static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>> +{
>> +	return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
>> +}
>> +#else
>>  static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>>  {
>>  	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
>>  }
>> +#endif
>
>This is the kind of thing that's bound to break. Could we distill it
>down to something simpler, perhaps?
>
>In the end, the canonical enforcement mask is the thing that's changing.
>So perhaps it should be all common code except for the mask definition:
>
>#ifdef CONFIG_KASAN_SW_TAGS
>#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL(vaddr_bits-1))
>#else
>#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
>#endif
>
>(modulo off-by-one bugs ;)
>
>Then the canonical check itself becomes something like:
>
>	unsigned long cmask = CANONICAL_MASK(vaddr_bits);
>	return (vaddr & cmask) == cmask;
>
>That, to me, is the most straightforward way to do it.

Thanks, I'll try something like this. I will also have to investigate the
point Samuel brought up, that KVM possibly wants to pass user addresses to
this function as well.

>
>I don't see it addressed in the cover letter, but what happens when a
>CONFIG_KASAN_SW_TAGS=y kernel is booted on non-LAM hardware?

That's a good point, I need to add it to the cover letter. On non-LAM hardware
the kernel just doesn't boot. Disabling KASAN at runtime on unsupported
hardware isn't that difficult in outline mode, but I'm not sure it can work in
inline mode (where checks against shadow memory are pasted directly into the
code by the compiler).

Since there is no compiler support for the inline mode yet anyway, I'll try to
disable KASAN on non-LAM hardware at runtime.

-- 
Kind regards
Maciej Wieczór-Retman



* Re: [PATCH v5 14/19] arm64: Unify software tag-based KASAN inline recovery path
  2025-08-25 20:24 ` [PATCH v5 14/19] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
@ 2025-08-26 19:35   ` Catalin Marinas
  0 siblings, 0 replies; 31+ messages in thread
From: Catalin Marinas @ 2025-08-26 19:35 UTC (permalink / raw)
  To: Maciej Wieczor-Retman
  Cc: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, alexander.shishkin,
	samuel.holland, dave.hansen, corbet, xin, dvyukov, tglx, scott,
	jason.andryuk, morbo, nathan, lorenzo.stoakes, mingo, brgerst,
	kristina.martsenko, bigeasy, luto, jgross, jpoimboe, urezki,
	mhocko, ada.coupriediaz, hpa, leitao, peterz, wangkefeng.wang,
	surenb, ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie,
	akpm, guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier,
	will, andreyknvl, jhubbard, bp, x86, linux-doc, linux-mm, llvm,
	linux-kbuild, kasan-dev, linux-kernel, linux-arm-kernel

On Mon, Aug 25, 2025 at 10:24:39PM +0200, Maciej Wieczor-Retman wrote:
> To avoid every architecture that uses the software tag-based mode carrying
> its own copy of a long comment explaining the intricacies and issues of the
> inline KASAN recovery path, a unified kasan_inline_recover() function was
> added.
> 
> Use kasan_inline_recover() in the kasan brk handler to clean up the long
> comment, which is now kept in the non-arch KASAN code.
> 
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>



* Re: [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation
  2025-08-25 20:24 ` [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
@ 2025-08-26 19:35   ` Catalin Marinas
  2025-08-27  6:26     ` Maciej Wieczor-Retman
  0 siblings, 1 reply; 31+ messages in thread
From: Catalin Marinas @ 2025-08-26 19:35 UTC (permalink / raw)
  To: Maciej Wieczor-Retman
  Cc: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, alexander.shishkin,
	samuel.holland, dave.hansen, corbet, xin, dvyukov, tglx, scott,
	jason.andryuk, morbo, nathan, lorenzo.stoakes, mingo, brgerst,
	kristina.martsenko, bigeasy, luto, jgross, jpoimboe, urezki,
	mhocko, ada.coupriediaz, hpa, leitao, peterz, wangkefeng.wang,
	surenb, ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie,
	akpm, guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier,
	will, andreyknvl, jhubbard, bp, x86, linux-doc, linux-mm, llvm,
	linux-kbuild, kasan-dev, linux-kernel, linux-arm-kernel

On Mon, Aug 25, 2025 at 10:24:26PM +0200, Maciej Wieczor-Retman wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index e9bbfacc35a6..82cbfc7d1233 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -431,11 +431,11 @@ config KASAN_SHADOW_OFFSET
>  	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
>  	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
>  	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
> -	default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
> -	default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
> -	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
> -	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
> -	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
> +	default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
> +	default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
> +	default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
> +	default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
> +	default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>  	default 0xffffffffffffffff
>  
>  config UNWIND_TABLES
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 5213248e081b..277d56ceeb01 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -89,7 +89,15 @@
>   *
>   * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
>   * the upper bound of possible virtual kernel memory addresses UL(1) << 64
> - * according to the mapping formula.
> + * according to the mapping formula. For Generic KASAN, the address in the
> + * mapping formula is treated as unsigned (part of the compiler's ABI), so the
> + * end of the shadow memory region is at a large positive offset from
> + * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
> + * formula is treated as signed. Since all kernel addresses are negative, they
> + * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
> + * itself the end of the shadow memory region. (User pointers are positive and
> + * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
> + * not allocated for them.)
>   *
>   * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
>   * memory start must map to the lowest possible kernel virtual memory address
> @@ -100,7 +108,11 @@
>   */
>  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>  #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
> +#ifdef CONFIG_KASAN_GENERIC
>  #define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
> +#else
> +#define KASAN_SHADOW_END	KASAN_SHADOW_OFFSET
> +#endif
>  #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
>  #define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
>  #define PAGE_END		KASAN_SHADOW_START
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index d541ce45daeb..dc2de12c4f26 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
>  /* The early shadow maps everything to a single page of zeroes */
>  asmlinkage void __init kasan_early_init(void)
>  {
> -	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
> -		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
> +	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
> +			KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
> +	else
> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
>  	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));

For the arm64 parts:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

I wonder whether it's worth keeping the generic KASAN mode for arm64.
We've had the hardware TBI from the start, so the architecture version
is not an issue. The compiler support may differ though.

Anyway, that would be more suitable for a separate cleanup patch.

-- 
Catalin



* Re: [PATCH v5 10/19] x86: LAM compatible non-canonical definition
  2025-08-26  8:08     ` Maciej Wieczor-Retman
@ 2025-08-27  0:46       ` Samuel Holland
  2025-08-27  6:08         ` Maciej Wieczor-Retman
  0 siblings, 1 reply; 31+ messages in thread
From: Samuel Holland @ 2025-08-27  0:46 UTC (permalink / raw)
  To: Maciej Wieczor-Retman, Dave Hansen
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

Hi Maciej,

On 2025-08-26 3:08 AM, Maciej Wieczor-Retman wrote:
> On 2025-08-25 at 14:36:35 -0700, Dave Hansen wrote:
>> On 8/25/25 13:24, Maciej Wieczor-Retman wrote:
>>> +/*
>>> + * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
>>> + */
>>> +#ifdef CONFIG_KASAN_SW_TAGS
>>> +static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>>> +{
>>> +	return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
>>> +}
>>> +#else
>>>  static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>>>  {
>>>  	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
>>>  }
>>> +#endif
>>
>> This is the kind of thing that's bound to break. Could we distill it
>> down to something simpler, perhaps?
>>
>> In the end, the canonical enforcement mask is the thing that's changing.
>> So perhaps it should be all common code except for the mask definition:
>>
>> #ifdef CONFIG_KASAN_SW_TAGS
>> #define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL(vaddr_bits-1))
>> #else
>> #define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
>> #endif
>>
>> (modulo off-by-one bugs ;)
>>
>> Then the canonical check itself becomes something like:
>>
>> 	unsigned long cmask = CANONICAL_MASK(vaddr_bits);
>> 	return (vaddr & cmask) == cmask;
>>
>> That, to me, is the most straightforward way to do it.
> 
> Thanks, I'll try something like this. I will also have to investigate the
> point Samuel brought up, that KVM possibly wants to pass user addresses to
> this function as well.
> 
>>
>> I don't see it addressed in the cover letter, but what happens when a
>> CONFIG_KASAN_SW_TAGS=y kernel is booted on non-LAM hardware?
> 
> That's a good point, I need to add it to the cover letter. On non-LAM hardware
> the kernel just doesn't boot. Disabling KASAN at runtime on unsupported
> hardware isn't that difficult in outline mode, but I'm not sure it can work in
> inline mode (where checks against shadow memory are pasted directly into the
> code by the compiler).

On RISC-V at least, I was able to run inline mode with missing hardware support.
The shadow memory is still allocated, so the inline tag checks do not fault. And
with a patch to make kasan_enabled() return false[1], all pointers remain
canonical (they match the MatchAllTag), so the inline tag checks all succeed.

[1]:
https://lore.kernel.org/linux-riscv/20241022015913.3524425-3-samuel.holland@sifive.com/
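
As a toy model of why that works (tag values invented; 0xff is the kernel's
KASAN_TAG_KERNEL match-all tag):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define KASAN_TAG_KERNEL 0xffU	/* match-all tag, never reported */

/* roughly what an inline-emitted tag check boils down to */
static bool tag_check_passes(uint64_t ptr, uint8_t shadow_tag)
{
	uint8_t ptr_tag = ptr >> 56;

	return ptr_tag == KASAN_TAG_KERNEL || ptr_tag == shadow_tag;
}

int main(void)
{
	/* With tagging disabled at runtime, pointers keep the canonical
	 * all-ones top byte, so the check passes regardless of what the
	 * (still allocated) shadow memory contains. */
	printf("%d\n", tag_check_passes(0xffff888000001000ULL, 0x2a));
	return 0;
}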

Regards,
Samuel

> Since there is no compiler support for the inline mode yet anyway, I'll try to
> disable KASAN on non-LAM hardware at runtime.
> 




* Re: [PATCH v5 10/19] x86: LAM compatible non-canonical definition
  2025-08-27  0:46       ` Samuel Holland
@ 2025-08-27  6:08         ` Maciej Wieczor-Retman
  0 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-27  6:08 UTC (permalink / raw)
  To: Samuel Holland
  Cc: Dave Hansen, x86, linux-doc, linux-mm, llvm, linux-kbuild,
	kasan-dev, linux-kernel, linux-arm-kernel

On 2025-08-26 at 19:46:19 -0500, Samuel Holland wrote:
>Hi Maciej,
>
>On 2025-08-26 3:08 AM, Maciej Wieczor-Retman wrote:
>> On 2025-08-25 at 14:36:35 -0700, Dave Hansen wrote:
>>> On 8/25/25 13:24, Maciej Wieczor-Retman wrote:
>>>> +/*
>>>> + * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
>>>> + */
>>>> +#ifdef CONFIG_KASAN_SW_TAGS
>>>> +static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>>>> +{
>>>> +	return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
>>>> +}
>>>> +#else
>>>>  static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>>>>  {
>>>>  	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
>>>>  }
>>>> +#endif
>>>
>>> This is the kind of thing that's bound to break. Could we distill it
>>> down to something simpler, perhaps?
>>>
>>> In the end, the canonical enforcement mask is the thing that's changing.
>>> So perhaps it should be all common code except for the mask definition:
>>>
>>> #ifdef CONFIG_KASAN_SW_TAGS
>>> #define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL(vaddr_bits-1))
>>> #else
>>> #define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
>>> #endif
>>>
>>> (modulo off-by-one bugs ;)
>>>
>>> Then the canonical check itself becomes something like:
>>>
>>> 	unsigned long cmask = CANONICAL_MASK(vaddr_bits);
>>> 	return (vaddr & cmask) == cmask;
>>>
>>> That, to me, is the most straightforward way to do it.
>> 
>> Thanks, I'll try something like this. I will also have to investigate the
>> point Samuel brought up, that KVM possibly wants to pass user addresses to
>> this function as well.
>> 
>>>
>>> I don't see it addressed in the cover letter, but what happens when a
>>> CONFIG_KASAN_SW_TAGS=y kernel is booted on non-LAM hardware?
>> 
>> That's a good point, I need to add it to the cover letter. On non-LAM hardware
>> the kernel just doesn't boot. Disabling KASAN at runtime on unsupported
>> hardware isn't that difficult in outline mode, but I'm not sure it can work in
>> inline mode (where checks against shadow memory are pasted directly into the
>> code by the compiler).
>
>On RISC-V at least, I was able to run inline mode with missing hardware support.
>The shadow memory is still allocated, so the inline tag checks do not fault. And
>with a patch to make kasan_enabled() return false[1], all pointers remain
>canonical (they match the MatchAllTag), so the inline tag checks all succeed.
>
>[1]:
>https://lore.kernel.org/linux-riscv/20241022015913.3524425-3-samuel.holland@sifive.com/

Thanks, that should work :)

I'll test it and apply it to the series.

>
>Regards,
>Samuel
>
>> Since there is no compiler support for the inline mode yet anyway, I'll try to
>> disable KASAN on non-LAM hardware at runtime.
>> 
>

-- 
Kind regards
Maciej Wieczór-Retman



* Re: [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation
  2025-08-26 19:35   ` Catalin Marinas
@ 2025-08-27  6:26     ` Maciej Wieczor-Retman
  0 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-27  6:26 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, alexander.shishkin,
	samuel.holland, dave.hansen, corbet, xin, dvyukov, tglx, scott,
	jason.andryuk, morbo, nathan, lorenzo.stoakes, mingo, brgerst,
	kristina.martsenko, bigeasy, luto, jgross, jpoimboe, urezki,
	mhocko, ada.coupriediaz, hpa, leitao, peterz, wangkefeng.wang,
	surenb, ziy, smostafa, ryabinin.a.a, ubizjak, jbohac, broonie,
	akpm, guoweikang.kernel, rppt, pcc, jan.kiszka, nicolas.schier,
	will, andreyknvl, jhubbard, bp, x86, linux-doc, linux-mm, llvm,
	linux-kbuild, kasan-dev, linux-kernel, linux-arm-kernel

On 2025-08-26 at 20:35:49 +0100, Catalin Marinas wrote:
>On Mon, Aug 25, 2025 at 10:24:26PM +0200, Maciej Wieczor-Retman wrote:
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index e9bbfacc35a6..82cbfc7d1233 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -431,11 +431,11 @@ config KASAN_SHADOW_OFFSET
>>  	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
>>  	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
>>  	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
>> -	default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
>> -	default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
>> -	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
>> -	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
>> -	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>> +	default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
>> +	default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
>> +	default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
>> +	default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
>> +	default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>>  	default 0xffffffffffffffff
>>  
>>  config UNWIND_TABLES
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 5213248e081b..277d56ceeb01 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -89,7 +89,15 @@
>>   *
>>   * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
>>   * the upper bound of possible virtual kernel memory addresses UL(1) << 64
>> - * according to the mapping formula.
>> + * according to the mapping formula. For Generic KASAN, the address in the
>> + * mapping formula is treated as unsigned (part of the compiler's ABI), so the
>> + * end of the shadow memory region is at a large positive offset from
>> + * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
>> + * formula is treated as signed. Since all kernel addresses are negative, they
>> + * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
>> + * itself the end of the shadow memory region. (User pointers are positive and
>> + * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
>> + * not allocated for them.)
>>   *
>>   * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
>>   * memory start must map to the lowest possible kernel virtual memory address
>> @@ -100,7 +108,11 @@
>>   */
>>  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>>  #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>> +#ifdef CONFIG_KASAN_GENERIC
>>  #define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
>> +#else
>> +#define KASAN_SHADOW_END	KASAN_SHADOW_OFFSET
>> +#endif
>>  #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
>>  #define KASAN_SHADOW_START	_KASAN_SHADOW_START(vabits_actual)
>>  #define PAGE_END		KASAN_SHADOW_START
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index d541ce45daeb..dc2de12c4f26 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
>>  /* The early shadow maps everything to a single page of zeroes */
>>  asmlinkage void __init kasan_early_init(void)
>>  {
>> -	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
>> -		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>> +	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
>> +			KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>> +	else
>> +		BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
>>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
>>  	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
>>  	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
>
>For the arm64 parts:
>
>Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks :)

>
>I wonder whether it's worth keeping the generic KASAN mode for arm64.
>We've had the hardware TBI from the start, so the architecture version
>is not an issue. The compiler support may differ though.
>
>Anyway, that would be more suitable for a separate cleanup patch.
>
>-- 
>Catalin

I want to test it at some point, but I was always under the impression that
(at least in theory) the different modes should be able to catch slightly
different errors. Not a big set, but one example is an access through the
wrong address that still lands in allocated memory: Generic mode considers it
fine, since its shadow memory only records whether and how much of a granule
is allocated, while sw-tags reports it because the randomized tags mismatch.
I can't think of any examples the other way around, but I assume there are a
few.
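
As a toy model of that first case (tags and layout invented; one 16-byte
granule per object):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* shadow for two adjacent, fully allocated objects that were given
 * different random tags at allocation time */
static const uint8_t shadow_tags[2] = { 0x3c, 0x71 };

static bool generic_ok(unsigned int granule)
{
	(void)granule;
	return true;	/* generic mode: both granules are allocated */
}

static bool sw_tags_ok(uint8_t ptr_tag, unsigned int granule)
{
	return ptr_tag == shadow_tags[granule];
}

int main(void)
{
	/* a pointer tagged for object 0 overruns into object 1 */
	uint8_t ptr_tag = shadow_tags[0];

	printf("generic: %s\n", generic_ok(1) ? "no report" : "report");
	printf("sw-tags: %s\n", sw_tags_ok(ptr_tag, 1) ? "no report" : "report");
	return 0;
}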

-- 
Kind regards
Maciej Wieczór-Retman



* Re: [PATCH v5 10/19] x86: LAM compatible non-canonical definition
  2025-08-25 20:59   ` Samuel Holland
@ 2025-08-27  6:32     ` Maciej Wieczor-Retman
  0 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-27  6:32 UTC (permalink / raw)
  To: Samuel Holland
  Cc: x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel, sohil.mehta, baohua, david,
	kbingham, weixugc, Liam.Howlett, alexandre.chartre, kas,
	mark.rutland, trintaeoitogc, axelrasmussen, yuanchu, joey.gouly,
	samitolvanen, joel.granados, graf, vincenzo.frascino, kees, ardb,
	thiago.bauermann, glider, thuth, kuan-ying.lee, pasha.tatashin,
	nick.desaulniers+lkml, vbabka, kaleshsingh, justinstitt,
	catalin.marinas, alexander.shishkin, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	leitao, peterz, wangkefeng.wang, surenb, ziy, smostafa,
	ryabinin.a.a, ubizjak, jbohac, broonie, akpm, guoweikang.kernel,
	rppt, pcc, jan.kiszka, nicolas.schier, will, andreyknvl, jhubbard,
	bp

On 2025-08-25 at 15:59:46 -0500, Samuel Holland wrote:
>Hi Maciej,
>
>On 2025-08-25 3:24 PM, Maciej Wieczor-Retman wrote:
>> For an address to be canonical it has to have its top bits equal to each
>> other. The number of bits depends on the paging level, and whether they're
>> supposed to be ones or zeroes depends on whether the address points to
>> kernel or user space.
>> 
>> With Linear Address Masking (LAM) enabled, the definition of linear
>> address canonicality is modified. Not all of the previously required bits
>> need to be equal any more, only the first and last bits of the previously
>> checked range. So, for example, a 5-level paging kernel address needs to
>> have bits [63] and [56] set.
>> 
>> Add separate __canonical_address() implementation for
>> CONFIG_KASAN_SW_TAGS since it's the only thing right now that enables
>> LAM for kernel addresses (LAM_SUP bit in CR4).
>> 
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>> ---
>> Changelog v4:
>> - Add patch to the series.
>> 
>>  arch/x86/include/asm/page.h | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>> 
>> diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
>> index bcf5cad3da36..a83f23a71f35 100644
>> --- a/arch/x86/include/asm/page.h
>> +++ b/arch/x86/include/asm/page.h
>> @@ -82,10 +82,20 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
>>  	return __va(pfn << PAGE_SHIFT);
>>  }
>>  
>> +/*
>> + * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
>> + */
>> +#ifdef CONFIG_KASAN_SW_TAGS
>> +static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>> +{
>> +	return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
>> +}
>> +#else
>>  static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
>>  {
>>  	return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
>>  }
>> +#endif
>
>These two implementations have different semantics. The new function works only
>on kernel addresses, whereas the existing one works on user addresses as well.
>It looks like at least KVM's use of __is_canonical_address() expects the
>function to work with user addresses.

Thanks for noticing that, I'll think of a way to make it work for user addresses
too :)

>
>Regards,
>Samuel
>
>>  
>>  static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
>>  {
>

-- 
Kind regards
Maciej Wieczór-Retman



* Re: [PATCH v5 07/19] mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic
  2025-08-25 20:24 ` [PATCH v5 07/19] mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
@ 2025-08-28  9:50   ` Mike Rapoport
  2025-08-28 16:22     ` Maciej Wieczor-Retman
  0 siblings, 1 reply; 31+ messages in thread
From: Mike Rapoport @ 2025-08-28  9:50 UTC (permalink / raw)
  To: Maciej Wieczor-Retman
  Cc: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	leitao, peterz, wangkefeng.wang, surenb, ziy, smostafa,
	ryabinin.a.a, ubizjak, jbohac, broonie, akpm, guoweikang.kernel,
	pcc, jan.kiszka, nicolas.schier, will, andreyknvl, jhubbard, bp,
	x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

On Mon, Aug 25, 2025 at 10:24:32PM +0200, Maciej Wieczor-Retman wrote:
> ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
> Related code has multiple spots where page virtual addresses end up used
> as arguments in arithmetic operations. Combined with tag-based KASAN
> enabled, this can result in pointers that don't point where they should, or
> in logical operations not giving the expected results.
> 
> vm_reset_perms() calculates the range's start and end addresses using the
> min() and max() functions. To do that it compares pointers, but some of
> them are not tagged - the addr variable is, while the start and end
> variables aren't.
> 
> within() and within_range() can receive tagged addresses which get
> compared to untagged start and end variables.
> 
> Reset tags in addresses used as function arguments in min(), max(),
> within().
> 
> execmem_cache_add() adds tagged pointers to a maple tree structure,
> which then are incorrectly compared when walking the tree. That results
> in different pointers being returned later and page permission violation
> errors panicking the kernel.
> 
> Reset tag of the address range inserted into the maple tree inside
> execmem_cache_add().
> 
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> ---
> Changelog v5:
> - Remove the within_range() change.
> - arch_kasan_reset_tag -> kasan_reset_tag.
> 
> Changelog v4:
> - Add patch to the series.
> 
>  mm/execmem.c | 2 +-
>  mm/vmalloc.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/execmem.c b/mm/execmem.c
> index 0822305413ec..f7b7bdacaec5 100644
> --- a/mm/execmem.c
> +++ b/mm/execmem.c
> @@ -186,7 +186,7 @@ static DECLARE_WORK(execmem_cache_clean_work, execmem_cache_clean);
>  static int execmem_cache_add_locked(void *ptr, size_t size, gfp_t gfp_mask)
>  {
>  	struct maple_tree *free_areas = &execmem_cache.free_areas;
> -	unsigned long addr = (unsigned long)ptr;
> +	unsigned long addr = (unsigned long)kasan_reset_tag(ptr);

Thinking more about it, we reset the tag in execmem_alloc() anyway and return
an untagged pointer to the caller. Let's just move kasan_reset_tag() to
execmem_vmalloc() so that we always use untagged pointers. Seems more
robust to me.

>  	MA_STATE(mas, free_areas, addr - 1, addr + 1);
>  	unsigned long lower, upper;
>  	void *area = NULL;
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6dbcdceecae1..c93893fb8dd4 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3322,7 +3322,7 @@ static void vm_reset_perms(struct vm_struct *area)
>  	 * the vm_unmap_aliases() flush includes the direct map.
>  	 */
>  	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
> -		unsigned long addr = (unsigned long)page_address(area->pages[i]);
> +		unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));

This is not strictly related to execmem; there may be other users of
VM_FLUSH_RESET_PERMS.

Regardless, I wonder how this works on arm64 with tags enabled?

Also, it's not the only place in the kernel that does (unsigned
long)page_address(page). Do other sites need to reset the tag as well?

>  
>  		if (addr) {
>  			unsigned long page_size;
> -- 
> 2.50.1
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v5 07/19] mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic
  2025-08-28  9:50   ` Mike Rapoport
@ 2025-08-28 16:22     ` Maciej Wieczor-Retman
  0 siblings, 0 replies; 31+ messages in thread
From: Maciej Wieczor-Retman @ 2025-08-28 16:22 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: sohil.mehta, baohua, david, kbingham, weixugc, Liam.Howlett,
	alexandre.chartre, kas, mark.rutland, trintaeoitogc,
	axelrasmussen, yuanchu, joey.gouly, samitolvanen, joel.granados,
	graf, vincenzo.frascino, kees, ardb, thiago.bauermann, glider,
	thuth, kuan-ying.lee, pasha.tatashin, nick.desaulniers+lkml,
	vbabka, kaleshsingh, justinstitt, catalin.marinas,
	alexander.shishkin, samuel.holland, dave.hansen, corbet, xin,
	dvyukov, tglx, scott, jason.andryuk, morbo, nathan,
	lorenzo.stoakes, mingo, brgerst, kristina.martsenko, bigeasy,
	luto, jgross, jpoimboe, urezki, mhocko, ada.coupriediaz, hpa,
	leitao, peterz, wangkefeng.wang, surenb, ziy, smostafa,
	ryabinin.a.a, ubizjak, jbohac, broonie, akpm, guoweikang.kernel,
	pcc, jan.kiszka, nicolas.schier, will, andreyknvl, jhubbard, bp,
	x86, linux-doc, linux-mm, llvm, linux-kbuild, kasan-dev,
	linux-kernel, linux-arm-kernel

On 2025-08-28 at 12:50:19 +0300, Mike Rapoport wrote:
>On Mon, Aug 25, 2025 at 10:24:32PM +0200, Maciej Wieczor-Retman wrote:
>> ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
>> Related code has multiple spots where page virtual addresses end up used
>> as arguments in arithmetic operations. Combined with tag-based KASAN
>> enabled, this can result in pointers that don't point where they should, or
>> in logical operations not giving the expected results.
>> 
>> vm_reset_perms() calculates the range's start and end addresses using the
>> min() and max() functions. To do that it compares pointers, but some of
>> them are not tagged - the addr variable is, while the start and end
>> variables aren't.
>> 
>> within() and within_range() can receive tagged addresses which get
>> compared to untagged start and end variables.
>> 
>> Reset tags in addresses used as function arguments in min(), max(),
>> within().
>> 
>> execmem_cache_add() adds tagged pointers to a maple tree structure,
>> which then are incorrectly compared when walking the tree. That results
>> in different pointers being returned later and page permission violation
>> errors panicking the kernel.
>> 
>> Reset tag of the address range inserted into the maple tree inside
>> execmem_cache_add().
>> 
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>> ---
>> Changelog v5:
>> - Remove the within_range() change.
>> - arch_kasan_reset_tag -> kasan_reset_tag.
>> 
>> Changelog v4:
>> - Add patch to the series.
>> 
>>  mm/execmem.c | 2 +-
>>  mm/vmalloc.c | 2 +-
>>  2 files changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/mm/execmem.c b/mm/execmem.c
>> index 0822305413ec..f7b7bdacaec5 100644
>> --- a/mm/execmem.c
>> +++ b/mm/execmem.c
>> @@ -186,7 +186,7 @@ static DECLARE_WORK(execmem_cache_clean_work, execmem_cache_clean);
>>  static int execmem_cache_add_locked(void *ptr, size_t size, gfp_t gfp_mask)
>>  {
>>  	struct maple_tree *free_areas = &execmem_cache.free_areas;
>> -	unsigned long addr = (unsigned long)ptr;
>> +	unsigned long addr = (unsigned long)kasan_reset_tag(ptr);
>
>Thinking more about it, we reset the tag in execmem_alloc() anyway and return
>an untagged pointer to the caller. Let's just move kasan_reset_tag() to
>execmem_vmalloc() so that we always use untagged pointers. Seems more
>robust to me.

Sure, I'll test if it works and change it :)

>
>>  	MA_STATE(mas, free_areas, addr - 1, addr + 1);
>>  	unsigned long lower, upper;
>>  	void *area = NULL;
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 6dbcdceecae1..c93893fb8dd4 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -3322,7 +3322,7 @@ static void vm_reset_perms(struct vm_struct *area)
>>  	 * the vm_unmap_aliases() flush includes the direct map.
>>  	 */
>>  	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
>> -		unsigned long addr = (unsigned long)page_address(area->pages[i]);
>> +		unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));
>
>This is not strictly related to execmem; there may be other users of
>VM_FLUSH_RESET_PERMS.
>
>Regardless, I wonder how this works on arm64 with tags enabled?

Hmm, good point, I'll check in qemu whether this function is called on arm64.

However, this issue didn't pop up for me before 6.14, when EXECMEM_ROX was
enabled, so maybe it just didn't hit tagged pages before? I'll try to recheck
that on x86 too.

>Also, it's not the only place in the kernel that does (unsigned
>long)page_address(page). Do other sites need to reset the tag as well?

This place is special in the sense that it does "start = min(addr, start)" and
"end = max(addr, end)" just a few lines later. start and end always seem to be
untagged, while addr is sometimes tagged. So with software KASAN and its
vmalloc support enabled, the final start and end values come out wrong, and a
page permission error then shows up somewhere else.
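
A small user-space sketch of that comparison problem (addresses and tag are
invented; tag bits 60:57 as in the x86 layout):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* stamp a 4-bit tag into bits 60:57 */
static uint64_t set_tag(uint64_t addr, uint64_t tag)
{
	return (addr & ~(0xfULL << 57)) | (tag << 57);
}

int main(void)
{
	uint64_t start = 0xffff888000001000ULL;		       /* untagged */
	uint64_t addr  = set_tag(0xffff888000002000ULL, 0x5); /* tagged */

	/* prints 0xebff888000002000: the tagged pointer compares lower even
	 * though it points higher, so the computed range start is wrong
	 * unless the tag is reset first */
	printf("min = %#" PRIx64 "\n", MIN(addr, start));
	return 0;
}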

I don't think all the other page_address(page) sites need resetting, but I'll
double check whether there is any pointer arithmetic there.

>
>>  
>>  		if (addr) {
>>  			unsigned long page_size;
>> -- 
>> 2.50.1
>> 
>
>-- 
>Sincerely yours,
>Mike.

-- 
Kind regards
Maciej Wieczór-Retman



end of thread

Thread overview: 31+ messages:
2025-08-25 20:24 [PATCH v5 00/19] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
2025-08-26 19:35   ` Catalin Marinas
2025-08-27  6:26     ` Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 02/19] kasan: sw_tags: Support tag widths less than 8 bits Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 03/19] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 04/19] x86: Add arch specific kasan functions Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 05/19] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 06/19] x86: Reset tag for virtual to physical address conversions Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 07/19] mm: x86: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
2025-08-28  9:50   ` Mike Rapoport
2025-08-28 16:22     ` Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 08/19] x86: Physical address comparisons in fill_p*d/pte Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 09/19] x86: KASAN raw shadow memory PTE init Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 10/19] x86: LAM compatible non-canonical definition Maciej Wieczor-Retman
2025-08-25 20:59   ` Samuel Holland
2025-08-27  6:32     ` Maciej Wieczor-Retman
2025-08-25 21:36   ` Dave Hansen
2025-08-26  8:08     ` Maciej Wieczor-Retman
2025-08-27  0:46       ` Samuel Holland
2025-08-27  6:08         ` Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 11/19] x86: LAM initialization Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 12/19] x86: Minimal SLAB alignment Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 13/19] kasan: x86: Handle int3 for inline KASAN reports Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 14/19] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
2025-08-26 19:35   ` Catalin Marinas
2025-08-25 20:24 ` [PATCH v5 15/19] kasan: x86: Apply multishot to the inline report handler Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 16/19] kasan: x86: Logical bit shift for kasan_mem_to_shadow Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 17/19] mm: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 18/19] mm: Unpoison vms[area] addresses with a common tag Maciej Wieczor-Retman
2025-08-25 20:24 ` [PATCH v5 19/19] x86: Make software tag-based kasan available Maciej Wieczor-Retman
