* [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
@ 2026-01-12 17:26 Maciej Wieczor-Retman
2026-01-12 17:27 ` [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
From: Maciej Wieczor-Retman @ 2026-01-12 17:26 UTC (permalink / raw)
To: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, akpm, Liam.Howlett, kees, jan.kiszka,
thomas.lendacky, jeremy.linton, dvyukov, axelrasmussen, leitao,
ryabinin.a.a, bigeasy, peterz, mark.rutland, urezki, brgerst, hpa,
mhocko, andreyknvl, weixugc, kbingham, vbabka, nathan,
trintaeoitogc, samitolvanen, tglx, thuth, surenb,
anshuman.khandual, smostafa, yuanchu, ada.coupriediaz,
dave.hansen, kas, nick.desaulniers+lkml, david, bp, ardb,
justinstitt
Cc: linux-kernel, linux-mm, kasan-dev, llvm, linux-arm-kernel,
linux-doc, linux-kbuild, x86, m.wieczorretman
======= Introduction
The patchset aims to add a KASAN tag-based mode for the x86 architecture
with the help of the new CPU feature called Linear Address Masking
(LAM). The main improvement introduced by the series is 2x lower memory
usage compared to KASAN's generic mode, the only mode currently
available on x86. The tag-based mode may also find errors that the
generic mode can't because of differences in how these modes operate.
======= How does KASAN's tag-based mode work?
When enabled, memory accesses and allocations are augmented by the
compiler during kernel compilation. Instrumentation functions are added
to each memory allocation and each pointer dereference.
The allocation-related functions generate a random tag and save it in
two places: in the shadow memory that maps to the allocated memory, and
in the top bits of the pointer that points to the allocated memory.
Storing the tag in the top of the pointer is possible because of
Top-Byte Ignore (TBI) on the arm64 architecture and LAM on x86.
The access-related functions compare the tag stored in the pointer with
the one stored in shadow memory. If the tags don't match, an
out-of-bounds error must have occurred, and an error report is
generated.
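Conceptually, the generated checks boil down to something like the
following sketch (simplified; kasan_mem_to_shadow(), kasan_reset_tag()
and kasan_report() are the real kernel helpers, while the function
itself and the fixed top-byte tag position are illustrative):

static __always_inline void check_access(const void *addr, size_t size,
					 bool write, unsigned long ip)
{
	u8 ptr_tag = (u64)addr >> 56;	/* tag from the pointer's top byte */
	u8 *shadow = (u8 *)kasan_mem_to_shadow(kasan_reset_tag(addr));

	if (ptr_tag != *shadow)		/* tag saved at allocation time */
		kasan_report(addr, size, write, ip);
}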
The general idea for the tag-based mode is very well explained in the
series with the original implementation [1].
[1] https://lore.kernel.org/all/cover.1544099024.git.andreyknvl@google.com/
======= Differences summary compared to the arm64 tag-based mode
- Tag width:
- Tag width influences the chance of missing an error due to two
tags from different allocations having the same value. The
bigger the possible range of tag values, the lower the chance
of that happening.
- Shortening the tag width from 8 bits to 4 can help with memory
usage, but it also increases the chance of not reporting an
error. 4-bit tags have a ~7% chance of such a tag collision.
- Address masking mechanism
- TBI in arm64 allows for storing metadata in the top 8 bits of
the virtual address.
- LAM in x86 allows storing tags in bits [62:57] of the pointer.
To maximize memory savings the tag width is reduced to bits
[60:57] (see the sketch after this list).
- Inline mode mismatch reporting
- Arm64 inserts a BRK instruction to pass metadata about a tag
mismatch to the KASAN report.
- Right now on x86 the INT3 instruction is used for the same
purpose. The attempt to move it over to use UD1 is already
implemented and tested but relies on another series that needs
merging first. Therefore this patch will be posted separately
once the dependency is satisfied by being merged upstream.
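As an illustration of the x86 tag placement described in the list above,
setting and getting a tag could look roughly like this (a sketch only;
the X86_TAG_* names are made up here and may not match the series'
actual macros):

#define X86_TAG_SHIFT	57
#define X86_TAG_MASK	(0xFULL << X86_TAG_SHIFT)	/* bits [60:57] */

static inline void *__tag_set(void *addr, u8 tag)
{
	u64 masked = (u64)addr & ~X86_TAG_MASK;

	return (void *)(masked | (((u64)tag & 0xF) << X86_TAG_SHIFT));
}

static inline u8 __tag_get(const void *addr)
{
	return ((u64)addr & X86_TAG_MASK) >> X86_TAG_SHIFT;
}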
======= Testing
All KUnit tests were checked for both software tag-based and generic
KASAN after making the changes.
In generic mode (both with these patches and without) the results were:
kasan: pass:61 fail:1 skip:14 total:76
Totals: pass:61 fail:1 skip:14 total:76
not ok 1 kasan
and for software tags:
kasan: pass:65 fail:1 skip:10 total:76
Totals: pass:65 fail:1 skip:10 total:76
not ok 1 kasan
At the time of testing, the one failing case was also present in
generic mode without this patchset applied, which points to something
else being at fault. The test case in question concerns an out-of-bounds
error in strscpy() not getting caught.
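For reference, the failing case is of roughly this shape (a sketch of
the kind of KUnit check involved, not the exact test code):

	static const char src[] = "a string longer than sixteen bytes";
	char *ptr = kmalloc(16, GFP_KERNEL);

	/* Writing past the 16-byte allocation should be caught: */
	KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src, 32));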
======= Benchmarks [1]
All tests were run on a Sierra Forest server platform. The only
differences between the tests were the kernel options:
- CONFIG_KASAN
- CONFIG_KASAN_GENERIC
- CONFIG_KASAN_SW_TAGS
- CONFIG_KASAN_INLINE [1]
- CONFIG_KASAN_OUTLINE
Boot time (until login prompt):
* 02:55 for clean kernel
* 05:42 / 06:32 for generic KASAN (inline/outline)
* 05:58 for tag-based KASAN (outline) [2]
Total memory usage (512 GB present on the system; figures are total
minus MemAvailable just after boot):
* 12.56 GB for clean kernel
* 81.74 GB for generic KASAN
* 44.39 GB for tag-based KASAN
Kernel size:
* 14 MB for clean kernel
* 24.7 MB / 19.5 MB for generic KASAN (inline/outline)
* 27.1 MB / 18.1 MB for tag-based KASAN (inline/outline)
Work under load time comparison (compiling the mainline kernel, 200 cores):
* 62s for clean kernel
* 171s / 125s for generic KASAN (outline/inline)
* 145s for tag-based KASAN (outline) [2]
[1] Currently inline mode doesn't work on x86 due to things missing in
the compiler. I have written a patch for clang that seems to fix the
inline mode and I was able to boot and check that all patches regarding
the inline mode work as expected. My hope is to post the patch to LLVM
once this series is completed, and then make inline mode available in
the kernel config.
[2] While I was able to boot the inline tag-based kernel with my
compiler changes in a simulated environment, due to toolchain
difficulties I couldn't get it to boot on the machine I had access to.
Also, boot time results from the simulation seem too good to be true,
and they're much too bad for the generic case to be believable. Therefore
I'm posting only results from the physical server platform.
======= Compilation
Clang was used to compile the series (make LLVM=1) since gcc doesn't
seem to have support for KASAN tag-based compiler instrumentation on
x86.
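For reference, a build could look like this (illustrative; any way of
enabling the listed config options works):

make LLVM=1 defconfig
./scripts/config -e KASAN -e KASAN_SW_TAGS -e KASAN_OUTLINE
make LLVM=1 olddefconfig
make LLVM=1 -j$(nproc)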
======= Dependencies
The series is based on 6.19-rc5.
======= Previous versions
v7: https://lore.kernel.org/all/cover.1765386422.git.m.wieczorretman@pm.me/
v6: https://lore.kernel.org/all/cover.1761763681.git.m.wieczorretman@pm.me/
v5: https://lore.kernel.org/all/cover.1756151769.git.maciej.wieczor-retman@intel.com/
v4: https://lore.kernel.org/all/cover.1755004923.git.maciej.wieczor-retman@intel.com/
v3: https://lore.kernel.org/all/cover.1743772053.git.maciej.wieczor-retman@intel.com/
v2: https://lore.kernel.org/all/cover.1739866028.git.maciej.wieczor-retman@intel.com/
v1: https://lore.kernel.org/all/cover.1738686764.git.maciej.wieczor-retman@intel.com/
=== (two fix patches were split off after v6) (merged into mm-unstable)
v1: https://lore.kernel.org/all/cover.1762267022.git.m.wieczorretman@pm.me/
v2: https://lore.kernel.org/all/cover.1764685296.git.m.wieczorretman@pm.me/
v3: https://lore.kernel.org/all/cover.1764874575.git.m.wieczorretman@pm.me/
v4: https://lore.kernel.org/all/cover.1764945396.git.m.wieczorretman@pm.me/
Changes v8:
- Detached the UD1/INT3 inline patch from the series so the whole
patchset can be merged without waiting on other dependency series. For
now, with the lack of compiler support for the inline mode, that patch
didn't work anyway, so this delay is not an issue.
- Rebased patches onto 6.19-rc5.
- Added acked-by tag to "kasan: arm64: x86: Make special tags arch
specific".
Changes v7:
- Rebased the series onto Peter Zijlstra's "WARN() hackery" v2 patchset.
- Fix flipped memset arguments in "x86/kasan: KASAN raw shadow memory
PTE init".
- Reorder tag width defines on arm64 to avoid redefinition warnings.
- Split off the pcpu unpoison patches into a separate fix oriented
series.
- Redid the canonicality checks so they work for KVM too (the
__canonical_address() function wasn't changed previously).
- A lot of fixes pointed out by Alexander in his great review:
- Fixed "x86/mm: Physical address comparisons in fill_p*d/pte"
- Merged "Support tag widths less than 8 bits" and "Make special
tags arch specific".
- Added comments and extended patch messages for patches
"x86/kasan: Make software tag-based kasan available" and
"mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic",
- Fixed KASAN_TAG_MASK definition order so all patches compile
individually.
- Renamed kasan_inline.c to kasan_sw_tags.c.
Changes v6:
- Initialize sw-tags only when LAM is available.
- Move inline mode to use UD1 instead of INT3
- Remove inline multishot patch.
- Fix the canonical check to work for user addresses too.
- Revise patch names and messages to align to tip tree rules.
- Fix vdso compilation issue.
Changes v5:
- Fix a bunch of arm64 compilation errors I didn't catch earlier.
Thank you, Ada, for testing the series!
- Simplify the usage of the tag handling x86 functions (virt_to_page,
phys_addr etc.).
- Remove within() and within_range() from the EXECMEM_ROX patch.
Changes v4:
- Revert the x86 kasan_mem_to_shadow() scheme to the same one used in
generic KASAN. Keep the arithmetic shift idea for KASAN in general
since it makes more sense on arm64 and risc-v.
- Fix inline mode but leave it unavailable until a complementary
compiler patch can be merged.
- Apply Dave Hansen's comments on series formatting, patch style and
code simplifications.
Changes v3:
- Remove the runtime_const patch and setup a unified offset for both 5
and 4 paging levels.
- Add a fix for inline mode on x86 tag-based KASAN. Add a handler for
int3 that is generated on inline tag mismatches.
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion.
- Remove patches 2 and 3 since they relate to risc-v and this series
adds only x86 related things.
- Reorder __tag_*() functions so they're before arch_kasan_*(). Remove
CONFIG_KASAN condition from __tag_set().
Changes v2:
- Split the series into one adding KASAN tag-based mode (this one) and
another one that adds the dense mode to KASAN (will post later).
- Removed exporting kasan_poison() and used a wrapper instead in
kasan_init_64.c
- Prepended the series with 4 patches from the risc-v series and applied
review comments to the first patch as the rest are already reviewed.
Maciej Wieczor-Retman (12):
kasan: Fix inline mode for x86 tag-based mode
x86/kasan: Add arch specific kasan functions
x86/mm: Reset tag for virtual to physical address conversions
mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic
x86/mm: Physical address comparisons in fill_p*d/pte
x86/kasan: KASAN raw shadow memory PTE init
x86/mm: LAM compatible non-canonical definition
x86/mm: LAM initialization
x86: Minimal SLAB alignment
arm64: Unify software tag-based KASAN inline recovery path
x86/kasan: Logical bit shift for kasan_mem_to_shadow
x86/kasan: Make software tag-based kasan available
Samuel Holland (2):
kasan: sw_tags: Use arithmetic shift for shadow computation
kasan: arm64: x86: Make special tags arch specific
Documentation/arch/arm64/kasan-offsets.sh | 8 ++-
Documentation/arch/x86/x86_64/mm.rst | 6 ++-
MAINTAINERS | 2 +-
arch/arm64/Kconfig | 10 ++--
arch/arm64/include/asm/kasan-tags.h | 14 +++++
arch/arm64/include/asm/kasan.h | 2 -
arch/arm64/include/asm/memory.h | 14 ++++-
arch/arm64/include/asm/uaccess.h | 1 +
arch/arm64/kernel/traps.c | 17 +------
arch/arm64/mm/kasan_init.c | 7 ++-
arch/x86/Kconfig | 4 ++
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/cache.h | 4 ++
arch/x86/include/asm/kasan-tags.h | 9 ++++
arch/x86/include/asm/kasan.h | 62 ++++++++++++++++++++++-
arch/x86/include/asm/page.h | 23 ++++++++-
arch/x86/include/asm/page_64.h | 1 +
arch/x86/kernel/head_64.S | 3 ++
arch/x86/mm/init.c | 3 ++
arch/x86/mm/init_64.c | 11 ++--
arch/x86/mm/kasan_init_64.c | 25 +++++++--
arch/x86/mm/physaddr.c | 2 +
include/linux/kasan-tags.h | 21 ++++++--
include/linux/kasan.h | 13 +++--
include/linux/mm.h | 6 +--
include/linux/mmzone.h | 2 +-
include/linux/page-flags-layout.h | 9 +---
lib/Kconfig.kasan | 3 +-
mm/execmem.c | 9 +++-
mm/kasan/report.c | 37 ++++++++++++--
mm/vmalloc.c | 7 ++-
scripts/Makefile.kasan | 3 ++
scripts/gdb/linux/kasan.py | 5 +-
scripts/gdb/linux/mm.py | 5 +-
34 files changed, 277 insertions(+), 72 deletions(-)
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h
--
2.52.0
* [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation
2026-01-12 17:26 [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
@ 2026-01-12 17:27 ` Maciej Wieczor-Retman
2026-01-15 22:42 ` Andrey Ryabinin
2026-01-12 17:27 ` [PATCH v8 03/14] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
From: Maciej Wieczor-Retman @ 2026-01-12 17:27 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon, Jonathan Corbet, Andrey Ryabinin,
Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Andrew Morton, Jan Kiszka, Kieran Bingham,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt
Cc: m.wieczorretman, Samuel Holland, Maciej Wieczor-Retman,
linux-arm-kernel, linux-doc, linux-kernel, kasan-dev, linux-mm,
llvm
From: Samuel Holland <samuel.holland@sifive.com>
Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
canonical kernel addresses into non-canonical addresses by clearing the
high KASAN_SHADOW_SCALE_SHIFT bits. The value of KASAN_SHADOW_OFFSET is
then chosen so that the addition results in a canonical address for the
shadow memory.
For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
checks[1], which must only attempt to dereference canonical addresses.
However, for KASAN_SW_TAGS there is some freedom to change the algorithm
without breaking the ABI. Because TBI is enabled for kernel addresses,
the top bits of shadow memory addresses computed during tag checks are
irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
This is demonstrated by the fact that LLVM uses a logical right shift in
the tag check fast path[2] but an sbfx (signed bitfield extract)
instruction in the slow path[3] without causing any issues.
Using an arithmetic shift in kasan_mem_to_shadow() provides a number of
benefits:
1) The memory layout doesn't change but is easier to understand.
KASAN_SHADOW_OFFSET becomes a canonical memory address, and the shifted
pointer becomes a negative offset, so KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_END regardless of the shift amount or the size of the
virtual address space.
2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
instruction to load instead of two. Since it must be loaded in each
function with a tag check, this decreases kernel text size by 0.5%.
3) This shift and the sign extension from kasan_reset_tag() can be
combined into a single sbfx instruction. When this same algorithm change
is applied to the compiler, it removes an instruction from each inline
tag check, further reducing kernel text size by an additional 4.6%.
These benefits extend to other architectures as well. On RISC-V, where
the baseline ISA has neither shifted addition nor an equivalent to the
sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
combines two consecutive right shifts.
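As a worked example of point 1 (values assumed for illustration: arm64,
KASAN_SHADOW_SCALE_SHIFT == 4 and the VA_BITS_48 offset of
0xffff800000000000 chosen below):

	unsigned long addr = 0xffff000012345678UL; /* tag already reset to 0xff */
	unsigned long shadow;

	/* The arithmetic shift preserves the sign bits: 0xfffff00001234567. */
	shadow = ((long)addr >> 4) + 0xffff800000000000UL;

	/*
	 * shadow == 0xffff700001234567: the shifted pointer acts as a
	 * negative offset, so the result lands just below
	 * KASAN_SHADOW_OFFSET == KASAN_SHADOW_END.
	 */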
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Co-developed-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
Changelog v7: (Maciej)
- Change UL to ULL in report.c to fix some compilation warnings.
Changelog v6: (Maciej)
- Add Catalin's acked-by.
- Move x86 gdb snippet here from the last patch.
Changelog v5: (Maciej)
- (u64) -> (unsigned long) in report.c
Changelog v4: (Maciej)
- Revert x86 to signed mem_to_shadow mapping.
- Remove the last two paragraphs since they were just a poorer
duplication of the comments in kasan_non_canonical_hook().
Changelog v3: (Maciej)
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion. Settled on overflow on both ranges and separate checks for
x86 and arm.
Changelog v2: (Maciej)
- Correct address range that's checked in kasan_non_canonical_hook().
Adjust the comment inside.
- Remove part of comment from arch/arm64/include/asm/memory.h.
- Append patch message paragraph about the overflow in
kasan_non_canonical_hook().
Documentation/arch/arm64/kasan-offsets.sh | 8 +++--
arch/arm64/Kconfig | 10 +++----
arch/arm64/include/asm/memory.h | 14 ++++++++-
arch/arm64/mm/kasan_init.c | 7 +++--
include/linux/kasan.h | 10 +++++--
mm/kasan/report.c | 36 ++++++++++++++++++++---
scripts/gdb/linux/kasan.py | 5 +++-
scripts/gdb/linux/mm.py | 5 ++--
8 files changed, 76 insertions(+), 19 deletions(-)
diff --git a/Documentation/arch/arm64/kasan-offsets.sh b/Documentation/arch/arm64/kasan-offsets.sh
index 2dc5f9e18039..ce777c7c7804 100644
--- a/Documentation/arch/arm64/kasan-offsets.sh
+++ b/Documentation/arch/arm64/kasan-offsets.sh
@@ -5,8 +5,12 @@
print_kasan_offset () {
printf "%02d\t" $1
- printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
- - (1 << (64 - 32 - $2)) ))
+ if [[ $2 -ne 4 ]]; then
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+ - (1 << (64 - 32 - $2)) ))
+ else
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) ))
+ fi
}
echo KASAN_SHADOW_SCALE_SHIFT = 3
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 93173f0a09c7..c1b7261cdb96 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -434,11 +434,11 @@ config KASAN_SHADOW_OFFSET
default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
- default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
- default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
- default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
- default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
- default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+ default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+ default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
+ default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+ default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+ default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff
config UNWIND_TABLES
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9d54b2ea49d6..f127fbf691ac 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -89,7 +89,15 @@
*
* KASAN_SHADOW_END is defined first as the shadow address that corresponds to
* the upper bound of possible virtual kernel memory addresses UL(1) << 64
- * according to the mapping formula.
+ * according to the mapping formula. For Generic KASAN, the address in the
+ * mapping formula is treated as unsigned (part of the compiler's ABI), so the
+ * end of the shadow memory region is at a large positive offset from
+ * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
+ * formula is treated as signed. Since all kernel addresses are negative, they
+ * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
+ * itself the end of the shadow memory region. (User pointers are positive and
+ * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
+ * not allocated for them.)
*
* KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
* memory start must map to the lowest possible kernel virtual memory address
@@ -100,7 +108,11 @@
*/
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_GENERIC
#define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#else
+#define KASAN_SHADOW_END KASAN_SHADOW_OFFSET
+#endif
#define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
#define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual)
#define PAGE_END KASAN_SHADOW_START
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index abeb81bf6ebd..937f6eb8115b 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
/* The early shadow maps everything to a single page of zeroes */
asmlinkage void __init kasan_early_init(void)
{
- BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
- KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
+ KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ else
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9c6ac4b62eb9..0f65e88cc3f6 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -62,8 +62,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
#ifndef kasan_mem_to_shadow
static inline void *kasan_mem_to_shadow(const void *addr)
{
- return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
- + KASAN_SHADOW_OFFSET;
+ void *scaled;
+
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+ else
+ scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+ return KASAN_SHADOW_OFFSET + scaled;
}
#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 62c01b4527eb..b5beb1b10bd2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;
/*
- * All addresses that came as a result of the memory-to-shadow mapping
- * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+ * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
+ * both x86 and arm64). Thus, the possible shadow addresses (even for
+ * bogus pointers) belong to a single contiguous region that is the
+ * result of kasan_mem_to_shadow() applied to the whole address space.
*/
- if (addr < KASAN_SHADOW_OFFSET)
- return;
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
+ return;
+ }
+
+ /*
+ * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
+ * arithmetic shift. Normally, this would make checking for a possible
+ * shadow address complicated, as the shadow address computation
+ * operation would overflow only for some memory addresses. However, due
+ * to the chosen KASAN_SHADOW_OFFSET values and the fact the
+ * kasan_mem_to_shadow() only operates on pointers with the tag reset,
+ * the overflow always happens.
+ *
+ * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
+ * possible shadow addresses belong to a region that is the result of
+ * kasan_mem_to_shadow() applied to the memory range
+ * [0xFF000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
+ * resulting possible shadow region is contiguous, as the overflow
+ * happens for both 0xFF000000000000 and 0xFFFFFFFFFFFFFFFF.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFULL << 56)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
+ return;
+ }
orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index 56730b3fde0b..4b86202b155f 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -7,7 +7,8 @@
#
import gdb
-from linux import constants, mm
+from linux import constants, utils, mm
+from ctypes import c_int64 as s64
def help():
t = """Usage: lx-kasan_mem_to_shadow [Hex memory addr]
@@ -39,6 +40,8 @@ class KasanMemToShadow(gdb.Command):
else:
help()
def kasan_mem_to_shadow(self, addr):
+ if constants.LX_CONFIG_KASAN_SW_TAGS and not utils.is_target_arch('x86'):
+ addr = s64(addr).value
return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET
KasanMemToShadow()
diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
index 7571aebbe650..2e63f3dedd53 100644
--- a/scripts/gdb/linux/mm.py
+++ b/scripts/gdb/linux/mm.py
@@ -110,12 +110,13 @@ class aarch64_page_ops():
self.KERNEL_END = gdb.parse_and_eval("_end")
if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
+ self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
if constants.LX_CONFIG_KASAN_GENERIC:
self.KASAN_SHADOW_SCALE_SHIFT = 3
+ self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
else:
self.KASAN_SHADOW_SCALE_SHIFT = 4
- self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
- self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
+ self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
else:
self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
--
2.52.0
* [PATCH v8 03/14] kasan: Fix inline mode for x86 tag-based mode
2026-01-12 17:26 [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2026-01-12 17:27 ` [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
@ 2026-01-12 17:27 ` Maciej Wieczor-Retman
2026-01-16 13:33 ` Andrey Ryabinin
2026-01-12 18:29 ` [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
From: Maciej Wieczor-Retman @ 2026-01-12 17:27 UTC (permalink / raw)
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Nathan Chancellor,
Nicolas Schier, Nick Desaulniers, Bill Wendling, Justin Stitt
Cc: m.wieczorretman, Maciej Wieczor-Retman, kasan-dev, linux-kbuild,
linux-kernel, llvm
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
The LLVM compiler uses the hwasan-instrument-with-calls parameter to
set up inline or outline mode in tag-based KASAN. If zeroed, the
instrumentation implementation is pasted into each relevant location
along with KASAN related constants during compilation. If set to one,
all instrumentation is done with function calls instead.
The default hwasan-instrument-with-calls value for the x86 architecture
in the compiler is "1", unlike on other architectures. Because of this,
enabling inline mode in software tag-based KASAN doesn't work on x86, as
the kernel build script doesn't zero out the parameter and always sets
up the outline mode.
Explicitly zero out hwasan-instrument-with-calls when enabling inline
mode in tag-based KASAN.
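With the change, the inline-mode compiler invocation roughly becomes
(illustrative and abridged; the kernel passes the parameters via -mllvm,
and the mapping offset value depends on the configuration):

clang -fsanitize=kernel-hwaddress \
      -mllvm -hwasan-mapping-offset=0xffff800000000000 \
      -mllvm -hwasan-instrument-with-calls=0 ...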
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
---
Changelog v7:
- Add Alexander's Reviewed-by tag.
Changelog v6:
- Add Andrey's Reviewed-by tag.
Changelog v3:
- Add this patch to the series.
scripts/Makefile.kasan | 3 +++
1 file changed, 3 insertions(+)
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 0ba2aac3b8dc..e485814df3e9 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -76,8 +76,11 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress
RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \
-Zsanitizer-recover=kernel-hwaddress
+# LLVM sets hwasan-instrument-with-calls to 1 on x86 by default. Set it to 0
+# when inline mode is enabled.
ifdef CONFIG_KASAN_INLINE
kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+ kasan_params += hwasan-instrument-with-calls=0
else
kasan_params += hwasan-instrument-with-calls=1
endif
--
2.52.0
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 17:26 [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2026-01-12 17:27 ` [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
2026-01-12 17:27 ` [PATCH v8 03/14] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
@ 2026-01-12 18:29 ` Andrew Morton
2026-01-12 20:08 ` Maciej Wieczór-Retman
2026-01-13 1:44 ` Andrey Konovalov
2026-01-19 16:33 ` Andrey Ryabinin
From: Andrew Morton @ 2026-01-12 18:29 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, Liam.Howlett, kees, jan.kiszka, thomas.lendacky,
jeremy.linton, dvyukov, axelrasmussen, leitao, ryabinin.a.a,
bigeasy, peterz, mark.rutland, urezki, brgerst, hpa, mhocko,
andreyknvl, weixugc, kbingham, vbabka, nathan, trintaeoitogc,
samitolvanen, tglx, thuth, surenb, anshuman.khandual, smostafa,
yuanchu, ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml,
david, bp, ardb, justinstitt, linux-kernel, linux-mm, kasan-dev,
llvm, linux-arm-kernel, linux-doc, linux-kbuild, x86
On Mon, 12 Jan 2026 17:26:29 +0000 Maciej Wieczor-Retman <m.wieczorretman@pm.me> wrote:
> The patchset aims to add a KASAN tag-based mode for the x86 architecture
> with the help of the new CPU feature called Linear Address Masking
> (LAM). The main improvement introduced by the series is 2x lower memory
> usage compared to KASAN's generic mode, the only mode currently
> available on x86. The tag-based mode may also find errors that the
> generic mode can't because of differences in how these modes operate.
Well this is a hearty mixture of arm, x86 and MM. I guess that means
mm.git.
The review process seems to be proceeding OK so I'll add this to
mm.git's mm-new branch, which is not included in linux-next. I'll aim
to hold it there for a week while people check the patches over and
send out their acks (please). Then I hope I can move it into mm.git's
mm-unstable branch where it will receive linux-next exposure.
> [1] Currently inline mode doesn't work on x86 due to things missing in
> the compiler. I have written a patch for clang that seems to fix the
> inline mode and I was able to boot and check that all patches regarding
> the inline mode work as expected. My hope is to post the patch to LLVM
> once this series is completed, and then make inline mode available in
> the kernel config.
>
> [2] While I was able to boot the inline tag-based kernel with my
> compiler changes in a simulated environment, due to toolchain
> difficulties I couldn't get it to boot on the machine I had access to.
> Also, boot time results from the simulation seem too good to be true,
> and they're much too bad for the generic case to be believable. Therefore
> I'm posting only results from the physical server platform.
>
> ======= Compilation
> Clang was used to compile the series (make LLVM=1) since gcc doesn't
> seem to have support for KASAN tag-based compiler instrumentation on
> x86.
OK, known issues and they are understandable. With this patchset is
there any way in which our testers can encounter these things? If so
can we make changes to protect them from hitting known issues?
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 18:29 ` [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
@ 2026-01-12 20:08 ` Maciej Wieczór-Retman
2026-01-12 20:53 ` Andrew Morton
2026-01-12 20:27 ` Dave Hansen
2026-01-13 11:47 ` Borislav Petkov
From: Maciej Wieczór-Retman @ 2026-01-12 20:08 UTC (permalink / raw)
To: Andrew Morton
Cc: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, Liam.Howlett, kees, jan.kiszka, thomas.lendacky,
jeremy.linton, dvyukov, axelrasmussen, leitao, ryabinin.a.a,
bigeasy, peterz, mark.rutland, urezki, brgerst, hpa, mhocko,
andreyknvl, weixugc, kbingham, vbabka, nathan, trintaeoitogc,
samitolvanen, tglx, thuth, surenb, anshuman.khandual, smostafa,
yuanchu, ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml,
david, bp, ardb, justinstitt, linux-kernel, linux-mm, kasan-dev,
llvm, linux-arm-kernel, linux-doc, linux-kbuild, x86
On 2026-01-12 at 10:29:57 -0800, Andrew Morton wrote:
>On Mon, 12 Jan 2026 17:26:29 +0000 Maciej Wieczor-Retman <m.wieczorretman@pm.me> wrote:
>
>> The patchset aims to add a KASAN tag-based mode for the x86 architecture
>> with the help of the new CPU feature called Linear Address Masking
>> (LAM). The main improvement introduced by the series is 2x lower memory
>> usage compared to KASAN's generic mode, the only mode currently
>> available on x86. The tag-based mode may also find errors that the
>> generic mode can't because of differences in how these modes operate.
>
>Well this is a hearty mixture of arm, x86 and MM. I guess that means
>mm.git.
>
>The review process seems to be proceeding OK so I'll add this to
>mm.git's mm-new branch, which is not included in linux-next. I'll aim
>to hold it there for a week while people check the patches over and
>send out their acks (please). Then I hope I can move it into mm.git's
>mm-unstable branch where it will receive linux-next exposure.
Thank you :)
>
>> [1] Currently inline mode doesn't work on x86 due to things missing in
>> the compiler. I have written a patch for clang that seems to fix the
>> inline mode and I was able to boot and check that all patches regarding
>> the inline mode work as expected. My hope is to post the patch to LLVM
>> once this series is completed, and then make inline mode available in
>> the kernel config.
>>
>> [2] While I was able to boot the inline tag-based kernel with my
>> compiler changes in a simulated environment, due to toolchain
>> difficulties I couldn't get it to boot on the machine I had access to.
>> Also, boot time results from the simulation seem too good to be true,
>> and they're much too bad for the generic case to be believable. Therefore
>> I'm posting only results from the physical server platform.
>>
>> ======= Compilation
>> Clang was used to compile the series (make LLVM=1) since gcc doesn't
>> seem to have support for KASAN tag-based compiler instrumentation on
>> x86.
>
>OK, known issues and they are understandable. With this patchset is
>there any way in which our testers can encounter these things? If so
>can we make changes to protect them from hitting known issues?
The gcc documentation states that -fsanitize=kernel-hwaddress is
similar to -fsanitize=hwaddress, which only works on AArch64. That
hints that it shouldn't work.
But while the kernel compiles fine with gcc with KASAN sw_tags enabled,
at least in my patched qemu it doesn't run. I remember Ada Couprie Diaz
mentioning that passing -march=arrowlake might help since the tag
support seems to be based on the arch.
I'll check if there's a non-hacky way to have gcc work too, but perhaps,
to minimize hitting known issues, for now HAVE_ARCH_KASAN_SW_TAGS should
be locked behind both ADDRESS_MASKING and CC_IS_CLANG in the Kconfig?
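Something along these lines (a sketch only, not tested):

# arch/x86/Kconfig
config X86
	...
	select HAVE_ARCH_KASAN_SW_TAGS	if ADDRESS_MASKING && CC_IS_CLANG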
--
Kind regards
Maciej Wieczór-Retman
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 18:29 ` [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
2026-01-12 20:08 ` Maciej Wieczór-Retman
@ 2026-01-12 20:27 ` Dave Hansen
2026-01-13 11:47 ` Borislav Petkov
From: Dave Hansen @ 2026-01-12 20:27 UTC (permalink / raw)
To: Andrew Morton, Maciej Wieczor-Retman
Cc: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, Liam.Howlett, kees, jan.kiszka, thomas.lendacky,
jeremy.linton, dvyukov, axelrasmussen, leitao, ryabinin.a.a,
bigeasy, peterz, mark.rutland, urezki, brgerst, hpa, mhocko,
andreyknvl, weixugc, kbingham, vbabka, nathan, trintaeoitogc,
samitolvanen, tglx, thuth, surenb, anshuman.khandual, smostafa,
yuanchu, ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml,
david, bp, ardb, justinstitt, linux-kernel, linux-mm, kasan-dev,
llvm, linux-arm-kernel, linux-doc, linux-kbuild, x86
On 1/12/26 10:29, Andrew Morton wrote:
> On Mon, 12 Jan 2026 17:26:29 +0000 Maciej Wieczor-Retman <m.wieczorretman@pm.me> wrote:
>> The patchset aims to add a KASAN tag-based mode for the x86 architecture
>> with the help of the new CPU feature called Linear Address Masking
>> (LAM). The main improvement introduced by the series is 2x lower memory
>> usage compared to KASAN's generic mode, the only mode currently
>> available on x86. The tag-based mode may also find errors that the
>> generic mode can't because of differences in how these modes operate.
> Well this is a hearty mixture of arm, x86 and MM. I guess that means
> mm.git.
>
> The review process seems to be proceeding OK so I'll add this to
> mm.git's mm-new branch, which is not included in linux-next. I'll aim
> to hold it there for a week while people check the patches over and
> send out their acks (please). Then I hope I can move it into mm.git's
> mm-unstable branch where it will receive linux-next exposure.
Yeah, it'll be good to get it some more testing exposure.
But, we definitely don't want it going upstream until it's more
thoroughly reviewed than where it stands now. Maciej, this would be a good time to
make sure you have a good idea who needs to review this and go rattle
some cages.
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 20:08 ` Maciej Wieczór-Retman
@ 2026-01-12 20:53 ` Andrew Morton
2026-01-13 1:47 ` Andrey Konovalov
From: Andrew Morton @ 2026-01-12 20:53 UTC (permalink / raw)
To: Maciej Wieczór-Retman
Cc: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, Liam.Howlett, kees, jan.kiszka, thomas.lendacky,
jeremy.linton, dvyukov, axelrasmussen, leitao, ryabinin.a.a,
bigeasy, peterz, mark.rutland, urezki, brgerst, hpa, mhocko,
andreyknvl, weixugc, kbingham, vbabka, nathan, trintaeoitogc,
samitolvanen, tglx, thuth, surenb, anshuman.khandual, smostafa,
yuanchu, ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml,
david, bp, ardb, justinstitt, linux-kernel, linux-mm, kasan-dev,
llvm, linux-arm-kernel, linux-doc, linux-kbuild, x86
On Mon, 12 Jan 2026 20:08:23 +0000 Maciej Wieczór-Retman <m.wieczorretman@pm.me> wrote:
> >OK, known issues and they are understandable. With this patchset is
> >there any way in which our testers can encounter these things? If so
> >can we make changes to protect them from hitting known issues?
>
> The gcc documentation states that -fsanitize=kernel-hwaddress is
> similar to -fsanitize=hwaddress, which only works on AArch64. That
> hints that it shouldn't work.
>
> But while the kernel compiles fine with gcc with KASAN sw_tags enabled,
> at least in my patched qemu it doesn't run. I remember Ada Couprie Diaz
> mentioning that passing -march=arrowlake might help since the tag
> support seems to be based on the arch.
>
> I'll check if there's a non-hacky way to have gcc work too, but
> perhaps, to minimize hitting known issues, for now
> HAVE_ARCH_KASAN_SW_TAGS should be locked behind both ADDRESS_MASKING
> and CC_IS_CLANG in the Kconfig?
Yes please - my main concern is that we avoid causing any disruption to
testers/buildbots/fuzzers/etc.
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 17:26 [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2026-01-12 18:29 ` [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
@ 2026-01-13 1:44 ` Andrey Konovalov
2026-01-19 16:33 ` Andrey Ryabinin
From: Andrey Konovalov @ 2026-01-13 1:44 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, akpm, Liam.Howlett, kees, jan.kiszka,
thomas.lendacky, jeremy.linton, dvyukov, axelrasmussen, leitao,
ryabinin.a.a, bigeasy, peterz, mark.rutland, urezki, brgerst, hpa,
mhocko, weixugc, kbingham, vbabka, nathan, trintaeoitogc,
samitolvanen, tglx, thuth, surenb, anshuman.khandual, smostafa,
yuanchu, ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml,
david, bp, ardb, justinstitt, linux-kernel, linux-mm, kasan-dev,
llvm, linux-arm-kernel, linux-doc, linux-kbuild, x86
On Mon, Jan 12, 2026 at 6:26 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> ======= Introduction
> The patchset aims to add a KASAN tag-based mode for the x86 architecture
> with the help of the new CPU feature called Linear Address Masking
> (LAM). The main improvement introduced by the series is 2x lower memory
> usage compared to KASAN's generic mode, the only mode currently
> available on x86. The tag-based mode may also find errors that the
> generic mode can't because of differences in how these modes operate.
>
> ======= How does KASAN's tag-based mode work?
> When enabled, memory accesses and allocations are augmented by the
> compiler during kernel compilation. Instrumentation functions are added
> to each memory allocation and each pointer dereference.
>
> The allocation-related functions generate a random tag and save it in
> two places: in the shadow memory that maps to the allocated memory, and
> in the top bits of the pointer that points to the allocated memory.
> Storing the tag in the top of the pointer is possible because of
> Top-Byte Ignore (TBI) on the arm64 architecture and LAM on x86.
>
> The access-related functions compare the tag stored in the pointer with
> the one stored in shadow memory. If the tags don't match, an
> out-of-bounds error must have occurred, and an error report is
> generated.
>
> The general idea for the tag-based mode is very well explained in the
> series with the original implementation [1].
>
> [1] https://lore.kernel.org/all/cover.1544099024.git.andreyknvl@google.com/
>
> ======= Differences summary compared to the arm64 tag-based mode
> - Tag width:
> - Tag width influences the chance of missing an error due to two
> tags from different allocations having the same value. The
> bigger the possible range of tag values, the lower the chance
> of that happening.
> - Shortening the tag width from 8 bits to 4 can help with memory
> usage, but it also increases the chance of not reporting an
> error. 4-bit tags have a ~7% chance of such a tag collision.
>
> - Address masking mechanism
> - TBI in arm64 allows for storing metadata in the top 8 bits of
> the virtual address.
> - LAM in x86 allows storing tags in bits [62:57] of the pointer.
> To maximize memory savings the tag width is reduced to bits
> [60:57].
>
> - Inline mode mismatch reporting
> - Arm64 inserts a BRK instruction to pass metadata about a tag
> mismatch to the KASAN report.
> - Right now on x86 the INT3 instruction is used for the same
> purpose. The attempt to move it over to use UD1 is already
> implemented and tested but relies on another series that needs
> merging first. Therefore this patch will be posted separately
> once the dependency is satisfied by being merged upstream.
>
Please also update the Software Tag-Based KASAN section in
Documentation/dev-tools/kasan.rst accordingly.
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 20:53 ` Andrew Morton
@ 2026-01-13 1:47 ` Andrey Konovalov
From: Andrey Konovalov @ 2026-01-13 1:47 UTC (permalink / raw)
To: Andrew Morton, Maciej Wieczór-Retman
Cc: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, Liam.Howlett, kees, jan.kiszka, thomas.lendacky,
jeremy.linton, dvyukov, axelrasmussen, leitao, ryabinin.a.a,
bigeasy, peterz, mark.rutland, urezki, brgerst, hpa, mhocko,
weixugc, kbingham, vbabka, nathan, trintaeoitogc, samitolvanen,
tglx, thuth, surenb, anshuman.khandual, smostafa, yuanchu,
ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml, david,
bp, ardb, justinstitt, linux-kernel, linux-mm, kasan-dev, llvm,
linux-arm-kernel, linux-doc, linux-kbuild, x86
On Mon, Jan 12, 2026 at 9:53 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 12 Jan 2026 20:08:23 +0000 Maciej Wieczór-Retman <m.wieczorretman@pm.me> wrote:
>
> > >OK, known issues and they are understandable. With this patchset is
> > >there any way in which our testers can encounter these things? If so
> > >can we make changes to protect them from hitting known issues?
> >
> > The gcc documentation states that -fsanitize=kernel-hwaddress is
> > similar to -fsanitize=hwaddress, which only works on AArch64. That
> > hints that it shouldn't work.
> >
> > But while the kernel compiles fine with gcc with KASAN sw_tags
> > enabled, at least in my patched qemu it doesn't run. I remember Ada
> > Couprie Diaz mentioning that passing -march=arrowlake might help
> > since the tag support seems to be based on the arch.
FYI, there are some known GCC issues with arm64 SW_TAGS mode as well:
https://bugzilla.kernel.org/show_bug.cgi?id=218043#c3.
> >
> > I'll check if there's a non-hacky way to have gcc work too, but
> > perhaps, to minimize hitting known issues, for now
> > HAVE_ARCH_KASAN_SW_TAGS should be locked behind both ADDRESS_MASKING
> > and CC_IS_CLANG in the Kconfig?
>
> Yes please - my main concern is that we avoid causing any disruption to
> testers/buildbots/fuzzers/etc.
I left some comments, but from my/KASAN point of view, the series is
ready for linux-next (but this could wait for a week and maybe the
next version of the series).
I wouldn't think there would be disruption issues: one would need to
deliberately enable the SW_TAGS mode for x86 (as GENERIC is the
default mode when just enabling KASAN). But I don't mind locking down
x86 SW_TAGS to be Clang-only for now if GCC is known not to work at
all.
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 18:29 ` [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
2026-01-12 20:08 ` Maciej Wieczór-Retman
2026-01-12 20:27 ` Dave Hansen
@ 2026-01-13 11:47 ` Borislav Petkov
2026-01-13 17:34 ` Andrew Morton
From: Borislav Petkov @ 2026-01-13 11:47 UTC (permalink / raw)
To: Andrew Morton
Cc: Maciej Wieczor-Retman, corbet, morbo, rppt, lorenzo.stoakes,
ubizjak, mingo, vincenzo.frascino, maciej.wieczor-retman, maz,
catalin.marinas, yeoreum.yun, will, jackmanb, samuel.holland,
glider, osandov, nsc, luto, jpoimboe, Liam.Howlett, kees,
jan.kiszka, thomas.lendacky, jeremy.linton, dvyukov,
axelrasmussen, leitao, ryabinin.a.a, bigeasy, peterz,
mark.rutland, urezki, brgerst, hpa, mhocko, andreyknvl, weixugc,
kbingham, vbabka, nathan, trintaeoitogc, samitolvanen, tglx,
thuth, surenb, anshuman.khandual, smostafa, yuanchu,
ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml, david,
ardb, justinstitt, linux-kernel, linux-mm, kasan-dev, llvm,
linux-arm-kernel, linux-doc, linux-kbuild, x86
On Mon, Jan 12, 2026 at 10:29:57AM -0800, Andrew Morton wrote:
> The review process seems to be proceeding OK so I'll add this to
> mm.git's mm-new branch, which is not included in linux-next. I'll aim
> to hold it there for a week while people check the patches over and
> send out their acks (please). Then I hope I can move it into mm.git's
> mm-unstable branch where it will receive linux-next exposure.
Yah, you can drop this one and take the next revision after all comments have
been addressed.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-13 11:47 ` Borislav Petkov
@ 2026-01-13 17:34 ` Andrew Morton
2026-01-22 17:25 ` Maciej Wieczor-Retman
From: Andrew Morton @ 2026-01-13 17:34 UTC (permalink / raw)
To: Borislav Petkov
Cc: Maciej Wieczor-Retman, corbet, morbo, rppt, lorenzo.stoakes,
ubizjak, mingo, vincenzo.frascino, maciej.wieczor-retman, maz,
catalin.marinas, yeoreum.yun, will, jackmanb, samuel.holland,
glider, osandov, nsc, luto, jpoimboe, Liam.Howlett, kees,
jan.kiszka, thomas.lendacky, jeremy.linton, dvyukov,
axelrasmussen, leitao, ryabinin.a.a, bigeasy, peterz,
mark.rutland, urezki, brgerst, hpa, mhocko, andreyknvl, weixugc,
kbingham, vbabka, nathan, trintaeoitogc, samitolvanen, tglx,
thuth, surenb, anshuman.khandual, smostafa, yuanchu,
ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml, david,
ardb, justinstitt, linux-kernel, linux-mm, kasan-dev, llvm,
linux-arm-kernel, linux-doc, linux-kbuild, x86
On Tue, 13 Jan 2026 12:47:05 +0100 Borislav Petkov <bp@alien8.de> wrote:
> On Mon, Jan 12, 2026 at 10:29:57AM -0800, Andrew Morton wrote:
> > The review process seems to be proceeding OK so I'll add this to
> > mm.git's mm-new branch, which is not included in linux-next. I'll aim
> > to hold it there for a week while people check the patches over and
> > send out their acks (please). Then I hope I can move it into mm.git's
> > mm-unstable branch where it will receive linux-next exposure.
>
> Yah, you can drop this one and take the next revision after all comments have
> been addressed.
Cool, I removed the series.
* Re: [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation
2026-01-12 17:27 ` [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
@ 2026-01-15 22:42 ` Andrey Ryabinin
2026-01-16 13:11 ` Maciej Wieczor-Retman
From: Andrey Ryabinin @ 2026-01-15 22:42 UTC (permalink / raw)
To: Maciej Wieczor-Retman, Catalin Marinas, Will Deacon,
Jonathan Corbet, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Jan Kiszka,
Kieran Bingham, Nathan Chancellor, Nick Desaulniers,
Bill Wendling, Justin Stitt
Cc: Samuel Holland, Maciej Wieczor-Retman, linux-arm-kernel,
linux-doc, linux-kernel, kasan-dev, linux-mm, llvm
On 1/12/26 6:27 PM, Maciej Wieczor-Retman wrote:
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 62c01b4527eb..b5beb1b10bd2 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr)
> const char *bug_type;
>
> /*
> - * All addresses that came as a result of the memory-to-shadow mapping
> - * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
> + * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
> + * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
> + * both x86 and arm64). Thus, the possible shadow addresses (even for
> + * bogus pointers) belong to a single contiguous region that is the
> + * result of kasan_mem_to_shadow() applied to the whole address space.
> */
> - if (addr < KASAN_SHADOW_OFFSET)
> - return;
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
> + if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
> + addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
> + return;
> + }
> +
> + /*
> + * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
> + * arithmetic shift. Normally, this would make checking for a possible
> + * shadow address complicated, as the shadow address computation
> + * operation would overflow only for some memory addresses. However, due
> + * to the chosen KASAN_SHADOW_OFFSET values and the fact the
> + * kasan_mem_to_shadow() only operates on pointers with the tag reset,
> + * the overflow always happens.
> + *
> + * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
> + * possible shadow addresses belong to a region that is the result of
> + * kasan_mem_to_shadow() applied to the memory range
> + * [0xFF000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
^ Missing a couple of 00s here
> + * resulting possible shadow region is contiguous, as the overflow
> + * happens for both 0xFF000000000000 and 0xFFFFFFFFFFFFFFFF.
^ same as above
> + */
> + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
> + if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFULL << 56)) ||
This will not work for inline mode because the compiler uses a logical shift.
Consider a NULL-ptr dereference. The compiler will calculate the shadow address for 0 as:
(((0x0 | 0xffULL) << 56) >> 4) + 0xffff800000000000ULL = 0x0fef8000....0
Which is less than ((0xFF00...00LL) >> 4) + 0xffff800000000000ULL = 0xffef800...0
So we will bail out here.
Perhaps we could do addr |= 0xFFLL to fix this
> + addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
> + return;
> + }
>
> orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
>
* Re: [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation
2026-01-15 22:42 ` Andrey Ryabinin
@ 2026-01-16 13:11 ` Maciej Wieczor-Retman
From: Maciej Wieczor-Retman @ 2026-01-16 13:11 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: Catalin Marinas, Will Deacon, Jonathan Corbet,
Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Andrew Morton, Jan Kiszka, Kieran Bingham,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
Samuel Holland, Maciej Wieczor-Retman, linux-arm-kernel,
linux-doc, linux-kernel, kasan-dev, linux-mm, llvm
Thanks for looking at the patches :)
On 2026-01-15 at 23:42:02 +0100, Andrey Ryabinin wrote:
>
>
>On 1/12/26 6:27 PM, Maciej Wieczor-Retman wrote:
>
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index 62c01b4527eb..b5beb1b10bd2 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr)
>> const char *bug_type;
>>
>> /*
>> - * All addresses that came as a result of the memory-to-shadow mapping
>> - * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
>> + * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
>> + * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
>> + * both x86 and arm64). Thus, the possible shadow addresses (even for
>> + * bogus pointers) belong to a single contiguous region that is the
>> + * result of kasan_mem_to_shadow() applied to the whole address space.
>> */
>> - if (addr < KASAN_SHADOW_OFFSET)
>> - return;
>> + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
>> + if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0ULL)) ||
>> + addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
>> + return;
>> + }
>> +
>> + /*
>> + * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
>> + * arithmetic shift. Normally, this would make checking for a possible
>> + * shadow address complicated, as the shadow address computation would
>> + * overflow only for some memory addresses. However, due to the chosen
>> + * KASAN_SHADOW_OFFSET values and the fact that kasan_mem_to_shadow()
>> + * only operates on pointers with the tag reset, the overflow always
>> + * happens.
>> + *
>> + * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
>> + * possible shadow addresses belong to a region that is the result of
>> + * kasan_mem_to_shadow() applied to the memory range
>> + * [0xFF000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
> ^ Missing a couple of 00s here
>
>> + * resulting possible shadow region is contiguous, as the overflow
>> + * happens for both 0xFF000000000000 and 0xFFFFFFFFFFFFFFFF.
> ^ same as above
Hah, right, thank you!
>
>> + */
>> + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
>> + if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFULL << 56)) ||
>
>This will not work for inline mode because the compiler uses a logical shift.
>Consider a NULL-ptr dereference. The compiler will calculate the shadow address for 0 as:
> (((0x0 | 0xffULL) << 56) >> 4) + 0xffff800000000000ULL = 0x0fef8000....0
>which is less than ((0xFF00...00LL) >> 4) + 0xffff800000000000ULL = 0xffff800...0,
>so we will bail out here.
>Perhaps we could do addr |= 0xFFLL to fix this.
I suppose it should work; I tried it in a Python script by feeding various
addresses into this check. Addresses pushed through a logical-shift
kasan_mem_to_shadow() would normally return early, as you noticed, but after
'addr |= 0xFFLL' the check behaves as expected. I didn't catch any incorrect
address slipping past this scheme either. Thanks, I'll correct it.
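For reference, a rough C equivalent of the corrected check (a sketch under
the same arm64 constants as in your example, reading 'addr |= 0xFFLL' as
ORing the reset tag into the top byte; the exact form in v9 may differ):

#include <stdint.h>

#define SHADOW_OFFSET	0xffff800000000000ULL
#define SCALE_SHIFT	4

/* Arithmetic-shift mem-to-shadow, matching the outline-mode helper. */
static uint64_t mem_to_shadow(uint64_t addr)
{
	return (uint64_t)((int64_t)addr >> SCALE_SHIFT) + SHADOW_OFFSET;
}

/* Setting the top byte first folds inline-mode (logical shift) shadow
 * addresses of low pointers back into the checked range. */
static int possible_shadow(uint64_t addr)
{
	addr |= 0xffULL << 56;
	return addr >= mem_to_shadow(0xffULL << 56) &&
	       addr <= mem_to_shadow(~0ULL);
}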
>
>> + addr > (unsigned long)kasan_mem_to_shadow((void *)(~0ULL)))
>> + return;
>> + }
>>
>> orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
>>
--
Kind regards
Maciej Wieczór-Retman
* Re: [PATCH v8 03/14] kasan: Fix inline mode for x86 tag-based mode
2026-01-12 17:27 ` [PATCH v8 03/14] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
@ 2026-01-16 13:33 ` Andrey Ryabinin
0 siblings, 0 replies; 17+ messages in thread
From: Andrey Ryabinin @ 2026-01-16 13:33 UTC (permalink / raw)
To: Maciej Wieczor-Retman, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Nathan Chancellor,
Nicolas Schier, Nick Desaulniers, Bill Wendling, Justin Stitt
Cc: Maciej Wieczor-Retman, kasan-dev, linux-kbuild, linux-kernel,
llvm
On 1/12/26 6:27 PM, Maciej Wieczor-Retman wrote:
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> The LLVM compiler uses the hwasan-instrument-with-calls parameter to set up
> inline or outline mode in tag-based KASAN. If zeroed, the instrumentation
> body is pasted into each relevant location, along with the KASAN-related
> constants, during compilation. If set to one, all instrumentation is done
> with function calls instead.
>
> The compiler's default hwasan-instrument-with-calls value is "1" for the
> x86 architecture, unlike for other architectures. Because of this,
> enabling inline mode in software tag-based KASAN doesn't work on x86: the
> kernel build script doesn't zero out the parameter, so outline mode is
> always set up.
>
> Explicitly zero out hwasan-instrument-with-calls when enabling inline
> mode in tag-based KASAN.
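Roughly, for an 8-byte load the difference looks like this (an illustration,
not actual compiler output; the outline entry point is one of the
__hwasan_*_noabort helpers from mm/kasan/sw_tags.c, and the inline body is
conceptual):

/* hwasan-instrument-with-calls=1 -- outline: one call into the KASAN
 * runtime per access. */
void load8_outline(void *addr)
{
	__hwasan_load8_noabort(addr);	/* tag check done in the runtime */
	/* ... the actual 8-byte load ... */
}

/* hwasan-instrument-with-calls=0 -- inline: the check body is pasted at
 * the access site. Conceptually (the real inline code traps via INT3 on
 * x86 / BRK on arm64 rather than calling the report function): */
void load8_inline(void *addr)
{
	u8 ptr_tag = (u64)addr >> 56;	/* arm64 TBI tag; x86 LAM uses bits 60:57 */
	u8 mem_tag = *(u8 *)kasan_mem_to_shadow(addr);

	if (ptr_tag != mem_tag)		/* mismatch -> report */
		kasan_report(addr, 8, false, _RET_IP_);
	/* ... the actual 8-byte load ... */
}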
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
> Reviewed-by: Alexander Potapenko <glider@google.com>
> ---
Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-12 17:26 [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (3 preceding siblings ...)
2026-01-13 1:44 ` Andrey Konovalov
@ 2026-01-19 16:33 ` Andrey Ryabinin
2026-01-19 19:43 ` Maciej Wieczor-Retman
4 siblings, 1 reply; 17+ messages in thread
From: Andrey Ryabinin @ 2026-01-19 16:33 UTC (permalink / raw)
To: Maciej Wieczor-Retman, corbet, morbo, rppt, lorenzo.stoakes,
ubizjak, mingo, vincenzo.frascino, maciej.wieczor-retman, maz,
catalin.marinas, yeoreum.yun, will, jackmanb, samuel.holland,
glider, osandov, nsc, luto, jpoimboe, akpm, Liam.Howlett, kees,
jan.kiszka, thomas.lendacky, jeremy.linton, dvyukov,
axelrasmussen, leitao, bigeasy, peterz, mark.rutland, urezki,
brgerst, hpa, mhocko, andreyknvl, weixugc, kbingham, vbabka,
nathan, trintaeoitogc, samitolvanen, tglx, thuth, surenb,
anshuman.khandual, smostafa, yuanchu, ada.coupriediaz,
dave.hansen, kas, nick.desaulniers+lkml, david, bp, ardb,
justinstitt
Cc: linux-kernel, linux-mm, kasan-dev, llvm, linux-arm-kernel,
linux-doc, linux-kbuild, x86
On 1/12/26 6:26 PM, Maciej Wieczor-Retman wrote:
> ======= Compilation
> Clang was used to compile the series (make LLVM=1) since gcc doesn't
> seem to have support for KASAN tag-based compiler instrumentation on
> x86.
>
It appears that GCC nominally supports this, but in practice it does not work.
Here is a minimal reproducer: https://godbolt.org/z/s85e11T5r
As far as I understand, calling a function through a tagged pointer is not
supported by the hardware, so GCC attempts to clear the tag before the call.
This behavior seems to be inherited from the userspace implementation of HWASan (-fsanitize=hwaddress).
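A hedged sketch of what such a reproducer boils down to (the godbolt link
above is the authoritative version):

/* Built with -fsanitize=hwaddress (or kernel-hwaddress), GCC emits an
 * "and" that strips the tag bits from f before the indirect call, while
 * Clang calls straight through the tagged pointer. */
typedef int (*fn_t)(void);

int call_indirect(fn_t f)
{
	return f();
}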
I have filed a GCC bug report: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=123696
For the kernel, we probably do not want this masking at all, as effectively 99.9–100%
of function pointer calls are expected to be untagged anyway.
Clang does not appear to do this, not even for userspace.
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-19 16:33 ` Andrey Ryabinin
@ 2026-01-19 19:43 ` Maciej Wieczor-Retman
0 siblings, 0 replies; 17+ messages in thread
From: Maciej Wieczor-Retman @ 2026-01-19 19:43 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: corbet, morbo, rppt, lorenzo.stoakes, ubizjak, mingo,
vincenzo.frascino, maciej.wieczor-retman, maz, catalin.marinas,
yeoreum.yun, will, jackmanb, samuel.holland, glider, osandov, nsc,
luto, jpoimboe, akpm, Liam.Howlett, kees, jan.kiszka,
thomas.lendacky, jeremy.linton, dvyukov, axelrasmussen, leitao,
bigeasy, peterz, mark.rutland, urezki, brgerst, hpa, mhocko,
andreyknvl, weixugc, kbingham, vbabka, nathan, trintaeoitogc,
samitolvanen, tglx, thuth, surenb, anshuman.khandual, smostafa,
yuanchu, ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml,
david, bp, ardb, justinstitt, linux-kernel, linux-mm, kasan-dev,
llvm, linux-arm-kernel, linux-doc, linux-kbuild, x86
On 2026-01-19 at 17:33:35 +0100, Andrey Ryabinin wrote:
>On 1/12/26 6:26 PM, Maciej Wieczor-Retman wrote:
>
>> ======= Compilation
>> Clang was used to compile the series (make LLVM=1) since gcc doesn't
>> seem to have support for KASAN tag-based compiler instrumentation on
>> x86.
>>
>
>It appears that GCC nominally supports this, but in practice it does not work.
>Here is a minimal reproducer: https://godbolt.org/z/s85e11T5r
>
>As far as I understand, calling a function through a tagged pointer is not
>supported by the hardware, so GCC attempts to clear the tag before the call.
>This behavior seems to be inherited from the userspace implementation of HWASan (-fsanitize=hwaddress).
>
>I have filed a GCC bug report: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=123696
>
>For the kernel, we probably do not want this masking at all, as effectively 99.9–100%
>of function pointer calls are expected to be untagged anyway.
>
>Clang does not appear to do this, not even for userspace.
Cool, thanks, nice to know why the kernel didn't start when built with gcc.
I'm going to check in on the bug report every now and then, and once it gets
resolved I'll test whether everything works as expected on both compilers.
--
Kind regards
Maciej Wieczór-Retman
* Re: [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86
2026-01-13 17:34 ` Andrew Morton
@ 2026-01-22 17:25 ` Maciej Wieczor-Retman
0 siblings, 0 replies; 17+ messages in thread
From: Maciej Wieczor-Retman @ 2026-01-22 17:25 UTC (permalink / raw)
To: Andrew Morton
Cc: Borislav Petkov, corbet, morbo, rppt, lorenzo.stoakes, ubizjak,
mingo, vincenzo.frascino, maciej.wieczor-retman, maz,
catalin.marinas, yeoreum.yun, will, jackmanb, samuel.holland,
glider, osandov, nsc, luto, jpoimboe, Liam.Howlett, kees,
jan.kiszka, thomas.lendacky, jeremy.linton, dvyukov,
axelrasmussen, leitao, ryabinin.a.a, bigeasy, peterz,
mark.rutland, urezki, brgerst, hpa, mhocko, andreyknvl, weixugc,
kbingham, vbabka, nathan, trintaeoitogc, samitolvanen, tglx,
thuth, surenb, anshuman.khandual, smostafa, yuanchu,
ada.coupriediaz, dave.hansen, kas, nick.desaulniers+lkml, david,
ardb, justinstitt, linux-kernel, linux-mm, kasan-dev, llvm,
linux-arm-kernel, linux-doc, linux-kbuild, x86
On 2026-01-13 at 09:34:00 -0800, Andrew Morton wrote:
>On Tue, 13 Jan 2026 12:47:05 +0100 Borislav Petkov <bp@alien8.de> wrote:
>
>> On Mon, Jan 12, 2026 at 10:29:57AM -0800, Andrew Morton wrote:
>> > The review process seems to be proceeding OK so I'll add this to
>> > mm.git's mm-new branch, which is not included in linux-next. I'll aim
>> > to hold it there for a week while people check the patches over and
>> > send out their acks (please). Then I hope I can move it into mm.git's
>> > mm-unstable branch where it will receive linux-next exposure.
>>
>> Yah, you can drop this one and take the next revision after all comments have
>> been addressed.
>
>Cool, I removed the series.
I sent v9 with (I hope) all comments addressed:
https://lore.kernel.org/all/cover.1768845098.git.m.wieczorretman@pm.me/
--
Kind regards
Maciej Wieczór-Retman
end of thread
Thread overview: 17+ messages
2026-01-12 17:26 [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2026-01-12 17:27 ` [PATCH v8 01/14] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
2026-01-15 22:42 ` Andrey Ryabinin
2026-01-16 13:11 ` Maciej Wieczor-Retman
2026-01-12 17:27 ` [PATCH v8 03/14] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
2026-01-16 13:33 ` Andrey Ryabinin
2026-01-12 18:29 ` [PATCH v8 00/14] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
2026-01-12 20:08 ` Maciej Wieczór-Retman
2026-01-12 20:53 ` Andrew Morton
2026-01-13 1:47 ` Andrey Konovalov
2026-01-12 20:27 ` Dave Hansen
2026-01-13 11:47 ` Borislav Petkov
2026-01-13 17:34 ` Andrew Morton
2026-01-22 17:25 ` Maciej Wieczor-Retman
2026-01-13 1:44 ` Andrey Konovalov
2026-01-19 16:33 ` Andrey Ryabinin
2026-01-19 19:43 ` Maciej Wieczor-Retman