* [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86
@ 2025-10-29 19:05 Maciej Wieczor-Retman
2025-10-29 19:05 ` [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
` (18 more replies)
0 siblings, 19 replies; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:05 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
======= Introduction
The patchset aims to add a KASAN tag-based mode for the x86 architecture
with the help of the new CPU feature called Linear Address Masking
(LAM). The main improvement introduced by the series is roughly 2x
lower memory usage compared to KASAN's generic mode, currently the only
mode available on x86. The tag-based mode may also find errors that the
generic mode can't because of differences in how these modes operate.
======= How does KASAN's tag-based mode work?
When enabled, memory accesses and allocations are augmented by the
compiler during kernel compilation. Instrumentation functions are added
to each memory allocation and each pointer dereference.
The allocation related functions generate a random tag and save it in
two places: in shadow memory that maps to the allocated memory, and in
the top bits of the pointer that points to the allocated memory. Storing
the tag in the top bits of the pointer is possible because of Top-Byte
Ignore (TBI) on the arm64 architecture and LAM on x86.
The access related functions compare the tag stored in the pointer with
the one stored in shadow memory. If the tags don't match, an invalid
access (such as an out-of-bounds one) must have occurred, and an error
report is generated.
The general idea for the tag-based mode is very well explained in the
series with the original implementation [1].
[1] https://lore.kernel.org/all/cover.1544099024.git.andreyknvl@google.com/
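Very roughly, the compiler-inserted check can be pictured like the
snippet below (a conceptual sketch only, not the code actually emitted
by the compiler; the helper names follow the ones used in mm/kasan):

	static __always_inline void check_access(const void *ptr, size_t size,
						 bool write)
	{
		/* Tag held in the pointer's top bits. */
		u8 ptr_tag = get_tag(ptr);
		/* Tag recorded in the shadow byte that covers the access. */
		u8 mem_tag = *(u8 *)kasan_mem_to_shadow(kasan_reset_tag(ptr));

		if (ptr_tag != mem_tag)
			kasan_report(ptr, size, write, _RET_IP_);
	}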
======= Differences summary compared to the arm64 tag-based mode
- Tag width:
- Tag width influences the chance that two different allocations end
up with the same tag value, which lets an erroneous access go
unnoticed. The bigger the possible range of tag values, the lower
the chance of that happening.
- Shortening the tag width from 8 bits to 4 helps with memory usage,
but it also increases the chance of not reporting an error: 4-bit
tags give roughly a 7% chance of two allocations sharing a tag (see
the short calculation after this list).
- Address masking mechanism
- TBI in arm64 allows for storing metadata in the top 8 bits of
the virtual address.
- LAM in x86 allows storing tags in bits [62:57] of the pointer.
To maximize memory savings the tag width is reduced to bits
[60:57].
- Inline mode mismatch reporting
- Arm64 inserts a BRK instruction to pass metadata about a tag
mismatch to the KASAN report.
- On x86 the UD1 instruction is used for the same purpose.
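For reference, the ~7% figure mentioned above can be derived as follows
(assuming tags are drawn uniformly at random): a 4-bit tag has 16
possible values, two of which are reserved (the native kernel tag and
the invalid tag), leaving 14 usable random tags. The chance that an
unrelated allocation ends up with the same tag as a given one is
therefore about 1/14 ~= 7.1%.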
======= Testing
Checked all the KUnit tests for both software tags and generic KASAN
after making the changes.
In generic mode the results were:
kasan: pass:59 fail:0 skip:13 total:72
Totals: pass:59 fail:0 skip:13 total:72
ok 1 kasan
and for software tags:
kasan: pass:63 fail:0 skip:9 total:72
Totals: pass:63 fail:0 skip:9 total:72
ok 1 kasan
======= Benchmarks [1]
All tests were run on a Sierra Forest server platform. The only
differences between the tests were kernel options:
- CONFIG_KASAN
- CONFIG_KASAN_GENERIC
- CONFIG_KASAN_SW_TAGS
- CONFIG_KASAN_INLINE [1]
- CONFIG_KASAN_OUTLINE
Boot time (until login prompt):
* 02:55 for clean kernel
* 05:42 / 06:32 for generic KASAN (inline/outline)
* 05:58 for tag-based KASAN (outline) [2]
Total memory usage (512GB present on the system - MemAvailable just
after boot):
* 12.56 GB for clean kernel
* 81.74 GB for generic KASAN
* 44.39 GB for tag-based KASAN
Kernel size:
* 14 MB for clean kernel
* 24.7 MB / 19.5 MB for generic KASAN (inline/outline)
* 27.1 MB / 18.1 MB for tag-based KASAN (inline/outline)
Work-under-load time comparison (compiling the mainline kernel on 200 cores):
* 62s for clean kernel
* 171s / 125s for generic KASAN (outline/inline)
* 145s for tag-based KASAN (outline) [2]
[1] Currently inline mode doesn't work on x86 due to things missing in
the compiler. I have written a patch for clang that seems to fix the
inline mode and I was able to boot and check that all patches regarding
the inline mode work as expected. My hope is to post the patch to LLVM
once this series is completed, and then make inline mode available in
the kernel config.
[2] While I was able to boot the inline tag-based kernel with my
compiler changes in a simulated environment, due to toolchain
difficulties I couldn't get it to boot on the machine I had access to.
Also, the boot time results from the simulation seem too good to be
true, and they are far too bad for the generic case to be believable.
Therefore I'm posting only results from the physical server platform.
======= Compilation
Clang was used to compile the series (make LLVM=1) since gcc doesn't
seem to have support for KASAN tag-based compiler instrumentation on
x86.
======= Dependencies
The base branch for the series is the mainline kernel, tag 6.18-rc3.
======= Previous versions
v5: https://lore.kernel.org/all/cover.1756151769.git.maciej.wieczor-retman@intel.com/
v4: https://lore.kernel.org/all/cover.1755004923.git.maciej.wieczor-retman@intel.com/
v3: https://lore.kernel.org/all/cover.1743772053.git.maciej.wieczor-retman@intel.com/
v2: https://lore.kernel.org/all/cover.1739866028.git.maciej.wieczor-retman@intel.com/
v1: https://lore.kernel.org/all/cover.1738686764.git.maciej.wieczor-retman@intel.com/
Changes v6:
- Initialize sw-tags only when LAM is available.
- Move inline mode to use UD1 instead of INT3
- Remove inline multishot patch.
- Fix the canonical check to work for user addresses too.
- Revise patch names and messages to align to tip tree rules.
- Fix vdso compilation issue.
Changes v5:
- Fix a bunch of arm64 compilation errors I didn't catch earlier.
Thank You Ada for testing the series!
- Simplify the usage of the tag handling x86 functions (virt_to_page,
phys_addr etc.).
- Remove within() and within_range() from the EXECMEM_ROX patch.
Changes v4:
- Revert the x86 kasan_mem_to_shadow() scheme to the same one used in
generic KASAN. Keep the arithmetic shift idea for KASAN in general
since it makes more sense for arm64 and RISC-V.
- Fix inline mode but leave it unavailable until a complementary
compiler patch can be merged.
- Apply Dave Hansen's comments on series formatting, patch style and
code simplifications.
Changes v3:
- Remove the runtime_const patch and setup a unified offset for both 5
and 4 paging levels.
- Add a fix for inline mode on x86 tag-based KASAN. Add a handler for
int3 that is generated on inline tag mismatches.
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion.
- Remove patches 2 and 3 since they related to risc-v and this series
adds only x86 related things.
- Reorder __tag_*() functions so they're before arch_kasan_*(). Remove
CONFIG_KASAN condition from __tag_set().
Changes v2:
- Split the series into one adding KASAN tag-based mode (this one) and
another one that adds the dense mode to KASAN (will post later).
- Removed exporting kasan_poison() and used a wrapper instead in
kasan_init_64.c
- Prepended series with 4 patches from the risc-v series and applied
review comments to the first patch as the rest already are reviewed.
Maciej Wieczor-Retman (16):
kasan: Unpoison pcpu chunks with base address tag
kasan: Unpoison vms[area] addresses with a common tag
kasan: Fix inline mode for x86 tag-based mode
x86/kasan: Add arch specific kasan functions
kasan: arm64: x86: Make special tags arch specific
x86/mm: Reset tag for virtual to physical address conversions
mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic
x86/mm: Physical address comparisons in fill_p*d/pte
x86/kasan: KASAN raw shadow memory PTE init
x86/mm: LAM compatible non-canonical definition
x86/mm: LAM initialization
x86: Minimal SLAB alignment
x86/kasan: Handle UD1 for inline KASAN reports
arm64: Unify software tag-based KASAN inline recovery path
x86/kasan: Logical bit shift for kasan_mem_to_shadow
x86/kasan: Make software tag-based kasan available
Samuel Holland (2):
kasan: sw_tags: Use arithmetic shift for shadow computation
kasan: sw_tags: Support tag widths less than 8 bits
Documentation/arch/arm64/kasan-offsets.sh | 8 ++-
Documentation/arch/x86/x86_64/mm.rst | 6 +-
MAINTAINERS | 4 +-
arch/arm64/Kconfig | 10 ++--
arch/arm64/include/asm/kasan-tags.h | 14 +++++
arch/arm64/include/asm/kasan.h | 2 -
arch/arm64/include/asm/memory.h | 14 ++++-
arch/arm64/include/asm/uaccess.h | 1 +
arch/arm64/kernel/traps.c | 17 +-----
arch/arm64/mm/kasan_init.c | 7 ++-
arch/x86/Kconfig | 4 ++
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/bug.h | 1 +
arch/x86/include/asm/cache.h | 4 ++
arch/x86/include/asm/kasan-tags.h | 9 +++
arch/x86/include/asm/kasan.h | 73 ++++++++++++++++++++++-
arch/x86/include/asm/page.h | 33 +++++++++-
arch/x86/include/asm/page_64.h | 1 +
arch/x86/kernel/head_64.S | 3 +
arch/x86/kernel/traps.c | 8 +++
arch/x86/mm/Makefile | 2 +
arch/x86/mm/init.c | 3 +
arch/x86/mm/init_64.c | 11 ++--
arch/x86/mm/kasan_init_64.c | 24 +++++++-
arch/x86/mm/kasan_inline.c | 21 +++++++
arch/x86/mm/physaddr.c | 2 +
include/linux/kasan-tags.h | 21 +++++--
include/linux/kasan.h | 46 ++++++++++++--
include/linux/mm.h | 6 +-
include/linux/page-flags-layout.h | 9 +--
lib/Kconfig.kasan | 3 +-
mm/execmem.c | 2 +-
mm/kasan/report.c | 37 ++++++++++--
mm/kasan/tags.c | 19 ++++++
mm/vmalloc.c | 6 +-
scripts/Makefile.kasan | 3 +
scripts/gdb/linux/kasan.py | 5 +-
scripts/gdb/linux/mm.py | 5 +-
38 files changed, 370 insertions(+), 75 deletions(-)
mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h
create mode 100644 arch/x86/mm/kasan_inline.c
--
2.51.0
* [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
@ 2025-10-29 19:05 ` Maciej Wieczor-Retman
2025-11-10 17:32 ` Alexander Potapenko
2025-10-29 19:06 ` [PATCH v6 02/18] kasan: Unpoison vms[area] addresses with a common tag Maciej Wieczor-Retman
` (17 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:05 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman, stable, Baoquan He
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
The problem presented here is related to NUMA systems and tag-based
KASAN modes - software and hardware ones. It can be explained in the
following points:
1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits
the tag of the first chunk.
4. The subsequent chunks will be accessed with the tag from the
first chunk.
5. Thus, the subsequent chunks need to have their tag set to
match that of the first chunk.
Refactor code by moving it into a helper in preparation for the actual
fix.
Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
Cc: <stable@vger.kernel.org> # 6.1+
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Tested-by: Baoquan He <bhe@redhat.com>
---
Changelog v6:
- Add Baoquan's tested-by tag.
- Move patch to the beginning of the series as it is a fix.
- Move the refactored code to tags.c because both software and hardware
modes compile it.
- Add fixes tag.
Changelog v4:
- Redo the patch message numbered list.
- Do the refactoring in this patch and move additions to the next new
one.
Changelog v3:
- Remove last version of this patch that just resets the tag on
base_addr and add this patch that unpoisons all areas with the same
tag instead.
include/linux/kasan.h | 10 ++++++++++
mm/kasan/tags.c | 11 +++++++++++
mm/vmalloc.c | 4 +---
3 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d12e1a5f5a9a..b00849ea8ffd 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -614,6 +614,13 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
__kasan_poison_vmalloc(start, size);
}
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms);
+static __always_inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_vmap_areas(vms, nr_vms);
+}
+
#else /* CONFIG_KASAN_VMALLOC */
static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -638,6 +645,9 @@ static inline void *kasan_unpoison_vmalloc(const void *start,
static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }
+static inline void kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{ }
+
#endif /* CONFIG_KASAN_VMALLOC */
#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index b9f31293622b..ecc17c7c675a 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -18,6 +18,7 @@
#include <linux/static_key.h>
#include <linux/string.h>
#include <linux/types.h>
+#include <linux/vmalloc.h>
#include "kasan.h"
#include "../slab.h"
@@ -146,3 +147,13 @@ void __kasan_save_free_info(struct kmem_cache *cache, void *object)
{
save_stack_info(cache, object, 0, true);
}
+
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
+{
+ int area;
+
+ for (area = 0 ; area < nr_vms ; area++) {
+ kasan_poison(vms[area]->addr, vms[area]->size,
+ arch_kasan_get_tag(vms[area]->addr), false);
+ }
+}
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 798b2ed21e46..934c8bfbcebf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4870,9 +4870,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
* With hardware tag-based KASAN, marking is skipped for
* non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
- for (area = 0; area < nr_vms; area++)
- vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
+ kasan_unpoison_vmap_areas(vms, nr_vms);
kfree(vas);
return vms;
--
2.51.0
* [PATCH v6 02/18] kasan: Unpoison vms[area] addresses with a common tag
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2025-10-29 19:05 ` [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
@ 2025-10-29 19:06 ` Maciej Wieczor-Retman
2025-11-10 16:40 ` Alexander Potapenko
2025-10-29 19:06 ` [PATCH v6 03/18] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
` (16 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:06 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman, stable, Baoquan He
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
The problem presented here is related to NUMA systems and tag-based
KASAN modes - software and hardware ones. It can be explained in the
following points:
1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits
the tag of the first chunk.
4. The subsequent chunks will be accessed with the tag from the
first chunk.
5. Thus, the subsequent chunks need to have their tag set to
match that of the first chunk.
Unpoison all vms[]->addr memory and pointers with the same tag to
resolve the mismatch.
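As a rough illustration of the failure mode (a hypothetical snippet,
not code from this series; the offset variable is made up):

	/*
	 * Chunk 0 and chunk 1 come from separate allocations, so
	 * vms[0]->addr and vms[1]->addr carry different tags.
	 */
	void *base  = vms[0]->addr;            /* tagged with chunk 0's tag */
	void *chunk = base + per_cpu_offset;   /* points into chunk 1, still chunk 0's tag */

	/*
	 * Dereferencing 'chunk' compares chunk 0's pointer tag against the
	 * tag stored in chunk 1's shadow memory -> spurious tag mismatch.
	 */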
Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
Cc: <stable@vger.kernel.org> # 6.1+
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Tested-by: Baoquan He <bhe@redhat.com>
---
Changelog v6:
- Add Baoquan's tested-by tag.
- Move patch to the beginning of the series as it is a fix.
- Add fixes tag.
mm/kasan/tags.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index ecc17c7c675a..c6b40cbffae3 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -148,12 +148,20 @@ void __kasan_save_free_info(struct kmem_cache *cache, void *object)
save_stack_info(cache, object, 0, true);
}
+/*
+ * A tag mismatch happens when calculating per-cpu chunk addresses, because
+ * they all inherit the tag from vms[0]->addr, even when nr_vms is bigger
+ * than 1. This is a problem because all the vms[]->addr come from separate
+ * allocations and have different tags so while the calculated address is
+ * correct the tag isn't.
+ */
void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
{
int area;
for (area = 0 ; area < nr_vms ; area++) {
kasan_poison(vms[area]->addr, vms[area]->size,
- arch_kasan_get_tag(vms[area]->addr), false);
+ arch_kasan_get_tag(vms[0]->addr), false);
+ arch_kasan_set_tag(vms[area]->addr, arch_kasan_get_tag(vms[0]->addr));
}
}
--
2.51.0
* [PATCH v6 03/18] kasan: sw_tags: Use arithmetic shift for shadow computation
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2025-10-29 19:05 ` [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
2025-10-29 19:06 ` [PATCH v6 02/18] kasan: Unpoison vms[area] addresses with a common tag Maciej Wieczor-Retman
@ 2025-10-29 19:06 ` Maciej Wieczor-Retman
2025-11-11 9:39 ` Alexander Potapenko
2025-10-29 19:06 ` [PATCH v6 04/18] kasan: sw_tags: Support tag widths less than 8 bits Maciej Wieczor-Retman
` (15 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:06 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Samuel Holland <samuel.holland@sifive.com>
Currently, kasan_mem_to_shadow() uses a logical right shift, which turns
canonical kernel addresses into non-canonical addresses by clearing the
high KASAN_SHADOW_SCALE_SHIFT bits. The value of KASAN_SHADOW_OFFSET is
then chosen so that the addition results in a canonical address for the
shadow memory.
For KASAN_GENERIC, this shift/add combination is ABI with the compiler,
because KASAN_SHADOW_OFFSET is used in compiler-generated inline tag
checks[1], which must only attempt to dereference canonical addresses.
However, for KASAN_SW_TAGS there is some freedom to change the algorithm
without breaking the ABI. Because TBI is enabled for kernel addresses,
the top bits of shadow memory addresses computed during tag checks are
irrelevant, and so likewise are the top bits of KASAN_SHADOW_OFFSET.
This is demonstrated by the fact that LLVM uses a logical right shift in
the tag check fast path[2] but a sbfx (signed bitfield extract)
instruction in the slow path[3] without causing any issues.
Using an arithmetic shift in kasan_mem_to_shadow() provides a number of
benefits:
1) The memory layout doesn't change but is easier to understand.
KASAN_SHADOW_OFFSET becomes a canonical memory address, and the shifted
pointer becomes a negative offset, so KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_END regardless of the shift amount or the size of the
virtual address space.
2) KASAN_SHADOW_OFFSET becomes a simpler constant, requiring only one
instruction to load instead of two. Since it must be loaded in each
function with a tag check, this decreases kernel text size by 0.5%.
3) This shift and the sign extension from kasan_reset_tag() can be
combined into a single sbfx instruction. When this same algorithm change
is applied to the compiler, it removes an instruction from each inline
tag check, further reducing kernel text size by an additional 4.6%.
These benefits extend to other architectures as well. On RISC-V, where
the baseline ISA has neither shifted addition nor an equivalent of the
sbfx instruction, loading KASAN_SHADOW_OFFSET is reduced from 3 to 2
instructions, and kasan_mem_to_shadow(kasan_reset_tag(addr)) similarly
combines two consecutive right shifts.
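A short worked example of point 1 (using the tag-based shift of 4; the
addresses are illustrative): the upper bound of the address space,
UL(1) << 64, wraps to 0, and (0 >> 4) + KASAN_SHADOW_OFFSET ==
KASAN_SHADOW_OFFSET, so KASAN_SHADOW_END == KASAN_SHADOW_OFFSET. A
kernel address such as 0xffff000000000000 (== -(1UL << 48))
arithmetically shifts to 0xfffff00000000000 (== -(1UL << 44)), so its
shadow lands at KASAN_SHADOW_OFFSET - (1UL << 44), i.e. at a negative
offset below KASAN_SHADOW_END, as described above.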
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/AddressSanitizer.cpp#L1316 [1]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Transforms/Instrumentation/HWAddressSanitizer.cpp#L895 [2]
Link: https://github.com/llvm/llvm-project/blob/llvmorg-20-init/llvm/lib/Target/AArch64/AArch64AsmPrinter.cpp#L669 [3]
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
Changelog v6: (Maciej)
- Add Catalin's acked-by.
- Move x86 gdb snippet here from the last patch.
Changelog v5: (Maciej)
- (u64) -> (unsigned long) in report.c
Changelog v4: (Maciej)
- Revert x86 to signed mem_to_shadow mapping.
- Remove last two paragraphs since they were just poorer duplication of
the comments in kasan_non_canonical_hook().
Changelog v3: (Maciej)
- Fix scripts/gdb/linux/kasan.py so the new signed mem_to_shadow() is
reflected there.
- Fix Documentation/arch/arm64/kasan-offsets.sh to take new offsets into
account.
- Made changes to the kasan_non_canonical_hook() according to upstream
discussion. Settled on overflow on both ranges and separate checks for
x86 and arm.
Changelog v2: (Maciej)
- Correct address range that's checked in kasan_non_canonical_hook().
Adjust the comment inside.
- Remove part of comment from arch/arm64/include/asm/memory.h.
- Append patch message paragraph about the overflow in
kasan_non_canonical_hook().
Documentation/arch/arm64/kasan-offsets.sh | 8 +++--
arch/arm64/Kconfig | 10 +++----
arch/arm64/include/asm/memory.h | 14 ++++++++-
arch/arm64/mm/kasan_init.c | 7 +++--
include/linux/kasan.h | 10 +++++--
mm/kasan/report.c | 36 ++++++++++++++++++++---
scripts/gdb/linux/kasan.py | 5 +++-
scripts/gdb/linux/mm.py | 5 ++--
8 files changed, 76 insertions(+), 19 deletions(-)
mode change 100644 => 100755 Documentation/arch/arm64/kasan-offsets.sh
diff --git a/Documentation/arch/arm64/kasan-offsets.sh b/Documentation/arch/arm64/kasan-offsets.sh
old mode 100644
new mode 100755
index 2dc5f9e18039..ce777c7c7804
--- a/Documentation/arch/arm64/kasan-offsets.sh
+++ b/Documentation/arch/arm64/kasan-offsets.sh
@@ -5,8 +5,12 @@
print_kasan_offset () {
printf "%02d\t" $1
- printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
- - (1 << (64 - 32 - $2)) ))
+ if [[ $2 -ne 4 ]] then
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+ - (1 << (64 - 32 - $2)) ))
+ else
+ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) ))
+ fi
}
echo KASAN_SHADOW_SCALE_SHIFT = 3
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6663ffd23f25..ac50ba2d760b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -433,11 +433,11 @@ config KASAN_SHADOW_OFFSET
default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
- default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
- default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
- default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
- default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
- default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+ default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
+ default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
+ default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+ default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+ default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
default 0xffffffffffffffff
config UNWIND_TABLES
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f1505c4acb38..7bbebde59a75 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -89,7 +89,15 @@
*
* KASAN_SHADOW_END is defined first as the shadow address that corresponds to
* the upper bound of possible virtual kernel memory addresses UL(1) << 64
- * according to the mapping formula.
+ * according to the mapping formula. For Generic KASAN, the address in the
+ * mapping formula is treated as unsigned (part of the compiler's ABI), so the
+ * end of the shadow memory region is at a large positive offset from
+ * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
+ * formula is treated as signed. Since all kernel addresses are negative, they
+ * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
+ * itself the end of the shadow memory region. (User pointers are positive and
+ * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
+ * not allocated for them.)
*
* KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
* memory start must map to the lowest possible kernel virtual memory address
@@ -100,7 +108,11 @@
*/
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_GENERIC
#define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
+#else
+#define KASAN_SHADOW_END KASAN_SHADOW_OFFSET
+#endif
#define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
#define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual)
#define PAGE_END KASAN_SHADOW_START
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index abeb81bf6ebd..937f6eb8115b 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
/* The early shadow maps everything to a single page of zeroes */
asmlinkage void __init kasan_early_init(void)
{
- BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
- KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
+ KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+ else
+ BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b00849ea8ffd..952ade776e51 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -61,8 +61,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
#ifndef kasan_mem_to_shadow
static inline void *kasan_mem_to_shadow(const void *addr)
{
- return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
- + KASAN_SHADOW_OFFSET;
+ void *scaled;
+
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+ else
+ scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
+
+ return KASAN_SHADOW_OFFSET + scaled;
}
#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 62c01b4527eb..50d487a0687a 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,11 +642,39 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;
/*
- * All addresses that came as a result of the memory-to-shadow mapping
- * (even for bogus pointers) must be >= KASAN_SHADOW_OFFSET.
+ * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
+ * both x86 and arm64). Thus, the possible shadow addresses (even for
+ * bogus pointers) belong to a single contiguous region that is the
+ * result of kasan_mem_to_shadow() applied to the whole address space.
*/
- if (addr < KASAN_SHADOW_OFFSET)
- return;
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0UL)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
+ return;
+ }
+
+ /*
+ * For Software Tag-Based KASAN, kasan_mem_to_shadow() uses the
+ * arithmetic shift. Normally, this would make checking for a possible
+ * shadow address complicated, as the shadow address computation
+ * operation would overflow only for some memory addresses. However, due
+ * to the chosen KASAN_SHADOW_OFFSET values and the fact the
+ * kasan_mem_to_shadow() only operates on pointers with the tag reset,
+ * the overflow always happens.
+ *
+ * For arm64, the top byte of the pointer gets reset to 0xFF. Thus, the
+ * possible shadow addresses belong to a region that is the result of
+ * kasan_mem_to_shadow() applied to the memory range
+ * [0xFF000000000000, 0xFFFFFFFFFFFFFFFF]. Despite the overflow, the
+ * resulting possible shadow region is contiguous, as the overflow
+ * happens for both 0xFF000000000000 and 0xFFFFFFFFFFFFFFFF.
+ */
+ if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) && IS_ENABLED(CONFIG_ARM64)) {
+ if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0xFFUL << 56)) ||
+ addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
+ return;
+ }
orig_addr = (unsigned long)kasan_shadow_to_mem((void *)addr);
diff --git a/scripts/gdb/linux/kasan.py b/scripts/gdb/linux/kasan.py
index 56730b3fde0b..4b86202b155f 100644
--- a/scripts/gdb/linux/kasan.py
+++ b/scripts/gdb/linux/kasan.py
@@ -7,7 +7,8 @@
#
import gdb
-from linux import constants, mm
+from linux import constants, utils, mm
+from ctypes import c_int64 as s64
def help():
t = """Usage: lx-kasan_mem_to_shadow [Hex memory addr]
@@ -39,6 +40,8 @@ class KasanMemToShadow(gdb.Command):
else:
help()
def kasan_mem_to_shadow(self, addr):
+ if constants.CONFIG_KASAN_SW_TAGS and not utils.is_target_arch('x86'):
+ addr = s64(addr)
return (addr >> self.p_ops.KASAN_SHADOW_SCALE_SHIFT) + self.p_ops.KASAN_SHADOW_OFFSET
KasanMemToShadow()
diff --git a/scripts/gdb/linux/mm.py b/scripts/gdb/linux/mm.py
index 7571aebbe650..2e63f3dedd53 100644
--- a/scripts/gdb/linux/mm.py
+++ b/scripts/gdb/linux/mm.py
@@ -110,12 +110,13 @@ class aarch64_page_ops():
self.KERNEL_END = gdb.parse_and_eval("_end")
if constants.LX_CONFIG_KASAN_GENERIC or constants.LX_CONFIG_KASAN_SW_TAGS:
+ self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
if constants.LX_CONFIG_KASAN_GENERIC:
self.KASAN_SHADOW_SCALE_SHIFT = 3
+ self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
else:
self.KASAN_SHADOW_SCALE_SHIFT = 4
- self.KASAN_SHADOW_OFFSET = constants.LX_CONFIG_KASAN_SHADOW_OFFSET
- self.KASAN_SHADOW_END = (1 << (64 - self.KASAN_SHADOW_SCALE_SHIFT)) + self.KASAN_SHADOW_OFFSET
+ self.KASAN_SHADOW_END = self.KASAN_SHADOW_OFFSET
self.PAGE_END = self.KASAN_SHADOW_END - (1 << (self.vabits_actual - self.KASAN_SHADOW_SCALE_SHIFT))
else:
self.PAGE_END = self._PAGE_END(self.VA_BITS_MIN)
--
2.51.0
* [PATCH v6 04/18] kasan: sw_tags: Support tag widths less than 8 bits
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (2 preceding siblings ...)
2025-10-29 19:06 ` [PATCH v6 03/18] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
@ 2025-10-29 19:06 ` Maciej Wieczor-Retman
2025-11-10 17:37 ` Alexander Potapenko
2025-10-29 19:06 ` [PATCH v6 05/18] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
` (14 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:06 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Samuel Holland <samuel.holland@sifive.com>
Allow architectures to override KASAN_TAG_KERNEL in asm/kasan.h. This
is needed on RISC-V, which supports 57-bit virtual addresses and 7-bit
pointer tags. For consistency, move the arm64 MTE definition of
KASAN_TAG_MIN to asm/kasan.h, since it is also architecture-dependent;
RISC-V's equivalent extension is expected to support 7-bit hardware
memory tags.
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
arch/arm64/include/asm/kasan.h | 6 ++++--
arch/arm64/include/asm/uaccess.h | 1 +
include/linux/kasan-tags.h | 13 ++++++++-----
3 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index e1b57c13f8a4..4ab419df8b93 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -6,8 +6,10 @@
#include <linux/linkage.h>
#include <asm/memory.h>
-#include <asm/mte-kasan.h>
-#include <asm/pgtable-types.h>
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
+#endif
#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
#define arch_kasan_reset_tag(addr) __tag_reset(addr)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 1aa4ecb73429..8f700a7dd2cd 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -22,6 +22,7 @@
#include <asm/cpufeature.h>
#include <asm/mmu.h>
#include <asm/mte.h>
+#include <asm/mte-kasan.h>
#include <asm/ptrace.h>
#include <asm/memory.h>
#include <asm/extable.h>
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index 4f85f562512c..e07c896f95d3 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,13 +2,16 @@
#ifndef _LINUX_KASAN_TAGS_H
#define _LINUX_KASAN_TAGS_H
+#include <asm/kasan.h>
+
+#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
-#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */
-#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */
+#endif
+
+#define KASAN_TAG_INVALID (KASAN_TAG_KERNEL - 1) /* inaccessible memory tag */
+#define KASAN_TAG_MAX (KASAN_TAG_KERNEL - 2) /* maximum value for random tags */
-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
-#else
+#ifndef KASAN_TAG_MIN
#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
#endif
--
2.51.0
* [PATCH v6 05/18] kasan: Fix inline mode for x86 tag-based mode
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (3 preceding siblings ...)
2025-10-29 19:06 ` [PATCH v6 04/18] kasan: sw_tags: Support tag widths less than 8 bits Maciej Wieczor-Retman
@ 2025-10-29 19:06 ` Maciej Wieczor-Retman
2025-11-11 9:22 ` Alexander Potapenko
2025-10-29 19:07 ` [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions Maciej Wieczor-Retman
` (13 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:06 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
The LLVM compiler uses the hwasan-instrument-with-calls parameter to
select between inline and outline mode in tag-based KASAN. When it is
zero, the instrumentation is pasted into each relevant location, along
with the KASAN related constants, during compilation. When it is one,
all instrumentation is done through function calls instead.
The compiler's default hwasan-instrument-with-calls value for the x86
architecture is "1", unlike for other architectures. Because of this,
enabling inline mode in software tag-based KASAN doesn't work on x86:
scripts/Makefile.kasan doesn't zero out the parameter, so the compiler
always falls back to outline mode.
Explicitly zero out hwasan-instrument-with-calls when enabling inline
mode in tag-based KASAN.
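With this change and CONFIG_KASAN_INLINE=y, the clang invocation should
end up with flags roughly along these lines (a sketch; the mapping
offset depends on the configured KASAN_SHADOW_OFFSET):

	-fsanitize=kernel-hwaddress \
	-mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET) \
	-mllvm -hwasan-instrument-with-calls=0

instead of falling back to LLVM's x86 default of
hwasan-instrument-with-calls=1, which forced outline instrumentation.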
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
---
Changelog v6:
- Add Andrey's Reviewed-by tag.
Changelog v3:
- Add this patch to the series.
scripts/Makefile.kasan | 3 +++
1 file changed, 3 insertions(+)
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 0ba2aac3b8dc..e485814df3e9 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -76,8 +76,11 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress
RUSTFLAGS_KASAN := -Zsanitizer=kernel-hwaddress \
-Zsanitizer-recover=kernel-hwaddress
+# LLVM sets hwasan-instrument-with-calls to 1 on x86 by default. Set it to 0
+# when inline mode is enabled.
ifdef CONFIG_KASAN_INLINE
kasan_params += hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
+ kasan_params += hwasan-instrument-with-calls=0
else
kasan_params += hwasan-instrument-with-calls=1
endif
--
2.51.0
* [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (4 preceding siblings ...)
2025-10-29 19:06 ` [PATCH v6 05/18] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
@ 2025-10-29 19:07 ` Maciej Wieczor-Retman
2025-11-11 9:31 ` Alexander Potapenko
2025-10-29 19:07 ` [PATCH v6 07/18] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
` (12 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:07 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
KASAN's software tag-based mode needs multiple macros/functions to
handle tag and pointer interactions - to set, retrieve and reset tags
from the top bits of a pointer.
Mimic the functions currently used by arm64, but change the tag's
position to bits [60:57] of the pointer.
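A quick usage sketch of the helpers added below ('ptr' is an arbitrary
kernel pointer and the tag value 0xA is only for illustration):

	void *tagged = __tag_set(ptr, 0xA);        /* place tag 0xA in bits [60:57] */
	u8 tag = __tag_get(tagged);                /* reads back 0xA */
	void *plain = (void *)__tag_reset(tagged); /* sign-extend from bit 56, tag cleared */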
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
---
Changelog v6:
- Remove empty line after ifdef CONFIG_KASAN_SW_TAGS
- Add ifdef 64 bit to avoid problems in vdso32.
- Add Andrey's Acked-by tag.
Changelog v4:
- Rewrite __tag_set() without pointless casts and make it more readable.
Changelog v3:
- Reorder functions so that __tag_*() etc are above the
arch_kasan_*() ones.
- Remove CONFIG_KASAN condition from __tag_set()
arch/x86/include/asm/kasan.h | 42 ++++++++++++++++++++++++++++++++++--
1 file changed, 40 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index d7e33c7f096b..396071832d02 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -3,6 +3,8 @@
#define _ASM_X86_KASAN_H
#include <linux/const.h>
+#include <linux/kasan-tags.h>
+#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
#define KASAN_SHADOW_SCALE_SHIFT 3
@@ -24,8 +26,43 @@
KASAN_SHADOW_SCALE_SHIFT)))
#ifndef __ASSEMBLER__
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
+#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
+#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+#else
+#define __tag_shifted(tag) 0UL
+#define __tag_reset(addr) (addr)
+#define __tag_get(addr) 0
+#endif /* CONFIG_KASAN_SW_TAGS */
+
+#ifdef CONFIG_64BIT
+static inline void *__tag_set(const void *__addr, u8 tag)
+{
+ u64 addr = (u64)__addr;
+
+ addr &= ~__tag_shifted(KASAN_TAG_MASK);
+ addr |= __tag_shifted(tag);
+
+ return (void *)addr;
+}
+#else
+static inline void *__tag_set(void *__addr, u8 tag)
+{
+ return __addr;
+}
+#endif
+
+#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
+#define arch_kasan_reset_tag(addr) __tag_reset(addr)
+#define arch_kasan_get_tag(addr) __tag_get(addr)
#ifdef CONFIG_KASAN
+
void __init kasan_early_init(void);
void __init kasan_init(void);
void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
@@ -34,8 +71,9 @@ static inline void kasan_early_init(void) { }
static inline void kasan_init(void) { }
static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
int nid) { }
-#endif
-#endif
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASSEMBLER__ */
#endif
--
2.51.0
* [PATCH v6 07/18] kasan: arm64: x86: Make special tags arch specific
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (5 preceding siblings ...)
2025-10-29 19:07 ` [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions Maciej Wieczor-Retman
@ 2025-10-29 19:07 ` Maciej Wieczor-Retman
2025-11-11 9:34 ` Alexander Potapenko
2025-10-29 19:07 ` [PATCH v6 08/18] x86/mm: Reset tag for virtual to physical address conversions Maciej Wieczor-Retman
` (11 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:07 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
KASAN's tag-based mode defines multiple special tag values. They're
reserved for:
- Native kernel value. On arm64 it's 0xFF and it causes an early return
in the tag checking function.
- Invalid value. 0xFE marks an area as freed / unallocated. It's also
the value that is used to initialize regions of shadow memory.
- Max value. 0xFD is the highest value that can be randomly generated
for a new tag.
A metadata macro is also defined:
- Tag width equal to 8.
Tag-based mode on x86 is going to use 4 bit wide tags so all the above
values need to be changed accordingly.
Make native kernel tag arch specific for x86 and arm64.
Replace hardcoded kernel tag value and tag width with macros in KASAN's
non-arch specific code.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v6:
- Add hardware tags KASAN_TAG_WIDTH value to the arm64 arch file.
- Keep KASAN_TAG_MASK in the mmzone.h.
- Remove ifndef from KASAN_SHADOW_INIT.
Changelog v5:
- Move KASAN_TAG_MIN to the arm64 kasan-tags.h for the hardware KASAN
mode case.
Changelog v4:
- Move KASAN_TAG_MASK to kasan-tags.h.
Changelog v2:
- Remove risc-v from the patch.
MAINTAINERS | 2 +-
arch/arm64/include/asm/kasan-tags.h | 14 ++++++++++++++
arch/arm64/include/asm/kasan.h | 4 ----
arch/x86/include/asm/kasan-tags.h | 9 +++++++++
include/linux/kasan-tags.h | 10 +++++++++-
include/linux/kasan.h | 3 +--
include/linux/mm.h | 6 +++---
include/linux/page-flags-layout.h | 9 +--------
8 files changed, 38 insertions(+), 19 deletions(-)
create mode 100644 arch/arm64/include/asm/kasan-tags.h
create mode 100644 arch/x86/include/asm/kasan-tags.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 3da2c26a796b..53cbc7534911 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13421,7 +13421,7 @@ L: kasan-dev@googlegroups.com
S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
-F: arch/*/include/asm/*kasan.h
+F: arch/*/include/asm/*kasan*.h
F: arch/*/mm/kasan_init*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
diff --git a/arch/arm64/include/asm/kasan-tags.h b/arch/arm64/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..e6b5086e3f44
--- /dev/null
+++ b/arch/arm64/include/asm/kasan-tags.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH 8
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
+#define KASAN_TAG_WIDTH 4
+#endif
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 4ab419df8b93..d2841e0fb908 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -7,10 +7,6 @@
#include <linux/linkage.h>
#include <asm/memory.h>
-#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
-#endif
-
#define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
#define arch_kasan_reset_tag(addr) __tag_reset(addr)
#define arch_kasan_get_tag(addr) __tag_get(addr)
diff --git a/arch/x86/include/asm/kasan-tags.h b/arch/x86/include/asm/kasan-tags.h
new file mode 100644
index 000000000000..68ba385bc75c
--- /dev/null
+++ b/arch/x86/include/asm/kasan-tags.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_KASAN_TAGS_H
+#define __ASM_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xF /* native kernel pointers tag */
+
+#define KASAN_TAG_WIDTH 4
+
+#endif /* ASM_KASAN_TAGS_H */
diff --git a/include/linux/kasan-tags.h b/include/linux/kasan-tags.h
index e07c896f95d3..fe80fa8f3315 100644
--- a/include/linux/kasan-tags.h
+++ b/include/linux/kasan-tags.h
@@ -2,7 +2,15 @@
#ifndef _LINUX_KASAN_TAGS_H
#define _LINUX_KASAN_TAGS_H
-#include <asm/kasan.h>
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+#include <asm/kasan-tags.h>
+#endif
+
+#ifndef KASAN_TAG_WIDTH
+#define KASAN_TAG_WIDTH 0
+#endif
+
+#define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1)
#ifndef KASAN_TAG_KERNEL
#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 952ade776e51..3c0c60ed5d5c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -39,8 +39,7 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
/* Software KASAN implementations use shadow memory. */
#ifdef CONFIG_KASAN_SW_TAGS
-/* This matches KASAN_TAG_INVALID. */
-#define KASAN_SHADOW_INIT 0xFE
+#define KASAN_SHADOW_INIT KASAN_TAG_INVALID
#else
#define KASAN_SHADOW_INIT 0
#endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d16b33bacc32..09538c7487f3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1762,7 +1762,7 @@ static inline u8 page_kasan_tag(const struct page *page)
if (kasan_enabled()) {
tag = (page->flags.f >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
}
return tag;
@@ -1775,7 +1775,7 @@ static inline void page_kasan_tag_set(struct page *page, u8 tag)
if (!kasan_enabled())
return;
- tag ^= 0xff;
+ tag ^= KASAN_TAG_KERNEL;
old_flags = READ_ONCE(page->flags.f);
do {
flags = old_flags;
@@ -1794,7 +1794,7 @@ static inline void page_kasan_tag_reset(struct page *page)
static inline u8 page_kasan_tag(const struct page *page)
{
- return 0xff;
+ return KASAN_TAG_KERNEL;
}
static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 760006b1c480..b2cc4cb870e0 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -3,6 +3,7 @@
#define PAGE_FLAGS_LAYOUT_H
#include <linux/numa.h>
+#include <linux/kasan-tags.h>
#include <generated/bounds.h>
/*
@@ -72,14 +73,6 @@
#define NODE_NOT_IN_PAGE_FLAGS 1
#endif
-#if defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_TAG_WIDTH 8
-#elif defined(CONFIG_KASAN_HW_TAGS)
-#define KASAN_TAG_WIDTH 4
-#else
-#define KASAN_TAG_WIDTH 0
-#endif
-
#ifdef CONFIG_NUMA_BALANCING
#define LAST__PID_SHIFT 8
#define LAST__PID_MASK ((1 << LAST__PID_SHIFT)-1)
--
2.51.0
* [PATCH v6 08/18] x86/mm: Reset tag for virtual to physical address conversions
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (6 preceding siblings ...)
2025-10-29 19:07 ` [PATCH v6 07/18] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
@ 2025-10-29 19:07 ` Maciej Wieczor-Retman
2025-11-11 9:42 ` Alexander Potapenko
2025-10-29 19:07 ` [PATCH v6 09/18] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
` (10 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:07 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Any place where pointer arithmetic is used to convert a virtual address
into a physical one can raise errors if the virtual address is tagged.
Reset the pointer's tag by sign extending the tag bits in macros that do
pointer arithmetic in address conversions. There will be no change in
compiled code with KASAN disabled since the compiler will optimize the
__tag_reset() out.
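As a worked example (addresses illustrative): a direct-map pointer such
as 0xffff888000000000 carrying tag 0x5 in bits [60:57] reads as
0xebff888000000000; feeding that into __pa()-style arithmetic would
produce a bogus physical address. __tag_reset() sign-extends from bit
56, restoring 0xffff888000000000 before the conversion arithmetic runs.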
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v5:
- Move __tag_reset() calls into __phys_addr_nodebug() and
__virt_addr_valid() instead of calling it on the arguments of higher
level functions.
Changelog v4:
- Simplify page_to_virt() by removing pointless casts.
- Remove change in __is_canonical_address() because it's taken care of
in a later patch due to a LAM compatible definition of canonical.
arch/x86/include/asm/page.h | 8 ++++++++
arch/x86/include/asm/page_64.h | 1 +
arch/x86/mm/physaddr.c | 2 ++
3 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 9265f2fca99a..bcf5cad3da36 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -7,6 +7,7 @@
#ifdef __KERNEL__
#include <asm/page_types.h>
+#include <asm/kasan.h>
#ifdef CONFIG_X86_64
#include <asm/page_64.h>
@@ -65,6 +66,13 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
* virt_to_page(kaddr) returns a valid pointer if and only if
* virt_addr_valid(kaddr) returns true.
*/
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define page_to_virt(x) ({ \
+ void *__addr = __va(page_to_pfn((struct page *)x) << PAGE_SHIFT); \
+ __tag_set(__addr, page_kasan_tag(x)); \
+})
+#endif
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
extern bool __virt_addr_valid(unsigned long kaddr);
#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long) (kaddr))
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index 015d23f3e01f..b18fef43dd34 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -21,6 +21,7 @@ extern unsigned long direct_map_physmem_end;
static __always_inline unsigned long __phys_addr_nodebug(unsigned long x)
{
+ x = __tag_reset(x);
unsigned long y = x - __START_KERNEL_map;
/* use the carry flag to determine if x was < __START_KERNEL_map */
diff --git a/arch/x86/mm/physaddr.c b/arch/x86/mm/physaddr.c
index fc3f3d3e2ef2..d6aa3589c798 100644
--- a/arch/x86/mm/physaddr.c
+++ b/arch/x86/mm/physaddr.c
@@ -14,6 +14,7 @@
#ifdef CONFIG_DEBUG_VIRTUAL
unsigned long __phys_addr(unsigned long x)
{
+ x = __tag_reset(x);
unsigned long y = x - __START_KERNEL_map;
/* use the carry flag to determine if x was < __START_KERNEL_map */
@@ -46,6 +47,7 @@ EXPORT_SYMBOL(__phys_addr_symbol);
bool __virt_addr_valid(unsigned long x)
{
+ x = __tag_reset(x);
unsigned long y = x - __START_KERNEL_map;
/* use the carry flag to determine if x was < __START_KERNEL_map */
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 09/18] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (7 preceding siblings ...)
2025-10-29 19:07 ` [PATCH v6 08/18] x86/mm: Reset tag for virtual to physical address conversions Maciej Wieczor-Retman
@ 2025-10-29 19:07 ` Maciej Wieczor-Retman
2025-11-11 9:13 ` Alexander Potapenko
2025-10-29 20:07 ` [PATCH v6 10/18] x86/mm: Physical address comparisons in fill_p*d/pte Maciej Wieczor-Retman
` (9 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 19:07 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
vm_reset_perms() calculates the range's start and end addresses using
the min() and max() functions. To do that it compares pointers, but with
the KASAN software tag-based mode enabled only some of them are tagged -
the addr variable is, while the start and end variables aren't. This can
cause the wrong address to be chosen and result in various errors in
different places.
Reset the tag of the address passed as an argument to min() and max().
execmem_cache_add() adds tagged pointers to a maple tree, and those
pointers are then compared incorrectly when walking the tree. That
results in different pointers being returned later and in page
permission violation errors panicking the kernel.
Reset the tag of the address range inserted into the maple tree inside
execmem_vmalloc(), which then gets propagated to execmem_cache_add().
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v6:
- Move back the tag reset from execmem_cache_add() to execmem_vmalloc()
(Mike Rapoport)
- Rewrite the changelogs to match the code changes from v6 and v5.
Changelog v5:
- Remove the within_range() change.
- arch_kasan_reset_tag -> kasan_reset_tag.
Changelog v4:
- Add patch to the series.
mm/execmem.c | 2 +-
mm/vmalloc.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/execmem.c b/mm/execmem.c
index 810a4ba9c924..fd11409a6217 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -59,7 +59,7 @@ static void *execmem_vmalloc(struct execmem_range *range, size_t size,
return NULL;
}
- return p;
+ return kasan_reset_tag(p);
}
struct vm_struct *execmem_vmap(size_t size)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 934c8bfbcebf..392e3863d7d0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3328,7 +3328,7 @@ static void vm_reset_perms(struct vm_struct *area)
* the vm_unmap_aliases() flush includes the direct map.
*/
for (i = 0; i < area->nr_pages; i += 1U << page_order) {
- unsigned long addr = (unsigned long)page_address(area->pages[i]);
+ unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));
if (addr) {
unsigned long page_size;
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 10/18] x86/mm: Physical address comparisons in fill_p*d/pte
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (8 preceding siblings ...)
2025-10-29 19:07 ` [PATCH v6 09/18] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
@ 2025-10-29 20:07 ` Maciej Wieczor-Retman
2025-11-10 16:24 ` Alexander Potapenko
2025-10-29 20:07 ` [PATCH v6 11/18] x86/kasan: KASAN raw shadow memory PTE init Maciej Wieczor-Retman
` (8 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:07 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Calculating a page table offset returns a pointer without a tag.
Comparing that untagged pointer to a tagged page pointer reports an
error because the two are never equal.
Change the pointer comparisons to physical address comparisons to avoid
the issues that pointer arithmetic on tagged pointers would create. Open
code pte_offset_kernel(), pmd_offset(), pud_offset() and p4d_offset():
because one parameter is always zero and the rest of each function's
body is wrapped in __va(), removing that layer lowers the complexity of
the final assembly.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v2:
- Open code *_offset() to avoid its internal __va().
arch/x86/mm/init_64.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0e4270e20fad..2d79fc0cf391 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -269,7 +269,10 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
if (pgd_none(*pgd)) {
p4d_t *p4d = (p4d_t *)spp_getpage();
pgd_populate(&init_mm, pgd, p4d);
- if (p4d != p4d_offset(pgd, 0))
+
+ if (__pa(p4d) != (pgtable_l5_enabled() ?
+ __pa(pgd) :
+ (unsigned long)pgd_val(*pgd) & PTE_PFN_MASK))
printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
p4d, p4d_offset(pgd, 0));
}
@@ -281,7 +284,7 @@ static pud_t *fill_pud(p4d_t *p4d, unsigned long vaddr)
if (p4d_none(*p4d)) {
pud_t *pud = (pud_t *)spp_getpage();
p4d_populate(&init_mm, p4d, pud);
- if (pud != pud_offset(p4d, 0))
+ if (__pa(pud) != (p4d_val(*p4d) & p4d_pfn_mask(*p4d)))
printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
pud, pud_offset(p4d, 0));
}
@@ -293,7 +296,7 @@ static pmd_t *fill_pmd(pud_t *pud, unsigned long vaddr)
if (pud_none(*pud)) {
pmd_t *pmd = (pmd_t *) spp_getpage();
pud_populate(&init_mm, pud, pmd);
- if (pmd != pmd_offset(pud, 0))
+ if (__pa(pmd) != (pud_val(*pud) & pud_pfn_mask(*pud)))
printk(KERN_ERR "PAGETABLE BUG #02! %p <-> %p\n",
pmd, pmd_offset(pud, 0));
}
@@ -305,7 +308,7 @@ static pte_t *fill_pte(pmd_t *pmd, unsigned long vaddr)
if (pmd_none(*pmd)) {
pte_t *pte = (pte_t *) spp_getpage();
pmd_populate_kernel(&init_mm, pmd, pte);
- if (pte != pte_offset_kernel(pmd, 0))
+ if (__pa(pte) != (pmd_val(*pmd) & pmd_pfn_mask(*pmd)))
printk(KERN_ERR "PAGETABLE BUG #03!\n");
}
return pte_offset_kernel(pmd, vaddr);
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 11/18] x86/kasan: KASAN raw shadow memory PTE init
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (9 preceding siblings ...)
2025-10-29 20:07 ` [PATCH v6 10/18] x86/mm: Physical address comparisons in fill_p*d/pte Maciej Wieczor-Retman
@ 2025-10-29 20:07 ` Maciej Wieczor-Retman
2025-11-11 9:11 ` Alexander Potapenko
2025-10-29 20:08 ` [PATCH v6 12/18] x86/mm: LAM compatible non-canonical definition Maciej Wieczor-Retman
` (7 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:07 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
In KASAN's generic mode the default value in shadow memory is zero;
during initialization the shadow memory pages are allocated and zeroed.
In KASAN's tag-based mode the default tag on the arm64 architecture is
0xFE, which marks memory that should not be accessed. On x86, where tags
are 4 bits wide instead of 8, that tag is 0xE, so during initialization
all bytes in the shadow memory pages should be filled with it.
Use memblock_alloc_try_nid_raw() instead of memblock_alloc_try_nid() to
avoid zeroing the memory, so it can then be filled with the KASAN
invalid tag.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v2:
- Remove dense mode references, use memset() instead of kasan_poison().
arch/x86/mm/kasan_init_64.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 998b6010d6d3..e69b7210aaae 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -34,6 +34,18 @@ static __init void *early_alloc(size_t size, int nid, bool should_panic)
return ptr;
}
+static __init void *early_raw_alloc(size_t size, int nid, bool should_panic)
+{
+ void *ptr = memblock_alloc_try_nid_raw(size, size,
+ __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+
+ if (!ptr && should_panic)
+ panic("%pS: Failed to allocate page, nid=%d from=%lx\n",
+ (void *)_RET_IP_, nid, __pa(MAX_DMA_ADDRESS));
+
+ return ptr;
+}
+
static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
unsigned long end, int nid)
{
@@ -63,8 +75,9 @@ static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
if (!pte_none(*pte))
continue;
- p = early_alloc(PAGE_SIZE, nid, true);
- entry = pfn_pte(PFN_DOWN(__pa(p)), PAGE_KERNEL);
+ p = early_raw_alloc(PAGE_SIZE, nid, true);
+ memset(p, KASAN_SHADOW_INIT, PAGE_SIZE);
+ entry = pfn_pte(PFN_DOWN(__pa_nodebug(p)), PAGE_KERNEL);
set_pte_at(&init_mm, addr, pte, entry);
} while (pte++, addr += PAGE_SIZE, addr != end);
}
@@ -436,7 +449,7 @@ void __init kasan_init(void)
* it may contain some garbage. Now we can clear and write protect it,
* since after the TLB flush no one should write to it.
*/
- memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+ memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
for (i = 0; i < PTRS_PER_PTE; i++) {
pte_t pte;
pgprot_t prot;
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 12/18] x86/mm: LAM compatible non-canonical definition
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (10 preceding siblings ...)
2025-10-29 20:07 ` [PATCH v6 11/18] x86/kasan: KASAN raw shadow memory PTE init Maciej Wieczor-Retman
@ 2025-10-29 20:08 ` Maciej Wieczor-Retman
2025-11-11 9:07 ` Alexander Potapenko
2025-10-29 20:08 ` [PATCH v6 13/18] x86/mm: LAM initialization Maciej Wieczor-Retman
` (6 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:08 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
For an address to be canonical its top bits have to be equal to each
other. How many bits depends on the paging level, and whether they must
be ones or zeroes depends on whether the address points to kernel or
user space.
With Linear Address Masking (LAM) enabled, the definition of linear
address canonicality is relaxed. Not all of the previously required bits
need to be equal - only the first and the last bit of the previously
equal bitmask. So, for example, a 5-level paging kernel address only
needs to have bits [63] and [56] set.
Change the canonicality checking function to use bit masks instead of
bit shifts.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v6:
- Use bitmasks to check both kernel and userspace addresses (Dave Hansen
and Samuel Holland).
Changelog v4:
- Add patch to the series.
arch/x86/include/asm/page.h | 25 ++++++++++++++++++++++++-
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index bcf5cad3da36..df2c93b90a6b 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -82,14 +82,37 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
return __va(pfn << PAGE_SHIFT);
}
+/*
+ * CONFIG_KASAN_SW_TAGS requires LAM which changes the canonicality checks.
+ */
+#ifdef CONFIG_KASAN_SW_TAGS
+static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
+{
+ return (vaddr | BIT_ULL(63) | BIT_ULL(vaddr_bits - 1));
+}
+#else
static __always_inline u64 __canonical_address(u64 vaddr, u8 vaddr_bits)
{
return ((s64)vaddr << (64 - vaddr_bits)) >> (64 - vaddr_bits);
}
+#endif
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define CANONICAL_MASK(vaddr_bits) (BIT_ULL(63) | BIT_ULL(vaddr_bits - 1))
+#else
+#define CANONICAL_MASK(vaddr_bits) GENMASK_ULL(63, vaddr_bits)
+#endif
static __always_inline u64 __is_canonical_address(u64 vaddr, u8 vaddr_bits)
{
- return __canonical_address(vaddr, vaddr_bits) == vaddr;
+ unsigned long cmask = CANONICAL_MASK(vaddr_bits);
+
+ /*
+ * Kernel canonical address & cmask will evaluate to cmask while
+ * userspace canonical address & cmask will evaluate to zero.
+ */
+ u64 result = (vaddr & cmask) == cmask || !(vaddr & cmask);
+ return result;
}
#endif /* __ASSEMBLER__ */
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 13/18] x86/mm: LAM initialization
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (11 preceding siblings ...)
2025-10-29 20:08 ` [PATCH v6 12/18] x86/mm: LAM compatible non-canonical definition Maciej Wieczor-Retman
@ 2025-10-29 20:08 ` Maciej Wieczor-Retman
2025-11-11 9:04 ` Alexander Potapenko
2025-10-29 20:09 ` [PATCH v6 14/18] x86: Minimal SLAB alignment Maciej Wieczor-Retman
` (5 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:08 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To make use of KASAN's tag-based mode on x86, Linear Address Masking
(LAM) needs to be enabled. To do that, bit 28 (LAM_SUP) of CR4 has to be
set. Set the bit in early memory initialization.
When launching secondary CPUs the LAM bit gets lost. To avoid this, add
it to a mask in head_64.S. The bitmask permits some bits of CR4 to pass
from the primary CPU to the secondary CPUs without being cleared.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v6:
- boot_cpu_has() -> cpu_feature_enabled()
arch/x86/kernel/head_64.S | 3 +++
arch/x86/mm/init.c | 3 +++
2 files changed, 6 insertions(+)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 21816b48537c..c5a0bfbe280d 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -209,6 +209,9 @@ SYM_INNER_LABEL(common_startup_64, SYM_L_LOCAL)
* there will be no global TLB entries after the execution."
*/
movl $(X86_CR4_PAE | X86_CR4_LA57), %edx
+#ifdef CONFIG_ADDRESS_MASKING
+ orl $X86_CR4_LAM_SUP, %edx
+#endif
#ifdef CONFIG_X86_MCE
/*
* Preserve CR4.MCE if the kernel will enable #MC support.
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 8bf6ad4b9400..a8442b255481 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -764,6 +764,9 @@ void __init init_mem_mapping(void)
probe_page_size_mask();
setup_pcid();
+ if (cpu_feature_enabled(X86_FEATURE_LAM) && IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ cr4_set_bits_and_update_boot(X86_CR4_LAM_SUP);
+
#ifdef CONFIG_X86_64
end = max_pfn << PAGE_SHIFT;
#else
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 14/18] x86: Minimal SLAB alignment
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (12 preceding siblings ...)
2025-10-29 20:08 ` [PATCH v6 13/18] x86/mm: LAM initialization Maciej Wieczor-Retman
@ 2025-10-29 20:09 ` Maciej Wieczor-Retman
2025-11-10 17:48 ` Alexander Potapenko
2025-10-29 20:09 ` [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports Maciej Wieczor-Retman
` (4 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:09 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
The 8-byte minimal SLAB alignment interferes with KASAN's 16-byte
granularity. It causes a lot of out-of-bounds errors for unaligned
8-byte allocations.
Compared to a kernel with KASAN disabled, the memory footprint increases
because all kmalloc-8 allocations are now realized as kmalloc-16, which
has twice the object size. More importantly, compared to a kernel with
generic KASAN enabled there is no difference: because of the redzones in
generic KASAN, the kmalloc-8 and kmalloc-16 object sizes are the same
(48 bytes). So changing the minimal SLAB alignment for the tag-based
mode doesn't have any negative impact compared to the other software
KASAN mode.
Adjust the x86 minimal SLAB alignment to match the KASAN granule size.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
---
Changelog v6:
- Add Andrey's Reviewed-by tag.
Changelog v4:
- Extend the patch message with some more context and impact
information.
Changelog v3:
- Fix typo in patch message 4 -> 16.
- Change define location to arch/x86/include/asm/cache.c.
arch/x86/include/asm/cache.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 69404eae9983..3232583b5487 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -21,4 +21,8 @@
#endif
#endif
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
#endif /* _ASM_X86_CACHE_H */
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (13 preceding siblings ...)
2025-10-29 20:09 ` [PATCH v6 14/18] x86: Minimal SLAB alignment Maciej Wieczor-Retman
@ 2025-10-29 20:09 ` Maciej Wieczor-Retman
2025-11-11 10:14 ` Alexander Potapenko
2025-11-11 10:27 ` Peter Zijlstra
2025-10-29 20:10 ` [PATCH v6 16/18] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
` (3 subsequent siblings)
18 siblings, 2 replies; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:09 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Inline KASAN on x86 should report tag mismatches by passing the
metadata through the UD1 instruction and the faulting address through
RDI, a scheme that UBSan already uses and that is easy to extend.
LLVM currently passes KASAN software tag mode metadata using the INT3
instruction. That should be changed because it doesn't align with how
the kernel already handles UD1 for similar use cases. Since inline
software tag-based KASAN doesn't work on x86 yet due to missing compiler
support, the compiler can be fixed and INT3 changed to UD1 at the same
time.
Add a KASAN component to the #UD decoding and handling functions.
Make the part of that hook which decides whether to die or recover from
a tag mismatch arch-independent, to avoid duplicating a long comment on
both the x86 and arm64 architectures.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v6:
- Change the whole patch from using INT3 to UD1.
Changelog v5:
- Add die to argument list of kasan_inline_recover() in
arch/arm64/kernel/traps.c.
Changelog v4:
- Make kasan_handler() a stub in a header file. Remove #ifdef from
traps.c.
- Consolidate the "recover" comment into one place.
- Make small changes to the patch message.
MAINTAINERS | 2 +-
arch/x86/include/asm/bug.h | 1 +
arch/x86/include/asm/kasan.h | 20 ++++++++++++++++++++
arch/x86/kernel/traps.c | 8 ++++++++
arch/x86/mm/Makefile | 2 ++
arch/x86/mm/kasan_inline.c | 21 +++++++++++++++++++++
include/linux/kasan.h | 23 +++++++++++++++++++++++
7 files changed, 76 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/mm/kasan_inline.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 53cbc7534911..a6e3cc2f3cc5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13422,7 +13422,7 @@ S: Maintained
B: https://bugzilla.kernel.org/buglist.cgi?component=Sanitizers&product=Memory%20Management
F: Documentation/dev-tools/kasan.rst
F: arch/*/include/asm/*kasan*.h
-F: arch/*/mm/kasan_init*
+F: arch/*/mm/kasan_*
F: include/linux/kasan*.h
F: lib/Kconfig.kasan
F: mm/kasan/
diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
index 880ca15073ed..428c8865b995 100644
--- a/arch/x86/include/asm/bug.h
+++ b/arch/x86/include/asm/bug.h
@@ -31,6 +31,7 @@
#define BUG_UD2 0xfffe
#define BUG_UD1 0xfffd
#define BUG_UD1_UBSAN 0xfffc
+#define BUG_UD1_KASAN 0xfffb
#define BUG_UDB 0xffd6
#define BUG_LOCK 0xfff0
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 396071832d02..375651d9b114 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -6,6 +6,24 @@
#include <linux/kasan-tags.h>
#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+/*
+ * LLVM ABI for reporting tag mismatches in inline KASAN mode.
+ * On x86 the UD1 instruction is used to carry metadata in the ECX register
+ * to the KASAN report. ECX is used to differentiate KASAN from UBSan when
+ * decoding the UD1 instruction.
+ *
+ * SIZE refers to how many bytes the faulty memory access
+ * requested.
+ * WRITE bit, when set, indicates the access was a write, otherwise
+ * it was a read.
+ * RECOVER bit, when set, should allow the kernel to carry on after
+ * a tag mismatch. Otherwise die() is called.
+ */
+#define KASAN_ECX_RECOVER 0x20
+#define KASAN_ECX_WRITE 0x10
+#define KASAN_ECX_SIZE_MASK 0x0f
+#define KASAN_ECX_SIZE(ecx) (1 << ((ecx) & KASAN_ECX_SIZE_MASK))
#define KASAN_SHADOW_SCALE_SHIFT 3
/*
@@ -34,10 +52,12 @@
#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+void kasan_inline_handler(struct pt_regs *regs);
#else
#define __tag_shifted(tag) 0UL
#define __tag_reset(addr) (addr)
#define __tag_get(addr) 0
+static inline void kasan_inline_handler(struct pt_regs *regs) { }
#endif /* CONFIG_KASAN_SW_TAGS */
#ifdef CONFIG_64BIT
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 6b22611e69cc..40fefd306c76 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -179,6 +179,9 @@ __always_inline int decode_bug(unsigned long addr, s32 *imm, int *len)
if (X86_MODRM_REG(v) == 0) /* EAX */
return BUG_UD1_UBSAN;
+ if (X86_MODRM_REG(v) == 1) /* ECX */
+ return BUG_UD1_KASAN;
+
return BUG_UD1;
}
@@ -357,6 +360,11 @@ static noinstr bool handle_bug(struct pt_regs *regs)
}
break;
+ case BUG_UD1_KASAN:
+ kasan_inline_handler(regs);
+ handled = true;
+ break;
+
default:
break;
}
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5b9908f13dcf..1dc18090cbe7 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -36,7 +36,9 @@ obj-$(CONFIG_PTDUMP) += dump_pagetables.o
obj-$(CONFIG_PTDUMP_DEBUGFS) += debug_pagetables.o
KASAN_SANITIZE_kasan_init_$(BITS).o := n
+KASAN_SANITIZE_kasan_inline.o := n
obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o
+obj-$(CONFIG_KASAN_SW_TAGS) += kasan_inline.o
KMSAN_SANITIZE_kmsan_shadow.o := n
obj-$(CONFIG_KMSAN) += kmsan_shadow.o
diff --git a/arch/x86/mm/kasan_inline.c b/arch/x86/mm/kasan_inline.c
new file mode 100644
index 000000000000..65641557c294
--- /dev/null
+++ b/arch/x86/mm/kasan_inline.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kasan.h>
+#include <linux/kdebug.h>
+
+void kasan_inline_handler(struct pt_regs *regs)
+{
+ int metadata = regs->cx;
+ u64 addr = regs->di;
+ u64 pc = regs->ip;
+ bool recover = metadata & KASAN_ECX_RECOVER;
+ bool write = metadata & KASAN_ECX_WRITE;
+ size_t size = KASAN_ECX_SIZE(metadata);
+
+ if (user_mode(regs))
+ return;
+
+ if (!kasan_report((void *)addr, size, write, pc))
+ return;
+
+ kasan_die_unless_recover(recover, "Oops - KASAN", regs, metadata, die);
+}
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 3c0c60ed5d5c..9bd1b1ebd674 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -679,4 +679,27 @@ void kasan_non_canonical_hook(unsigned long addr);
static inline void kasan_non_canonical_hook(unsigned long addr) { }
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+#ifdef CONFIG_KASAN_SW_TAGS
+/*
+ * The instrumentation allows to control whether we can proceed after
+ * a crash was detected. This is done by passing the -recover flag to
+ * the compiler. Disabling recovery allows to generate more compact
+ * code.
+ *
+ * Unfortunately disabling recovery doesn't work for the kernel right
+ * now. KASAN reporting is disabled in some contexts (for example when
+ * the allocator accesses slab object metadata; this is controlled by
+ * current->kasan_depth). All these accesses are detected by the tool,
+ * even though the reports for them are not printed.
+ *
+ * This is something that might be fixed at some point in the future.
+ */
+static inline void kasan_die_unless_recover(bool recover, char *msg, struct pt_regs *regs,
+ unsigned long err, void die_fn(const char *str, struct pt_regs *regs, long err))
+{
+ if (!recover)
+ die_fn(msg, regs, err);
+}
+#endif
+
#endif /* LINUX_KASAN_H */
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 16/18] arm64: Unify software tag-based KASAN inline recovery path
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (14 preceding siblings ...)
2025-10-29 20:09 ` [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports Maciej Wieczor-Retman
@ 2025-10-29 20:10 ` Maciej Wieczor-Retman
2025-11-11 9:02 ` Alexander Potapenko
2025-10-29 20:11 ` [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow Maciej Wieczor-Retman
` (2 subsequent siblings)
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:10 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To avoid every architecture that uses the software tag-based mode
carrying its own copy of a long comment explaining the intricacies of
the inline KASAN recovery path, a unified kasan_die_unless_recover()
helper was added.
Use kasan_die_unless_recover() in the KASAN brk handler to drop the long
comment, which is now kept in the arch-independent KASAN code.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
Changelog v6:
- Add Catalin's Acked-by tag.
Changelog v5:
- Split arm64 portion of patch 13/18 into this one. (Peter Zijlstra)
arch/arm64/kernel/traps.c | 17 +----------------
1 file changed, 1 insertion(+), 16 deletions(-)
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 681939ef5d16..b1efc11c3b5a 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -1071,22 +1071,7 @@ int kasan_brk_handler(struct pt_regs *regs, unsigned long esr)
kasan_report(addr, size, write, pc);
- /*
- * The instrumentation allows to control whether we can proceed after
- * a crash was detected. This is done by passing the -recover flag to
- * the compiler. Disabling recovery allows to generate more compact
- * code.
- *
- * Unfortunately disabling recovery doesn't work for the kernel right
- * now. KASAN reporting is disabled in some contexts (for example when
- * the allocator accesses slab object metadata; this is controlled by
- * current->kasan_depth). All these accesses are detected by the tool,
- * even though the reports for them are not printed.
- *
- * This is something that might be fixed at some point in the future.
- */
- if (!recover)
- die("Oops - KASAN", regs, esr);
+ kasan_die_unless_recover(recover, "Oops - KASAN", regs, esr, die);
/* If thread survives, skip over the brk instruction and continue: */
arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (15 preceding siblings ...)
2025-10-29 20:10 ` [PATCH v6 16/18] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
@ 2025-10-29 20:11 ` Maciej Wieczor-Retman
2025-11-10 14:49 ` Marco Elver
2025-10-29 20:11 ` [PATCH v6 18/18] x86/kasan: Make software tag-based kasan available Maciej Wieczor-Retman
2025-10-29 22:08 ` [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:11 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
While tag-based KASAN generally uses an arithmetic bit shift to convert
a memory address into a shadow memory address, that doesn't work for all
cases on x86. Testing different shadow memory offsets showed that either
4- or 5-level paging didn't work correctly or the inline mode ran into
issues. Thus the best working scheme is the logical bit shift with the
non-canonical shadow offset that x86 already uses for generic KASAN, of
course adjusted for the increased granularity from 8 to 16 bytes.
Add an arch-specific implementation of kasan_mem_to_shadow() that uses
the logical bit shift.
The non-canonical hook tries to determine whether an address came from
kasan_mem_to_shadow(). First it checks whether the address fits into the
legal set of values that the mem-to-shadow function can output.
Tie both the generic and tag-based x86 KASAN modes to the address range
check associated with generic KASAN.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v4:
- Add this patch to the series.
arch/x86/include/asm/kasan.h | 7 +++++++
mm/kasan/report.c | 5 +++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 375651d9b114..2372397bc3e5 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -49,6 +49,13 @@
#include <linux/bits.h>
#ifdef CONFIG_KASAN_SW_TAGS
+static inline void *__kasan_mem_to_shadow(const void *addr)
+{
+ return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + KASAN_SHADOW_OFFSET;
+}
+
+#define kasan_mem_to_shadow(addr) __kasan_mem_to_shadow(addr)
#define __tag_shifted(tag) FIELD_PREP(GENMASK_ULL(60, 57), tag)
#define __tag_reset(addr) (sign_extend64((u64)(addr), 56))
#define __tag_get(addr) ((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 50d487a0687a..fd8fe004b0c0 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -642,13 +642,14 @@ void kasan_non_canonical_hook(unsigned long addr)
const char *bug_type;
/*
- * For Generic KASAN, kasan_mem_to_shadow() uses the logical right shift
+ * For Generic KASAN and Software Tag-Based mode on the x86
+ * architecture, kasan_mem_to_shadow() uses the logical right shift
* and never overflows with the chosen KASAN_SHADOW_OFFSET values (on
* both x86 and arm64). Thus, the possible shadow addresses (even for
* bogus pointers) belong to a single contiguous region that is the
* result of kasan_mem_to_shadow() applied to the whole address space.
*/
- if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_X86_64)) {
if (addr < (unsigned long)kasan_mem_to_shadow((void *)(0UL)) ||
addr > (unsigned long)kasan_mem_to_shadow((void *)(~0UL)))
return;
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH v6 18/18] x86/kasan: Make software tag-based kasan available
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (16 preceding siblings ...)
2025-10-29 20:11 ` [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow Maciej Wieczor-Retman
@ 2025-10-29 20:11 ` Maciej Wieczor-Retman
2025-11-11 9:00 ` Alexander Potapenko
2025-10-29 22:08 ` [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
18 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-10-29 20:11 UTC (permalink / raw)
To: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto
Cc: kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc, m.wieczorretman
From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Make CONFIG_KASAN_SW_TAGS available on x86 machines that have
ADDRESS_MASKING (LAM) enabled, as LAM works similarly to the Top-Byte
Ignore (TBI) feature that enables the software tag-based mode on arm64.
Set the scale macro based on the KASAN mode: in software tag-based mode
16 bytes of memory map to one shadow byte, while in generic mode 8 bytes
do.
Disable CONFIG_KASAN_INLINE and CONFIG_KASAN_STACK when
CONFIG_KASAN_SW_TAGS is enabled on x86 until the appropriate compiler
support is available.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v6:
- Don't enable KASAN if LAM is not supported.
- Move kasan_init_tags() to kasan_init_64.c to not clutter the setup.c
file.
- Move the #ifdef for the KASAN scale shift here.
- Move the gdb code to patch "Use arithmetic shift for shadow
computation".
- Return "depends on KASAN" line to Kconfig.
- Add the defer kasan config option so KASAN can be disabled on hardware
that doesn't have LAM.
Changelog v4:
- Add x86 specific kasan_mem_to_shadow().
- Revert x86 to the older unsigned KASAN_SHADOW_OFFSET. Do the same to
KASAN_SHADOW_START/END.
- Modify scripts/gdb/linux/kasan.py to keep x86 using unsigned offset.
- Disable inline and stack support when software tags are enabled on
x86.
Changelog v3:
- Remove runtime_const from previous patch and merge the rest here.
- Move scale shift definition back to header file.
- Add new kasan offset for software tag based mode.
- Fix patch message typo 32 -> 16, and 16 -> 8.
- Update lib/Kconfig.kasan with x86 now having software tag-based
support.
Changelog v2:
- Remove KASAN dense code.
Documentation/arch/x86/x86_64/mm.rst | 6 ++++--
arch/x86/Kconfig | 4 ++++
arch/x86/boot/compressed/misc.h | 1 +
arch/x86/include/asm/kasan.h | 4 ++++
arch/x86/mm/kasan_init_64.c | 5 +++++
lib/Kconfig.kasan | 3 ++-
6 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/Documentation/arch/x86/x86_64/mm.rst b/Documentation/arch/x86/x86_64/mm.rst
index a6cf05d51bd8..ccbdbb4cda36 100644
--- a/Documentation/arch/x86/x86_64/mm.rst
+++ b/Documentation/arch/x86/x86_64/mm.rst
@@ -60,7 +60,8 @@ Complete virtual memory map with 4-level page tables
ffffe90000000000 | -23 TB | ffffe9ffffffffff | 1 TB | ... unused hole
ffffea0000000000 | -22 TB | ffffeaffffffffff | 1 TB | virtual memory map (vmemmap_base)
ffffeb0000000000 | -21 TB | ffffebffffffffff | 1 TB | ... unused hole
- ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
+ ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory (generic mode)
+ fffff40000000000 | -8 TB | fffffbffffffffff | 8 TB | KASAN shadow memory (software tag-based mode)
__________________|____________|__________________|_________|____________________________________________________________
|
| Identical layout to the 56-bit one from here on:
@@ -130,7 +131,8 @@ Complete virtual memory map with 5-level page tables
ffd2000000000000 | -11.5 PB | ffd3ffffffffffff | 0.5 PB | ... unused hole
ffd4000000000000 | -11 PB | ffd5ffffffffffff | 0.5 PB | virtual memory map (vmemmap_base)
ffd6000000000000 | -10.5 PB | ffdeffffffffffff | 2.25 PB | ... unused hole
- ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory
+ ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory (generic mode)
+ ffeffc0000000000 | -6 PB | fffffbffffffffff | 4 PB | KASAN shadow memory (software tag-based mode)
__________________|____________|__________________|_________|____________________________________________________________
|
| Identical layout to the 47-bit one from here on:
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fa3b616af03a..7c73a2688172 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -67,6 +67,7 @@ config X86
select ARCH_CLOCKSOURCE_INIT
select ARCH_CONFIGURES_CPU_MITIGATIONS
select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
+ select ARCH_DISABLE_KASAN_INLINE if X86_64 && KASAN_SW_TAGS
select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64
select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
@@ -196,6 +197,8 @@ config X86
select HAVE_ARCH_JUMP_LABEL_RELATIVE
select HAVE_ARCH_KASAN if X86_64
select HAVE_ARCH_KASAN_VMALLOC if X86_64
+ select HAVE_ARCH_KASAN_SW_TAGS if ADDRESS_MASKING
+ select ARCH_NEEDS_DEFER_KASAN if ADDRESS_MASKING
select HAVE_ARCH_KFENCE
select HAVE_ARCH_KMSAN if X86_64
select HAVE_ARCH_KGDB
@@ -406,6 +409,7 @@ config AUDIT_ARCH
config KASAN_SHADOW_OFFSET
hex
depends on KASAN
+ default 0xeffffc0000000000 if KASAN_SW_TAGS
default 0xdffffc0000000000
config HAVE_INTEL_TXT
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index db1048621ea2..ded92b439ada 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -13,6 +13,7 @@
#undef CONFIG_PARAVIRT_SPINLOCKS
#undef CONFIG_KASAN
#undef CONFIG_KASAN_GENERIC
+#undef CONFIG_KASAN_SW_TAGS
#define __NO_FORTIFY
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index 2372397bc3e5..8320fffc71a1 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -7,6 +7,7 @@
#include <linux/types.h>
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#ifdef CONFIG_KASAN_SW_TAGS
/*
* LLVM ABI for reporting tag mismatches in inline KASAN mode.
* On x86 the UD1 instruction is used to carry metadata in the ECX register
@@ -24,7 +25,10 @@
#define KASAN_ECX_WRITE 0x10
#define KASAN_ECX_SIZE_MASK 0x0f
#define KASAN_ECX_SIZE(ecx) (1 << ((ecx) & KASAN_ECX_SIZE_MASK))
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#else
#define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
/*
* Compiler uses shadow offset assuming that addresses start
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index e69b7210aaae..4a5a4a4d43db 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -465,4 +465,9 @@ void __init kasan_init(void)
init_task.kasan_depth = 0;
kasan_init_generic();
+
+ if (boot_cpu_has(X86_FEATURE_LAM))
+ kasan_init_sw_tags();
+ else
+ pr_info("KernelAddressSanitizer not initialized (sw-tags): hardware doesn't support LAM\n");
}
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index a4bb610a7a6f..d13ea8da7bfd 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -112,7 +112,8 @@ config KASAN_SW_TAGS
Requires GCC 11+ or Clang.
- Supported only on arm64 CPUs and relies on Top Byte Ignore.
+ Supported on arm64 CPUs that support Top Byte Ignore and on x86 CPUs
+ that support Linear Address Masking.
Consumes about 1/16th of available memory at kernel start and
add an overhead of ~20% for dynamic allocations.
--
2.51.0
^ permalink raw reply related [flat|nested] 53+ messages in thread
* Re: [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
` (17 preceding siblings ...)
2025-10-29 20:11 ` [PATCH v6 18/18] x86/kasan: Make software tag-based kasan available Maciej Wieczor-Retman
@ 2025-10-29 22:08 ` Andrew Morton
2025-10-29 23:13 ` Andrew Morton
2025-10-30 5:31 ` Maciej Wieczór-Retman
18 siblings, 2 replies; 53+ messages in thread
From: Andrew Morton @ 2025-10-29 22:08 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto, kasan-dev, linux-kernel,
linux-arm-kernel, x86, linux-kbuild, linux-mm, llvm, linux-doc
On Wed, 29 Oct 2025 19:05:27 +0000 Maciej Wieczor-Retman <m.wieczorretman@pm.me> wrote:
> The patchset aims to add a KASAN tag-based mode for the x86 architecture
> with the help of the new CPU feature called Linear Address Masking
> (LAM). Main improvement introduced by the series is 2x lower memory
> usage compared to KASAN's generic mode, the only currently available
> mode on x86. The tag based mode may also find errors that the generic
> mode couldn't because of differences in how these modes operate.
Thanks. Quite a lot of these patches aren't showing signs of review at
this time, so I'll skip v6 for now.
However patches 1&2 are fixes that have cc:stable. It's best to
separate these out from the overall add-a-feature series please - their
path-to-mainline will be quite different.
I grabbed just those two patches for some testing, however their
changelogging isn't fully appropriate. Can I ask that you resend these
as a two-patch series after updating the changelogs to clearly describe
the userspace-visible effects of the flaws which the patches fix?
This is to help -stable maintainers understand why we're proposing the
backports and it is to help people to predict whether these fixes might
address an issue which they or their customers are experiencing.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86
2025-10-29 22:08 ` [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
@ 2025-10-29 23:13 ` Andrew Morton
2025-10-30 5:31 ` Maciej Wieczór-Retman
1 sibling, 0 replies; 53+ messages in thread
From: Andrew Morton @ 2025-10-29 23:13 UTC (permalink / raw)
To: Maciej Wieczor-Retman, xin, peterz, kaleshsingh, kbingham, nathan,
ryabinin.a.a, dave.hansen, bp, morbo, jeremy.linton, smostafa,
kees, baohua, vbabka, justinstitt, wangkefeng.wang, leitao,
jan.kiszka, fujita.tomonori, hpa, urezki, ubizjak,
ada.coupriediaz, nick.desaulniers+lkml, ojeda, brgerst, elver,
pankaj.gupta, glider, mark.rutland, trintaeoitogc, jpoimboe,
thuth, pasha.tatashin, dvyukov, jhubbard, catalin.marinas,
yeoreum.yun, mhocko, lorenzo.stoakes, samuel.holland,
vincenzo.frascino, bigeasy, surenb, ardb, Liam.Howlett,
nicolas.schier, ziy, kas, tglx, mingo, broonie, corbet,
andreyknvl, maciej.wieczor-retman, david, maz, rppt, will, luto,
kasan-dev, linux-kernel, linux-arm-kernel, x86, linux-kbuild,
linux-mm, llvm, linux-doc
On Wed, 29 Oct 2025 15:08:06 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:
> However patches 1&2 are fixes that have cc:stable. It's best to
> separate these out from the overall add-a-feature series please - their
> path-to-mainline will be quite different.
>
> I grabbed just those two patches for some testing,
x86_64 allmodconfig:
/opt/crosstool/gcc-13.2.0-nolibc/x86_64-linux/bin/x86_64-linux-ld: vmlinux.o: in function `pcpu_get_vm_areas':
(.text+0x101cc0f): undefined reference to `__kasan_unpoison_vmap_areas'
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86
2025-10-29 22:08 ` [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
2025-10-29 23:13 ` Andrew Morton
@ 2025-10-30 5:31 ` Maciej Wieczór-Retman
1 sibling, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-10-30 5:31 UTC (permalink / raw)
To: Andrew Morton
Cc: xin, peterz, kaleshsingh, kbingham, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto, kasan-dev, linux-kernel,
linux-arm-kernel, x86, linux-kbuild, linux-mm, llvm, linux-doc
Thanks for taking a look at the series!
On 2025-10-29 at 15:08:06 -0700, Andrew Morton wrote:
>On Wed, 29 Oct 2025 19:05:27 +0000 Maciej Wieczor-Retman <m.wieczorretman@pm.me> wrote:
>
>> The patchset aims to add a KASAN tag-based mode for the x86 architecture
>> with the help of the new CPU feature called Linear Address Masking
>> (LAM). Main improvement introduced by the series is 2x lower memory
>> usage compared to KASAN's generic mode, the only currently available
>> mode on x86. The tag based mode may also find errors that the generic
>> mode couldn't because of differences in how these modes operate.
>
>Thanks. Quite a lot of these patches aren't showing signs of review at
>this time, so I'll skip v6 for now.
>
>However patches 1&2 are fixes that have cc:stable. It's best to
>separate these out from the overall add-a-feature series please - their
>path-to-mainline will be quite different.
Okay, I'll send them separately
>I grabbed just those two patches for some testing, however their
>changelogging isn't fully appropriate. Can I ask that you resend these
>as a two-patch series after updating the changelogs to clearly describe
>the userspace-visible effects of the flaws which the patches fix?
>
>This is to help -stable maintainers understand why we're proposing the
>backports and it is to help people to predict whether these fixes might
>address an issue which they or their customers are experiencing.
Sure, I'll also fixup that undefined symbol error that you mentioned in
the second email.
kind regards
Maciej Wieczór-Retman
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow
2025-10-29 20:11 ` [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow Maciej Wieczor-Retman
@ 2025-11-10 14:49 ` Marco Elver
2025-11-17 18:26 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Marco Elver @ 2025-11-10 14:49 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, pankaj.gupta, glider,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, 29 Oct 2025 at 21:11, Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> While generally tag-based KASAN adopts an arithemitc bit shift to
> convert a memory address to a shadow memory address, it doesn't work for
> all cases on x86. Testing different shadow memory offsets proved that
> either 4 or 5 level paging didn't work correctly or inline mode ran into
> issues. Thus the best working scheme is the logical bit shift and
> non-canonical shadow offset that x86 uses for generic KASAN, of course
> adjusted for the increased granularity from 8 to 16 bytes.
>
> Add an arch specific implementation of kasan_mem_to_shadow() that uses
> the logical bit shift.
>
> The non-canonical hook tries to calculate whether an address came from
> kasan_mem_to_shadow(). First it checks whether this address fits into
> the legal set of values possible to output from the mem to shadow
> function.
>
> Tie both generic and tag-based x86 KASAN modes to the address range
> check associated with generic KASAN.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> ---
> Changelog v4:
> - Add this patch to the series.
>
> arch/x86/include/asm/kasan.h | 7 +++++++
> mm/kasan/report.c | 5 +++--
> 2 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index 375651d9b114..2372397bc3e5 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -49,6 +49,13 @@
> #include <linux/bits.h>
>
> #ifdef CONFIG_KASAN_SW_TAGS
> +static inline void *__kasan_mem_to_shadow(const void *addr)
> +{
> + return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> + + KASAN_SHADOW_OFFSET;
> +}
You're effectively undoing "kasan: sw_tags: Use arithmetic shift for
shadow computation" for x86 - why?
This function needs a comment explaining this.
Also, the commit message just says "it doesn't work for all cases" - why?
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 10/18] x86/mm: Physical address comparisons in fill_p*d/pte
2025-10-29 20:07 ` [PATCH v6 10/18] x86/mm: Physical address comparisons in fill_p*d/pte Maciej Wieczor-Retman
@ 2025-11-10 16:24 ` Alexander Potapenko
2025-11-17 18:58 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-10 16:24 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 9:07 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> Calculating page offset returns a pointer without a tag. When comparing
> the calculated offset to a tagged page pointer an error is raised
> because they are not equal.
>
> Change pointer comparisons to physical address comparisons to avoid
> issues with tagged pointers that pointer arithmetic would create. Open
> code pte_offset_kernel(), pmd_offset(), pud_offset() and p4d_offset().
> Because one parameter is always zero and the rest of the function body
> is wrapped in __va(), removing that layer lowers the complexity of the
> final assembly.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> ---
> Changelog v2:
> - Open code *_offset() to avoid its internal __va().
>
> arch/x86/mm/init_64.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 0e4270e20fad..2d79fc0cf391 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -269,7 +269,10 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
> if (pgd_none(*pgd)) {
> p4d_t *p4d = (p4d_t *)spp_getpage();
> pgd_populate(&init_mm, pgd, p4d);
> - if (p4d != p4d_offset(pgd, 0))
> +
> + if (__pa(p4d) != (pgtable_l5_enabled() ?
> + __pa(pgd) :
> + (unsigned long)pgd_val(*pgd) & PTE_PFN_MASK))
Did you test with both 4- and 5-level paging?
If I understand correctly, p4d and pgd are supposed to be the same
under !pgtable_l5_enabled().
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 02/18] kasan: Unpoison vms[area] addresses with a common tag
2025-10-29 19:06 ` [PATCH v6 02/18] kasan: Unpoison vms[area] addresses with a common tag Maciej Wieczor-Retman
@ 2025-11-10 16:40 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-10 16:40 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc, stable, Baoquan He
> void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
> {
> int area;
>
> for (area = 0 ; area < nr_vms ; area++) {
> kasan_poison(vms[area]->addr, vms[area]->size,
> - arch_kasan_get_tag(vms[area]->addr), false);
> + arch_kasan_get_tag(vms[0]->addr), false);
> + arch_kasan_set_tag(vms[area]->addr, arch_kasan_get_tag(vms[0]->addr));
Like set_tag(), arch_kasan_set_tag() does not set the tag value in
place, so this line is a no-op.
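A minimal sketch of an effective variant (illustrative only - it assumes
arch_kasan_set_tag() returns the tagged pointer, as it does on arm64, and
is not necessarily the fix the series will end up with):

	for (area = 0; area < nr_vms; area++) {
		vms[area]->addr = arch_kasan_set_tag(vms[area]->addr,
				arch_kasan_get_tag(vms[0]->addr));
		kasan_poison(vms[area]->addr, vms[area]->size,
			     arch_kasan_get_tag(vms[0]->addr), false);
	}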
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag
2025-10-29 19:05 ` [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
@ 2025-11-10 17:32 ` Alexander Potapenko
2025-11-17 17:51 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-10 17:32 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc, stable, Baoquan He
On Wed, Oct 29, 2025 at 8:05 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> The problem presented here is related to NUMA systems and tag-based
> KASAN modes - software and hardware ones. It can be explained in the
> following points:
>
> 1. There can be more than one virtual memory chunk.
> 2. Chunk's base address has a tag.
> 3. The base address points at the first chunk and thus inherits
> the tag of the first chunk.
> 4. The subsequent chunks will be accessed with the tag from the
> first chunk.
> 5. Thus, the subsequent chunks need to have their tag set to
> match that of the first chunk.
>
> Refactor code by moving it into a helper in preparation for the actual
> fix.
The code in the helper function:
> +void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
> +{
> + int area;
> +
> + for (area = 0 ; area < nr_vms ; area++) {
> + kasan_poison(vms[area]->addr, vms[area]->size,
> + arch_kasan_get_tag(vms[area]->addr), false);
> + }
> +}
is different from what was originally called:
> - for (area = 0; area < nr_vms; area++)
> - vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
> - vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
> + kasan_unpoison_vmap_areas(vms, nr_vms);
, so the patch description is a bit misleading.
Please also ensure you fix the errors reported by kbuild test robot.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 04/18] kasan: sw_tags: Support tag widths less than 8 bits
2025-10-29 19:06 ` [PATCH v6 04/18] kasan: sw_tags: Support tag widths less than 8 bits Maciej Wieczor-Retman
@ 2025-11-10 17:37 ` Alexander Potapenko
2025-11-17 18:35 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-10 17:37 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
> +++ b/include/linux/kasan-tags.h
> @@ -2,13 +2,16 @@
> #ifndef _LINUX_KASAN_TAGS_H
> #define _LINUX_KASAN_TAGS_H
>
> +#include <asm/kasan.h>
In Patch 07, this is changed to `#include <asm/kasan-tags.h>` - what is
the point of doing that?
Wouldn't it be better to move the addition of kasan-tags.h for
different arches to this patch from Patch 07?
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 14/18] x86: Minimal SLAB alignment
2025-10-29 20:09 ` [PATCH v6 14/18] x86: Minimal SLAB alignment Maciej Wieczor-Retman
@ 2025-11-10 17:48 ` Alexander Potapenko
2025-11-18 11:36 ` Maciej Wieczor-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-10 17:48 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
> diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
> index 69404eae9983..3232583b5487 100644
> --- a/arch/x86/include/asm/cache.h
> +++ b/arch/x86/include/asm/cache.h
> @@ -21,4 +21,8 @@
> #endif
> #endif
>
> +#ifdef CONFIG_KASAN_SW_TAGS
> +#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
I don't think linux/linkage.h (the only header included here) defines
KASAN_SHADOW_SCALE_SHIFT, does it?
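A minimal sketch of the kind of include fix being hinted at (assuming the
macro stays in <asm/kasan.h> and that pulling it in here doesn't create a
header cycle):

	#ifdef CONFIG_KASAN_SW_TAGS
	#include <asm/kasan.h>	/* for KASAN_SHADOW_SCALE_SHIFT */
	#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
	#endif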
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 18/18] x86/kasan: Make software tag-based kasan available
2025-10-29 20:11 ` [PATCH v6 18/18] x86/kasan: Make software tag-based kasan available Maciej Wieczor-Retman
@ 2025-11-11 9:00 ` Alexander Potapenko
2025-11-18 11:48 ` Maciej Wieczor-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:00 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 9:11 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> - ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
> + ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory (generic mode)
> + fffff40000000000 | -8 TB | fffffbffffffffff | 8 TB | KASAN shadow memory (software tag-based mode)
> __________________|____________|__________________|_________|____________________________________________________________
> + ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory (generic mode)
> + ffeffc0000000000 | -6 PB | fffffbffffffffff | 4 PB | KASAN shadow memory (software tag-based mode)
> __________________|____________|__________________|_________|____________________________________________________________
> + default 0xeffffc0000000000 if KASAN_SW_TAGS
> default 0xdffffc0000000000
Please elaborate in the patch description how these values were picked.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 16/18] arm64: Unify software tag-based KASAN inline recovery path
2025-10-29 20:10 ` [PATCH v6 16/18] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
@ 2025-11-11 9:02 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:02 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 9:10 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> To avoid having a copy of a long comment explaining the intricacies of
> the inline KASAN recovery system and issues for every architecture that
> uses the software tag-based mode, a unified kasan_die_unless_recover()
> function was added.
>
> Use kasan_die_unless_recover() in the kasan brk handler to clean up the
> long comment, which is kept in the non-arch KASAN code.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Alexander Potapenko <glider@google.com>
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 13/18] x86/mm: LAM initialization
2025-10-29 20:08 ` [PATCH v6 13/18] x86/mm: LAM initialization Maciej Wieczor-Retman
@ 2025-11-11 9:04 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:04 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 9:08 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> To make use of KASAN's tag-based mode on x86, Linear Address Masking
> (LAM) needs to be enabled. To do that, bit 28 in CR4 has to be set.
>
> Set the bit in early memory initialization.
>
> When launching secondary CPUs the LAM bit gets lost. To avoid this add
> it in a mask in head_64.S. The bitmask permits some bits of CR4 to pass
> from the primary CPU to the secondary CPUs without being cleared.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Alexander Potapenko <glider@google.com>
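As a rough illustration of the enable path described above (not the
patch's exact hunk; the feature bit and helpers are the kernel's existing
X86_FEATURE_LAM, X86_CR4_LAM_SUP and cr4_set_bits()):

	if (cpu_feature_enabled(X86_FEATURE_LAM))
		cr4_set_bits(X86_CR4_LAM_SUP);	/* CR4 bit 28 */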
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 12/18] x86/mm: LAM compatible non-canonical definition
2025-10-29 20:08 ` [PATCH v6 12/18] x86/mm: LAM compatible non-canonical definition Maciej Wieczor-Retman
@ 2025-11-11 9:07 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:07 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 9:08 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> For an address to be canonical it has to have its top bits equal to each
> other. The number of bits depends on the paging level and whether
> they're supposed to be ones or zeroes depends on whether the address
> points to kernel or user space.
>
> With Linear Address Masking (LAM) enabled, the definition of linear
> address canonicality is modified. Not all of the previously required
> bits need to be equal, only the first and last from the previously equal
> bitmask. So for example a 5-level paging kernel address needs to have
> bits [63] and [56] set.
>
> Change the canonical checking function to use bit masks instead of bit
> shifts.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Alexander Potapenko <glider@google.com>
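A rough sketch of a mask-based check like the one described (bit numbers
follow the 5-level kernel-address example in the commit message and are
illustrative, not the patch's code):

	/*
	 * With LAM, only the first and last of the previously-equal bits
	 * still have to match for a kernel address to be canonical.
	 */
	static inline bool lam_kernel_addr_canonical(u64 addr)
	{
		const u64 mask = BIT_ULL(63) | BIT_ULL(56);

		return (addr & mask) == mask;
	}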
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 11/18] x86/kasan: KASAN raw shadow memory PTE init
2025-10-29 20:07 ` [PATCH v6 11/18] x86/kasan: KASAN raw shadow memory PTE init Maciej Wieczor-Retman
@ 2025-11-11 9:11 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:11 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 9:07 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> In KASAN's generic mode the default value in shadow memory is zero.
> During initialization of shadow memory pages they are allocated and
> zeroed.
>
> In KASAN's tag-based mode the default tag for the arm64 architecture is
> 0xFE, which corresponds to any memory that should not be accessed. On x86
> (where tags are 4 bits wide instead of 8) that tag is 0xE, so during
> initialization all the bytes in shadow memory pages should be filled
> with it.
>
> Use memblock_alloc_try_nid_raw() instead of memblock_alloc_try_nid() to
> avoid zeroing out the memory so it can be set with the KASAN invalid
> tag.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
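For illustration, the allocation pattern the commit message describes
might look roughly like this (a sketch only - the real early_alloc() path,
its panic handling and exact arguments are omitted):

	void *p = memblock_alloc_try_nid_raw(size, size,
					     __pa(MAX_DMA_ADDRESS),
					     MEMBLOCK_ALLOC_ACCESSIBLE, nid);
	if (p)
		memset(p, KASAN_TAG_INVALID, size);	/* 0xE on x86 per above */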
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 09/18] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic
2025-10-29 19:07 ` [PATCH v6 09/18] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
@ 2025-11-11 9:13 ` Alexander Potapenko
2025-11-17 18:43 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:13 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 8:08 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
> vm_reset_perms() calculates the range's start and end addresses using the
> min() and max() functions. To do that it compares pointers but, with the
> KASAN software tags mode enabled, some of them are tagged - the addr
> variable is, while the start and end variables aren't. This can cause the
> wrong address to be chosen and result in various errors in different
> places.
>
> Reset tags in the address used as function argument in min(), max().
>
> execmem_cache_add() adds tagged pointers to a maple tree structure,
> which then are incorrectly compared when walking the tree. That results
> in different pointers being returned later and page permission violation
> errors panicking the kernel.
>
> Reset tag of the address range inserted into the maple tree inside
> execmem_vmalloc() which then gets propagated to execmem_cache_add().
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Alexander Potapenko <glider@google.com>
> diff --git a/mm/execmem.c b/mm/execmem.c
> index 810a4ba9c924..fd11409a6217 100644
> --- a/mm/execmem.c
> +++ b/mm/execmem.c
> @@ -59,7 +59,7 @@ static void *execmem_vmalloc(struct execmem_range *range, size_t size,
> return NULL;
> }
>
> - return p;
> + return kasan_reset_tag(p);
I think a comment would be nice here.
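For instance, something along these lines (the wording is only a sketch,
not the series' actual comment - the same reasoning applies to the
vm_reset_perms() hunk below):

	/*
	 * execmem_cache_add() stores this range in a maple tree that
	 * compares raw addresses, so hand back an untagged pointer.
	 */
	return kasan_reset_tag(p);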
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3328,7 +3328,7 @@ static void vm_reset_perms(struct vm_struct *area)
> * the vm_unmap_aliases() flush includes the direct map.
> */
> for (i = 0; i < area->nr_pages; i += 1U << page_order) {
> - unsigned long addr = (unsigned long)page_address(area->pages[i]);
> + unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));
Ditto
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 05/18] kasan: Fix inline mode for x86 tag-based mode
2025-10-29 19:06 ` [PATCH v6 05/18] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
@ 2025-11-11 9:22 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:22 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
>
> Explicitly zero out hwasan-instrument-with-calls when enabling inline
> mode in tag-based KASAN.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions
2025-10-29 19:07 ` [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions Maciej Wieczor-Retman
@ 2025-11-11 9:31 ` Alexander Potapenko
2025-11-17 18:41 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:31 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
> +#ifdef CONFIG_64BIT
> +static inline void *__tag_set(const void *__addr, u8 tag)
> +{
> + u64 addr = (u64)__addr;
> +
> + addr &= ~__tag_shifted(KASAN_TAG_MASK);
KASAN_TAG_MASK is only defined in Patch 07, does this patch compile?
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 07/18] kasan: arm64: x86: Make special tags arch specific
2025-10-29 19:07 ` [PATCH v6 07/18] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
@ 2025-11-11 9:34 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:34 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
> -#include <asm/kasan.h>
> +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +#include <asm/kasan-tags.h>
Perhaps moving this part to patch 04, along with the newly added
kasan-tags.h, would be cleaner.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 03/18] kasan: sw_tags: Use arithmetic shift for shadow computation
2025-10-29 19:06 ` [PATCH v6 03/18] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
@ 2025-11-11 9:39 ` Alexander Potapenko
2025-11-17 18:27 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:39 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index b00849ea8ffd..952ade776e51 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -61,8 +61,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
> #ifndef kasan_mem_to_shadow
> static inline void *kasan_mem_to_shadow(const void *addr)
> {
> - return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> - + KASAN_SHADOW_OFFSET;
> + void *scaled;
> +
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
> + else
> + scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
> +
> + return KASAN_SHADOW_OFFSET + scaled;
> }
> #endif
As Marco pointed out, this part is reverted in Patch 17. Any reason to do that?
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 08/18] x86/mm: Reset tag for virtual to physical address conversions
2025-10-29 19:07 ` [PATCH v6 08/18] x86/mm: Reset tag for virtual to physical address conversions Maciej Wieczor-Retman
@ 2025-11-11 9:42 ` Alexander Potapenko
0 siblings, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 9:42 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 8:07 PM Maciej Wieczor-Retman
<m.wieczorretman@pm.me> wrote:
>
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> Any place where pointer arithmetic is used to convert a virtual address
> into a physical one can raise errors if the virtual address is tagged.
>
> Reset the pointer's tag by sign extending the tag bits in macros that do
> pointer arithmetic in address conversions. There will be no change in
> compiled code with KASAN disabled since the compiler will optimize the
> __tag_reset() out.
>
> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
Acked-by: Alexander Potapenko <glider@google.com>
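For reference, the kind of sign-extension reset the commit message
describes could look roughly like this (the bit position and name are
assumptions based on the tag placement from the cover letter, not the
patch's macro):

	static inline unsigned long __tag_reset(unsigned long addr)
	{
		/* Replicate bit 56 into bits 63:57, wiping the LAM tag. */
		return (unsigned long)((long)(addr << 7) >> 7);
	}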
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports
2025-10-29 20:09 ` [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports Maciej Wieczor-Retman
@ 2025-11-11 10:14 ` Alexander Potapenko
2025-11-11 10:27 ` Peter Zijlstra
1 sibling, 0 replies; 53+ messages in thread
From: Alexander Potapenko @ 2025-11-11 10:14 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
> +++ b/arch/x86/mm/kasan_inline.c
The name kasan_inline.c is confusing: a reader may assume that this
file is used for CONFIG_KASAN_INLINE, or that it contains inline
functions, while neither is true.
I suggest renaming it into something like kasan_sw_tags.c
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports
2025-10-29 20:09 ` [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports Maciej Wieczor-Retman
2025-11-11 10:14 ` Alexander Potapenko
@ 2025-11-11 10:27 ` Peter Zijlstra
2025-11-17 9:47 ` Maciej Wieczór-Retman
1 sibling, 1 reply; 53+ messages in thread
From: Peter Zijlstra @ 2025-11-11 10:27 UTC (permalink / raw)
To: Maciej Wieczor-Retman
Cc: xin, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto, kasan-dev, linux-kernel,
linux-arm-kernel, x86, linux-kbuild, linux-mm, llvm, linux-doc
On Wed, Oct 29, 2025 at 08:09:51PM +0000, Maciej Wieczor-Retman wrote:
> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>
> Inline KASAN on x86 should do tag mismatch reports by passing the
> metadata through the UD1 instruction and the faulty address through RDI,
> a scheme that's already used by UBSan and is easy to extend.
>
> The current LLVM way of passing KASAN software tag mode metadata is done
> using the INT3 instruction. However that should be changed because it
> doesn't align to how the kernel already handles UD1 for similar use
> cases. Since inline software tag-based KASAN doesn't work on x86 due to
> missing compiler support it can be fixed and the INT3 can be changed to
> UD1 at the same time.
>
> Add a kasan component to the #UD decoding and handling functions.
>
> Make part of that hook - which decides whether to die or recover from a
> tag mismatch - arch independent to avoid duplicating a long comment on
> both x86 and arm64 architectures.
>
> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
> index 396071832d02..375651d9b114 100644
> --- a/arch/x86/include/asm/kasan.h
> +++ b/arch/x86/include/asm/kasan.h
> @@ -6,6 +6,24 @@
> #include <linux/kasan-tags.h>
> #include <linux/types.h>
> #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
> +
> +/*
> + * LLVM ABI for reporting tag mismatches in inline KASAN mode.
> + * On x86 the UD1 instruction is used to carry metadata in the ECX register
> + * to the KASAN report. ECX is used to differentiate KASAN from UBSan when
> + * decoding the UD1 instruction.
> + *
> + * SIZE refers to how many bytes the faulty memory access
> + * requested.
> + * WRITE bit, when set, indicates the access was a write, otherwise
> + * it was a read.
> + * RECOVER bit, when set, should allow the kernel to carry on after
> + * a tag mismatch. Otherwise die() is called.
> + */
> +#define KASAN_ECX_RECOVER 0x20
> +#define KASAN_ECX_WRITE 0x10
> +#define KASAN_ECX_SIZE_MASK 0x0f
> +#define KASAN_ECX_SIZE(ecx) (1 << ((ecx) & KASAN_ECX_SIZE_MASK))
> #define KASAN_SHADOW_SCALE_SHIFT 3
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index 6b22611e69cc..40fefd306c76 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -179,6 +179,9 @@ __always_inline int decode_bug(unsigned long addr, s32 *imm, int *len)
> if (X86_MODRM_REG(v) == 0) /* EAX */
> return BUG_UD1_UBSAN;
>
> + if (X86_MODRM_REG(v) == 1) /* ECX */
> + return BUG_UD1_KASAN;
> +
> return BUG_UD1;
> }
>
> @@ -357,6 +360,11 @@ static noinstr bool handle_bug(struct pt_regs *regs)
> }
> break;
>
> + case BUG_UD1_KASAN:
> + kasan_inline_handler(regs);
> + handled = true;
> + break;
> +
> default:
> break;
> }
> +void kasan_inline_handler(struct pt_regs *regs)
> +{
> + int metadata = regs->cx;
> + u64 addr = regs->di;
> + u64 pc = regs->ip;
> + bool recover = metadata & KASAN_ECX_RECOVER;
> + bool write = metadata & KASAN_ECX_WRITE;
> + size_t size = KASAN_ECX_SIZE(metadata);
> +
> + if (user_mode(regs))
> + return;
> +
> + if (!kasan_report((void *)addr, size, write, pc))
> + return;
> +
> + kasan_die_unless_recover(recover, "Oops - KASAN", regs, metadata, die);
> +}
I'm confused. Going by the ARM64 code, the meta-data is constant per
site -- it is encoded in the break immediate.
And I suggested you do the same on x86 by using the single byte
displacement instruction encoding.
ud1 0xFF(%ecx), %ecx
Also, we don't have to use a fixed register for the address, you can do:
ud1 0xFF(%ecx), %reg
and have %reg tell us what register the address is in.
Then you can recover the meta-data from the displacement immediate and
the address from whatever register is denoted.
This avoids the 'callsite' having to clobber cx and move the address
into di.
What you have here will work, and I don't suppose we care about code
density with KASAN much, but it could've been so much better :/
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports
2025-11-11 10:27 ` Peter Zijlstra
@ 2025-11-17 9:47 ` Maciej Wieczór-Retman
2025-11-18 20:35 ` Peter Zijlstra
0 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 9:47 UTC (permalink / raw)
To: Peter Zijlstra
Cc: xin, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto, kasan-dev, linux-kernel,
linux-arm-kernel, x86, linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-11 at 11:27:19 +0100, Peter Zijlstra wrote:
>On Wed, Oct 29, 2025 at 08:09:51PM +0000, Maciej Wieczor-Retman wrote:
>> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>>
>> Inline KASAN on x86 should do tag mismatch reports by passing the
>> metadata through the UD1 instruction and the faulty address through RDI,
>> a scheme that's already used by UBSan and is easy to extend.
>>
>> The current LLVM way of passing KASAN software tag mode metadata is done
>> using the INT3 instruction. However that should be changed because it
>> doesn't align to how the kernel already handles UD1 for similar use
>> cases. Since inline software tag-based KASAN doesn't work on x86 due to
>> missing compiler support it can be fixed and the INT3 can be changed to
>> UD1 at the same time.
>>
>> Add a kasan component to the #UD decoding and handling functions.
>>
>> Make part of that hook - which decides whether to die or recover from a
>> tag mismatch - arch independent to avoid duplicating a long comment on
>> both x86 and arm64 architectures.
>>
>
>> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
>> index 396071832d02..375651d9b114 100644
>> --- a/arch/x86/include/asm/kasan.h
>> +++ b/arch/x86/include/asm/kasan.h
>> @@ -6,6 +6,24 @@
>> #include <linux/kasan-tags.h>
>> #include <linux/types.h>
>> #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>> +
>> +/*
>> + * LLVM ABI for reporting tag mismatches in inline KASAN mode.
>> + * On x86 the UD1 instruction is used to carry metadata in the ECX register
>> + * to the KASAN report. ECX is used to differentiate KASAN from UBSan when
>> + * decoding the UD1 instruction.
>> + *
>> + * SIZE refers to how many bytes the faulty memory access
>> + * requested.
>> + * WRITE bit, when set, indicates the access was a write, otherwise
>> + * it was a read.
>> + * RECOVER bit, when set, should allow the kernel to carry on after
>> + * a tag mismatch. Otherwise die() is called.
>> + */
>> +#define KASAN_ECX_RECOVER 0x20
>> +#define KASAN_ECX_WRITE 0x10
>> +#define KASAN_ECX_SIZE_MASK 0x0f
>> +#define KASAN_ECX_SIZE(ecx) (1 << ((ecx) & KASAN_ECX_SIZE_MASK))
>> #define KASAN_SHADOW_SCALE_SHIFT 3
>
>> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
>> index 6b22611e69cc..40fefd306c76 100644
>> --- a/arch/x86/kernel/traps.c
>> +++ b/arch/x86/kernel/traps.c
>> @@ -179,6 +179,9 @@ __always_inline int decode_bug(unsigned long addr, s32 *imm, int *len)
>> if (X86_MODRM_REG(v) == 0) /* EAX */
>> return BUG_UD1_UBSAN;
>>
>> + if (X86_MODRM_REG(v) == 1) /* ECX */
>> + return BUG_UD1_KASAN;
>> +
>> return BUG_UD1;
>> }
>>
>> @@ -357,6 +360,11 @@ static noinstr bool handle_bug(struct pt_regs *regs)
>> }
>> break;
>>
>> + case BUG_UD1_KASAN:
>> + kasan_inline_handler(regs);
>> + handled = true;
>> + break;
>> +
>> default:
>> break;
>> }
>
>> +void kasan_inline_handler(struct pt_regs *regs)
>> +{
>> + int metadata = regs->cx;
>> + u64 addr = regs->di;
>> + u64 pc = regs->ip;
>> + bool recover = metadata & KASAN_ECX_RECOVER;
>> + bool write = metadata & KASAN_ECX_WRITE;
>> + size_t size = KASAN_ECX_SIZE(metadata);
>> +
>> + if (user_mode(regs))
>> + return;
>> +
>> + if (!kasan_report((void *)addr, size, write, pc))
>> + return;
>> +
>> + kasan_die_unless_recover(recover, "Oops - KASAN", regs, metadata, die);
>> +}
>
>I'm confused. Going by the ARM64 code, the meta-data is constant per
>site -- it is encoded in the break immediate.
>
>And I suggested you do the same on x86 by using the single byte
>displacement instruction encoding.
>
> ud1 0xFF(%ecx), %ecx
>
>Also, we don't have to use a fixed register for the address, you can do:
>
> ud1 0xFF(%ecx), %reg
>
>and have %reg tell us what register the address is in.
>
>Then you can recover the meta-data from the displacement immediate and
>the address from whatever register is denoted.
>
>This avoids the 'callsite' from having to clobber cx and move the address
>into di.
>
>What you have here will work, and I don't suppose we care about code
>density with KASAN much, but it could've been so much better :/
Thanks for checking the patch out, maybe I got too focused on just
getting clang to work. You're right, I'll try using the displacement
encoding.
I was attempting a few different encodings because clang was fussy about
putting data where I wanted it. The one in the patch worked fine and I
thought it'd be consistent with the form that UBSan uses. But yeah, I'll
work on it more.
I'll also go and rebase my series onto your WARN() hackery one since
there are a lot of changes to traps.c.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag
2025-11-10 17:32 ` Alexander Potapenko
@ 2025-11-17 17:51 ` Maciej Wieczór-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 17:51 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc, stable, Baoquan He
On 2025-11-10 at 18:32:21 +0100, Alexander Potapenko wrote:
>On Wed, Oct 29, 2025 at 8:05 PM Maciej Wieczor-Retman
><m.wieczorretman@pm.me> wrote:
>>
>> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>>
>> The problem presented here is related to NUMA systems and tag-based
>> KASAN modes - software and hardware ones. It can be explained in the
>> following points:
>>
>> 1. There can be more than one virtual memory chunk.
>> 2. Chunk's base address has a tag.
>> 3. The base address points at the first chunk and thus inherits
>> the tag of the first chunk.
>> 4. The subsequent chunks will be accessed with the tag from the
>> first chunk.
>> 5. Thus, the subsequent chunks need to have their tag set to
>> match that of the first chunk.
>>
>> Refactor code by moving it into a helper in preparation for the actual
>> fix.
>
>The code in the helper function:
>
>> +void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms)
>> +{
>> + int area;
>> +
>> + for (area = 0 ; area < nr_vms ; area++) {
>> + kasan_poison(vms[area]->addr, vms[area]->size,
>> + arch_kasan_get_tag(vms[area]->addr), false);
>> + }
>> +}
>
>is different from what was originally called:
>
>> - for (area = 0; area < nr_vms; area++)
>> - vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
>> - vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
>> + kasan_unpoison_vmap_areas(vms, nr_vms);
>
>, so the patch description is a bit misleading.
>
>Please also ensure you fix the errors reported by kbuild test robot.
Thanks for looking at the series! Yes, I'll fix these two patches - I've
split them off into a separate 'fixes' series and I'm trying to make
sure it's an actual refactor this time.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow
2025-11-10 14:49 ` Marco Elver
@ 2025-11-17 18:26 ` Maciej Wieczór-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 18:26 UTC (permalink / raw)
To: Marco Elver
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, pankaj.gupta, glider,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-10 at 15:49:22 +0100, Marco Elver wrote:
>On Wed, 29 Oct 2025 at 21:11, Maciej Wieczor-Retman
><m.wieczorretman@pm.me> wrote:
>>
>> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>>
>> While generally tag-based KASAN adopts an arithmetic bit shift to
>> convert a memory address to a shadow memory address, it doesn't work for
>> all cases on x86. Testing different shadow memory offsets proved that
>> either 4 or 5 level paging didn't work correctly or inline mode ran into
>> issues. Thus the best working scheme is the logical bit shift and
>> non-canonical shadow offset that x86 uses for generic KASAN, of course
>> adjusted for the increased granularity from 8 to 16 bytes.
>>
>> Add an arch specific implementation of kasan_mem_to_shadow() that uses
>> the logical bit shift.
>>
>> The non-canonical hook tries to calculate whether an address came from
>> kasan_mem_to_shadow(). First it checks whether this address fits into
>> the legal set of values possible to output from the mem to shadow
>> function.
>>
>> Tie both generic and tag-based x86 KASAN modes to the address range
>> check associated with generic KASAN.
>>
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>> ---
>> Changelog v4:
>> - Add this patch to the series.
>>
>> arch/x86/include/asm/kasan.h | 7 +++++++
>> mm/kasan/report.c | 5 +++--
>> 2 files changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
>> index 375651d9b114..2372397bc3e5 100644
>> --- a/arch/x86/include/asm/kasan.h
>> +++ b/arch/x86/include/asm/kasan.h
>> @@ -49,6 +49,13 @@
>> #include <linux/bits.h>
>>
>> #ifdef CONFIG_KASAN_SW_TAGS
>> +static inline void *__kasan_mem_to_shadow(const void *addr)
>> +{
>> + return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>> + + KASAN_SHADOW_OFFSET;
>> +}
>
>You're effectively undoing "kasan: sw_tags: Use arithmetic shift for
>shadow computation" for x86 - why?
>This function needs a comment explaining this.
Sure, I'll add a comment here.
While the signed approach seems to work well for arm64 and risc-v, it
doesn't play well with x86, which wants to keep the top bit for
canonicality checks.
Trying to keep the signed mem-to-shadow scheme working for all corner
cases on all configs always turned into ugly workarounds somewhere. There
is a mechanism that, on a fault, guesses whether the address came from a
KASAN check - some address format always failed when I tried validating
both 4- and 5-level paging. One approach to keeping the signed
mem-to-shadow was using a non-canonical KASAN shadow offset. That worked
great for paging as far as I remember (some 5-level fixup code could be
removed) but it made the inline mode either hard to implement or much
slower due to extended checks.
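To make the contrast concrete (illustrative only - both forms already
appear elsewhere in the thread):

	/* Arithmetic shift: the sign of the address is propagated, which is
	 * what arm64/risc-v rely on but clashes with x86's canonicality
	 * handling: */
	shadow = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;

	/* Logical shift plus a non-canonical shadow offset, the scheme x86
	 * already uses for generic KASAN: */
	shadow = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;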
>Also, the commit message just says "it doesn't work for all cases" - why?
Fair enough, that wording was a bit terse. I'll update the patch
message with an explanation.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 03/18] kasan: sw_tags: Use arithmetic shift for shadow computation
2025-11-11 9:39 ` Alexander Potapenko
@ 2025-11-17 18:27 ` Maciej Wieczór-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 18:27 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-11 at 10:39:12 +0100, Alexander Potapenko wrote:
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index b00849ea8ffd..952ade776e51 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -61,8 +61,14 @@ int kasan_populate_early_shadow(const void *shadow_start,
>> #ifndef kasan_mem_to_shadow
>> static inline void *kasan_mem_to_shadow(const void *addr)
>> {
>> - return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
>> - + KASAN_SHADOW_OFFSET;
>> + void *scaled;
>> +
>> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>> + scaled = (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT);
>> + else
>> + scaled = (void *)((long)addr >> KASAN_SHADOW_SCALE_SHIFT);
>> +
>> + return KASAN_SHADOW_OFFSET + scaled;
>> }
>> #endif
>
>As Marco pointed out, this part is reverted in Patch 17. Any reason to do that?
I hope I was able to answer that in my reply to Marco
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 04/18] kasan: sw_tags: Support tag widths less than 8 bits
2025-11-10 17:37 ` Alexander Potapenko
@ 2025-11-17 18:35 ` Maciej Wieczór-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 18:35 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-10 at 18:37:59 +0100, Alexander Potapenko wrote:
>> +++ b/include/linux/kasan-tags.h
>> @@ -2,13 +2,16 @@
>> #ifndef _LINUX_KASAN_TAGS_H
>> #define _LINUX_KASAN_TAGS_H
>>
>> +#include <asm/kasan.h>
>
>In Patch 07, this is changed to `#include <asm/kasan-tags.h>` what is
>the point of doing that?
>Wouldn't it be better to move the addition of kasan-tags.h for
>different arches to this patch from Patch 07?
I wanted to keep the split between adding the generalized definitions
that Samuel did here, and my arch-specific changes. I thought it'd be
easier for people to review if it was kept this way. But maybe it's a
good idea to just move the asm/kasan-tags changes here too - I'll
rearrange the code a bit between these two patches.
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions
2025-11-11 9:31 ` Alexander Potapenko
@ 2025-11-17 18:41 ` Maciej Wieczór-Retman
2025-11-18 15:49 ` Maciej Wieczór-Retman
0 siblings, 1 reply; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 18:41 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-11 at 10:31:13 +0100, Alexander Potapenko wrote:
>> +#ifdef CONFIG_64BIT
>> +static inline void *__tag_set(const void *__addr, u8 tag)
>> +{
>> + u64 addr = (u64)__addr;
>> +
>> + addr &= ~__tag_shifted(KASAN_TAG_MASK);
>
>KASAN_TAG_MASK is only defined in Patch 07, does this patch compile?
Seems I forgot to remove it from patch 7. It's originally defined
in the mmzone.h file and looked cleaner there according to Andrey.
Thanks for noticing it's still in patch 7, I'll get rid of it.
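For context, the kind of definition being moved around might look like
this (hypothetical value - the series uses 4-bit tags on x86):

	#define KASAN_TAG_MASK	GENMASK(3, 0)	/* 4-bit tag in the pointer's top bits */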
^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH v6 09/18] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic
2025-11-11 9:13 ` Alexander Potapenko
@ 2025-11-17 18:43 ` Maciej Wieczór-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 18:43 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-11 at 10:13:57 +0100, Alexander Potapenko wrote:
>On Wed, Oct 29, 2025 at 8:08 PM Maciej Wieczor-Retman
><m.wieczorretman@pm.me> wrote:
>>
>> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>>
>> ARCH_HAS_EXECMEM_ROX was re-enabled on x86 in the Linux 6.14 release.
>> vm_reset_perms() calculates the range's start and end addresses using the
>> min() and max() functions. To do that it compares pointers but, with the
>> KASAN software tags mode enabled, some of them are tagged - the addr
>> variable is, while the start and end variables aren't. This can cause the
>> wrong address to be chosen and result in various errors in different
>> places.
>>
>> Reset tags in the address used as function argument in min(), max().
>>
>> execmem_cache_add() adds tagged pointers to a maple tree structure,
>> which then are incorrectly compared when walking the tree. That results
>> in different pointers being returned later and page permission violation
>> errors panicking the kernel.
>>
>> Reset tag of the address range inserted into the maple tree inside
>> execmem_vmalloc() which then gets propagated to execmem_cache_add().
>>
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>Acked-by: Alexander Potapenko <glider@google.com>
>
>> diff --git a/mm/execmem.c b/mm/execmem.c
>> index 810a4ba9c924..fd11409a6217 100644
>> --- a/mm/execmem.c
>> +++ b/mm/execmem.c
>> @@ -59,7 +59,7 @@ static void *execmem_vmalloc(struct execmem_range *range, size_t size,
>> return NULL;
>> }
>>
>> - return p;
>> + return kasan_reset_tag(p);
>
>I think a comment would be nice here.
>
>
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -3328,7 +3328,7 @@ static void vm_reset_perms(struct vm_struct *area)
>> * the vm_unmap_aliases() flush includes the direct map.
>> */
>> for (i = 0; i < area->nr_pages; i += 1U << page_order) {
>> - unsigned long addr = (unsigned long)page_address(area->pages[i]);
>> + unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));
>
>Ditto
Thanks, will add some comments on why these are needed.
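A minimal sketch of how those two hunks might read once the comments are
added - the wording below is only a suggestion, not necessarily what will
land in the next revision:

	/*
	 * In execmem_vmalloc(): reset the tag before the pointer reaches
	 * execmem_cache_add(). The maple tree compares raw pointer values,
	 * so a KASAN software tag in the top bits would make otherwise
	 * equal addresses compare as different and later trip page
	 * permission checks.
	 */
	return kasan_reset_tag(p);

	/*
	 * In vm_reset_perms(): page_address() may return a tagged pointer
	 * while start and end are untagged, so strip the tag before the
	 * min()/max() comparisons.
	 */
	unsigned long addr = (unsigned long)kasan_reset_tag(page_address(area->pages[i]));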
* Re: [PATCH v6 10/18] x86/mm: Physical address comparisons in fill_p*d/pte
2025-11-10 16:24 ` Alexander Potapenko
@ 2025-11-17 18:58 ` Maciej Wieczór-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-17 18:58 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-10 at 17:24:38 +0100, Alexander Potapenko wrote:
>On Wed, Oct 29, 2025 at 9:07 PM Maciej Wieczor-Retman
><m.wieczorretman@pm.me> wrote:
>>
>> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>>
>> Calculating the page offset returns a pointer without a tag. When the
>> calculated offset is compared to a tagged page pointer, an error is
>> raised because they are not equal.
>>
>> Change the pointer comparisons to physical address comparisons so as to
>> avoid the issues that pointer arithmetic on tagged pointers would
>> create. Open code pte_offset_kernel(), pmd_offset(), pud_offset() and
>> p4d_offset(): because one parameter is always zero and the rest of each
>> function's body is wrapped in __va(), removing that layer simplifies
>> the final assembly.
>>
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>> ---
>> Changelog v2:
>> - Open code *_offset() to avoid its internal __va().
>>
>> arch/x86/mm/init_64.c | 11 +++++++----
>> 1 file changed, 7 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>> index 0e4270e20fad..2d79fc0cf391 100644
>> --- a/arch/x86/mm/init_64.c
>> +++ b/arch/x86/mm/init_64.c
>> @@ -269,7 +269,10 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
>> if (pgd_none(*pgd)) {
>> p4d_t *p4d = (p4d_t *)spp_getpage();
>> pgd_populate(&init_mm, pgd, p4d);
>> - if (p4d != p4d_offset(pgd, 0))
>> +
>> + if (__pa(p4d) != (pgtable_l5_enabled() ?
>> + __pa(pgd) :
>> + (unsigned long)pgd_val(*pgd) & PTE_PFN_MASK))
>
>Did you test with both 4- and 5-level paging?
>If I understand correctly, p4d and pgd are supposed to be the same
>under !pgtable_l5_enabled().
Yes, I do test on both paging modes. Looking at p4d_offset() I think I
got the cases reversed somehow. Weird that it didn't raise any issues
afterwards. Thanks for pointing it out :)
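Assuming the two branches only need to be swapped, the corrected check
would presumably look like the sketch below; p4d_offset(pgd, 0) returns
the pgd itself when 5-level paging is disabled, so that is the case that
should compare against __pa(pgd). This is an assumption about the fix,
not code from the series:

	if (__pa(p4d) != (pgtable_l5_enabled() ?
			  (unsigned long)pgd_val(*pgd) & PTE_PFN_MASK :
			  __pa(pgd)))
		/* pagetable bug, report as in the existing code */ ;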
* Re: [PATCH v6 14/18] x86: Minimal SLAB alignment
2025-11-10 17:48 ` Alexander Potapenko
@ 2025-11-18 11:36 ` Maciej Wieczor-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-11-18 11:36 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-10 at 18:48:35 +0100, Alexander Potapenko wrote:
>> diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
>> index 69404eae9983..3232583b5487 100644
>> --- a/arch/x86/include/asm/cache.h
>> +++ b/arch/x86/include/asm/cache.h
>> @@ -21,4 +21,8 @@
>> #endif
>> #endif
>>
>> +#ifdef CONFIG_KASAN_SW_TAGS
>> +#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
>
>I don't think linux/linkage.h (the only header included here) defines
>KASAN_SHADOW_SCALE_SHIFT, does it?
I revised all the x86 and non-arch places where ARCH_SLAB_MINALIGN is used,
and all of them also include linux/slab.h, which pulls in
KASAN_SHADOW_SCALE_SHIFT. So there are no cases where it's undefined.
The minalign makes sense defined here, but including the kasan headers
causes compilation errors all over the place. And I don't think moving
KASAN_SHADOW_SCALE_SHIFT here makes much sense?
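For context, the software tag-based mode uses a 16-byte shadow granule
(KASAN_SHADOW_SCALE_SHIFT is 4), so the define in question simply resolves
to a 16-byte minimum slab alignment; that guarantees no two slab objects
share a shadow granule, so each object can carry its own tag:

#ifdef CONFIG_KASAN_SW_TAGS
/* 1ULL << 4 == 16: one object per 16-byte shadow granule, one tag per object. */
#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
#endif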
* Re: [PATCH v6 18/18] x86/kasan: Make software tag-based kasan available
2025-11-11 9:00 ` Alexander Potapenko
@ 2025-11-18 11:48 ` Maciej Wieczor-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczor-Retman @ 2025-11-18 11:48 UTC (permalink / raw)
To: Alexander Potapenko
Cc: xin, peterz, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
mark.rutland, trintaeoitogc, jpoimboe, thuth, pasha.tatashin,
dvyukov, jhubbard, catalin.marinas, yeoreum.yun, mhocko,
lorenzo.stoakes, samuel.holland, vincenzo.frascino, bigeasy,
surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas, tglx, mingo,
broonie, corbet, andreyknvl, maciej.wieczor-retman, david, maz,
rppt, will, luto, kasan-dev, linux-kernel, linux-arm-kernel, x86,
linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-11 at 10:00:59 +0100, Alexander Potapenko wrote:
>On Wed, Oct 29, 2025 at 9:11 PM Maciej Wieczor-Retman
><m.wieczorretman@pm.me> wrote:
>>
>> From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
>>
>
>> - ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
>> + ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory (generic mode)
>> + fffff40000000000 | -8 TB | fffffbffffffffff | 8 TB | KASAN shadow memory (software tag-based mode)
>> __________________|____________|__________________|_________|____________________________________________________________
>
>
>> + ffdf000000000000 | -8.25 PB | fffffbffffffffff | ~8 PB | KASAN shadow memory (generic mode)
>> + ffeffc0000000000 | -6 PB | fffffbffffffffff | 4 PB | KASAN shadow memory (software tag-based mode)
>> __________________|____________|__________________|_________|____________________________________________________________
>
>> + default 0xeffffc0000000000 if KASAN_SW_TAGS
>> default 0xdffffc0000000000
>
>Please elaborate in the patch description how these values were picked.
Sure, will do :)
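The sizes in the table fall straight out of the shadow scaling: generic
KASAN maps one shadow byte per 8 bytes of the kernel half of the address
space, while the software tag-based mode maps one per 16. A quick sanity
check of the numbers (my arithmetic, not part of the patch):

	/* Kernel half of the address space: 128 TB with 4-level paging, 64 PB with 5-level. */
	#define TB (1ULL << 40)
	#define PB (1ULL << 50)

	static_assert(128 * TB / 8  == 16 * TB);  /* generic, 4-level  -> 16 TB shadow */
	static_assert(128 * TB / 16 ==  8 * TB);  /* sw tags, 4-level  ->  8 TB shadow */
	static_assert( 64 * PB / 8  ==  8 * PB);  /* generic, 5-level  -> ~8 PB shadow */
	static_assert( 64 * PB / 16 ==  4 * PB);  /* sw tags, 5-level  ->  4 PB shadow */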
* Re: [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions
2025-11-17 18:41 ` Maciej Wieczór-Retman
@ 2025-11-18 15:49 ` Maciej Wieczór-Retman
0 siblings, 0 replies; 53+ messages in thread
From: Maciej Wieczór-Retman @ 2025-11-18 15:49 UTC (permalink / raw)
To: Alexander Potapenko, xin, peterz, kaleshsingh, kbingham, akpm,
nathan, ryabinin.a.a, dave.hansen, bp, morbo, jeremy.linton,
smostafa, kees, baohua, vbabka, justinstitt, wangkefeng.wang,
leitao, jan.kiszka, fujita.tomonori, hpa, urezki, ubizjak,
ada.coupriediaz, nick.desaulniers+lkml, ojeda, brgerst, elver,
pankaj.gupta, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto, kasan-dev, linux-kernel,
linux-arm-kernel, x86, linux-kbuild, linux-mm, llvm, linux-doc
On 2025-11-17 at 18:41:35 +0000, Maciej Wieczór-Retman wrote:
>On 2025-11-11 at 10:31:13 +0100, Alexander Potapenko wrote:
>>> +#ifdef CONFIG_64BIT
>>> +static inline void *__tag_set(const void *__addr, u8 tag)
>>> +{
>>> + u64 addr = (u64)__addr;
>>> +
>>> + addr &= ~__tag_shifted(KASAN_TAG_MASK);
>>
>>KASAN_TAG_MASK is only defined in Patch 07, does this patch compile?
>
>Seems I forgot to remove it from patch 7. It's originally defined
>in the mmzone.h file and looked cleaner there according to Andrey.
>
>Thanks for noticing it's still in patch 7, I'll get rid of it.
You were right before: after removing that define from patch 7 it doesn't
compile. I think I'll just open code the definition here:
>>> + addr &= ~__tag_shifted((1UL << KASAN_TAG_WIDTH) - 1);
I don't see a nicer solution if taking things from mmzone.h is out of the
question. I suppose an #ifndef KASAN_TAG_MASK guard placed here,
duplicating the definition from mmzone.h, could work too?
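Either variant could look roughly like the sketch below; the rest of
__tag_set() is assumed to follow the arm64 pattern of clearing the old
tag bits and OR-ing in the shifted new tag:

	/* Option 1: open code the mask where it is used. */
	static inline void *__tag_set(const void *__addr, u8 tag)
	{
		u64 addr = (u64)__addr;

		/* Clear the old tag bits, then insert the new tag. */
		addr &= ~__tag_shifted((1UL << KASAN_TAG_WIDTH) - 1);
		return (void *)(addr | __tag_shifted(tag));
	}

	/* Option 2: a guarded fallback, duplicating the mmzone.h definition. */
	#ifndef KASAN_TAG_MASK
	#define KASAN_TAG_MASK	((1UL << KASAN_TAG_WIDTH) - 1)
	#endif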
* Re: [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports
2025-11-17 9:47 ` Maciej Wieczór-Retman
@ 2025-11-18 20:35 ` Peter Zijlstra
0 siblings, 0 replies; 53+ messages in thread
From: Peter Zijlstra @ 2025-11-18 20:35 UTC (permalink / raw)
To: Maciej Wieczór-Retman
Cc: xin, kaleshsingh, kbingham, akpm, nathan, ryabinin.a.a,
dave.hansen, bp, morbo, jeremy.linton, smostafa, kees, baohua,
vbabka, justinstitt, wangkefeng.wang, leitao, jan.kiszka,
fujita.tomonori, hpa, urezki, ubizjak, ada.coupriediaz,
nick.desaulniers+lkml, ojeda, brgerst, elver, pankaj.gupta,
glider, mark.rutland, trintaeoitogc, jpoimboe, thuth,
pasha.tatashin, dvyukov, jhubbard, catalin.marinas, yeoreum.yun,
mhocko, lorenzo.stoakes, samuel.holland, vincenzo.frascino,
bigeasy, surenb, ardb, Liam.Howlett, nicolas.schier, ziy, kas,
tglx, mingo, broonie, corbet, andreyknvl, maciej.wieczor-retman,
david, maz, rppt, will, luto, kasan-dev, linux-kernel,
linux-arm-kernel, x86, linux-kbuild, linux-mm, llvm, linux-doc
On Mon, Nov 17, 2025 at 09:47:20AM +0000, Maciej Wieczór-Retman wrote:
> >> +void kasan_inline_handler(struct pt_regs *regs)
> >> +{
> >> + int metadata = regs->cx;
> >> + u64 addr = regs->di;
> >> + u64 pc = regs->ip;
> >> + bool recover = metadata & KASAN_ECX_RECOVER;
> >> + bool write = metadata & KASAN_ECX_WRITE;
> >> + size_t size = KASAN_ECX_SIZE(metadata);
> >> +
> >> + if (user_mode(regs))
> >> + return;
> >> +
> >> + if (!kasan_report((void *)addr, size, write, pc))
> >> + return;
> >> +
> >> + kasan_die_unless_recover(recover, "Oops - KASAN", regs, metadata, die);
> >> +}
> >
> >I'm confused. Going by the ARM64 code, the meta-data is constant per
> >site -- it is encoded in the break immediate.
> >
> >And I suggested you do the same on x86 by using the single byte
> >displacement instruction encoding.
> >
> > ud1 0xFF(%ecx), %ecx
> >
> >Also, we don't have to use a fixed register for the address, you can do:
> >
> > ud1 0xFF(%ecx), %reg
> >
> >and have %reg tell us what register the address is in.
> >
> >Then you can recover the meta-data from the displacement immediate and
> >the address from whatever register is denoted.
> >
> >This avoids the 'callsite' from having to clobber cx and move the address
> >into di.
> >
> >What you have here will work, and I don't suppose we care about code
> >density with KASAN much, but it could've been so much better :/
>
> Thanks for checking the patch out, maybe I got too focused on just
> getting clang to work. You're right, I'll try using the displacement
> encoding.
>
> I was attempting a few different encodings because clang was fussy about
> putting data where I wanted it. The one in the patch worked fine and I
> thought it'd be consistent with the form that UBSan uses. But yeah, I'll
> work on it more.
>
> I'll also go and rebase my series onto your WARN() hackery one since
> there are a lot of changes to traps.c.
Thanks!
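A rough sketch of the handler shape the displacement-based encoding would
lead to; the metadata bit layout and helper names below are assumptions
for illustration only, and decoding the displacement byte and the ModRM
register is assumed to happen earlier in the #UD trap path:

	/*
	 * Hypothetical sketch, not code from the series: metadata comes from
	 * the UD1 disp8, the faulting address from the register named by
	 * ModRM.reg. The bit layout below is an assumption for illustration.
	 */
	#define UD1_KASAN_RECOVER	BIT(7)			/* report and keep running */
	#define UD1_KASAN_WRITE		BIT(6)			/* access was a write      */
	#define UD1_KASAN_SIZE(m)	(1UL << ((m) & 0xf))	/* log2-encoded size       */

	static void kasan_ud1_handler(struct pt_regs *regs, u8 metadata, unsigned long addr)
	{
		bool recover = metadata & UD1_KASAN_RECOVER;
		bool write = metadata & UD1_KASAN_WRITE;
		size_t size = UD1_KASAN_SIZE(metadata);

		if (user_mode(regs))
			return;

		if (!kasan_report((void *)addr, size, write, regs->ip))
			return;

		kasan_die_unless_recover(recover, "Oops - KASAN", regs, metadata, die);
	}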
Thread overview: 53+ messages
2025-10-29 19:05 [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Maciej Wieczor-Retman
2025-10-29 19:05 ` [PATCH v6 01/18] kasan: Unpoison pcpu chunks with base address tag Maciej Wieczor-Retman
2025-11-10 17:32 ` Alexander Potapenko
2025-11-17 17:51 ` Maciej Wieczór-Retman
2025-10-29 19:06 ` [PATCH v6 02/18] kasan: Unpoison vms[area] addresses with a common tag Maciej Wieczor-Retman
2025-11-10 16:40 ` Alexander Potapenko
2025-10-29 19:06 ` [PATCH v6 03/18] kasan: sw_tags: Use arithmetic shift for shadow computation Maciej Wieczor-Retman
2025-11-11 9:39 ` Alexander Potapenko
2025-11-17 18:27 ` Maciej Wieczór-Retman
2025-10-29 19:06 ` [PATCH v6 04/18] kasan: sw_tags: Support tag widths less than 8 bits Maciej Wieczor-Retman
2025-11-10 17:37 ` Alexander Potapenko
2025-11-17 18:35 ` Maciej Wieczór-Retman
2025-10-29 19:06 ` [PATCH v6 05/18] kasan: Fix inline mode for x86 tag-based mode Maciej Wieczor-Retman
2025-11-11 9:22 ` Alexander Potapenko
2025-10-29 19:07 ` [PATCH v6 06/18] x86/kasan: Add arch specific kasan functions Maciej Wieczor-Retman
2025-11-11 9:31 ` Alexander Potapenko
2025-11-17 18:41 ` Maciej Wieczór-Retman
2025-11-18 15:49 ` Maciej Wieczór-Retman
2025-10-29 19:07 ` [PATCH v6 07/18] kasan: arm64: x86: Make special tags arch specific Maciej Wieczor-Retman
2025-11-11 9:34 ` Alexander Potapenko
2025-10-29 19:07 ` [PATCH v6 08/18] x86/mm: Reset tag for virtual to physical address conversions Maciej Wieczor-Retman
2025-11-11 9:42 ` Alexander Potapenko
2025-10-29 19:07 ` [PATCH v6 09/18] mm/execmem: Untag addresses in EXECMEM_ROX related pointer arithmetic Maciej Wieczor-Retman
2025-11-11 9:13 ` Alexander Potapenko
2025-11-17 18:43 ` Maciej Wieczór-Retman
2025-10-29 20:07 ` [PATCH v6 10/18] x86/mm: Physical address comparisons in fill_p*d/pte Maciej Wieczor-Retman
2025-11-10 16:24 ` Alexander Potapenko
2025-11-17 18:58 ` Maciej Wieczór-Retman
2025-10-29 20:07 ` [PATCH v6 11/18] x86/kasan: KASAN raw shadow memory PTE init Maciej Wieczor-Retman
2025-11-11 9:11 ` Alexander Potapenko
2025-10-29 20:08 ` [PATCH v6 12/18] x86/mm: LAM compatible non-canonical definition Maciej Wieczor-Retman
2025-11-11 9:07 ` Alexander Potapenko
2025-10-29 20:08 ` [PATCH v6 13/18] x86/mm: LAM initialization Maciej Wieczor-Retman
2025-11-11 9:04 ` Alexander Potapenko
2025-10-29 20:09 ` [PATCH v6 14/18] x86: Minimal SLAB alignment Maciej Wieczor-Retman
2025-11-10 17:48 ` Alexander Potapenko
2025-11-18 11:36 ` Maciej Wieczor-Retman
2025-10-29 20:09 ` [PATCH v6 15/18] x86/kasan: Handle UD1 for inline KASAN reports Maciej Wieczor-Retman
2025-11-11 10:14 ` Alexander Potapenko
2025-11-11 10:27 ` Peter Zijlstra
2025-11-17 9:47 ` Maciej Wieczór-Retman
2025-11-18 20:35 ` Peter Zijlstra
2025-10-29 20:10 ` [PATCH v6 16/18] arm64: Unify software tag-based KASAN inline recovery path Maciej Wieczor-Retman
2025-11-11 9:02 ` Alexander Potapenko
2025-10-29 20:11 ` [PATCH v6 17/18] x86/kasan: Logical bit shift for kasan_mem_to_shadow Maciej Wieczor-Retman
2025-11-10 14:49 ` Marco Elver
2025-11-17 18:26 ` Maciej Wieczór-Retman
2025-10-29 20:11 ` [PATCH v6 18/18] x86/kasan: Make software tag-based kasan available Maciej Wieczor-Retman
2025-11-11 9:00 ` Alexander Potapenko
2025-11-18 11:48 ` Maciej Wieczor-Retman
2025-10-29 22:08 ` [PATCH v6 00/18] kasan: x86: arm64: KASAN tag-based mode for x86 Andrew Morton
2025-10-29 23:13 ` Andrew Morton
2025-10-30 5:31 ` Maciej Wieczór-Retman