* [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations
@ 2025-08-05 14:26 Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes Sabyrzhan Tasbolatov
` (8 more replies)
0 siblings, 9 replies; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
This patch series addresses the fragmentation in KASAN initialization
across architectures by introducing a unified approach that eliminates
duplicate static keys and arch-specific kasan_arch_is_ready()
implementations.
The core issue is that different architectures have inconsistent approaches
to KASAN readiness tracking:
- PowerPC, LoongArch, and UML each implement their own kasan_arch_is_ready()
- Only HW_TAGS mode had a unified static key (kasan_flag_enabled)
- Generic and SW_TAGS modes relied on arch-specific solutions
or always-on behavior
This series implements a two-level approach:
1. kasan_enabled() - compile-time check for KASAN configuration
2. kasan_shadow_initialized() - runtime check for shadow memory readiness
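For illustration, a minimal sketch (taken from the wrappers added in
patch 1) of how the two levels compose at a call site:

	/* Level 1: compile-time; folds away when CONFIG_KASAN=n. */
	static __always_inline bool kasan_enabled(void)
	{
		return IS_ENABLED(CONFIG_KASAN);
	}

	/* Level 2: a static key under ARCH_DEFER_KASAN/HW_TAGS,
	 * a compile-time constant otherwise. Hooks test it before
	 * touching shadow memory.
	 */
	static __always_inline void kasan_kfree_large(void *ptr)
	{
		if (kasan_shadow_initialized())
			__kasan_kfree_large(ptr, _RET_IP_);
	}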
Changes in v4:
- Unified patches where ARCH_DEFER_KASAN is introduced and used
in the KASAN code (Andrey Ryabinin)
- Fixed kasan_enable() for HW_TAGS mode (Andrey Ryabinin)
- Replaced !kasan_enabled() with !kasan_shadow_initialized() in
loongarch which selects ARCH_DEFER_KASAN (Andrey Ryabinin)
- Addressed the issue in UML arch, where kasan_init_generic() is
called before jump_label_init() (Andrey Ryabinin)
Adding to the To: list additional recipients who developed KASAN for
LoongArch and UML.
Tested on:
- powerpc - selects ARCH_DEFER_KASAN
Built ppc64_defconfig (PPC_BOOK3S_64) - OK
Booted via qemu-system-ppc64 - OK
- um - selects ARCH_DEFER_KASAN
Built defconfig with KASAN_INLINE - OK
Built defconfig with STATIC_LINK && KASAN_OUTLINE - OK
Booted ./linux - OK
- loongarch - selects ARCH_DEFER_KASAN
Built defconfig with KASAN_GENERIC - OK
Haven't tested the boot; asking LoongArch developers to verify - N/A
It should be fine, though, since LoongArch does not have a special
kasan_init() call the way UML does: it selects ARCH_DEFER_KASAN and
calls kasan_init() at the end of setup_arch(), after jump_label_init().
- arm64
Built defconfig, kvm_guest.config with HW_TAGS, SW_TAGS, GENERIC - OK
KASAN_KUNIT_TEST - OK
Booted via qemu-system-arm64 - OK
- x86_64
Built defconfig, kvm_guest.config with KASAN_GENERIC - OK
KASAN_KUNIT_TEST - OK
Booted via qemu-system-x86 - OK
- s390, riscv, xtensa, arm
Built defconfig with KASAN_GENERIC - OK
Previous v3 thread: https://lore.kernel.org/all/20250717142732.292822-1-snovitoll@gmail.com/
Previous v2 thread: https://lore.kernel.org/all/20250626153147.145312-1-snovitoll@gmail.com/
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Sabyrzhan Tasbolatov (9):
kasan: introduce ARCH_DEFER_KASAN and unify static key across modes
kasan/powerpc: select ARCH_DEFER_KASAN and call kasan_init_generic
kasan/arm,arm64: call kasan_init_generic in kasan_init
kasan/xtensa: call kasan_init_generic in kasan_init
kasan/loongarch: select ARCH_DEFER_KASAN and call kasan_init_generic
kasan/um: select ARCH_DEFER_KASAN and call kasan_init_generic
kasan/x86: call kasan_init_generic in kasan_init
kasan/s390: call kasan_init_generic in kasan_init
kasan/riscv: call kasan_init_generic in kasan_init
arch/arm/mm/kasan_init.c | 2 +-
arch/arm64/mm/kasan_init.c | 4 +--
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/kasan.h | 7 -----
arch/loongarch/mm/kasan_init.c | 8 ++---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kasan.h | 12 --------
arch/powerpc/mm/kasan/init_32.c | 2 +-
arch/powerpc/mm/kasan/init_book3e_64.c | 2 +-
arch/powerpc/mm/kasan/init_book3s_64.c | 6 +---
arch/riscv/mm/kasan_init.c | 1 +
arch/s390/kernel/early.c | 3 +-
arch/um/Kconfig | 1 +
arch/um/include/asm/kasan.h | 5 ---
arch/um/kernel/mem.c | 12 ++++++--
arch/x86/mm/kasan_init_64.c | 2 +-
arch/xtensa/mm/kasan_init.c | 2 +-
include/linux/kasan-enabled.h | 36 +++++++++++++++++-----
include/linux/kasan.h | 42 ++++++++++++++++++++------
lib/Kconfig.kasan | 8 +++++
mm/kasan/common.c | 18 +++++++----
mm/kasan/generic.c | 23 ++++++++------
mm/kasan/hw_tags.c | 9 +-----
mm/kasan/kasan.h | 36 ++++++++++++++++------
mm/kasan/shadow.c | 32 +++++---------------
mm/kasan/sw_tags.c | 4 ++-
mm/kasan/tags.c | 2 +-
27 files changed, 157 insertions(+), 124 deletions(-)
--
2.34.1
* [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-06 13:34 ` Andrey Ryabinin
2025-08-05 14:26 ` [PATCH v4 2/9] kasan/powerpc: select ARCH_DEFER_KASAN and call kasan_init_generic Sabyrzhan Tasbolatov
` (7 subsequent siblings)
8 siblings, 1 reply; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures that need
to defer KASAN initialization until shadow memory is properly set up,
and unify the static key infrastructure across all KASAN modes.
Some architectures (like PowerPC with radix MMU) need to set up their
shadow memory mappings before KASAN can be safely enabled, while others
(like s390, x86, arm) can enable KASAN much earlier or even from the
beginning.
Historically, the runtime static key kasan_flag_enabled existed only for
CONFIG_KASAN_HW_TAGS mode. Generic and SW_TAGS modes either relied on
architecture-specific kasan_arch_is_ready() implementations or evaluated
KASAN checks unconditionally, leading to code duplication.
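The arch-facing contract is small. A sketch, with "xxx" as a placeholder
architecture (the full wiring is in the hunks below):

	# arch/xxx/Kconfig: opt in to deferred enablement
	select ARCH_DEFER_KASAN

	/* arch/xxx/mm/kasan_init.c */
	void __init kasan_init(void)
	{
		/* ... set up shadow memory mappings ... */
		kasan_init_generic();	/* kasan_enable() + banner */
	}

Architectures that do not select ARCH_DEFER_KASAN get an empty
kasan_enable() stub, and the checks stay compile-time constants.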
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
Changes in v4:
- Fixed HW_TAGS static key functionality (was broken in v3)
- Merged configuration and implementation for atomicity
---
include/linux/kasan-enabled.h | 36 +++++++++++++++++++++++-------
include/linux/kasan.h | 42 +++++++++++++++++++++++++++--------
lib/Kconfig.kasan | 8 +++++++
mm/kasan/common.c | 18 ++++++++++-----
mm/kasan/generic.c | 23 +++++++++++--------
mm/kasan/hw_tags.c | 9 +-------
mm/kasan/kasan.h | 36 +++++++++++++++++++++---------
mm/kasan/shadow.c | 32 ++++++--------------------
mm/kasan/sw_tags.c | 4 +++-
mm/kasan/tags.c | 2 +-
10 files changed, 133 insertions(+), 77 deletions(-)
diff --git a/include/linux/kasan-enabled.h b/include/linux/kasan-enabled.h
index 6f612d69ea0..52a3909f032 100644
--- a/include/linux/kasan-enabled.h
+++ b/include/linux/kasan-enabled.h
@@ -4,32 +4,52 @@
#include <linux/static_key.h>
-#ifdef CONFIG_KASAN_HW_TAGS
+/* Controls whether KASAN is enabled at all (compile-time check). */
+static __always_inline bool kasan_enabled(void)
+{
+ return IS_ENABLED(CONFIG_KASAN);
+}
+#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
+/*
+ * Global runtime flag for KASAN modes that need runtime control.
+ * Used by ARCH_DEFER_KASAN architectures and HW_TAGS mode.
+ */
DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
-static __always_inline bool kasan_enabled(void)
+/*
+ * Runtime control for shadow memory initialization or HW_TAGS mode.
+ * Uses static key for architectures that need deferred KASAN or HW_TAGS.
+ */
+static __always_inline bool kasan_shadow_initialized(void)
{
return static_branch_likely(&kasan_flag_enabled);
}
-static inline bool kasan_hw_tags_enabled(void)
+static inline void kasan_enable(void)
+{
+ static_branch_enable(&kasan_flag_enabled);
+}
+#else
+/* For architectures that can enable KASAN early, use compile-time check. */
+static __always_inline bool kasan_shadow_initialized(void)
{
return kasan_enabled();
}
-#else /* CONFIG_KASAN_HW_TAGS */
+static inline void kasan_enable(void) {}
+#endif /* CONFIG_ARCH_DEFER_KASAN || CONFIG_KASAN_HW_TAGS */
-static inline bool kasan_enabled(void)
+#ifdef CONFIG_KASAN_HW_TAGS
+static inline bool kasan_hw_tags_enabled(void)
{
- return IS_ENABLED(CONFIG_KASAN);
+ return kasan_shadow_initialized();
}
-
+#else
static inline bool kasan_hw_tags_enabled(void)
{
return false;
}
-
#endif /* CONFIG_KASAN_HW_TAGS */
#endif /* LINUX_KASAN_ENABLED_H */
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2..5bf05aed795 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -194,7 +194,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
void *object)
{
- if (kasan_enabled())
+ if (kasan_shadow_initialized())
return __kasan_slab_pre_free(s, object, _RET_IP_);
return false;
}
@@ -229,7 +229,7 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s,
void *object, bool init,
bool still_accessible)
{
- if (kasan_enabled())
+ if (kasan_shadow_initialized())
return __kasan_slab_free(s, object, init, still_accessible);
return false;
}
@@ -237,7 +237,7 @@ static __always_inline bool kasan_slab_free(struct kmem_cache *s,
void __kasan_kfree_large(void *ptr, unsigned long ip);
static __always_inline void kasan_kfree_large(void *ptr)
{
- if (kasan_enabled())
+ if (kasan_shadow_initialized())
__kasan_kfree_large(ptr, _RET_IP_);
}
@@ -302,7 +302,7 @@ bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
static __always_inline bool kasan_mempool_poison_pages(struct page *page,
unsigned int order)
{
- if (kasan_enabled())
+ if (kasan_shadow_initialized())
return __kasan_mempool_poison_pages(page, order, _RET_IP_);
return true;
}
@@ -356,7 +356,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
*/
static __always_inline bool kasan_mempool_poison_object(void *ptr)
{
- if (kasan_enabled())
+ if (kasan_shadow_initialized())
return __kasan_mempool_poison_object(ptr, _RET_IP_);
return true;
}
@@ -543,6 +543,12 @@ void kasan_report_async(void);
#endif /* CONFIG_KASAN_HW_TAGS */
+#ifdef CONFIG_KASAN_GENERIC
+void __init kasan_init_generic(void);
+#else
+static inline void kasan_init_generic(void) { }
+#endif
+
#ifdef CONFIG_KASAN_SW_TAGS
void __init kasan_init_sw_tags(void);
#else
@@ -562,11 +568,29 @@ static inline void kasan_init_hw_tags(void) { }
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
-void kasan_release_vmalloc(unsigned long start, unsigned long end,
+
+int __kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+static inline int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+{
+ if (!kasan_shadow_initialized())
+ return 0;
+ return __kasan_populate_vmalloc(addr, size);
+}
+
+void __kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end,
unsigned long flags);
+static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+ unsigned long free_region_end,
+ unsigned long flags)
+{
+ if (kasan_shadow_initialized())
+ __kasan_release_vmalloc(start, end, free_region_start,
+ free_region_end, flags);
+}
#else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
@@ -592,7 +616,7 @@ static __always_inline void *kasan_unpoison_vmalloc(const void *start,
unsigned long size,
kasan_vmalloc_flags_t flags)
{
- if (kasan_enabled())
+ if (kasan_shadow_initialized())
return __kasan_unpoison_vmalloc(start, size, flags);
return (void *)start;
}
@@ -601,7 +625,7 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size);
static __always_inline void kasan_poison_vmalloc(const void *start,
unsigned long size)
{
- if (kasan_enabled())
+ if (kasan_shadow_initialized())
__kasan_poison_vmalloc(start, size);
}
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f82889a830f..38456560c85 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -19,6 +19,14 @@ config ARCH_DISABLE_KASAN_INLINE
Disables both inline and stack instrumentation. Selected by
architectures that do not support these instrumentation types.
+config ARCH_DEFER_KASAN
+ bool
+ help
+ Architectures should select this if they need to defer KASAN
+ initialization until shadow memory is properly set up. This
+ enables runtime control via static keys. Otherwise, KASAN uses
+ compile-time constants for better performance.
+
config CC_HAS_KASAN_GENERIC
def_bool $(cc-option, -fsanitize=kernel-address)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index ed4873e18c7..dff5f7bfad1 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -32,6 +32,15 @@
#include "kasan.h"
#include "../slab.h"
+#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
+/*
+ * Definition of the unified static key declared in kasan-enabled.h.
+ * This provides consistent runtime enable/disable across KASAN modes.
+ */
+DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
+EXPORT_SYMBOL(kasan_flag_enabled);
+#endif
+
struct slab *kasan_addr_to_slab(const void *addr)
{
if (virt_addr_valid(addr))
@@ -250,7 +259,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
unsigned long ip)
{
- if (!kasan_arch_is_ready() || is_kfence_address(object))
+ if (is_kfence_address(object))
return false;
return check_slab_allocation(cache, object, ip);
}
@@ -258,7 +267,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
bool still_accessible)
{
- if (!kasan_arch_is_ready() || is_kfence_address(object))
+ if (is_kfence_address(object))
return false;
poison_slab_object(cache, object, init, still_accessible);
@@ -282,9 +291,6 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
static inline bool check_page_allocation(void *ptr, unsigned long ip)
{
- if (!kasan_arch_is_ready())
- return false;
-
if (ptr != page_address(virt_to_head_page(ptr))) {
kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
return true;
@@ -511,7 +517,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
return true;
}
- if (is_kfence_address(ptr) || !kasan_arch_is_ready())
+ if (is_kfence_address(ptr))
return true;
slab = folio_slab(folio);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d54e89f8c3e..1d20b925b9d 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -36,6 +36,17 @@
#include "kasan.h"
#include "../slab.h"
+/*
+ * Initialize Generic KASAN and enable runtime checks.
+ * This should be called from arch kasan_init() once shadow memory is ready.
+ */
+void __init kasan_init_generic(void)
+{
+ kasan_enable();
+
+ pr_info("KernelAddressSanitizer initialized (generic)\n");
+}
+
/*
* All functions below always inlined so compiler could
* perform better optimizations in each of __asan_loadX/__assn_storeX
@@ -165,7 +176,7 @@ static __always_inline bool check_region_inline(const void *addr,
size_t size, bool write,
unsigned long ret_ip)
{
- if (!kasan_arch_is_ready())
+ if (!kasan_shadow_initialized())
return true;
if (unlikely(size == 0))
@@ -189,13 +200,10 @@ bool kasan_check_range(const void *addr, size_t size, bool write,
return check_region_inline(addr, size, write, ret_ip);
}
-bool kasan_byte_accessible(const void *addr)
+bool __kasan_byte_accessible(const void *addr)
{
s8 shadow_byte;
- if (!kasan_arch_is_ready())
- return true;
-
shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(addr));
return shadow_byte >= 0 && shadow_byte < KASAN_GRANULE_SIZE;
@@ -495,9 +503,6 @@ static void release_alloc_meta(struct kasan_alloc_meta *meta)
static void release_free_meta(const void *object, struct kasan_free_meta *meta)
{
- if (!kasan_arch_is_ready())
- return;
-
/* Check if free meta is valid. */
if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREE_META)
return;
@@ -562,7 +567,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
kasan_save_track(&alloc_meta->alloc_track, flags);
}
-void kasan_save_free_info(struct kmem_cache *cache, void *object)
+void __kasan_save_free_info(struct kmem_cache *cache, void *object)
{
struct kasan_free_meta *free_meta;
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b5..c8289a3feab 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -45,13 +45,6 @@ static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
-/*
- * Whether KASAN is enabled at all.
- * The value remains false until KASAN is initialized by kasan_init_hw_tags().
- */
-DEFINE_STATIC_KEY_FALSE(kasan_flag_enabled);
-EXPORT_SYMBOL(kasan_flag_enabled);
-
/*
* Whether the selected mode is synchronous, asynchronous, or asymmetric.
* Defaults to KASAN_MODE_SYNC.
@@ -260,7 +253,7 @@ void __init kasan_init_hw_tags(void)
kasan_init_tags();
/* KASAN is now initialized, enable it. */
- static_branch_enable(&kasan_flag_enabled);
+ kasan_enable();
pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
kasan_mode_info(),
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e6..2d67a99898e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -398,7 +398,13 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, depot_flags_t depot_flags);
void kasan_set_track(struct kasan_track *track, depot_stack_handle_t stack);
void kasan_save_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-void kasan_save_free_info(struct kmem_cache *cache, void *object);
+
+void __kasan_save_free_info(struct kmem_cache *cache, void *object);
+static inline void kasan_save_free_info(struct kmem_cache *cache, void *object)
+{
+ if (kasan_shadow_initialized())
+ __kasan_save_free_info(cache, object);
+}
#ifdef CONFIG_KASAN_GENERIC
bool kasan_quarantine_put(struct kmem_cache *cache, void *object);
@@ -499,6 +505,7 @@ static inline bool kasan_byte_accessible(const void *addr)
#else /* CONFIG_KASAN_HW_TAGS */
+void __kasan_poison(const void *addr, size_t size, u8 value, bool init);
/**
* kasan_poison - mark the memory range as inaccessible
* @addr: range start address, must be aligned to KASAN_GRANULE_SIZE
@@ -506,7 +513,11 @@ static inline bool kasan_byte_accessible(const void *addr)
* @value: value that's written to metadata for the range
* @init: whether to initialize the memory range (only for hardware tag-based)
*/
-void kasan_poison(const void *addr, size_t size, u8 value, bool init);
+static inline void kasan_poison(const void *addr, size_t size, u8 value, bool init)
+{
+ if (kasan_shadow_initialized())
+ __kasan_poison(addr, size, value, init);
+}
/**
* kasan_unpoison - mark the memory range as accessible
@@ -521,12 +532,19 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init);
*/
void kasan_unpoison(const void *addr, size_t size, bool init);
-bool kasan_byte_accessible(const void *addr);
+bool __kasan_byte_accessible(const void *addr);
+static inline bool kasan_byte_accessible(const void *addr)
+{
+ if (!kasan_shadow_initialized())
+ return true;
+ return __kasan_byte_accessible(addr);
+}
#endif /* CONFIG_KASAN_HW_TAGS */
#ifdef CONFIG_KASAN_GENERIC
+void __kasan_poison_last_granule(const void *address, size_t size);
/**
* kasan_poison_last_granule - mark the last granule of the memory range as
* inaccessible
@@ -536,7 +554,11 @@ bool kasan_byte_accessible(const void *addr);
* This function is only available for the generic mode, as it's the only mode
* that has partially poisoned memory granules.
*/
-void kasan_poison_last_granule(const void *address, size_t size);
+static inline void kasan_poison_last_granule(const void *address, size_t size)
+{
+ if (kasan_shadow_initialized())
+ __kasan_poison_last_granule(address, size);
+}
#else /* CONFIG_KASAN_GENERIC */
@@ -544,12 +566,6 @@ static inline void kasan_poison_last_granule(const void *address, size_t size) {
#endif /* CONFIG_KASAN_GENERIC */
-#ifndef kasan_arch_is_ready
-static inline bool kasan_arch_is_ready(void) { return true; }
-#elif !defined(CONFIG_KASAN_GENERIC) || !defined(CONFIG_KASAN_OUTLINE)
-#error kasan_arch_is_ready only works in KASAN generic outline mode!
-#endif
-
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
void kasan_kunit_test_suite_start(void);
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d2c70cd2afb..90c508cad63 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -121,13 +121,10 @@ void *__hwasan_memcpy(void *dest, const void *src, ssize_t len) __alias(__asan_m
EXPORT_SYMBOL(__hwasan_memcpy);
#endif
-void kasan_poison(const void *addr, size_t size, u8 value, bool init)
+void __kasan_poison(const void *addr, size_t size, u8 value, bool init)
{
void *shadow_start, *shadow_end;
- if (!kasan_arch_is_ready())
- return;
-
/*
* Perform shadow offset calculation based on untagged address, as
* some of the callers (e.g. kasan_poison_new_object) pass tagged
@@ -145,14 +142,11 @@ void kasan_poison(const void *addr, size_t size, u8 value, bool init)
__memset(shadow_start, value, shadow_end - shadow_start);
}
-EXPORT_SYMBOL_GPL(kasan_poison);
+EXPORT_SYMBOL_GPL(__kasan_poison);
#ifdef CONFIG_KASAN_GENERIC
-void kasan_poison_last_granule(const void *addr, size_t size)
+void __kasan_poison_last_granule(const void *addr, size_t size)
{
- if (!kasan_arch_is_ready())
- return;
-
if (size & KASAN_GRANULE_MASK) {
u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size);
*shadow = size & KASAN_GRANULE_MASK;
@@ -353,7 +347,7 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
return 0;
}
-static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+static int __kasan_populate_vmalloc_do(unsigned long start, unsigned long end)
{
unsigned long nr_pages, nr_total = PFN_UP(end - start);
struct vmalloc_populate_data data;
@@ -385,14 +379,11 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
return ret;
}
-int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+int __kasan_populate_vmalloc(unsigned long addr, unsigned long size)
{
unsigned long shadow_start, shadow_end;
int ret;
- if (!kasan_arch_is_ready())
- return 0;
-
if (!is_vmalloc_or_module_addr((void *)addr))
return 0;
@@ -414,7 +405,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
shadow_start = PAGE_ALIGN_DOWN(shadow_start);
shadow_end = PAGE_ALIGN(shadow_end);
- ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
+ ret = __kasan_populate_vmalloc_do(shadow_start, shadow_end);
if (ret)
return ret;
@@ -551,7 +542,7 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
* pages entirely covered by the free region, we will not run in to any
* trouble - any simultaneous allocations will be for disjoint regions.
*/
-void kasan_release_vmalloc(unsigned long start, unsigned long end,
+void __kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long free_region_start,
unsigned long free_region_end,
unsigned long flags)
@@ -560,9 +551,6 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
unsigned long region_start, region_end;
unsigned long size;
- if (!kasan_arch_is_ready())
- return;
-
region_start = ALIGN(start, KASAN_MEMORY_PER_SHADOW_PAGE);
region_end = ALIGN_DOWN(end, KASAN_MEMORY_PER_SHADOW_PAGE);
@@ -611,9 +599,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
* with setting memory tags, so the KASAN_VMALLOC_INIT flag is ignored.
*/
- if (!kasan_arch_is_ready())
- return (void *)start;
-
if (!is_vmalloc_or_module_addr(start))
return (void *)start;
@@ -636,9 +621,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
*/
void __kasan_poison_vmalloc(const void *start, unsigned long size)
{
- if (!kasan_arch_is_ready())
- return;
-
if (!is_vmalloc_or_module_addr(start))
return;
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index b9382b5b6a3..51a376940ea 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -45,6 +45,8 @@ void __init kasan_init_sw_tags(void)
kasan_init_tags();
+ kasan_enable();
+
pr_info("KernelAddressSanitizer initialized (sw-tags, stacktrace=%s)\n",
str_on_off(kasan_stack_collection_enabled()));
}
@@ -120,7 +122,7 @@ bool kasan_check_range(const void *addr, size_t size, bool write,
return true;
}
-bool kasan_byte_accessible(const void *addr)
+bool __kasan_byte_accessible(const void *addr)
{
u8 tag = get_tag(addr);
void *untagged_addr = kasan_reset_tag(addr);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index d65d48b85f9..b9f31293622 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -142,7 +142,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
save_stack_info(cache, object, flags, false);
}
-void kasan_save_free_info(struct kmem_cache *cache, void *object)
+void __kasan_save_free_info(struct kmem_cache *cache, void *object)
{
save_stack_info(cache, object, 0, true);
}
--
2.34.1
* [PATCH v4 2/9] kasan/powerpc: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 3/9] kasan/arm,arm64: call kasan_init_generic in kasan_init Sabyrzhan Tasbolatov
` (6 subsequent siblings)
8 siblings, 0 replies; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
PowerPC with radix MMU is the primary architecture that needs deferred
KASAN initialization, as it requires complex shadow memory setup before
KASAN can be safely enabled.
Select ARCH_DEFER_KASAN for PPC_RADIX_MMU to enable the static key
mechanism for runtime KASAN control. Other PowerPC configurations
(like book3e and 32-bit) can enable KASAN early and will use
compile-time constants instead.
Remove the PowerPC-specific static key and kasan_arch_is_ready()
implementation in favor of the unified interface.
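The practical effect, sketched with the patch 1 helpers (example_hook is
a hypothetical caller, not from this series):

	static void example_hook(const void *addr)
	{
		/* radix (ARCH_DEFER_KASAN=y): static_branch_likely(&kasan_flag_enabled),
		 * flipped once kasan_init() calls kasan_init_generic();
		 * book3e/32-bit (=n): IS_ENABLED(CONFIG_KASAN), constant-folded.
		 */
		if (!kasan_shadow_initialized())
			return;
		/* ... safe to touch shadow memory for @addr ... */
	}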
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Fixes: 55d77bae7342 ("kasan: fix Oops due to missing calls to kasan_arch_is_ready()")
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kasan.h | 12 ------------
arch/powerpc/mm/kasan/init_32.c | 2 +-
arch/powerpc/mm/kasan/init_book3e_64.c | 2 +-
arch/powerpc/mm/kasan/init_book3s_64.c | 6 +-----
5 files changed, 4 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 93402a1d9c9..11c8ef2d88e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -122,6 +122,7 @@ config PPC
# Please keep this list sorted alphabetically.
#
select ARCH_32BIT_OFF_T if PPC32
+ select ARCH_DEFER_KASAN if PPC_RADIX_MMU
select ARCH_DISABLE_KASAN_INLINE if PPC_RADIX_MMU
select ARCH_DMA_DEFAULT_COHERENT if !NOT_COHERENT_CACHE
select ARCH_ENABLE_MEMORY_HOTPLUG
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index b5bbb94c51f..957a57c1db5 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -53,18 +53,6 @@
#endif
#ifdef CONFIG_KASAN
-#ifdef CONFIG_PPC_BOOK3S_64
-DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
-
-static __always_inline bool kasan_arch_is_ready(void)
-{
- if (static_branch_likely(&powerpc_kasan_enabled_key))
- return true;
- return false;
-}
-
-#define kasan_arch_is_ready kasan_arch_is_ready
-#endif
void kasan_early_init(void);
void kasan_mmu_init(void);
diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index 03666d790a5..1d083597464 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -165,7 +165,7 @@ void __init kasan_init(void)
/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}
void __init kasan_late_init(void)
diff --git a/arch/powerpc/mm/kasan/init_book3e_64.c b/arch/powerpc/mm/kasan/init_book3e_64.c
index 60c78aac0f6..0d3a73d6d4b 100644
--- a/arch/powerpc/mm/kasan/init_book3e_64.c
+++ b/arch/powerpc/mm/kasan/init_book3e_64.c
@@ -127,7 +127,7 @@ void __init kasan_init(void)
/* Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}
void __init kasan_late_init(void) { }
diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
index 7d959544c07..dcafa641804 100644
--- a/arch/powerpc/mm/kasan/init_book3s_64.c
+++ b/arch/powerpc/mm/kasan/init_book3s_64.c
@@ -19,8 +19,6 @@
#include <linux/memblock.h>
#include <asm/pgalloc.h>
-DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
-
static void __init kasan_init_phys_region(void *start, void *end)
{
unsigned long k_start, k_end, k_cur;
@@ -92,11 +90,9 @@ void __init kasan_init(void)
*/
memset(kasan_early_shadow_page, 0, PAGE_SIZE);
- static_branch_inc(&powerpc_kasan_enabled_key);
-
/* Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KASAN init done\n");
+ kasan_init_generic();
}
void __init kasan_early_init(void) { }
--
2.34.1
* [PATCH v4 3/9] kasan/arm,arm64: call kasan_init_generic in kasan_init
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 2/9] kasan/powerpc: select ARCH_DEFER_KASAN and call kasan_init_generic Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 4/9] kasan/xtensa: " Sabyrzhan Tasbolatov
` (5 subsequent siblings)
8 siblings, 0 replies; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
Call kasan_init_generic() which handles Generic KASAN initialization.
Since arm64 doesn't select ARCH_DEFER_KASAN, this will be a no-op for
the runtime flag but will print the initialization banner.
For SW_TAGS and HW_TAGS modes, their respective init functions will
handle the flag enabling.
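Concretely, with ARCH_DEFER_KASAN=n the call reduces to the banner;
both definitions below come from patch 1:

	static inline void kasan_enable(void) {}	/* !ARCH_DEFER_KASAN && !HW_TAGS */

	void __init kasan_init_generic(void)
	{
		kasan_enable();		/* no-op on arm64 */
		pr_info("KernelAddressSanitizer initialized (generic)\n");
	}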
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
arch/arm/mm/kasan_init.c | 2 +-
arch/arm64/mm/kasan_init.c | 4 +---
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c
index 111d4f70313..c6625e808bf 100644
--- a/arch/arm/mm/kasan_init.c
+++ b/arch/arm/mm/kasan_init.c
@@ -300,6 +300,6 @@ void __init kasan_init(void)
local_flush_tlb_all();
memset(kasan_early_shadow_page, 0, PAGE_SIZE);
- pr_info("Kernel address sanitizer initialized\n");
init_task.kasan_depth = 0;
+ kasan_init_generic();
}
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d541ce45dae..abeb81bf6eb 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -399,14 +399,12 @@ void __init kasan_init(void)
{
kasan_init_shadow();
kasan_init_depth();
-#if defined(CONFIG_KASAN_GENERIC)
+ kasan_init_generic();
/*
* Generic KASAN is now fully initialized.
* Software and Hardware Tag-Based modes still require
* kasan_init_sw_tags() and kasan_init_hw_tags() correspondingly.
*/
- pr_info("KernelAddressSanitizer initialized (generic)\n");
-#endif
}
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
--
2.34.1
* [PATCH v4 4/9] kasan/xtensa: call kasan_init_generic in kasan_init
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
` (2 preceding siblings ...)
2025-08-05 14:26 ` [PATCH v4 3/9] kasan/arm,arm64: call kasan_init_generic in kasan_init Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 5/9] kasan/loongarch: select ARCH_DEFER_KASAN and call kasan_init_generic Sabyrzhan Tasbolatov
` (4 subsequent siblings)
8 siblings, 0 replies; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
Call kasan_init_generic() which handles Generic KASAN initialization
and prints the banner. Since xtensa doesn't select ARCH_DEFER_KASAN,
kasan_enable() will be a no-op.
Note that arch/xtensa still uses the "current" pointer instead of "init_task"
in `current->kasan_depth = 0;` to enable error messages. This is left
unchanged as it cannot be tested.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
arch/xtensa/mm/kasan_init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index f39c4d83173..0524b9ed5e6 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -94,5 +94,5 @@ void __init kasan_init(void)
/* At this point kasan is fully initialized. Enable error messages. */
current->kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
}
--
2.34.1
* [PATCH v4 5/9] kasan/loongarch: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
` (3 preceding siblings ...)
2025-08-05 14:26 ` [PATCH v4 4/9] kasan/xtensa: " Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 17:17 ` Andrey Ryabinin
2025-08-05 14:26 ` [PATCH v4 6/9] kasan/um: " Sabyrzhan Tasbolatov
` (3 subsequent siblings)
8 siblings, 1 reply; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
LoongArch needs deferred KASAN initialization as it has a custom
kasan_arch_is_ready() implementation that tracks shadow memory
readiness via the kasan_early_stage flag.
Select ARCH_DEFER_KASAN to enable the unified static key mechanism
for runtime KASAN control. Call kasan_init_generic() which handles
Generic KASAN initialization and enables the static key.
Replace kasan_arch_is_ready() with kasan_enabled() and delete the
flag kasan_early_stage in favor of the unified kasan_enabled()
interface.
Note that init_task.kasan_depth = 0 is set after kasan_init_generic(),
which differs from kasan_init() on other architectures. This is left
unchanged as it cannot be tested.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
Changes in v4:
- Replaced !kasan_enabled() with !kasan_shadow_initialized() in
loongarch which selects ARCH_DEFER_KASAN (Andrey Ryabinin)
---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/kasan.h | 7 -------
arch/loongarch/mm/kasan_init.c | 8 ++------
3 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index f0abc38c40a..f6304c073ec 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -9,6 +9,7 @@ config LOONGARCH
select ACPI_PPTT if ACPI
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ARCH_BINFMT_ELF_STATE
+ select ARCH_DEFER_KASAN
select ARCH_DISABLE_KASAN_INLINE
select ARCH_ENABLE_MEMORY_HOTPLUG
select ARCH_ENABLE_MEMORY_HOTREMOVE
diff --git a/arch/loongarch/include/asm/kasan.h b/arch/loongarch/include/asm/kasan.h
index 62f139a9c87..0e50e5b5e05 100644
--- a/arch/loongarch/include/asm/kasan.h
+++ b/arch/loongarch/include/asm/kasan.h
@@ -66,7 +66,6 @@
#define XKPRANGE_WC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_WC_KASAN_OFFSET)
#define XKVRANGE_VC_SHADOW_OFFSET (KASAN_SHADOW_START + XKVRANGE_VC_KASAN_OFFSET)
-extern bool kasan_early_stage;
extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
#define kasan_mem_to_shadow kasan_mem_to_shadow
@@ -75,12 +74,6 @@ void *kasan_mem_to_shadow(const void *addr);
#define kasan_shadow_to_mem kasan_shadow_to_mem
const void *kasan_shadow_to_mem(const void *shadow_addr);
-#define kasan_arch_is_ready kasan_arch_is_ready
-static __always_inline bool kasan_arch_is_ready(void)
-{
- return !kasan_early_stage;
-}
-
#define addr_has_metadata addr_has_metadata
static __always_inline bool addr_has_metadata(const void *addr)
{
diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
index d2681272d8f..57fb6e98376 100644
--- a/arch/loongarch/mm/kasan_init.c
+++ b/arch/loongarch/mm/kasan_init.c
@@ -40,11 +40,9 @@ static pgd_t kasan_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
#define __pte_none(early, pte) (early ? pte_none(pte) : \
((pte_val(pte) & _PFN_MASK) == (unsigned long)__pa(kasan_early_shadow_page)))
-bool kasan_early_stage = true;
-
void *kasan_mem_to_shadow(const void *addr)
{
- if (!kasan_arch_is_ready()) {
+ if (!kasan_shadow_initialized()) {
return (void *)(kasan_early_shadow_page);
} else {
unsigned long maddr = (unsigned long)addr;
@@ -298,8 +296,6 @@ void __init kasan_init(void)
kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)KFENCE_AREA_END));
- kasan_early_stage = false;
-
/* Populate the linear mapping */
for_each_mem_range(i, &pa_start, &pa_end) {
void *start = (void *)phys_to_virt(pa_start);
@@ -329,5 +325,5 @@ void __init kasan_init(void)
/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized.\n");
+ kasan_init_generic();
}
--
2.34.1
* [PATCH v4 6/9] kasan/um: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
` (4 preceding siblings ...)
2025-08-05 14:26 ` [PATCH v4 5/9] kasan/loongarch: select ARCH_DEFER_KASAN and call kasan_init_generic Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 17:19 ` Andrey Ryabinin
2025-08-05 14:26 ` [PATCH v4 7/9] kasan/x86: call kasan_init_generic in kasan_init Sabyrzhan Tasbolatov
` (2 subsequent siblings)
8 siblings, 1 reply; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
UserMode Linux needs deferred KASAN initialization as it has a custom
kasan_arch_is_ready() implementation that tracks shadow memory readiness
via the kasan_um_is_ready flag.
As explained in commit 5b301409e8bc ("UML: add support for KASAN
under x86_64"), with CONFIG_STATIC_LINK=y KASAN works only with
CONFIG_KASAN_OUTLINE instrumentation.
Calling kasan_init_generic() at the end of kasan_init(), as other
architectures do, does not work for UML because kasan_init() is called
long before main()->linux_main(). It produces a SEGFAULT in:
kasan_init()
kasan_init_generic
kasan_enable
static_key_enable
STATIC_KEY_CHECK_USE
...
<kasan_init+173> movabs r9, kasan_flag_enabled
<kasan_init+183> movabs r8, __func__.2
<kasan_init+193> movabs rcx, 0x60a04540
<kasan_init+203> movabs rdi, 0x60a045a0
<kasan_init+213> movabs r10, warn_slowpath_fmt
WARN_ON_ONCE("static key '%pS' used before call to jump_label_init()")
<kasan_init+226> movabs r12, kasan_flag_enabled
That's why we need to call kasan_init_generic(), which enables the
static key, after jump_label_init(). The earliest available place
is arch_mm_preinit():
kasan_init()
main()
start_kernel
setup_arch
jump_label_init
...
mm_core_init
arch_mm_preinit
kasan_init_generic()
PowerPC, for example, calls kasan_late_init() from arch_mm_preinit().
There is no static key enabling there, but it looks like the best
place to enable KASAN "fully".
Verified with defconfig, enabling KASAN.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
Changes in v4:
- Addressed the issue in UML arch, where kasan_init_generic() is
called before jump_label_init() (Andrey Ryabinin)
---
arch/um/Kconfig | 1 +
arch/um/include/asm/kasan.h | 5 -----
arch/um/kernel/mem.c | 12 +++++++++---
3 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 9083bfdb773..8d14c8fc2cd 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -5,6 +5,7 @@ menu "UML-specific options"
config UML
bool
default y
+ select ARCH_DEFER_KASAN
select ARCH_WANTS_DYNAMIC_TASK_STRUCT
select ARCH_HAS_CACHE_LINE_SIZE
select ARCH_HAS_CPU_FINALIZE_INIT
diff --git a/arch/um/include/asm/kasan.h b/arch/um/include/asm/kasan.h
index f97bb1f7b85..81bcdc0f962 100644
--- a/arch/um/include/asm/kasan.h
+++ b/arch/um/include/asm/kasan.h
@@ -24,11 +24,6 @@
#ifdef CONFIG_KASAN
void kasan_init(void);
-extern int kasan_um_is_ready;
-
-#ifdef CONFIG_STATIC_LINK
-#define kasan_arch_is_ready() (kasan_um_is_ready)
-#endif
#else
static inline void kasan_init(void) { }
#endif /* CONFIG_KASAN */
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 76bec7de81b..704a26211ed 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -21,10 +21,10 @@
#include <os.h>
#include <um_malloc.h>
#include <linux/sched/task.h>
+#include <linux/kasan.h>
#ifdef CONFIG_KASAN
-int kasan_um_is_ready;
-void kasan_init(void)
+void __init kasan_init(void)
{
/*
* kasan_map_memory will map all of the required address space and
@@ -32,7 +32,10 @@ void kasan_init(void)
*/
kasan_map_memory((void *)KASAN_SHADOW_START, KASAN_SHADOW_SIZE);
init_task.kasan_depth = 0;
- kasan_um_is_ready = true;
+ /* Since kasan_init() is called before main(), KASAN is
+ * initialized here, but enabling it is deferred until after
+ * jump_label_init(). See arch_mm_preinit().
+ */
}
static void (*kasan_init_ptr)(void)
@@ -58,6 +61,9 @@ static unsigned long brk_end;
void __init arch_mm_preinit(void)
{
+ /* Safe to call after jump_label_init(). Enables KASAN. */
+ kasan_init_generic();
+
/* clear the zero-page */
memset(empty_zero_page, 0, PAGE_SIZE);
--
2.34.1
* [PATCH v4 7/9] kasan/x86: call kasan_init_generic in kasan_init
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
` (5 preceding siblings ...)
2025-08-05 14:26 ` [PATCH v4 6/9] kasan/um: " Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 8/9] kasan/s390: " Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 9/9] kasan/riscv: " Sabyrzhan Tasbolatov
8 siblings, 0 replies; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
Call kasan_init_generic() which handles Generic KASAN initialization
and prints the banner. Since x86 doesn't select ARCH_DEFER_KASAN,
kasan_enable() will be a no-op, and kasan_enabled() will return
IS_ENABLED(CONFIG_KASAN) for optimal compile-time behavior.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
arch/x86/mm/kasan_init_64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d21..998b6010d6d 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -451,5 +451,5 @@ void __init kasan_init(void)
__flush_tlb_all();
init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
}
--
2.34.1
* [PATCH v4 8/9] kasan/s390: call kasan_init_generic in kasan_init
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
` (6 preceding siblings ...)
2025-08-05 14:26 ` [PATCH v4 7/9] kasan/x86: call kasan_init_generic in kasan_init Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 14:26 ` [PATCH v4 9/9] kasan/riscv: " Sabyrzhan Tasbolatov
8 siblings, 0 replies; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
Call kasan_init_generic() which handles Generic KASAN initialization
and prints the banner. Since s390 doesn't select ARCH_DEFER_KASAN,
kasan_enable() will be a no-op, and kasan_enabled() will return
IS_ENABLED(CONFIG_KASAN) for optimal compile-time behavior.
s390 sets up KASAN mappings in the decompressor and can run with KASAN
enabled from very early, so it doesn't need runtime control.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
arch/s390/kernel/early.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c
index 9adfbdd377d..544e5403dd9 100644
--- a/arch/s390/kernel/early.c
+++ b/arch/s390/kernel/early.c
@@ -21,6 +21,7 @@
#include <linux/kernel.h>
#include <asm/asm-extable.h>
#include <linux/memblock.h>
+#include <linux/kasan.h>
#include <asm/access-regs.h>
#include <asm/asm-offsets.h>
#include <asm/machine.h>
@@ -65,7 +66,7 @@ static void __init kasan_early_init(void)
{
#ifdef CONFIG_KASAN
init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized\n");
+ kasan_init_generic();
#endif
}
--
2.34.1
* [PATCH v4 9/9] kasan/riscv: call kasan_init_generic in kasan_init
2025-08-05 14:26 [PATCH v4 0/9] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations Sabyrzhan Tasbolatov
` (7 preceding siblings ...)
2025-08-05 14:26 ` [PATCH v4 8/9] kasan/s390: " Sabyrzhan Tasbolatov
@ 2025-08-05 14:26 ` Sabyrzhan Tasbolatov
2025-08-05 16:06 ` Alexandre Ghiti
8 siblings, 1 reply; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-05 14:26 UTC (permalink / raw)
To: ryabinin.a.a, hca, christophe.leroy, andreyknvl, agordeev, akpm,
zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm, snovitoll
Call kasan_init_generic() which handles Generic KASAN initialization
and prints the banner. Since riscv doesn't select ARCH_DEFER_KASAN,
kasan_enable() will be a no-op, and kasan_enabled() will return
IS_ENABLED(CONFIG_KASAN) for optimal compile-time behavior.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
---
arch/riscv/mm/kasan_init.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 41c635d6aca..ba2709b1eec 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -530,6 +530,7 @@ void __init kasan_init(void)
memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
init_task.kasan_depth = 0;
+ kasan_init_generic();
csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
local_flush_tlb_all();
--
2.34.1
* Re: [PATCH v4 9/9] kasan/riscv: call kasan_init_generic in kasan_init
2025-08-05 14:26 ` [PATCH v4 9/9] kasan/riscv: " Sabyrzhan Tasbolatov
@ 2025-08-05 16:06 ` Alexandre Ghiti
0 siblings, 0 replies; 19+ messages in thread
From: Alexandre Ghiti @ 2025-08-05 16:06 UTC (permalink / raw)
To: Sabyrzhan Tasbolatov, ryabinin.a.a, hca, christophe.leroy,
andreyknvl, agordeev, akpm, zhangqing, chenhuacai, trishalfonso,
davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm
Hi Sabyrzhan,
On 8/5/25 16:26, Sabyrzhan Tasbolatov wrote:
> Call kasan_init_generic() which handles Generic KASAN initialization
> and prints the banner. Since riscv doesn't select ARCH_DEFER_KASAN,
> kasan_enable() will be a no-op, and kasan_enabled() will return
> IS_ENABLED(CONFIG_KASAN) for optimal compile-time behavior.
>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
> Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
> ---
> arch/riscv/mm/kasan_init.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
> index 41c635d6aca..ba2709b1eec 100644
> --- a/arch/riscv/mm/kasan_init.c
> +++ b/arch/riscv/mm/kasan_init.c
> @@ -530,6 +530,7 @@ void __init kasan_init(void)
>
> memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
> init_task.kasan_depth = 0;
> + kasan_init_generic();
This is right before the new mapping is actually installed in the MMU
(which is done below by setting a register called SATP). It does not
seem to be a problem though, just wanted to let you know.
It boots fine with defconfig + kasan inline so:
Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Thanks,
Alex
>
> csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | satp_mode);
> local_flush_tlb_all();
* Re: [PATCH v4 5/9] kasan/loongarch: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-05 14:26 ` [PATCH v4 5/9] kasan/loongarch: select ARCH_DEFER_KASAN and call kasan_init_generic Sabyrzhan Tasbolatov
@ 2025-08-05 17:17 ` Andrey Ryabinin
2025-08-06 4:37 ` Sabyrzhan Tasbolatov
0 siblings, 1 reply; 19+ messages in thread
From: Andrey Ryabinin @ 2025-08-05 17:17 UTC (permalink / raw)
To: Sabyrzhan Tasbolatov, hca, christophe.leroy, andreyknvl, agordeev,
akpm, zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm
On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
> LoongArch needs deferred KASAN initialization as it has a custom
> kasan_arch_is_ready() implementation that tracks shadow memory
> readiness via the kasan_early_stage flag.
>
> Select ARCH_DEFER_KASAN to enable the unified static key mechanism
> for runtime KASAN control. Call kasan_init_generic() which handles
> Generic KASAN initialization and enables the static key.
>
> Replace kasan_arch_is_ready() with kasan_enabled() and delete the
> flag kasan_early_stage in favor of the unified kasan_enabled()
> interface.
>
> Note that init_task.kasan_depth = 0 is called after kasan_init_generic(),
> which is different than in other arch kasan_init(). This is left
> unchanged as it cannot be tested.
>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
> Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
> ---
> Changes in v4:
> - Replaced !kasan_enabled() with !kasan_shadow_initialized() in
> loongarch which selects ARCH_DEFER_KASAN (Andrey Ryabinin)
> ---
> arch/loongarch/Kconfig | 1 +
> arch/loongarch/include/asm/kasan.h | 7 -------
> arch/loongarch/mm/kasan_init.c | 8 ++------
> 3 files changed, 3 insertions(+), 13 deletions(-)
>
> diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
> index f0abc38c40a..f6304c073ec 100644
> --- a/arch/loongarch/Kconfig
> +++ b/arch/loongarch/Kconfig
> @@ -9,6 +9,7 @@ config LOONGARCH
> select ACPI_PPTT if ACPI
> select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
> select ARCH_BINFMT_ELF_STATE
> + select ARCH_DEFER_KASAN
> select ARCH_DISABLE_KASAN_INLINE
> select ARCH_ENABLE_MEMORY_HOTPLUG
> select ARCH_ENABLE_MEMORY_HOTREMOVE
> diff --git a/arch/loongarch/include/asm/kasan.h b/arch/loongarch/include/asm/kasan.h
> index 62f139a9c87..0e50e5b5e05 100644
> --- a/arch/loongarch/include/asm/kasan.h
> +++ b/arch/loongarch/include/asm/kasan.h
> @@ -66,7 +66,6 @@
> #define XKPRANGE_WC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_WC_KASAN_OFFSET)
> #define XKVRANGE_VC_SHADOW_OFFSET (KASAN_SHADOW_START + XKVRANGE_VC_KASAN_OFFSET)
>
> -extern bool kasan_early_stage;
> extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
>
> #define kasan_mem_to_shadow kasan_mem_to_shadow
> @@ -75,12 +74,6 @@ void *kasan_mem_to_shadow(const void *addr);
> #define kasan_shadow_to_mem kasan_shadow_to_mem
> const void *kasan_shadow_to_mem(const void *shadow_addr);
>
> -#define kasan_arch_is_ready kasan_arch_is_ready
> -static __always_inline bool kasan_arch_is_ready(void)
> -{
> - return !kasan_early_stage;
> -}
> -
> #define addr_has_metadata addr_has_metadata
> static __always_inline bool addr_has_metadata(const void *addr)
> {
> diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
> index d2681272d8f..57fb6e98376 100644
> --- a/arch/loongarch/mm/kasan_init.c
> +++ b/arch/loongarch/mm/kasan_init.c
> @@ -40,11 +40,9 @@ static pgd_t kasan_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
> #define __pte_none(early, pte) (early ? pte_none(pte) : \
> ((pte_val(pte) & _PFN_MASK) == (unsigned long)__pa(kasan_early_shadow_page)))
>
> -bool kasan_early_stage = true;
> -
> void *kasan_mem_to_shadow(const void *addr)
> {
> - if (!kasan_arch_is_ready()) {
> + if (!kasan_shadow_initialized()) {
> return (void *)(kasan_early_shadow_page);
> } else {
> unsigned long maddr = (unsigned long)addr;
> @@ -298,8 +296,6 @@ void __init kasan_init(void)
> kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
> kasan_mem_to_shadow((void *)KFENCE_AREA_END));
>
> - kasan_early_stage = false;
> -
There is a reason for this line to be here.
Your patch will change the result of the follow-up kasan_mem_to_shadow()
call and feed the wrong address to kasan_map_populate().
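I.e., kasan_early_stage used to flip right before the linear mapping is
populated, so kasan_mem_to_shadow() already computed real shadow
addresses in the loop below; with the static key only flipped by
kasan_init_generic() at the very end, that loop would still take the
early-stage path (from the hunk above):

	void *kasan_mem_to_shadow(const void *addr)
	{
		if (!kasan_shadow_initialized())	/* still false during the loop */
			return (void *)(kasan_early_shadow_page);
		/* ... real shadow address calculation ... */
	}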
> /* Populate the linear mapping */
> for_each_mem_range(i, &pa_start, &pa_end) {
> void *start = (void *)phys_to_virt(pa_start);
> @@ -329,5 +325,5 @@ void __init kasan_init(void)
>
> /* At this point kasan is fully initialized. Enable error messages */
> init_task.kasan_depth = 0;
> - pr_info("KernelAddressSanitizer initialized.\n");
> + kasan_init_generic();
> }
* Re: [PATCH v4 6/9] kasan/um: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-05 14:26 ` [PATCH v4 6/9] kasan/um: " Sabyrzhan Tasbolatov
@ 2025-08-05 17:19 ` Andrey Ryabinin
2025-08-06 4:35 ` Sabyrzhan Tasbolatov
0 siblings, 1 reply; 19+ messages in thread
From: Andrey Ryabinin @ 2025-08-05 17:19 UTC (permalink / raw)
To: Sabyrzhan Tasbolatov, hca, christophe.leroy, andreyknvl, agordeev,
akpm, zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm
On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
>
> diff --git a/arch/um/Kconfig b/arch/um/Kconfig
> index 9083bfdb773..8d14c8fc2cd 100644
> --- a/arch/um/Kconfig
> +++ b/arch/um/Kconfig
> @@ -5,6 +5,7 @@ menu "UML-specific options"
> config UML
> bool
> default y
> + select ARCH_DEFER_KASAN
select ARCH_DEFER_KASAN if STATIC_LINK
> select ARCH_WANTS_DYNAMIC_TASK_STRUCT
> select ARCH_HAS_CACHE_LINE_SIZE
> select ARCH_HAS_CPU_FINALIZE_INIT
> diff --git a/arch/um/include/asm/kasan.h b/arch/um/include/asm/kasan.h
> index f97bb1f7b85..81bcdc0f962 100644
> --- a/arch/um/include/asm/kasan.h
> +++ b/arch/um/include/asm/kasan.h
> @@ -24,11 +24,6 @@
>
> #ifdef CONFIG_KASAN
> void kasan_init(void);
> -extern int kasan_um_is_ready;
> -
> -#ifdef CONFIG_STATIC_LINK
> -#define kasan_arch_is_ready() (kasan_um_is_ready)
> -#endif
> #else
> static inline void kasan_init(void) { }
> #endif /* CONFIG_KASAN */
* Re: [PATCH v4 6/9] kasan/um: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-05 17:19 ` Andrey Ryabinin
@ 2025-08-06 4:35 ` Sabyrzhan Tasbolatov
2025-08-06 13:49 ` Andrey Ryabinin
0 siblings, 1 reply; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-06 4:35 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: hca, christophe.leroy, andreyknvl, agordeev, akpm, zhangqing,
chenhuacai, trishalfonso, davidgow, glider, dvyukov, kasan-dev,
linux-kernel, loongarch, linuxppc-dev, linux-riscv, linux-s390,
linux-um, linux-mm
On Tue, Aug 5, 2025 at 10:19 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>
>
>
> On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
> >
> > diff --git a/arch/um/Kconfig b/arch/um/Kconfig
> > index 9083bfdb773..8d14c8fc2cd 100644
> > --- a/arch/um/Kconfig
> > +++ b/arch/um/Kconfig
> > @@ -5,6 +5,7 @@ menu "UML-specific options"
> > config UML
> > bool
> > default y
> > + select ARCH_DEFER_KASAN
>
> select ARCH_DEFER_KASAN if STATIC_LINK
As pointed out in commit 5b301409e8bc ("UML: add support for KASAN
under x86_64"):

: Also note that, while UML supports both KASAN in inline mode
: (CONFIG_KASAN_INLINE) and static linking (CONFIG_STATIC_LINK), it
: does not support both at the same time.

I've tested that for UML:
- ARCH_DEFER_KASAN works with STATIC_LINK && KASAN_OUTLINE
- ARCH_DEFER_KASAN works with KASAN_INLINE && !STATIC_LINK

- ARCH_DEFER_KASAN with STATIC_LINK and KASAN_INLINE=y (the defconfig
  default) crashes with a SEGFAULT here (I don't fully understand why;
  I suspect the main() constructors are not yet set up in UML):
► 0 0x609d6f87 strlen+43
1 0x60a20db0 _dl_new_object+48
2 0x60a24627 _dl_non_dynamic_init+103
3 0x60a25f9a __libc_init_first+42
4 0x609eb6b2 __libc_start_main_impl+2434
5 0x6004a025 _start+37
Since this is the case only for UML, AFAIU, I don't think we want to
change the conditions in lib/Kconfig.kasan. Shall I leave the UML
Kconfig as it is, i.e. a plain "select ARCH_DEFER_KASAN"?
>
> > select ARCH_WANTS_DYNAMIC_TASK_STRUCT
> > select ARCH_HAS_CACHE_LINE_SIZE
> > select ARCH_HAS_CPU_FINALIZE_INIT
> > diff --git a/arch/um/include/asm/kasan.h b/arch/um/include/asm/kasan.h
> > index f97bb1f7b85..81bcdc0f962 100644
> > --- a/arch/um/include/asm/kasan.h
> > +++ b/arch/um/include/asm/kasan.h
> > @@ -24,11 +24,6 @@
> >
> > #ifdef CONFIG_KASAN
> > void kasan_init(void);
> > -extern int kasan_um_is_ready;
> > -
> > -#ifdef CONFIG_STATIC_LINK
> > -#define kasan_arch_is_ready() (kasan_um_is_ready)
> > -#endif
> > #else
> > static inline void kasan_init(void) { }
> > #endif /* CONFIG_KASAN */
* Re: [PATCH v4 5/9] kasan/loongarch: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-05 17:17 ` Andrey Ryabinin
@ 2025-08-06 4:37 ` Sabyrzhan Tasbolatov
0 siblings, 0 replies; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-06 4:37 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: hca, christophe.leroy, andreyknvl, agordeev, akpm, zhangqing,
chenhuacai, trishalfonso, davidgow, glider, dvyukov, kasan-dev,
linux-kernel, loongarch, linuxppc-dev, linux-riscv, linux-s390,
linux-um, linux-mm
On Tue, Aug 5, 2025 at 10:18 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>
>
>
> On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
> > LoongArch needs deferred KASAN initialization as it has a custom
> > kasan_arch_is_ready() implementation that tracks shadow memory
> > readiness via the kasan_early_stage flag.
> >
> > Select ARCH_DEFER_KASAN to enable the unified static key mechanism
> > for runtime KASAN control. Call kasan_init_generic() which handles
> > Generic KASAN initialization and enables the static key.
> >
> > Replace kasan_arch_is_ready() with kasan_enabled() and delete the
> > flag kasan_early_stage in favor of the unified kasan_enabled()
> > interface.
> >
> > Note that init_task.kasan_depth = 0 is called after kasan_init_generic(),
> > which is different than in other arch kasan_init(). This is left
> > unchanged as it cannot be tested.
> >
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
> > Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
> > ---
> > Changes in v4:
> > - Replaced !kasan_enabled() with !kasan_shadow_initialized() in
> > loongarch which selects ARCH_DEFER_KASAN (Andrey Ryabinin)
> > ---
> > arch/loongarch/Kconfig | 1 +
> > arch/loongarch/include/asm/kasan.h | 7 -------
> > arch/loongarch/mm/kasan_init.c | 8 ++------
> > 3 files changed, 3 insertions(+), 13 deletions(-)
> >
> > diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
> > index f0abc38c40a..f6304c073ec 100644
> > --- a/arch/loongarch/Kconfig
> > +++ b/arch/loongarch/Kconfig
> > @@ -9,6 +9,7 @@ config LOONGARCH
> > select ACPI_PPTT if ACPI
> > select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
> > select ARCH_BINFMT_ELF_STATE
> > + select ARCH_DEFER_KASAN
> > select ARCH_DISABLE_KASAN_INLINE
> > select ARCH_ENABLE_MEMORY_HOTPLUG
> > select ARCH_ENABLE_MEMORY_HOTREMOVE
> > diff --git a/arch/loongarch/include/asm/kasan.h b/arch/loongarch/include/asm/kasan.h
> > index 62f139a9c87..0e50e5b5e05 100644
> > --- a/arch/loongarch/include/asm/kasan.h
> > +++ b/arch/loongarch/include/asm/kasan.h
> > @@ -66,7 +66,6 @@
> > #define XKPRANGE_WC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_WC_KASAN_OFFSET)
> > #define XKVRANGE_VC_SHADOW_OFFSET (KASAN_SHADOW_START + XKVRANGE_VC_KASAN_OFFSET)
> >
> > -extern bool kasan_early_stage;
> > extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
> >
> > #define kasan_mem_to_shadow kasan_mem_to_shadow
> > @@ -75,12 +74,6 @@ void *kasan_mem_to_shadow(const void *addr);
> > #define kasan_shadow_to_mem kasan_shadow_to_mem
> > const void *kasan_shadow_to_mem(const void *shadow_addr);
> >
> > -#define kasan_arch_is_ready kasan_arch_is_ready
> > -static __always_inline bool kasan_arch_is_ready(void)
> > -{
> > - return !kasan_early_stage;
> > -}
> > -
> > #define addr_has_metadata addr_has_metadata
> > static __always_inline bool addr_has_metadata(const void *addr)
> > {
> > diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
> > index d2681272d8f..57fb6e98376 100644
> > --- a/arch/loongarch/mm/kasan_init.c
> > +++ b/arch/loongarch/mm/kasan_init.c
> > @@ -40,11 +40,9 @@ static pgd_t kasan_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
> > #define __pte_none(early, pte) (early ? pte_none(pte) : \
> > ((pte_val(pte) & _PFN_MASK) == (unsigned long)__pa(kasan_early_shadow_page)))
> >
> > -bool kasan_early_stage = true;
> > -
> > void *kasan_mem_to_shadow(const void *addr)
> > {
> > - if (!kasan_arch_is_ready()) {
> > + if (!kasan_shadow_initialized()) {
> > return (void *)(kasan_early_shadow_page);
> > } else {
> > unsigned long maddr = (unsigned long)addr;
> > @@ -298,8 +296,6 @@ void __init kasan_init(void)
> > kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
> > kasan_mem_to_shadow((void *)KFENCE_AREA_END));
> >
> > - kasan_early_stage = false;
> > -
>
> There is a reason for this line to be here.
> Your patch changes the result of the follow-up kasan_mem_to_shadow()
> calls and feeds the wrong address to kasan_map_populate().
Thanks, I'd missed that. Here is the upcoming v5 for this:
diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
index d2681272d8f..0e6622b57ce 100644
--- a/arch/loongarch/mm/kasan_init.c
+++ b/arch/loongarch/mm/kasan_init.c
@@ -40,11 +40,9 @@ static pgd_t kasan_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
#define __pte_none(early, pte) (early ? pte_none(pte) : \
((pte_val(pte) & _PFN_MASK) == (unsigned long)__pa(kasan_early_shadow_page)))
-bool kasan_early_stage = true;
-
void *kasan_mem_to_shadow(const void *addr)
{
- if (!kasan_arch_is_ready()) {
+ if (!kasan_shadow_initialized()) {
return (void *)(kasan_early_shadow_page);
} else {
unsigned long maddr = (unsigned long)addr;
@@ -298,7 +296,10 @@ void __init kasan_init(void)
kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
kasan_mem_to_shadow((void *)KFENCE_AREA_END));
- kasan_early_stage = false;
+ /* Enable KASAN before the following kasan_mem_to_shadow() calls,
+  * which check kasan_shadow_initialized().
+  */
+ kasan_init_generic();
/* Populate the linear mapping */
for_each_mem_range(i, &pa_start, &pa_end) {
@@ -329,5 +330,4 @@ void __init kasan_init(void)
/* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0;
- pr_info("KernelAddressSanitizer initialized.\n");
}
--
2.34.1
>
>
> > /* Populate the linear mapping */
> > for_each_mem_range(i, &pa_start, &pa_end) {
> > void *start = (void *)phys_to_virt(pa_start);
> > @@ -329,5 +325,5 @@ void __init kasan_init(void)
> >
> > /* At this point kasan is fully initialized. Enable error messages */
> > init_task.kasan_depth = 0;
> > - pr_info("KernelAddressSanitizer initialized.\n");
> > + kasan_init_generic();
> > }
>
* Re: [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes
2025-08-05 14:26 ` [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes Sabyrzhan Tasbolatov
@ 2025-08-06 13:34 ` Andrey Ryabinin
2025-08-06 14:15 ` Sabyrzhan Tasbolatov
0 siblings, 1 reply; 19+ messages in thread
From: Andrey Ryabinin @ 2025-08-06 13:34 UTC (permalink / raw)
To: Sabyrzhan Tasbolatov, hca, christophe.leroy, andreyknvl, agordeev,
akpm, zhangqing, chenhuacai, trishalfonso, davidgow
Cc: glider, dvyukov, kasan-dev, linux-kernel, loongarch, linuxppc-dev,
linux-riscv, linux-s390, linux-um, linux-mm
On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
> Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures that need
> to defer KASAN initialization until shadow memory is properly set up,
> and unify the static key infrastructure across all KASAN modes.
>
> Some architectures (like PowerPC with radix MMU) need to set up their
> shadow memory mappings before KASAN can be safely enabled, while others
> (like s390, x86, arm) can enable KASAN much earlier or even from the
> beginning.
>
> Historically, the runtime static key kasan_flag_enabled existed only for
> CONFIG_KASAN_HW_TAGS mode. Generic and SW_TAGS modes either relied on
> architecture-specific kasan_arch_is_ready() implementations or evaluated
> KASAN checks unconditionally, leading to code duplication.
>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
> Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
> ---
> Changes in v4:
> - Fixed HW_TAGS static key functionality (was broken in v3)
I don't think it's fixed. Before your patch, kasan_enabled()
essentially worked like this:

	if (IS_ENABLED(CONFIG_KASAN_HW_TAGS))
		return static_branch_likely(&kasan_flag_enabled);
	else
		return IS_ENABLED(CONFIG_KASAN);

Now it's just IS_ENABLED(CONFIG_KASAN);
And there are a bunch of kasan_enabled() calls left whose behavior
changed for no reason.
> - Merged configuration and implementation for atomicity
> ---
> include/linux/kasan-enabled.h | 36 +++++++++++++++++++++++-------
> include/linux/kasan.h | 42 +++++++++++++++++++++++++++--------
> lib/Kconfig.kasan | 8 +++++++
> mm/kasan/common.c | 18 ++++++++++-----
> mm/kasan/generic.c | 23 +++++++++++--------
> mm/kasan/hw_tags.c | 9 +-------
> mm/kasan/kasan.h | 36 +++++++++++++++++++++---------
> mm/kasan/shadow.c | 32 ++++++--------------------
> mm/kasan/sw_tags.c | 4 +++-
> mm/kasan/tags.c | 2 +-
> 10 files changed, 133 insertions(+), 77 deletions(-)
>
> diff --git a/include/linux/kasan-enabled.h b/include/linux/kasan-enabled.h
> index 6f612d69ea0..52a3909f032 100644
> --- a/include/linux/kasan-enabled.h
> +++ b/include/linux/kasan-enabled.h
> @@ -4,32 +4,52 @@
>
> #include <linux/static_key.h>
>
> -#ifdef CONFIG_KASAN_HW_TAGS
> +/* Controls whether KASAN is enabled at all (compile-time check). */
> +static __always_inline bool kasan_enabled(void)
> +{
> + return IS_ENABLED(CONFIG_KASAN);
> +}
>
> +#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
> +/*
> + * Global runtime flag for KASAN modes that need runtime control.
> + * Used by ARCH_DEFER_KASAN architectures and HW_TAGS mode.
> + */
> DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
>
> -static __always_inline bool kasan_enabled(void)
> +/*
> + * Runtime control for shadow memory initialization or HW_TAGS mode.
> + * Uses static key for architectures that need deferred KASAN or HW_TAGS.
> + */
> +static __always_inline bool kasan_shadow_initialized(void)
Don't rename it, just leave it as is - kasan_enabled().
It's a better name, it's shorter, and you don't need to convert call
sites, so there is less chance of mistakes from missed
kasan_enabled() -> kasan_shadow_initialized() conversions.
> {
> return static_branch_likely(&kasan_flag_enabled);
> }
>
> -static inline bool kasan_hw_tags_enabled(void)
> +static inline void kasan_enable(void)
> +{
> + static_branch_enable(&kasan_flag_enabled);
> +}
> +#else
> +/* For architectures that can enable KASAN early, use compile-time check. */
> +static __always_inline bool kasan_shadow_initialized(void)
> {
> return kasan_enabled();
> }
>
...
>
> void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
> -int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
> -void kasan_release_vmalloc(unsigned long start, unsigned long end,
> +
> +int __kasan_populate_vmalloc(unsigned long addr, unsigned long size);
> +static inline int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
> +{
> + if (!kasan_shadow_initialized())
> + return 0;
What's the point of moving these checks to the header?
Leave them in the .c file; it's easier to grep and navigate the code
this way.
> + return __kasan_populate_vmalloc(addr, size);
> +}
> +
> +void __kasan_release_vmalloc(unsigned long start, unsigned long end,
> unsigned long free_region_start,
> unsigned long free_region_end,
> unsigned long flags);
> +static inline void kasan_release_vmalloc(unsigned long start,
> + unsigned long end,
> + unsigned long free_region_start,
> + unsigned long free_region_end,
> + unsigned long flags)
> +{
> + if (kasan_shadow_initialized())
> + __kasan_release_vmalloc(start, end, free_region_start,
> + free_region_end, flags);
> +}
>
...> @@ -250,7 +259,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> unsigned long ip)
> {
> - if (!kasan_arch_is_ready() || is_kfence_address(object))
> + if (is_kfence_address(object))
> return false;
> return check_slab_allocation(cache, object, ip);
> }
> @@ -258,7 +267,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
> bool still_accessible)
> {
> - if (!kasan_arch_is_ready() || is_kfence_address(object))
> + if (is_kfence_address(object))
> return false;
>
> poison_slab_object(cache, object, init, still_accessible);
> @@ -282,9 +291,6 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
>
> static inline bool check_page_allocation(void *ptr, unsigned long ip)
> {
> - if (!kasan_arch_is_ready())
> - return false;
> -
Well, you can't do this yet, because no arch is using ARCH_DEFER_KASAN
yet, so this breaks bisectability.
Leave it, and remove it with a separate patch only once there are no
users left.
> if (ptr != page_address(virt_to_head_page(ptr))) {
> kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
> return true;
> @@ -511,7 +517,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
> return true;
> }
>
> - if (is_kfence_address(ptr) || !kasan_arch_is_ready())
> + if (is_kfence_address(ptr))
> return true;
>
> slab = folio_slab(folio);
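To Andrey's bisectability point: a sketch of the transitional state,
where the old hook stays until a final cleanup patch (once no arch
defines kasan_arch_is_ready() anymore, the generic fallback returns
true and the check compiles away):

static inline bool check_page_allocation(void *ptr, unsigned long ip)
{
	if (!kasan_arch_is_ready())	/* keep until the last user is gone */
		return false;

	if (ptr != page_address(virt_to_head_page(ptr))) {
		kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
		return true;
	}
	return false;
}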
* Re: [PATCH v4 6/9] kasan/um: select ARCH_DEFER_KASAN and call kasan_init_generic
2025-08-06 4:35 ` Sabyrzhan Tasbolatov
@ 2025-08-06 13:49 ` Andrey Ryabinin
0 siblings, 0 replies; 19+ messages in thread
From: Andrey Ryabinin @ 2025-08-06 13:49 UTC (permalink / raw)
To: Sabyrzhan Tasbolatov
Cc: hca, christophe.leroy, andreyknvl, agordeev, akpm, zhangqing,
chenhuacai, trishalfonso, davidgow, glider, dvyukov, kasan-dev,
linux-kernel, loongarch, linuxppc-dev, linux-riscv, linux-s390,
linux-um, linux-mm
On 8/6/25 6:35 AM, Sabyrzhan Tasbolatov wrote:
> On Tue, Aug 5, 2025 at 10:19 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>>
>>
>>
>> On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
>>>
>>> diff --git a/arch/um/Kconfig b/arch/um/Kconfig
>>> index 9083bfdb773..8d14c8fc2cd 100644
>>> --- a/arch/um/Kconfig
>>> +++ b/arch/um/Kconfig
>>> @@ -5,6 +5,7 @@ menu "UML-specific options"
>>> config UML
>>> bool
>>> default y
>>> + select ARCH_DEFER_KASAN
>>
>> select ARCH_DEFER_KASAN if STATIC_LINK
>
> As pointed out in commit 5b301409e8bc ("UML: add support for KASAN
> under x86_64"):
>
> : Also note that, while UML supports both KASAN in inline mode
> : (CONFIG_KASAN_INLINE) and static linking (CONFIG_STATIC_LINK), it
> : does not support both at the same time.
>
> I've tested that for UML:
> - ARCH_DEFER_KASAN works with STATIC_LINK && KASAN_OUTLINE
> - ARCH_DEFER_KASAN works with KASAN_INLINE && !STATIC_LINK
>
> - ARCH_DEFER_KASAN with STATIC_LINK and KASAN_INLINE=y (the defconfig
>   default) crashes with a SEGFAULT here (I don't fully understand why;
>   I suspect the main() constructors are not yet set up in UML):
>
> ► 0 0x609d6f87 strlen+43
> 1 0x60a20db0 _dl_new_object+48
> 2 0x60a24627 _dl_non_dynamic_init+103
> 3 0x60a25f9a __libc_init_first+42
> 4 0x609eb6b2 __libc_start_main_impl+2434
> 5 0x6004a025 _start+37
>
No surprise here: neither kasan_arch_is_ready() nor ARCH_DEFER_KASAN
works with KASAN_INLINE=y.
This configuration combination (STATIC_LINK + KASAN_INLINE) wasn't
possible before:
#ifndef kasan_arch_is_ready
static inline bool kasan_arch_is_ready(void) { return true; }
#elif !defined(CONFIG_KASAN_GENERIC) || !defined(CONFIG_KASAN_OUTLINE)
#error kasan_arch_is_ready only works in KASAN generic outline mode!
#endif
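To see why, here is a sketch of what the two instrumentation modes
emit (compiler output paraphrased as C; illustrative only):

/* OUTLINE: each access compiles to a call the runtime can gate: */
void __asan_load8(unsigned long addr)
{
	if (!kasan_arch_is_ready())
		return;
	/* ... shadow lookup and report ... */
}

/*
 * INLINE: the compiler plants the shadow check directly at the access
 * site, roughly:
 *
 *	s8 *shadow = (s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT) +
 *			    KASAN_SHADOW_OFFSET);
 *	if (*shadow)
 *		report(addr);
 *
 * No runtime call is made, so there is nowhere to consult a "not ready
 * yet" flag -- the shadow must be mapped before the first instrumented
 * access executes.
 */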
> Since this is the case only for UML, AFAIU, I don't think we want to
> change the conditions in lib/Kconfig.kasan. Shall I leave the UML
> Kconfig as it is, i.e. a plain "select ARCH_DEFER_KASAN"?
>
No, this should have "if STATIC_LINK".
* Re: [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes
2025-08-06 13:34 ` Andrey Ryabinin
@ 2025-08-06 14:15 ` Sabyrzhan Tasbolatov
2025-08-06 19:51 ` Andrey Ryabinin
0 siblings, 1 reply; 19+ messages in thread
From: Sabyrzhan Tasbolatov @ 2025-08-06 14:15 UTC (permalink / raw)
To: Andrey Ryabinin
Cc: hca, christophe.leroy, andreyknvl, agordeev, akpm, zhangqing,
chenhuacai, trishalfonso, davidgow, glider, dvyukov, kasan-dev,
linux-kernel, loongarch, linuxppc-dev, linux-riscv, linux-s390,
linux-um, linux-mm
On Wed, Aug 6, 2025 at 6:35 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>
>
>
> On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
> > Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures that need
> > to defer KASAN initialization until shadow memory is properly set up,
> > and unify the static key infrastructure across all KASAN modes.
> >
> > Some architectures (like PowerPC with radix MMU) need to set up their
> > shadow memory mappings before KASAN can be safely enabled, while others
> > (like s390, x86, arm) can enable KASAN much earlier or even from the
> > beginning.
> >
> > Historically, the runtime static key kasan_flag_enabled existed only for
> > CONFIG_KASAN_HW_TAGS mode. Generic and SW_TAGS modes either relied on
> > architecture-specific kasan_arch_is_ready() implementations or evaluated
> > KASAN checks unconditionally, leading to code duplication.
> >
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
> > Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
> > ---
> > Changes in v4:
> > - Fixed HW_TAGS static key functionality (was broken in v3)
>
> I don't think it's fixed. Before your patch, kasan_enabled()
> essentially worked like this:
>
> 	if (IS_ENABLED(CONFIG_KASAN_HW_TAGS))
> 		return static_branch_likely(&kasan_flag_enabled);
> 	else
> 		return IS_ENABLED(CONFIG_KASAN);
>
> Now it's just IS_ENABLED(CONFIG_KASAN);
In v4 it is:

#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
static __always_inline bool kasan_shadow_initialized(void)
{
	return static_branch_likely(&kasan_flag_enabled);
}
#else
static __always_inline bool kasan_shadow_initialized(void)
{
	return kasan_enabled(); /* which is IS_ENABLED(CONFIG_KASAN) */
}
#endif
So for HW_TAGS, KASAN is enabled in kasan_init_hw_tags().
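For context, the enable path this series introduces looks roughly like
the sketch below, assembled from the v4 diffs in this thread (the exact
pr_info() wording is an assumption):

/* Called by arch code once shadow memory is usable: */
void __init kasan_init_generic(void)
{
	kasan_enable();	/* static_branch_enable(&kasan_flag_enabled) */
	pr_info("KernelAddressSanitizer initialized (generic)\n");
}

/* HW_TAGS keeps enabling itself from its own init path: */
void __init kasan_init_hw_tags(void)
{
	/* ... MTE probing, kasan.* boot parameter handling ... */
	kasan_enable();
}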
>
> And there are a bunch of kasan_enabled() calls left whose behavior
> changed for no reason.
Having kasan_enabled() as the only check in v5, used the same way the
current mainline code uses it, should be right. I've addressed this
comment below. Thanks!
>
>
> > - Merged configuration and implementation for atomicity
> > ---
> > include/linux/kasan-enabled.h | 36 +++++++++++++++++++++++-------
> > include/linux/kasan.h | 42 +++++++++++++++++++++++++++--------
> > lib/Kconfig.kasan | 8 +++++++
> > mm/kasan/common.c | 18 ++++++++++-----
> > mm/kasan/generic.c | 23 +++++++++++--------
> > mm/kasan/hw_tags.c | 9 +-------
> > mm/kasan/kasan.h | 36 +++++++++++++++++++++---------
> > mm/kasan/shadow.c | 32 ++++++--------------------
> > mm/kasan/sw_tags.c | 4 +++-
> > mm/kasan/tags.c | 2 +-
> > 10 files changed, 133 insertions(+), 77 deletions(-)
> >
> > diff --git a/include/linux/kasan-enabled.h b/include/linux/kasan-enabled.h
> > index 6f612d69ea0..52a3909f032 100644
> > --- a/include/linux/kasan-enabled.h
> > +++ b/include/linux/kasan-enabled.h
> > @@ -4,32 +4,52 @@
> >
> > #include <linux/static_key.h>
> >
> > -#ifdef CONFIG_KASAN_HW_TAGS
> > +/* Controls whether KASAN is enabled at all (compile-time check). */
> > +static __always_inline bool kasan_enabled(void)
> > +{
> > + return IS_ENABLED(CONFIG_KASAN);
> > +}
> >
> > +#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
> > +/*
> > + * Global runtime flag for KASAN modes that need runtime control.
> > + * Used by ARCH_DEFER_KASAN architectures and HW_TAGS mode.
> > + */
> > DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> >
> > -static __always_inline bool kasan_enabled(void)
> > +/*
> > + * Runtime control for shadow memory initialization or HW_TAGS mode.
> > + * Uses static key for architectures that need deferred KASAN or HW_TAGS.
> > + */
> > +static __always_inline bool kasan_shadow_initialized(void)
>
> Don't rename it, just leave it as is - kasan_enabled().
> It's a better name, it's shorter, and you don't need to convert call
> sites, so there is less chance of mistakes from missed
> kasan_enabled() -> kasan_shadow_initialized() conversions.
I actually had kasan_enabled() as the only check in v2, but went to
the two-check approach in v3 after this comment:
https://lore.kernel.org/all/CA+fCnZcGyTECP15VMSPh+duLmxNe=ApHfOnbAY3NqtFHZvceZw@mail.gmail.com/
OK, then we will have kasan_enabled() as the *only* check in
kasan-enabled.h:

#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
static __always_inline bool kasan_enabled(void)
{
	return static_branch_likely(&kasan_flag_enabled);
}
#else
static inline bool kasan_enabled(void)
{
	return IS_ENABLED(CONFIG_KASAN);
}
#endif
And I will remove kasan_arch_is_ready() (kasan_shadow_initialized() in
the current v4).
So it is the single place to check whether KASAN is enabled, for every
arch and for the internal KASAN code.
The current mainline code has the same behavior, but only for HW_TAGS.
Is this correct?
>
>
> > {
> > return static_branch_likely(&kasan_flag_enabled);
> > }
> >
> > -static inline bool kasan_hw_tags_enabled(void)
> > +static inline void kasan_enable(void)
> > +{
> > + static_branch_enable(&kasan_flag_enabled);
> > +}
> > +#else
> > +/* For architectures that can enable KASAN early, use compile-time check. */
> > +static __always_inline bool kasan_shadow_initialized(void)
> > {
> > return kasan_enabled();
> > }
> >
>
> ...
>
> >
> > void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
> > -int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
> > -void kasan_release_vmalloc(unsigned long start, unsigned long end,
> > +
> > +int __kasan_populate_vmalloc(unsigned long addr, unsigned long size);
> > +static inline int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
> > +{
> > + if (!kasan_shadow_initialized())
> > + return 0;
>
>
> What's the point of moving these checks to the header?
> Leave them in the .c file; it's easier to grep and navigate the code
> this way.
Andrey Konovalov commented [1] that we should avoid these checks in
the .c files by moving them to headers with __ wrappers:
: 1. Avoid spraying kasan_arch_is_ready() throughout the KASAN
: implementation and move these checks into include/linux/kasan.h (and
: add __wrappers when required).
[1] https://lore.kernel.org/all/CA+fCnZcGyTECP15VMSPh+duLmxNe=ApHfOnbAY3NqtFHZvceZw@mail.gmail.com/
>
>
> > + return __kasan_populate_vmalloc(addr, size);
> > +}
> > +
> > +void __kasan_release_vmalloc(unsigned long start, unsigned long end,
> > unsigned long free_region_start,
> > unsigned long free_region_end,
> > unsigned long flags);
> > +static inline void kasan_release_vmalloc(unsigned long start,
> > + unsigned long end,
> > + unsigned long free_region_start,
> > + unsigned long free_region_end,
> > + unsigned long flags)
> > +{
> > + if (kasan_shadow_initialized())
> > + __kasan_release_vmalloc(start, end, free_region_start,
> > + free_region_end, flags);
> > +}
> >
>
> ...> @@ -250,7 +259,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
> > bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> > unsigned long ip)
> > {
> > - if (!kasan_arch_is_ready() || is_kfence_address(object))
> > + if (is_kfence_address(object))
> > return false;
> > return check_slab_allocation(cache, object, ip);
> > }
> > @@ -258,7 +267,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
> > bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
> > bool still_accessible)
> > {
> > - if (!kasan_arch_is_ready() || is_kfence_address(object))
> > + if (is_kfence_address(object))
> > return false;
> >
> > poison_slab_object(cache, object, init, still_accessible);
> > @@ -282,9 +291,6 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
> >
> > static inline bool check_page_allocation(void *ptr, unsigned long ip)
> > {
> > - if (!kasan_arch_is_ready())
> > - return false;
> > -
>
>
> Well, you can't do this yet, because no arch is using ARCH_DEFER_KASAN
> yet, so this breaks bisectability.
> Leave it, and remove it with a separate patch only once there are no
> users left.
Will do in v5 at the end of patch series.
>
> > if (ptr != page_address(virt_to_head_page(ptr))) {
> > kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
> > return true;
> > @@ -511,7 +517,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
> > return true;
> > }
> >
> > - if (is_kfence_address(ptr) || !kasan_arch_is_ready())
> > + if (is_kfence_address(ptr))
> > return true;
> >
> > slab = folio_slab(folio);
>
>
* Re: [PATCH v4 1/9] kasan: introduce ARCH_DEFER_KASAN and unify static key across modes
2025-08-06 14:15 ` Sabyrzhan Tasbolatov
@ 2025-08-06 19:51 ` Andrey Ryabinin
0 siblings, 0 replies; 19+ messages in thread
From: Andrey Ryabinin @ 2025-08-06 19:51 UTC (permalink / raw)
To: Sabyrzhan Tasbolatov
Cc: hca, christophe.leroy, andreyknvl, agordeev, akpm, zhangqing,
chenhuacai, trishalfonso, davidgow, glider, dvyukov, kasan-dev,
linux-kernel, loongarch, linuxppc-dev, linux-riscv, linux-s390,
linux-um, linux-mm
On 8/6/25 4:15 PM, Sabyrzhan Tasbolatov wrote:
> On Wed, Aug 6, 2025 at 6:35 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>>
>>
>>
>> On 8/5/25 4:26 PM, Sabyrzhan Tasbolatov wrote:
>>> Introduce CONFIG_ARCH_DEFER_KASAN to identify architectures that need
>>> to defer KASAN initialization until shadow memory is properly set up,
>>> and unify the static key infrastructure across all KASAN modes.
>>>
>>> Some architectures (like PowerPC with radix MMU) need to set up their
>>> shadow memory mappings before KASAN can be safely enabled, while others
>>> (like s390, x86, arm) can enable KASAN much earlier or even from the
>>> beginning.
>>>
>>> Historically, the runtime static key kasan_flag_enabled existed only for
>>> CONFIG_KASAN_HW_TAGS mode. Generic and SW_TAGS modes either relied on
>>> architecture-specific kasan_arch_is_ready() implementations or evaluated
>>> KASAN checks unconditionally, leading to code duplication.
>>>
>>> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217049
>>> Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@gmail.com>
>>> ---
>>> Changes in v4:
>>> - Fixed HW_TAGS static key functionality (was broken in v3)
>>
>> I don't think it's fixed. Before your patch, kasan_enabled()
>> essentially worked like this:
>>
>> 	if (IS_ENABLED(CONFIG_KASAN_HW_TAGS))
>> 		return static_branch_likely(&kasan_flag_enabled);
>> 	else
>> 		return IS_ENABLED(CONFIG_KASAN);
>>
>> Now it's just IS_ENABLED(CONFIG_KASAN);
>
> In v4 it is:
>
> #if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
> static __always_inline bool kasan_shadow_initialized(void)
> {
> 	return static_branch_likely(&kasan_flag_enabled);
> }
> #else
> static __always_inline bool kasan_shadow_initialized(void)
> {
> 	return kasan_enabled(); /* which is IS_ENABLED(CONFIG_KASAN) */
> }
> #endif
>
> So for HW_TAGS, KASAN is enabled in kasan_init_hw_tags().
You are referring to kasan_shadow_initialized(), but I was talking
about kasan_enabled() specifically.
E.g., your patch changes the behavior of kasan_init_slab_obj(), which
doesn't use kasan_shadow_initialized() (in the case of HW_TAGS=y &&
kasan_flag_enabled == false):
static __always_inline void * __must_check kasan_init_slab_obj(
				struct kmem_cache *cache, const void *object)
{
	if (kasan_enabled())
		return __kasan_init_slab_obj(cache, object);
	return (void *)object;
}
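In other words (a minimal before/after sketch of that wrapper's
condition under HW_TAGS=y, with the hardware/cmdline leaving the flag
off):

	/* before the series (HW_TAGS): */
	if (static_branch_likely(&kasan_flag_enabled))	/* false until hw init */
		return __kasan_init_slab_obj(cache, object);

	/* after the v4 header above: */
	if (IS_ENABLED(CONFIG_KASAN))			/* compile-time true */
		return __kasan_init_slab_obj(cache, object);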
>>> +#if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
>>> +/*
>>> + * Global runtime flag for KASAN modes that need runtime control.
>>> + * Used by ARCH_DEFER_KASAN architectures and HW_TAGS mode.
>>> + */
>>> DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
>>>
>>> -static __always_inline bool kasan_enabled(void)
>>> +/*
>>> + * Runtime control for shadow memory initialization or HW_TAGS mode.
>>> + * Uses static key for architectures that need deferred KASAN or HW_TAGS.
>>> + */
>>> +static __always_inline bool kasan_shadow_initialized(void)
>>
>> Don't rename it, just leave it as is - kasan_enabled().
>> It's a better name, it's shorter, and you don't need to convert call
>> sites, so there is less chance of mistakes from missed
>> kasan_enabled() -> kasan_shadow_initialized() conversions.
>
> I actually had kasan_enabled() as the only check in v2, but went to
> the two-check approach in v3 after this comment:
> https://lore.kernel.org/all/CA+fCnZcGyTECP15VMSPh+duLmxNe=ApHfOnbAY3NqtFHZvceZw@mail.gmail.com/
AFAIU the comment suggests that we need two checks/flags: one in
kasan_enabled(), which checks whether KASAN was enabled via the cmdline
(currently only for HW_TAGS), and one in kasan_arch_is_ready() (or
kasan_shadow_initialized()), which checks whether the arch has
initialized KASAN.
And that is not what v3/v4 does: v4 basically has one check, just under
a different name.
Separate checks might be needed if we have code paths that need
'kasan_arch_is_ready() && !kasan_enabled()' or, vice versa,
'!kasan_arch_is_ready() && kasan_enabled()'.
Off the top of my head, I can't say whether we have such cases.
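A sketch of what such a two-flag design could look like
(kasan_flag_arch_ready and kasan_poison_checked() are hypothetical
names, and as noted it is not clear such code paths exist):

/* Flag 1: was KASAN requested at all (cmdline; today HW_TAGS only)? */
static __always_inline bool kasan_enabled(void)
{
	return static_branch_likely(&kasan_flag_enabled);
}

/* Flag 2 (hypothetical): has the arch finished shadow setup? */
static __always_inline bool kasan_arch_is_ready(void)
{
	return static_branch_likely(&kasan_flag_arch_ready);
}

/* Fast paths would then need both: */
static __always_inline void kasan_poison_checked(const void *addr,
						 size_t size, u8 value, bool init)
{
	if (kasan_enabled() && kasan_arch_is_ready())
		kasan_poison(addr, size, value, init);
}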
>
> OK, then we will have kasan_enabled() as the *only* check in
> kasan-enabled.h:
>
> #if defined(CONFIG_ARCH_DEFER_KASAN) || defined(CONFIG_KASAN_HW_TAGS)
> static __always_inline bool kasan_enabled(void)
> {
> 	return static_branch_likely(&kasan_flag_enabled);
> }
> #else
> static inline bool kasan_enabled(void)
> {
> 	return IS_ENABLED(CONFIG_KASAN);
> }
> #endif
>
> And I will remove kasan_arch_is_ready() (kasan_shadow_initialized() in
> the current v4).
>
> So it is the single place to check whether KASAN is enabled, for every
> arch and for the internal KASAN code.
> The current mainline code has the same behavior, but only for HW_TAGS.
>
> Is this correct?
>
Yep, that's what I meant.
>>
>>
>>> {
>>> return static_branch_likely(&kasan_flag_enabled);
>>> }
>>>
>>> -static inline bool kasan_hw_tags_enabled(void)
>>> +static inline void kasan_enable(void)
>>> +{
>>> + static_branch_enable(&kasan_flag_enabled);
>>> +}
>>> +#else
>>> +/* For architectures that can enable KASAN early, use compile-time check. */
>>> +static __always_inline bool kasan_shadow_initialized(void)
>>> {
>>> return kasan_enabled();
>>> }
>>>
>>
>> ...
>>
>>>
>>> void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
>>> -int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
>>> -void kasan_release_vmalloc(unsigned long start, unsigned long end,
>>> +
>>> +int __kasan_populate_vmalloc(unsigned long addr, unsigned long size);
>>> +static inline int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
>>> +{
>>> + if (!kasan_shadow_initialized())
>>> + return 0;
>>
>>
>> What's the point of moving these checks to the header?
>> Leave them in the .c file; it's easier to grep and navigate the code
>> this way.
>
> Andrey Konovalov commented [1] that we should avoid these checks in
> the .c files by moving them to headers with __ wrappers:
>
> : 1. Avoid spraying kasan_arch_is_ready() throughout the KASAN
> : implementation and move these checks into include/linux/kasan.h (and
> : add __wrappers when required).
>
> [1] https://lore.kernel.org/all/CA+fCnZcGyTECP15VMSPh+duLmxNe=ApHfOnbAY3NqtFHZvceZw@mail.gmail.com/
>
I think Andrey K. meant cases where we have multiple implementations of
one function, one per mode. In such cases it makes sense to merge the
multiple kasan_arch_is_ready() checks into a single one in the header.
But in a case like kasan_populate_vmalloc(), where we have only one
implementation, I don't see any value in adding a wrapper / moving it
to the header.
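Concretely, the distinction could look like this sketch (paraphrased,
not the final code): a function with per-mode implementations gets one
check in its header wrapper, while a single-implementation function
keeps the check in its .c file.

/* Header wrapper: one check covers every mode's __kasan_slab_free(): */
static __always_inline bool kasan_slab_free(struct kmem_cache *s,
					    void *object, bool init,
					    bool still_accessible)
{
	if (kasan_enabled())
		return __kasan_slab_free(s, object, init, still_accessible);
	return false;
}

/* Single implementation: keep the check inside mm/kasan/shadow.c: */
int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
{
	if (!kasan_enabled())
		return 0;
	/* ... populate the shadow for [addr, addr + size) ... */
	return 0;
}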
>>
>>
>>> + return __kasan_populate_vmalloc(addr, size);
>>> +}
>>> +
>>> +void __kasan_release_vmalloc(unsigned long start, unsigned long end,
>>> unsigned long free_region_start,
>>> unsigned long free_region_end,
>>> unsigned long flags);
>>> +static inline void kasan_release_vmalloc(unsigned long start,
>>> + unsigned long end,
>>> + unsigned long free_region_start,
>>> + unsigned long free_region_end,
>>> + unsigned long flags)
>>> +{
>>> + if (kasan_shadow_initialized())
>>> + __kasan_release_vmalloc(start, end, free_region_start,
>>> + free_region_end, flags);
>>> +}
>>>
>>
>> ...> @@ -250,7 +259,7 @@ static inline void poison_slab_object(struct kmem_cache *cache, void *object,
>>> bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
>>> unsigned long ip)
>>> {
>>> - if (!kasan_arch_is_ready() || is_kfence_address(object))
>>> + if (is_kfence_address(object))
>>> return false;
>>> return check_slab_allocation(cache, object, ip);
>>> }
>>> @@ -258,7 +267,7 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
>>> bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
>>> bool still_accessible)
>>> {
>>> - if (!kasan_arch_is_ready() || is_kfence_address(object))
>>> + if (is_kfence_address(object))
>>> return false;
>>>
>>> poison_slab_object(cache, object, init, still_accessible);
>>> @@ -282,9 +291,6 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
>>>
>>> static inline bool check_page_allocation(void *ptr, unsigned long ip)
>>> {
>>> - if (!kasan_arch_is_ready())
>>> - return false;
>>> -
>>
>>
>> Well, you can't do this yet, because no arch is using ARCH_DEFER_KASAN
>> yet, so this breaks bisectability.
>> Leave it, and remove it with a separate patch only once there are no
>> users left.
>
> Will do in v5 at the end of patch series.
>
>>
>>> if (ptr != page_address(virt_to_head_page(ptr))) {
>>> kasan_report_invalid_free(ptr, ip, KASAN_REPORT_INVALID_FREE);
>>> return true;
>>> @@ -511,7 +517,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
>>> return true;
>>> }
>>>
>>> - if (is_kfence_address(ptr) || !kasan_arch_is_ready())
>>> + if (is_kfence_address(ptr))
>>> return true;
>>>
>>> slab = folio_slab(folio);
>>
>>