* [PATCH 0/2] introduce kasan stonly-mode in hw-tags
@ 2025-08-11 17:36 Yeoreum Yun
2025-08-11 17:36 ` [PATCH 1/2] kasan/hw-tags: introduce store only mode Yeoreum Yun
2025-08-11 17:36 ` [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
0 siblings, 2 replies; 11+ messages in thread
From: Yeoreum Yun @ 2025-08-11 17:36 UTC (permalink / raw)
To: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, catalin.marinas, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang
Cc: kasan-dev, workflows, linux-doc, linux-kernel, linux-arm-kernel,
linux-mm, Yeoreum Yun
Hardware tag based KASAN is implemented using the Memory Tagging Extension
(MTE) feature.
MTE is built on top of the ARMv8.0 virtual address tagging TBI
(Top Byte Ignore) feature and allows software to access a 4-bit
allocation tag for each 16-byte granule in the physical address space.
A logical tag is derived from bits 59-56 of the virtual
address used for the memory access. A CPU with MTE enabled will compare
the logical tag against the allocation tag and potentially raise an
tag check fault on mismatch, subject to system registers configuration.
Since ARMv8.9, FEAT_MTE_STORE_ONLY can be used to restrict raise of tag
check fault on store operation only.
Using this feature (FEAT_MTE_STORE_ONLY), introduce KASAN store-only mode
which restricts KASAN check store operation only.
This mode omits KASAN check for fetch/load operation.
Therefore, it might be used not only debugging purpose but also in
normal environment.
Yeoreum Yun (2):
kasan/hw-tags: introduce store only mode
kasan: apply store-only mode in kasan kunit testcases
Documentation/dev-tools/kasan.rst | 3 +
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 +
arch/arm64/kernel/cpufeature.c | 6 +
arch/arm64/kernel/mte.c | 14 +
include/linux/kasan.h | 2 +
mm/kasan/hw_tags.c | 76 +++++-
mm/kasan/kasan.h | 10 +
mm/kasan/kasan_test_c.c | 423 +++++++++++++++++++++++------
9 files changed, 457 insertions(+), 84 deletions(-)
base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 1/2] kasan/hw-tags: introduce store only mode
2025-08-11 17:36 [PATCH 0/2] introduce kasan stonly-mode in hw-tags Yeoreum Yun
@ 2025-08-11 17:36 ` Yeoreum Yun
2025-08-12 16:25 ` Andrey Konovalov
2025-08-11 17:36 ` [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
1 sibling, 1 reply; 11+ messages in thread
From: Yeoreum Yun @ 2025-08-11 17:36 UTC (permalink / raw)
To: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, catalin.marinas, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang
Cc: kasan-dev, workflows, linux-doc, linux-kernel, linux-arm-kernel,
linux-mm, Yeoreum Yun
Since Armv8.9, FEATURE_MTE_STORE_ONLY feature is introduced to restrict
raise of tag check fault on store operation only.
Introcude KASAN store only mode based on this feature.
KASAN store only mode restricts KASAN checks operation for store only and
omits the checks for fetch/read operation when accessing memory.
So it might be used not only debugging enviroment but also normal
enviroment to check memory safty.
This features can be controlled with "kasan.stonly" arguments.
When "kasan.stonly=on", KASAN checks store only mode otherwise
KASAN checks all operations.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
Documentation/dev-tools/kasan.rst | 3 ++
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 +++
arch/arm64/kernel/cpufeature.c | 6 +++
arch/arm64/kernel/mte.c | 14 ++++++
include/linux/kasan.h | 2 +
mm/kasan/hw_tags.c | 76 +++++++++++++++++++++++++++++-
mm/kasan/kasan.h | 10 ++++
8 files changed, 116 insertions(+), 2 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 0a1418ab72fd..7567a2ca0e39 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -163,6 +163,9 @@ disabling KASAN altogether or controlling its features:
This parameter is intended to allow sampling only large page_alloc
allocations, which is the biggest source of the performance overhead.
+- ``kasan.stonly=off`` or ``kasan.stonly=on`` controls whether KASAN checks
+ store operation only or all operation.
+
Error reports
~~~~~~~~~~~~~
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 5213248e081b..9d8c72c9c91f 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -308,6 +308,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
#define arch_enable_tag_checks_sync() mte_enable_kernel_sync()
#define arch_enable_tag_checks_async() mte_enable_kernel_async()
#define arch_enable_tag_checks_asymm() mte_enable_kernel_asymm()
+#define arch_enable_tag_checks_stonly() mte_enable_kernel_stonly()
#define arch_suppress_tag_checks_start() mte_enable_tco()
#define arch_suppress_tag_checks_stop() mte_disable_tco()
#define arch_force_async_tag_fault() mte_check_tfsr_exit()
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 2e98028c1965..d75908ed9d0f 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
void mte_enable_kernel_sync(void);
void mte_enable_kernel_async(void);
void mte_enable_kernel_asymm(void);
+int mte_enable_kernel_stonly(void);
#else /* CONFIG_ARM64_MTE */
@@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
{
}
+static inline int mte_enable_kenrel_stonly(void)
+{
+ return -EINVAL;
+}
+
#endif /* CONFIG_ARM64_MTE */
#endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d..fdc510fe0187 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2404,6 +2404,11 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
kasan_init_hw_tags_cpu();
}
+
+static void cpu_enable_mte_stonly(struct arm64_cpu_capabilities const *cap)
+{
+ kasan_late_init_hw_tags_cpu();
+}
#endif /* CONFIG_ARM64_MTE */
static void user_feature_fixup(void)
@@ -2922,6 +2927,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_MTE_STORE_ONLY,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
+ .cpu_enable = cpu_enable_mte_stonly,
ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
},
#endif /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index e5e773844889..a1cb2a8a79a1 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -157,6 +157,20 @@ void mte_enable_kernel_asymm(void)
mte_enable_kernel_sync();
}
}
+
+int mte_enable_kernel_stonly(void)
+{
+ if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
+ return -EINVAL;
+
+ sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
+ SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
+ isb();
+
+ pr_info_once("MTE: enabled stonly mode at EL1\n");
+
+ return 0;
+}
#endif
#ifdef CONFIG_KASAN_HW_TAGS
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2b..28951b29c593 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -552,9 +552,11 @@ static inline void kasan_init_sw_tags(void) { }
#ifdef CONFIG_KASAN_HW_TAGS
void kasan_init_hw_tags_cpu(void);
void __init kasan_init_hw_tags(void);
+void kasan_late_init_hw_tags_cpu(void);
#else
static inline void kasan_init_hw_tags_cpu(void) { }
static inline void kasan_init_hw_tags(void) { }
+static inline void kasan_late_init_hw_tags_cpu(void) { }
#endif
#ifdef CONFIG_KASAN_VMALLOC
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b54..2caa6fe5ed47 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -41,9 +41,16 @@ enum kasan_arg_vmalloc {
KASAN_ARG_VMALLOC_ON,
};
+enum kasan_arg_stonly {
+ KASAN_ARG_STONLY_DEFAULT,
+ KASAN_ARG_STONLY_OFF,
+ KASAN_ARG_STONLY_ON,
+};
+
static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
+static enum kasan_arg_stonly kasan_arg_stonly __ro_after_init;
/*
* Whether KASAN is enabled at all.
@@ -67,6 +74,9 @@ DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
#endif
EXPORT_SYMBOL_GPL(kasan_flag_vmalloc);
+DEFINE_STATIC_KEY_FALSE(kasan_flag_stonly);
+EXPORT_SYMBOL_GPL(kasan_flag_stonly);
+
#define PAGE_ALLOC_SAMPLE_DEFAULT 1
#define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT 3
@@ -141,6 +151,23 @@ static int __init early_kasan_flag_vmalloc(char *arg)
}
early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
+/* kasan.stonly=off/on */
+static int __init early_kasan_flag_stonly(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_stonly = KASAN_ARG_STONLY_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_stonly = KASAN_ARG_STONLY_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.stonly", early_kasan_flag_stonly);
+
static inline const char *kasan_mode_info(void)
{
if (kasan_mode == KASAN_MODE_ASYNC)
@@ -219,6 +246,20 @@ void kasan_init_hw_tags_cpu(void)
kasan_enable_hw_tags();
}
+/*
+ * kasan_late_init_hw_tags_cpu_post() is called for each CPU after
+ * all cpus are bring-up at boot.
+ * Not marked as __init as a CPU can be hot-plugged after boot.
+ */
+void kasan_late_init_hw_tags_cpu(void)
+{
+ /*
+ * Enable stonly mode only when explicitly requested through the command line.
+ * If system doesn't support, kasan checks all operation.
+ */
+ kasan_enable_stonly();
+}
+
/* kasan_init_hw_tags() is called once on boot CPU. */
void __init kasan_init_hw_tags(void)
{
@@ -257,15 +298,28 @@ void __init kasan_init_hw_tags(void)
break;
}
+ switch (kasan_arg_stonly) {
+ case KASAN_ARG_STONLY_DEFAULT:
+ /* Default is specified by kasan_flag_stonly definition. */
+ break;
+ case KASAN_ARG_STONLY_OFF:
+ static_branch_disable(&kasan_flag_stonly);
+ break;
+ case KASAN_ARG_STONLY_ON:
+ static_branch_enable(&kasan_flag_stonly);
+ break;
+ }
+
kasan_init_tags();
/* KASAN is now initialized, enable it. */
static_branch_enable(&kasan_flag_enabled);
- pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
+ pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s stonly=%s\n",
kasan_mode_info(),
str_on_off(kasan_vmalloc_enabled()),
- str_on_off(kasan_stack_collection_enabled()));
+ str_on_off(kasan_stack_collection_enabled()),
+ str_on_off(kasan_stonly_enabled()));
}
#ifdef CONFIG_KASAN_VMALLOC
@@ -394,6 +448,22 @@ void kasan_enable_hw_tags(void)
hw_enable_tag_checks_sync();
}
+void kasan_enable_stonly(void)
+{
+ if (kasan_arg_stonly == KASAN_ARG_STONLY_ON) {
+ if (hw_enable_tag_checks_stonly()) {
+ static_branch_disable(&kasan_flag_stonly);
+ kasan_arg_stonly = KASAN_ARG_STONLY_OFF;
+ pr_warn_once("KernelAddressSanitizer: store only mode isn't supported (hw-tags)\n");
+ }
+ }
+}
+
+bool kasan_stonly_enabled(void)
+{
+ return static_branch_unlikely(&kasan_flag_stonly);
+}
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
EXPORT_SYMBOL_IF_KUNIT(kasan_enable_hw_tags);
@@ -404,4 +474,6 @@ VISIBLE_IF_KUNIT void kasan_force_async_fault(void)
}
EXPORT_SYMBOL_IF_KUNIT(kasan_force_async_fault);
+EXPORT_SYMBOL_IF_KUNIT(kasan_stonly_enabled);
+
#endif
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e64..cfbcebdbcbec 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -33,6 +33,7 @@ static inline bool kasan_stack_collection_enabled(void)
#include "../slab.h"
DECLARE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
+DECLARE_STATIC_KEY_FALSE(kasan_flag_stonly);
enum kasan_mode {
KASAN_MODE_SYNC,
@@ -428,6 +429,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define hw_enable_tag_checks_sync() arch_enable_tag_checks_sync()
#define hw_enable_tag_checks_async() arch_enable_tag_checks_async()
#define hw_enable_tag_checks_asymm() arch_enable_tag_checks_asymm()
+#define hw_enable_tag_checks_stonly() arch_enable_tag_checks_stonly()
#define hw_suppress_tag_checks_start() arch_suppress_tag_checks_start()
#define hw_suppress_tag_checks_stop() arch_suppress_tag_checks_stop()
#define hw_force_async_tag_fault() arch_force_async_tag_fault()
@@ -437,10 +439,18 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
arch_set_mem_tag_range((addr), (size), (tag), (init))
void kasan_enable_hw_tags(void);
+void kasan_enable_stonly(void);
+bool kasan_stonly_enabled(void);
#else /* CONFIG_KASAN_HW_TAGS */
static inline void kasan_enable_hw_tags(void) { }
+static inline void kasan_enable_stonly(void) { }
+
+static inline bool kasan_stonly_enabled(void)
+{
+ return false;
+}
#endif /* CONFIG_KASAN_HW_TAGS */
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-11 17:36 [PATCH 0/2] introduce kasan stonly-mode in hw-tags Yeoreum Yun
2025-08-11 17:36 ` [PATCH 1/2] kasan/hw-tags: introduce store only mode Yeoreum Yun
@ 2025-08-11 17:36 ` Yeoreum Yun
2025-08-12 16:28 ` Andrey Konovalov
1 sibling, 1 reply; 11+ messages in thread
From: Yeoreum Yun @ 2025-08-11 17:36 UTC (permalink / raw)
To: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, catalin.marinas, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang
Cc: kasan-dev, workflows, linux-doc, linux-kernel, linux-arm-kernel,
linux-mm, Yeoreum Yun
When KASAN is configured in store-only mode,
fetch/load operations do not trigger tag check faults.
As a result, the outcome of some test cases may differ
compared to when KASAN is configured without store-only mode.
To address this:
1. Replace fetch/load expressions that would
normally trigger tag check faults with store operation
when running under store-only and sync mode.
In case of async/asymm mode, skip the store operation triggering
tag check fault since it corrupts memory.
2. Skip some testcases affected by initial value
(i.e) atomic_cmpxchg() testcase maybe successd if
it passes valid atomic_t address and invalid oldaval address.
In this case, if invalid atomic_t doesn't have the same oldval,
it won't trigger store operation so the test will pass.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
mm/kasan/kasan_test_c.c | 423 ++++++++++++++++++++++++++++++++--------
1 file changed, 341 insertions(+), 82 deletions(-)
diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
index 2aa12dfa427a..22d5d6d6cd9f 100644
--- a/mm/kasan/kasan_test_c.c
+++ b/mm/kasan/kasan_test_c.c
@@ -94,11 +94,13 @@ static void kasan_test_exit(struct kunit *test)
}
/**
- * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
- * KASAN report; causes a KUnit test failure otherwise.
+ * _KUNIT_EXPECT_KASAN_TEMPLATE - check that the executed expression produces
+ * a KASAN report or not; a KUnit test failure when it's different from @produce.
*
* @test: Currently executing KUnit test.
- * @expression: Expression that must produce a KASAN report.
+ * @expr: Expression produce a KASAN report or not.
+ * @expr_str: Expression string
+ * @produce: expression should produce a KASAN report.
*
* For hardware tag-based KASAN, when a synchronous tag fault happens, tag
* checking is auto-disabled. When this happens, this test handler reenables
@@ -110,25 +112,29 @@ static void kasan_test_exit(struct kunit *test)
* Use READ/WRITE_ONCE() for the accesses and compiler barriers around the
* expression to prevent that.
*
- * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept
+ * In between _KUNIT_EXPECT_KASAN_TEMPLATE checks, test_status.report_found is kept
* as false. This allows detecting KASAN reports that happen outside of the
* checks by asserting !test_status.report_found at the start of
- * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit.
+ * _KUNIT_EXPECT_KASAN_TEMPLATE and in kasan_test_exit.
*/
-#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \
+#define _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, expr_str, produce) \
+do { \
if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
kasan_sync_fault_possible()) \
migrate_disable(); \
KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \
barrier(); \
- expression; \
+ expr; \
barrier(); \
if (kasan_async_fault_possible()) \
kasan_force_async_fault(); \
- if (!READ_ONCE(test_status.report_found)) { \
- KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \
- "expected in \"" #expression \
- "\", but none occurred"); \
+ if (READ_ONCE(test_status.report_found) != produce) { \
+ KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN %s " \
+ "expected in \"" expr_str \
+ "\", but %soccurred", \
+ (produce ? "failure" : "success"), \
+ (test_status.report_found ? \
+ "" : "none ")); \
} \
if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
kasan_sync_fault_possible()) { \
@@ -141,6 +147,26 @@ static void kasan_test_exit(struct kunit *test)
WRITE_ONCE(test_status.async_fault, false); \
} while (0)
+/*
+ * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
+ * KASAN report; causes a KUnit test failure otherwise.
+ *
+ * @test: Currently executing KUnit test.
+ * @expr: Expression produce a KASAN report.
+ */
+#define KUNIT_EXPECT_KASAN_FAIL(test, expr) \
+ _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, true)
+
+/*
+ * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
+ * produces a KASAN report; causes a KUnit test failure otherwise.
+ *
+ * @test: Currently executing KUnit test.
+ * @expr: Expression doesn't produce a KASAN report.
+ */
+#define KUNIT_EXPECT_KASAN_SUCCESS(test, expr) \
+ _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, false)
+
#define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \
if (!IS_ENABLED(config)) \
kunit_skip((test), "Test requires " #config "=y"); \
@@ -183,8 +209,15 @@ static void kmalloc_oob_right(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y');
/* Out-of-bounds access past the aligned kmalloc object. */
- KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
- ptr[size + KASAN_GRANULE_SIZE + 5]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] =
+ ptr[size + KASAN_GRANULE_SIZE + 5]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ ptr[size + KASAN_GRANULE_SIZE + 5] = ptr[0]);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
+ ptr[size + KASAN_GRANULE_SIZE + 5]);
kfree(ptr);
}
@@ -198,7 +231,13 @@ static void kmalloc_oob_left(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
OPTIMIZER_HIDE_VAR(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr = *(ptr - 1));
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, *(ptr - 1) = *(ptr));
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
+
kfree(ptr);
}
@@ -211,7 +250,13 @@ static void kmalloc_node_oob_right(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
OPTIMIZER_HIDE_VAR(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = ptr[0]);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+
kfree(ptr);
}
@@ -291,7 +336,12 @@ static void kmalloc_large_uaf(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
kfree(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void kmalloc_large_invalid_free(struct kunit *test)
@@ -323,7 +373,13 @@ static void page_alloc_oob_right(struct kunit *test)
ptr = page_address(pages);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = ptr[0]);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+
free_pages((unsigned long)ptr, order);
}
@@ -338,7 +394,12 @@ static void page_alloc_uaf(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
free_pages((unsigned long)ptr, order);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void krealloc_more_oob_helper(struct kunit *test,
@@ -455,10 +516,15 @@ static void krealloc_uaf(struct kunit *test)
ptr1 = kmalloc(size1, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
kfree(ptr1);
-
KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
KUNIT_ASSERT_NULL(test, ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
+
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *(volatile char *)ptr1);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1 = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
}
static void kmalloc_oob_16(struct kunit *test)
@@ -501,7 +567,13 @@ static void kmalloc_uaf_16(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
kfree(ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr1 = *ptr2);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr2 = *ptr1);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
+
kfree(ptr1);
}
@@ -640,8 +712,17 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
memset((char *)ptr, 0, 64);
OPTIMIZER_HIDE_VAR(ptr);
OPTIMIZER_HIDE_VAR(invalid_size);
- KUNIT_EXPECT_KASAN_FAIL(test,
- memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ memmove((char *)ptr + 4, (char *)ptr, invalid_size));
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+
kfree(ptr);
}
@@ -654,7 +735,13 @@ static void kmalloc_uaf(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
kfree(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
+
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[8]);
+ if (!kasan_sync_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8] = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
}
static void kmalloc_uaf_memset(struct kunit *test)
@@ -701,7 +788,13 @@ static void kmalloc_uaf2(struct kunit *test)
goto again;
}
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[40]);
+ if (!kasan_sync_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40] = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
+
KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2);
kfree(ptr2);
@@ -727,19 +820,35 @@ static void kmalloc_uaf3(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
kfree(ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[8]);
+ if (!kasan_sync_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8] = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
}
static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
{
int *i_unsafe = unsafe;
- KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*i_unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+
KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, smp_load_acquire(i_unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_read(unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
@@ -752,18 +861,38 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+
+ /*
+ * The result of the test below may vary due to garbage values of unsafe in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
+ /*
+ * The result of the test below may vary due to garbage values of unsafe in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
+ }
+
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_long_read(unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
@@ -776,16 +905,32 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
+
+ /*
+ * The result of the test below may vary due to garbage values in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
+
+ /*
+ * The result of the test below may vary due to garbage values in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
+ }
}
static void kasan_atomics(struct kunit *test)
@@ -842,8 +987,18 @@ static void ksize_unpoisons_memory(struct kunit *test)
/* These must trigger a KASAN report. */
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
+
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size + 5]);
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[real_size - 1]);
+ if (!kasan_sync_fault_possible()) {
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5] = 0);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1] = 0);
+ }
+ } else {
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
+ }
kfree(ptr);
}
@@ -863,8 +1018,17 @@ static void ksize_uaf(struct kunit *test)
OPTIMIZER_HIDE_VAR(ptr);
KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size]);
+ if (!kasan_sync_fault_possible()) {
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size] = 0);
+ }
+ } else {
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
+ }
}
/*
@@ -886,7 +1050,11 @@ static void rcu_uaf_reclaim(struct rcu_head *rp)
container_of(rp, struct kasan_rcu_info, rcu);
kfree(fp);
- ((volatile struct kasan_rcu_info *)fp)->i;
+
+ if (kasan_stonly_enabled() && !kasan_async_fault_possible())
+ ((volatile struct kasan_rcu_info *)fp)->i = 0;
+ else
+ ((volatile struct kasan_rcu_info *)fp)->i;
}
static void rcu_uaf(struct kunit *test)
@@ -899,9 +1067,14 @@ static void rcu_uaf(struct kunit *test)
global_rcu_ptr = rcu_dereference_protected(
(struct kasan_rcu_info __rcu *)ptr, NULL);
- KUNIT_EXPECT_KASAN_FAIL(test,
- call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
- rcu_barrier());
+ if (kasan_stonly_enabled() && kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
+ rcu_barrier());
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
+ rcu_barrier());
}
static void workqueue_uaf_work(struct work_struct *work)
@@ -924,8 +1097,12 @@ static void workqueue_uaf(struct kunit *test)
queue_work(workqueue, work);
destroy_workqueue(workqueue);
- KUNIT_EXPECT_KASAN_FAIL(test,
- ((volatile struct work_struct *)work)->data);
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ ((volatile struct work_struct *)work)->data);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ ((volatile struct work_struct *)work)->data);
}
static void kfree_via_page(struct kunit *test)
@@ -972,7 +1149,12 @@ static void kmem_cache_oob(struct kunit *test)
return;
}
- KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *p = p[size + OOB_TAG_OFF]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, p[size + OOB_TAG_OFF] = *p);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
kmem_cache_free(cache, p);
kmem_cache_destroy(cache);
@@ -1068,7 +1250,12 @@ static void kmem_cache_rcu_uaf(struct kunit *test)
*/
rcu_barrier();
- KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*p));
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*p, 0));
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
kmem_cache_destroy(cache);
}
@@ -1206,7 +1393,13 @@ static void mempool_oob_right_helper(struct kunit *test, mempool_t *pool, size_t
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
KUNIT_EXPECT_KASAN_FAIL(test,
((volatile char *)&elem[size])[0]);
- else
+ else if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0] = 0);
+ } else
KUNIT_EXPECT_KASAN_FAIL(test,
((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
@@ -1273,7 +1466,13 @@ static void mempool_uaf_helper(struct kunit *test, mempool_t *pool, bool page)
mempool_free(elem, pool);
ptr = page ? page_address((struct page *)elem) : elem;
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void mempool_kmalloc_uaf(struct kunit *test)
@@ -1532,8 +1731,13 @@ static void kasan_memchr(struct kunit *test)
OPTIMIZER_HIDE_VAR(ptr);
OPTIMIZER_HIDE_VAR(size);
- KUNIT_EXPECT_KASAN_FAIL(test,
- kasan_ptr_result = memchr(ptr, '1', size + 1));
+
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ kasan_ptr_result = memchr(ptr, '1', size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ kasan_ptr_result = memchr(ptr, '1', size + 1));
kfree(ptr);
}
@@ -1559,8 +1763,14 @@ static void kasan_memcmp(struct kunit *test)
OPTIMIZER_HIDE_VAR(ptr);
OPTIMIZER_HIDE_VAR(size);
- KUNIT_EXPECT_KASAN_FAIL(test,
- kasan_int_result = memcmp(ptr, arr, size+1));
+
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ kasan_int_result = memcmp(ptr, arr, size+1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ kasan_int_result = memcmp(ptr, arr, size+1));
+
kfree(ptr);
}
@@ -1593,9 +1803,16 @@ static void kasan_strings(struct kunit *test)
KUNIT_EXPECT_EQ(test, KASAN_GRANULE_SIZE - 2,
strscpy(ptr, src + 1, KASAN_GRANULE_SIZE));
- /* strscpy should fail if the first byte is unreadable. */
- KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
- KASAN_GRANULE_SIZE));
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
+ KASAN_GRANULE_SIZE));
+ if (!kasan_async_fault_possible())
+ /* strscpy should fail when the first byte is to be written. */
+ KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr + size, src, KASAN_GRANULE_SIZE));
+ } else
+ /* strscpy should fail if the first byte is unreadable. */
+ KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
+ KASAN_GRANULE_SIZE));
kfree(src);
kfree(ptr);
@@ -1607,17 +1824,22 @@ static void kasan_strings(struct kunit *test)
* will likely point to zeroed byte.
*/
ptr += 16;
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strrchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strcmp(ptr, "2"));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strncmp(ptr, "2", 1));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strlen(ptr));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strnlen(ptr, 1));
+ } else {
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
+ }
}
static void kasan_bitops_modify(struct kunit *test, int nr, void *addr)
@@ -1636,12 +1858,27 @@ static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)
{
KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr));
- KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
+
+ /*
+ * When KASAN is running in store-only mode,
+ * a fault won't occur even if the bit is set.
+ * Therefore, skip the test_and_set_bit_lock test in store-only mode.
+ */
+ if (!kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
+
KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
+
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = test_bit(nr, addr));
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr));
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
+
if (nr < 7)
KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
xor_unlock_is_negative_byte(1 << nr, addr));
@@ -1765,7 +2002,12 @@ static void vmalloc_oob(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
/* An aligned access into the first out-of-bounds granule. */
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
+ if (kasan_stonly_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)v_ptr)[size + 5]);
+ if (!kasan_async_fault_possible())
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5] = 0);
+ } else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
/* Check that in-bounds accesses to the physical page are valid. */
page = vmalloc_to_page(v_ptr);
@@ -2042,16 +2284,33 @@ static void copy_user_test_oob(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test,
unused = copy_from_user(kmem, usermem, size + 1));
- KUNIT_EXPECT_KASAN_FAIL(test,
- unused = copy_to_user(usermem, kmem, size + 1));
+
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ unused = copy_to_user(usermem, kmem, size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ unused = copy_to_user(usermem, kmem, size + 1));
+
KUNIT_EXPECT_KASAN_FAIL(test,
unused = __copy_from_user(kmem, usermem, size + 1));
- KUNIT_EXPECT_KASAN_FAIL(test,
- unused = __copy_to_user(usermem, kmem, size + 1));
+
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ unused = __copy_to_user(usermem, kmem, size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ unused = __copy_to_user(usermem, kmem, size + 1));
+
KUNIT_EXPECT_KASAN_FAIL(test,
unused = __copy_from_user_inatomic(kmem, usermem, size + 1));
- KUNIT_EXPECT_KASAN_FAIL(test,
- unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
+
+ if (kasan_stonly_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
/*
* Prepare a long string in usermem to avoid the strncpy_from_user test
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH 1/2] kasan/hw-tags: introduce store only mode
2025-08-11 17:36 ` [PATCH 1/2] kasan/hw-tags: introduce store only mode Yeoreum Yun
@ 2025-08-12 16:25 ` Andrey Konovalov
2025-08-13 6:26 ` Yeoreum Yun
0 siblings, 1 reply; 11+ messages in thread
From: Andrey Konovalov @ 2025-08-12 16:25 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
On Mon, Aug 11, 2025 at 7:36 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> Since Armv8.9, FEATURE_MTE_STORE_ONLY feature is introduced to restrict
> raise of tag check fault on store operation only.
To clarify: this feature is independent on the sync/async/asymm modes?
So any mode can be used together with FEATURE_MTE_STORE_ONLY?
> Introcude KASAN store only mode based on this feature.
>
> KASAN store only mode restricts KASAN checks operation for store only and
> omits the checks for fetch/read operation when accessing memory.
> So it might be used not only debugging enviroment but also normal
> enviroment to check memory safty.
>
> This features can be controlled with "kasan.stonly" arguments.
> When "kasan.stonly=on", KASAN checks store only mode otherwise
> KASAN checks all operations.
"stonly" looks cryptic, how about "kasan.store_only"?
Also, are there any existing/planned modes/extensions of the feature?
E.g. read only? Knowing this will allow to better plan the
command-line parameter format.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
> Documentation/dev-tools/kasan.rst | 3 ++
> arch/arm64/include/asm/memory.h | 1 +
> arch/arm64/include/asm/mte-kasan.h | 6 +++
> arch/arm64/kernel/cpufeature.c | 6 +++
> arch/arm64/kernel/mte.c | 14 ++++++
> include/linux/kasan.h | 2 +
> mm/kasan/hw_tags.c | 76 +++++++++++++++++++++++++++++-
> mm/kasan/kasan.h | 10 ++++
> 8 files changed, 116 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> index 0a1418ab72fd..7567a2ca0e39 100644
> --- a/Documentation/dev-tools/kasan.rst
> +++ b/Documentation/dev-tools/kasan.rst
> @@ -163,6 +163,9 @@ disabling KASAN altogether or controlling its features:
> This parameter is intended to allow sampling only large page_alloc
> allocations, which is the biggest source of the performance overhead.
>
> +- ``kasan.stonly=off`` or ``kasan.stonly=on`` controls whether KASAN checks
> + store operation only or all operation.
How about:
``kasan.store_only=off`` or ``=on`` controls whether KASAN checks only
the store (write) accesses only or all accesses (default: ``off``).
And let's put this next to kasan.mode, as the new parameter is related.
> +
> Error reports
> ~~~~~~~~~~~~~
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 5213248e081b..9d8c72c9c91f 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -308,6 +308,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> #define arch_enable_tag_checks_sync() mte_enable_kernel_sync()
> #define arch_enable_tag_checks_async() mte_enable_kernel_async()
> #define arch_enable_tag_checks_asymm() mte_enable_kernel_asymm()
> +#define arch_enable_tag_checks_stonly() mte_enable_kernel_stonly()
> #define arch_suppress_tag_checks_start() mte_enable_tco()
> #define arch_suppress_tag_checks_stop() mte_disable_tco()
> #define arch_force_async_tag_fault() mte_check_tfsr_exit()
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index 2e98028c1965..d75908ed9d0f 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
> void mte_enable_kernel_sync(void);
> void mte_enable_kernel_async(void);
> void mte_enable_kernel_asymm(void);
> +int mte_enable_kernel_stonly(void);
>
> #else /* CONFIG_ARM64_MTE */
>
> @@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
> {
> }
>
> +static inline int mte_enable_kenrel_stonly(void)
> +{
> + return -EINVAL;
> +}
> +
> #endif /* CONFIG_ARM64_MTE */
>
> #endif /* __ASSEMBLY__ */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9ad065f15f1d..fdc510fe0187 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2404,6 +2404,11 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>
> kasan_init_hw_tags_cpu();
> }
> +
> +static void cpu_enable_mte_stonly(struct arm64_cpu_capabilities const *cap)
> +{
> + kasan_late_init_hw_tags_cpu();
> +}
> #endif /* CONFIG_ARM64_MTE */
>
> static void user_feature_fixup(void)
> @@ -2922,6 +2927,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> .capability = ARM64_MTE_STORE_ONLY,
> .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> .matches = has_cpuid_feature,
> + .cpu_enable = cpu_enable_mte_stonly,
> ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
> },
> #endif /* CONFIG_ARM64_MTE */
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index e5e773844889..a1cb2a8a79a1 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -157,6 +157,20 @@ void mte_enable_kernel_asymm(void)
> mte_enable_kernel_sync();
> }
> }
> +
> +int mte_enable_kernel_stonly(void)
> +{
> + if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
> + return -EINVAL;
> +
> + sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
> + SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
> + isb();
> +
> + pr_info_once("MTE: enabled stonly mode at EL1\n");
> +
> + return 0;
> +}
> #endif
>
> #ifdef CONFIG_KASAN_HW_TAGS
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 890011071f2b..28951b29c593 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -552,9 +552,11 @@ static inline void kasan_init_sw_tags(void) { }
> #ifdef CONFIG_KASAN_HW_TAGS
> void kasan_init_hw_tags_cpu(void);
> void __init kasan_init_hw_tags(void);
> +void kasan_late_init_hw_tags_cpu(void);
Why do we need a separate late init function? Can we not enable
store-only at the same place where we enable async/asymm?
> #else
> static inline void kasan_init_hw_tags_cpu(void) { }
> static inline void kasan_init_hw_tags(void) { }
> +static inline void kasan_late_init_hw_tags_cpu(void) { }
> #endif
>
> #ifdef CONFIG_KASAN_VMALLOC
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 9a6927394b54..2caa6fe5ed47 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -41,9 +41,16 @@ enum kasan_arg_vmalloc {
> KASAN_ARG_VMALLOC_ON,
> };
>
> +enum kasan_arg_stonly {
> + KASAN_ARG_STONLY_DEFAULT,
> + KASAN_ARG_STONLY_OFF,
> + KASAN_ARG_STONLY_ON,
> +};
> +
> static enum kasan_arg kasan_arg __ro_after_init;
> static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
> +static enum kasan_arg_stonly kasan_arg_stonly __ro_after_init;
>
> /*
> * Whether KASAN is enabled at all.
> @@ -67,6 +74,9 @@ DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
> #endif
> EXPORT_SYMBOL_GPL(kasan_flag_vmalloc);
>
> +DEFINE_STATIC_KEY_FALSE(kasan_flag_stonly);
> +EXPORT_SYMBOL_GPL(kasan_flag_stonly);
> +
> #define PAGE_ALLOC_SAMPLE_DEFAULT 1
> #define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT 3
>
> @@ -141,6 +151,23 @@ static int __init early_kasan_flag_vmalloc(char *arg)
> }
> early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
>
> +/* kasan.stonly=off/on */
> +static int __init early_kasan_flag_stonly(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_stonly = KASAN_ARG_STONLY_OFF;
> + else if (!strcmp(arg, "on"))
> + kasan_arg_stonly = KASAN_ARG_STONLY_ON;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.stonly", early_kasan_flag_stonly);
> +
> static inline const char *kasan_mode_info(void)
> {
> if (kasan_mode == KASAN_MODE_ASYNC)
> @@ -219,6 +246,20 @@ void kasan_init_hw_tags_cpu(void)
> kasan_enable_hw_tags();
> }
>
> +/*
> + * kasan_late_init_hw_tags_cpu_post() is called for each CPU after
> + * all cpus are bring-up at boot.
> + * Not marked as __init as a CPU can be hot-plugged after boot.
> + */
> +void kasan_late_init_hw_tags_cpu(void)
> +{
> + /*
> + * Enable stonly mode only when explicitly requested through the command line.
> + * If system doesn't support, kasan checks all operation.
> + */
> + kasan_enable_stonly();
> +}
> +
> /* kasan_init_hw_tags() is called once on boot CPU. */
> void __init kasan_init_hw_tags(void)
> {
> @@ -257,15 +298,28 @@ void __init kasan_init_hw_tags(void)
> break;
> }
>
> + switch (kasan_arg_stonly) {
> + case KASAN_ARG_STONLY_DEFAULT:
> + /* Default is specified by kasan_flag_stonly definition. */
> + break;
> + case KASAN_ARG_STONLY_OFF:
> + static_branch_disable(&kasan_flag_stonly);
> + break;
> + case KASAN_ARG_STONLY_ON:
> + static_branch_enable(&kasan_flag_stonly);
> + break;
> + }
> +
> kasan_init_tags();
>
> /* KASAN is now initialized, enable it. */
> static_branch_enable(&kasan_flag_enabled);
>
> - pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
> + pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s stonly=%s\n",
> kasan_mode_info(),
> str_on_off(kasan_vmalloc_enabled()),
> - str_on_off(kasan_stack_collection_enabled()));
> + str_on_off(kasan_stack_collection_enabled()),
> + str_on_off(kasan_stonly_enabled()));
> }
>
> #ifdef CONFIG_KASAN_VMALLOC
> @@ -394,6 +448,22 @@ void kasan_enable_hw_tags(void)
> hw_enable_tag_checks_sync();
> }
>
> +void kasan_enable_stonly(void)
> +{
> + if (kasan_arg_stonly == KASAN_ARG_STONLY_ON) {
> + if (hw_enable_tag_checks_stonly()) {
> + static_branch_disable(&kasan_flag_stonly);
> + kasan_arg_stonly = KASAN_ARG_STONLY_OFF;
> + pr_warn_once("KernelAddressSanitizer: store only mode isn't supported (hw-tags)\n");
> + }
> + }
> +}
> +
> +bool kasan_stonly_enabled(void)
> +{
> + return static_branch_unlikely(&kasan_flag_stonly);
> +}
> +
> #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
>
> EXPORT_SYMBOL_IF_KUNIT(kasan_enable_hw_tags);
> @@ -404,4 +474,6 @@ VISIBLE_IF_KUNIT void kasan_force_async_fault(void)
> }
> EXPORT_SYMBOL_IF_KUNIT(kasan_force_async_fault);
>
> +EXPORT_SYMBOL_IF_KUNIT(kasan_stonly_enabled);
> +
> #endif
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 129178be5e64..cfbcebdbcbec 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -33,6 +33,7 @@ static inline bool kasan_stack_collection_enabled(void)
> #include "../slab.h"
>
> DECLARE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
> +DECLARE_STATIC_KEY_FALSE(kasan_flag_stonly);
>
> enum kasan_mode {
> KASAN_MODE_SYNC,
> @@ -428,6 +429,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> #define hw_enable_tag_checks_sync() arch_enable_tag_checks_sync()
> #define hw_enable_tag_checks_async() arch_enable_tag_checks_async()
> #define hw_enable_tag_checks_asymm() arch_enable_tag_checks_asymm()
> +#define hw_enable_tag_checks_stonly() arch_enable_tag_checks_stonly()
> #define hw_suppress_tag_checks_start() arch_suppress_tag_checks_start()
> #define hw_suppress_tag_checks_stop() arch_suppress_tag_checks_stop()
> #define hw_force_async_tag_fault() arch_force_async_tag_fault()
> @@ -437,10 +439,18 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> arch_set_mem_tag_range((addr), (size), (tag), (init))
>
> void kasan_enable_hw_tags(void);
> +void kasan_enable_stonly(void);
> +bool kasan_stonly_enabled(void);
>
> #else /* CONFIG_KASAN_HW_TAGS */
>
> static inline void kasan_enable_hw_tags(void) { }
> +static inline void kasan_enable_stonly(void) { }
> +
> +static inline bool kasan_stonly_enabled(void)
> +{
> + return false;
> +}
>
> #endif /* CONFIG_KASAN_HW_TAGS */
>
> --
> LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-11 17:36 ` [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
@ 2025-08-12 16:28 ` Andrey Konovalov
2025-08-12 16:56 ` Yeoreum Yun
0 siblings, 1 reply; 11+ messages in thread
From: Andrey Konovalov @ 2025-08-12 16:28 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
On Mon, Aug 11, 2025 at 7:36 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> When KASAN is configured in store-only mode,
> fetch/load operations do not trigger tag check faults.
> As a result, the outcome of some test cases may differ
> compared to when KASAN is configured without store-only mode.
>
> To address this:
> 1. Replace fetch/load expressions that would
> normally trigger tag check faults with store operation
> when running under store-only and sync mode.
> In case of async/asymm mode, skip the store operation triggering
> tag check fault since it corrupts memory.
>
> 2. Skip some testcases affected by initial value
> (i.e) atomic_cmpxchg() testcase maybe successd if
> it passes valid atomic_t address and invalid oldaval address.
> In this case, if invalid atomic_t doesn't have the same oldval,
> it won't trigger store operation so the test will pass.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
> mm/kasan/kasan_test_c.c | 423 ++++++++++++++++++++++++++++++++--------
> 1 file changed, 341 insertions(+), 82 deletions(-)
>
> diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
> index 2aa12dfa427a..22d5d6d6cd9f 100644
> --- a/mm/kasan/kasan_test_c.c
> +++ b/mm/kasan/kasan_test_c.c
> @@ -94,11 +94,13 @@ static void kasan_test_exit(struct kunit *test)
> }
>
> /**
> - * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> - * KASAN report; causes a KUnit test failure otherwise.
> + * _KUNIT_EXPECT_KASAN_TEMPLATE - check that the executed expression produces
> + * a KASAN report or not; a KUnit test failure when it's different from @produce.
> *
> * @test: Currently executing KUnit test.
> - * @expression: Expression that must produce a KASAN report.
> + * @expr: Expression produce a KASAN report or not.
> + * @expr_str: Expression string
> + * @produce: expression should produce a KASAN report.
> *
> * For hardware tag-based KASAN, when a synchronous tag fault happens, tag
> * checking is auto-disabled. When this happens, this test handler reenables
> @@ -110,25 +112,29 @@ static void kasan_test_exit(struct kunit *test)
> * Use READ/WRITE_ONCE() for the accesses and compiler barriers around the
> * expression to prevent that.
> *
> - * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept
> + * In between _KUNIT_EXPECT_KASAN_TEMPLATE checks, test_status.report_found is kept
> * as false. This allows detecting KASAN reports that happen outside of the
> * checks by asserting !test_status.report_found at the start of
> - * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit.
> + * _KUNIT_EXPECT_KASAN_TEMPLATE and in kasan_test_exit.
> */
> -#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \
> +#define _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, expr_str, produce) \
> +do { \
> if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> kasan_sync_fault_possible()) \
> migrate_disable(); \
> KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \
> barrier(); \
> - expression; \
> + expr; \
> barrier(); \
> if (kasan_async_fault_possible()) \
> kasan_force_async_fault(); \
> - if (!READ_ONCE(test_status.report_found)) { \
> - KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \
> - "expected in \"" #expression \
> - "\", but none occurred"); \
> + if (READ_ONCE(test_status.report_found) != produce) { \
> + KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN %s " \
> + "expected in \"" expr_str \
> + "\", but %soccurred", \
> + (produce ? "failure" : "success"), \
> + (test_status.report_found ? \
> + "" : "none ")); \
> } \
> if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> kasan_sync_fault_possible()) { \
> @@ -141,6 +147,26 @@ static void kasan_test_exit(struct kunit *test)
> WRITE_ONCE(test_status.async_fault, false); \
> } while (0)
>
> +/*
> + * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> + * KASAN report; causes a KUnit test failure otherwise.
> + *
> + * @test: Currently executing KUnit test.
> + * @expr: Expression produce a KASAN report.
> + */
> +#define KUNIT_EXPECT_KASAN_FAIL(test, expr) \
> + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, true)
> +
> +/*
> + * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
> + * produces a KASAN report; causes a KUnit test failure otherwise.
> + *
> + * @test: Currently executing KUnit test.
> + * @expr: Expression doesn't produce a KASAN report.
> + */
> +#define KUNIT_EXPECT_KASAN_SUCCESS(test, expr) \
> + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, false)
> +
> #define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \
> if (!IS_ENABLED(config)) \
> kunit_skip((test), "Test requires " #config "=y"); \
> @@ -183,8 +209,15 @@ static void kmalloc_oob_right(struct kunit *test)
> KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y');
>
> /* Out-of-bounds access past the aligned kmalloc object. */
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> - ptr[size + KASAN_GRANULE_SIZE + 5]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] =
> + ptr[size + KASAN_GRANULE_SIZE + 5]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + ptr[size + KASAN_GRANULE_SIZE + 5] = ptr[0]);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> + ptr[size + KASAN_GRANULE_SIZE + 5]);
>
> kfree(ptr);
> }
> @@ -198,7 +231,13 @@ static void kmalloc_oob_left(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> OPTIMIZER_HIDE_VAR(ptr);
> - KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr = *(ptr - 1));
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, *(ptr - 1) = *(ptr));
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
> +
> kfree(ptr);
> }
>
> @@ -211,7 +250,13 @@ static void kmalloc_node_oob_right(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> OPTIMIZER_HIDE_VAR(ptr);
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = ptr[0]);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> +
> kfree(ptr);
> }
>
> @@ -291,7 +336,12 @@ static void kmalloc_large_uaf(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> kfree(ptr);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> }
>
> static void kmalloc_large_invalid_free(struct kunit *test)
> @@ -323,7 +373,13 @@ static void page_alloc_oob_right(struct kunit *test)
> ptr = page_address(pages);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = ptr[0]);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> +
> free_pages((unsigned long)ptr, order);
> }
>
> @@ -338,7 +394,12 @@ static void page_alloc_uaf(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> free_pages((unsigned long)ptr, order);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> }
>
> static void krealloc_more_oob_helper(struct kunit *test,
> @@ -455,10 +516,15 @@ static void krealloc_uaf(struct kunit *test)
> ptr1 = kmalloc(size1, GFP_KERNEL);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
> kfree(ptr1);
> -
> KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
> KUNIT_ASSERT_NULL(test, ptr2);
> - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
> +
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *(volatile char *)ptr1);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1 = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
> }
>
> static void kmalloc_oob_16(struct kunit *test)
> @@ -501,7 +567,13 @@ static void kmalloc_uaf_16(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
> kfree(ptr2);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr1 = *ptr2);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, *ptr2 = *ptr1);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
> +
> kfree(ptr1);
> }
>
> @@ -640,8 +712,17 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
> memset((char *)ptr, 0, 64);
> OPTIMIZER_HIDE_VAR(ptr);
> OPTIMIZER_HIDE_VAR(invalid_size);
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + memmove((char *)ptr + 4, (char *)ptr, invalid_size));
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +
> kfree(ptr);
> }
>
> @@ -654,7 +735,13 @@ static void kmalloc_uaf(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> kfree(ptr);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
> +
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[8]);
> + if (!kasan_sync_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8] = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
> }
>
> static void kmalloc_uaf_memset(struct kunit *test)
> @@ -701,7 +788,13 @@ static void kmalloc_uaf2(struct kunit *test)
> goto again;
> }
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[40]);
> + if (!kasan_sync_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40] = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
> +
> KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2);
>
> kfree(ptr2);
> @@ -727,19 +820,35 @@ static void kmalloc_uaf3(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
> kfree(ptr2);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[8]);
> + if (!kasan_sync_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8] = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
> }
>
> static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> {
> int *i_unsafe = unsafe;
>
> - KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*i_unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, smp_load_acquire(i_unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
>
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_read(unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> @@ -752,18 +861,38 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> +
> + /*
> + * The result of the test below may vary due to garbage values of unsafe in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
>
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> + /*
> + * The result of the test below may vary due to garbage values of unsafe in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> + }
> +
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_long_read(unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> @@ -776,16 +905,32 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> +
> + /*
> + * The result of the test below may vary due to garbage values in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> +
> + /*
> + * The result of the test below may vary due to garbage values in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> + }
> }
>
> static void kasan_atomics(struct kunit *test)
> @@ -842,8 +987,18 @@ static void ksize_unpoisons_memory(struct kunit *test)
> /* These must trigger a KASAN report. */
> if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> +
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size + 5]);
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[real_size - 1]);
> + if (!kasan_sync_fault_possible()) {
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5] = 0);
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1] = 0);
> + }
> + } else {
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> + }
>
> kfree(ptr);
> }
> @@ -863,8 +1018,17 @@ static void ksize_uaf(struct kunit *test)
>
> OPTIMIZER_HIDE_VAR(ptr);
> KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size]);
> + if (!kasan_sync_fault_possible()) {
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size] = 0);
> + }
> + } else {
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> + }
> }
>
> /*
> @@ -886,7 +1050,11 @@ static void rcu_uaf_reclaim(struct rcu_head *rp)
> container_of(rp, struct kasan_rcu_info, rcu);
>
> kfree(fp);
> - ((volatile struct kasan_rcu_info *)fp)->i;
> +
> + if (kasan_stonly_enabled() && !kasan_async_fault_possible())
> + ((volatile struct kasan_rcu_info *)fp)->i = 0;
> + else
> + ((volatile struct kasan_rcu_info *)fp)->i;
> }
>
> static void rcu_uaf(struct kunit *test)
> @@ -899,9 +1067,14 @@ static void rcu_uaf(struct kunit *test)
> global_rcu_ptr = rcu_dereference_protected(
> (struct kasan_rcu_info __rcu *)ptr, NULL);
>
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> - rcu_barrier());
> + if (kasan_stonly_enabled() && kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> + rcu_barrier());
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> + rcu_barrier());
> }
>
> static void workqueue_uaf_work(struct work_struct *work)
> @@ -924,8 +1097,12 @@ static void workqueue_uaf(struct kunit *test)
> queue_work(workqueue, work);
> destroy_workqueue(workqueue);
>
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - ((volatile struct work_struct *)work)->data);
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + ((volatile struct work_struct *)work)->data);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + ((volatile struct work_struct *)work)->data);
> }
>
> static void kfree_via_page(struct kunit *test)
> @@ -972,7 +1149,12 @@ static void kmem_cache_oob(struct kunit *test)
> return;
> }
>
> - KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *p = p[size + OOB_TAG_OFF]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, p[size + OOB_TAG_OFF] = *p);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
>
> kmem_cache_free(cache, p);
> kmem_cache_destroy(cache);
> @@ -1068,7 +1250,12 @@ static void kmem_cache_rcu_uaf(struct kunit *test)
> */
> rcu_barrier();
>
> - KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*p));
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*p, 0));
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
>
> kmem_cache_destroy(cache);
> }
> @@ -1206,7 +1393,13 @@ static void mempool_oob_right_helper(struct kunit *test, mempool_t *pool, size_t
> if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> KUNIT_EXPECT_KASAN_FAIL(test,
> ((volatile char *)&elem[size])[0]);
> - else
> + else if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0] = 0);
> + } else
> KUNIT_EXPECT_KASAN_FAIL(test,
> ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
>
> @@ -1273,7 +1466,13 @@ static void mempool_uaf_helper(struct kunit *test, mempool_t *pool, bool page)
> mempool_free(elem, pool);
>
> ptr = page ? page_address((struct page *)elem) : elem;
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> +
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> }
>
> static void mempool_kmalloc_uaf(struct kunit *test)
> @@ -1532,8 +1731,13 @@ static void kasan_memchr(struct kunit *test)
>
> OPTIMIZER_HIDE_VAR(ptr);
> OPTIMIZER_HIDE_VAR(size);
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - kasan_ptr_result = memchr(ptr, '1', size + 1));
> +
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + kasan_ptr_result = memchr(ptr, '1', size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + kasan_ptr_result = memchr(ptr, '1', size + 1));
>
> kfree(ptr);
> }
> @@ -1559,8 +1763,14 @@ static void kasan_memcmp(struct kunit *test)
>
> OPTIMIZER_HIDE_VAR(ptr);
> OPTIMIZER_HIDE_VAR(size);
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - kasan_int_result = memcmp(ptr, arr, size+1));
> +
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + kasan_int_result = memcmp(ptr, arr, size+1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + kasan_int_result = memcmp(ptr, arr, size+1));
> +
> kfree(ptr);
> }
>
> @@ -1593,9 +1803,16 @@ static void kasan_strings(struct kunit *test)
> KUNIT_EXPECT_EQ(test, KASAN_GRANULE_SIZE - 2,
> strscpy(ptr, src + 1, KASAN_GRANULE_SIZE));
>
> - /* strscpy should fail if the first byte is unreadable. */
> - KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> - KASAN_GRANULE_SIZE));
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> + KASAN_GRANULE_SIZE));
> + if (!kasan_async_fault_possible())
> + /* strscpy should fail when the first byte is to be written. */
> + KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr + size, src, KASAN_GRANULE_SIZE));
> + } else
> + /* strscpy should fail if the first byte is unreadable. */
> + KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> + KASAN_GRANULE_SIZE));
>
> kfree(src);
> kfree(ptr);
> @@ -1607,17 +1824,22 @@ static void kasan_strings(struct kunit *test)
> * will likely point to zeroed byte.
> */
> ptr += 16;
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
>
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strrchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strcmp(ptr, "2"));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strncmp(ptr, "2", 1));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strlen(ptr));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strnlen(ptr, 1));
> + } else {
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
> + }
> }
>
> static void kasan_bitops_modify(struct kunit *test, int nr, void *addr)
> @@ -1636,12 +1858,27 @@ static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)
> {
> KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr));
> - KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
> +
> + /*
> + * When KASAN is running in store-only mode,
> + * a fault won't occur even if the bit is set.
> + * Therefore, skip the test_and_set_bit_lock test in store-only mode.
> + */
> + if (!kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
> +
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = test_bit(nr, addr));
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr));
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
> +
> if (nr < 7)
> KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
> xor_unlock_is_negative_byte(1 << nr, addr));
> @@ -1765,7 +2002,12 @@ static void vmalloc_oob(struct kunit *test)
> KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
>
> /* An aligned access into the first out-of-bounds granule. */
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
> + if (kasan_stonly_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)v_ptr)[size + 5]);
> + if (!kasan_async_fault_possible())
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5] = 0);
> + } else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
>
> /* Check that in-bounds accesses to the physical page are valid. */
> page = vmalloc_to_page(v_ptr);
> @@ -2042,16 +2284,33 @@ static void copy_user_test_oob(struct kunit *test)
>
> KUNIT_EXPECT_KASAN_FAIL(test,
> unused = copy_from_user(kmem, usermem, size + 1));
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - unused = copy_to_user(usermem, kmem, size + 1));
> +
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + unused = copy_to_user(usermem, kmem, size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + unused = copy_to_user(usermem, kmem, size + 1));
> +
> KUNIT_EXPECT_KASAN_FAIL(test,
> unused = __copy_from_user(kmem, usermem, size + 1));
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - unused = __copy_to_user(usermem, kmem, size + 1));
> +
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + unused = __copy_to_user(usermem, kmem, size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + unused = __copy_to_user(usermem, kmem, size + 1));
> +
> KUNIT_EXPECT_KASAN_FAIL(test,
> unused = __copy_from_user_inatomic(kmem, usermem, size + 1));
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
> +
> + if (kasan_stonly_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
>
> /*
> * Prepare a long string in usermem to avoid the strncpy_from_user test
> --
> LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
>
This patch does not look good.
Right now, KASAN tests are crafted to avoid/self-contain harmful
memory corruptions that they do (e.g. make sure that OOB write
accesses land in in-object kmalloc training space, etc.). If you turn
read accesses in tests into write accesses, memory corruptions caused
by the earlier tests will crash the kernel or the latter tests.
The easiest thing to do for now is to disable the tests that check bad
read accesses when store-only is enabled.
If we want to convert tests into doing write accesses instead of
reads, this needs to be done separately for each test (i.e. via a
separate patch) with an explanation why doing this is safe (and
adjustments whenever it's not). And we need a better way to code this
instead of the horrifying number of if/else checks.
Thank you!
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-12 16:28 ` Andrey Konovalov
@ 2025-08-12 16:56 ` Yeoreum Yun
2025-08-12 17:58 ` Andrey Konovalov
0 siblings, 1 reply; 11+ messages in thread
From: Yeoreum Yun @ 2025-08-12 16:56 UTC (permalink / raw)
To: Andrey Konovalov
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
Hi Andrey,
> >
> > When KASAN is configured in store-only mode,
> > fetch/load operations do not trigger tag check faults.
> > As a result, the outcome of some test cases may differ
> > compared to when KASAN is configured without store-only mode.
> >
> > To address this:
> > 1. Replace fetch/load expressions that would
> > normally trigger tag check faults with store operation
> > when running under store-only and sync mode.
> > In case of async/asymm mode, skip the store operation triggering
> > tag check fault since it corrupts memory.
> >
> > 2. Skip some testcases affected by initial value
> > (i.e) atomic_cmpxchg() testcase maybe successd if
> > it passes valid atomic_t address and invalid oldaval address.
> > In this case, if invalid atomic_t doesn't have the same oldval,
> > it won't trigger store operation so the test will pass.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> > mm/kasan/kasan_test_c.c | 423 ++++++++++++++++++++++++++++++++--------
> > 1 file changed, 341 insertions(+), 82 deletions(-)
> >
> > diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
> > index 2aa12dfa427a..22d5d6d6cd9f 100644
> > --- a/mm/kasan/kasan_test_c.c
> > +++ b/mm/kasan/kasan_test_c.c
> > @@ -94,11 +94,13 @@ static void kasan_test_exit(struct kunit *test)
> > }
> >
> > /**
> > - * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> > - * KASAN report; causes a KUnit test failure otherwise.
> > + * _KUNIT_EXPECT_KASAN_TEMPLATE - check that the executed expression produces
> > + * a KASAN report or not; a KUnit test failure when it's different from @produce.
> > *
> > * @test: Currently executing KUnit test.
> > - * @expression: Expression that must produce a KASAN report.
> > + * @expr: Expression produce a KASAN report or not.
> > + * @expr_str: Expression string
> > + * @produce: expression should produce a KASAN report.
> > *
> > * For hardware tag-based KASAN, when a synchronous tag fault happens, tag
> > * checking is auto-disabled. When this happens, this test handler reenables
> > @@ -110,25 +112,29 @@ static void kasan_test_exit(struct kunit *test)
> > * Use READ/WRITE_ONCE() for the accesses and compiler barriers around the
> > * expression to prevent that.
> > *
> > - * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept
> > + * In between _KUNIT_EXPECT_KASAN_TEMPLATE checks, test_status.report_found is kept
> > * as false. This allows detecting KASAN reports that happen outside of the
> > * checks by asserting !test_status.report_found at the start of
> > - * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit.
> > + * _KUNIT_EXPECT_KASAN_TEMPLATE and in kasan_test_exit.
> > */
> > -#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \
> > +#define _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, expr_str, produce) \
> > +do { \
> > if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> > kasan_sync_fault_possible()) \
> > migrate_disable(); \
> > KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \
> > barrier(); \
> > - expression; \
> > + expr; \
> > barrier(); \
> > if (kasan_async_fault_possible()) \
> > kasan_force_async_fault(); \
> > - if (!READ_ONCE(test_status.report_found)) { \
> > - KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \
> > - "expected in \"" #expression \
> > - "\", but none occurred"); \
> > + if (READ_ONCE(test_status.report_found) != produce) { \
> > + KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN %s " \
> > + "expected in \"" expr_str \
> > + "\", but %soccurred", \
> > + (produce ? "failure" : "success"), \
> > + (test_status.report_found ? \
> > + "" : "none ")); \
> > } \
> > if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> > kasan_sync_fault_possible()) { \
> > @@ -141,6 +147,26 @@ static void kasan_test_exit(struct kunit *test)
> > WRITE_ONCE(test_status.async_fault, false); \
> > } while (0)
> >
> > +/*
> > + * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> > + * KASAN report; causes a KUnit test failure otherwise.
> > + *
> > + * @test: Currently executing KUnit test.
> > + * @expr: Expression produce a KASAN report.
> > + */
> > +#define KUNIT_EXPECT_KASAN_FAIL(test, expr) \
> > + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, true)
> > +
> > +/*
> > + * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
> > + * produces a KASAN report; causes a KUnit test failure otherwise.
> > + *
> > + * @test: Currently executing KUnit test.
> > + * @expr: Expression doesn't produce a KASAN report.
> > + */
> > +#define KUNIT_EXPECT_KASAN_SUCCESS(test, expr) \
> > + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, false)
> > +
> > #define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \
> > if (!IS_ENABLED(config)) \
> > kunit_skip((test), "Test requires " #config "=y"); \
> > @@ -183,8 +209,15 @@ static void kmalloc_oob_right(struct kunit *test)
> > KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y');
> >
> > /* Out-of-bounds access past the aligned kmalloc object. */
> > - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> > - ptr[size + KASAN_GRANULE_SIZE + 5]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] =
> > + ptr[size + KASAN_GRANULE_SIZE + 5]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + ptr[size + KASAN_GRANULE_SIZE + 5] = ptr[0]);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> > + ptr[size + KASAN_GRANULE_SIZE + 5]);
> >
> > kfree(ptr);
> > }
> > @@ -198,7 +231,13 @@ static void kmalloc_oob_left(struct kunit *test)
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> >
> > OPTIMIZER_HIDE_VAR(ptr);
> > - KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr = *(ptr - 1));
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, *(ptr - 1) = *(ptr));
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
> > +
> > kfree(ptr);
> > }
> >
> > @@ -211,7 +250,13 @@ static void kmalloc_node_oob_right(struct kunit *test)
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> >
> > OPTIMIZER_HIDE_VAR(ptr);
> > - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = ptr[0]);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> > +
> > kfree(ptr);
> > }
> >
> > @@ -291,7 +336,12 @@ static void kmalloc_large_uaf(struct kunit *test)
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> > kfree(ptr);
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > }
> >
> > static void kmalloc_large_invalid_free(struct kunit *test)
> > @@ -323,7 +373,13 @@ static void page_alloc_oob_right(struct kunit *test)
> > ptr = page_address(pages);
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = ptr[0]);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> > +
> > free_pages((unsigned long)ptr, order);
> > }
> >
> > @@ -338,7 +394,12 @@ static void page_alloc_uaf(struct kunit *test)
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> > free_pages((unsigned long)ptr, order);
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > }
> >
> > static void krealloc_more_oob_helper(struct kunit *test,
> > @@ -455,10 +516,15 @@ static void krealloc_uaf(struct kunit *test)
> > ptr1 = kmalloc(size1, GFP_KERNEL);
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
> > kfree(ptr1);
> > -
> > KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
> > KUNIT_ASSERT_NULL(test, ptr2);
> > - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
> > +
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, *(volatile char *)ptr1);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1 = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
> > }
> >
> > static void kmalloc_oob_16(struct kunit *test)
> > @@ -501,7 +567,13 @@ static void kmalloc_uaf_16(struct kunit *test)
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
> > kfree(ptr2);
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr1 = *ptr2);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, *ptr2 = *ptr1);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
> > +
> > kfree(ptr1);
> > }
> >
> > @@ -640,8 +712,17 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
> > memset((char *)ptr, 0, 64);
> > OPTIMIZER_HIDE_VAR(ptr);
> > OPTIMIZER_HIDE_VAR(invalid_size);
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> > +
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + memmove((char *)ptr + 4, (char *)ptr, invalid_size));
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> > +
> > kfree(ptr);
> > }
> >
> > @@ -654,7 +735,13 @@ static void kmalloc_uaf(struct kunit *test)
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> >
> > kfree(ptr);
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
> > +
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[8]);
> > + if (!kasan_sync_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8] = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
> > }
> >
> > static void kmalloc_uaf_memset(struct kunit *test)
> > @@ -701,7 +788,13 @@ static void kmalloc_uaf2(struct kunit *test)
> > goto again;
> > }
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[40]);
> > + if (!kasan_sync_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40] = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
> > +
> > KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2);
> >
> > kfree(ptr2);
> > @@ -727,19 +820,35 @@ static void kmalloc_uaf3(struct kunit *test)
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
> > kfree(ptr2);
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[8]);
> > + if (!kasan_sync_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8] = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
> > }
> >
> > static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> > {
> > int *i_unsafe = unsafe;
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*i_unsafe));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> > - KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, smp_load_acquire(i_unsafe));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_read(unsafe));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> > @@ -752,18 +861,38 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> > +
> > + /*
> > + * The result of the test below may vary due to garbage values of unsafe in
> > + * store-only mode. Therefore, skip this test when KASAN is configured
> > + * in store-only mode.
> > + */
> > + if (!kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> > + /*
> > + * The result of the test below may vary due to garbage values of unsafe in
> > + * store-only mode. Therefore, skip this test when KASAN is configured
> > + * in store-only mode.
> > + */
> > + if (!kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> > + }
> > +
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_long_read(unsafe));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> > @@ -776,16 +905,32 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> > +
> > + /*
> > + * The result of the test below may vary due to garbage values in
> > + * store-only mode. Therefore, skip this test when KASAN is configured
> > + * in store-only mode.
> > + */
> > + if (!kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> > KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> > - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> > +
> > + /*
> > + * The result of the test below may vary due to garbage values in
> > + * store-only mode. Therefore, skip this test when KASAN is configured
> > + * in store-only mode.
> > + */
> > + if (!kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> > + }
> > }
> >
> > static void kasan_atomics(struct kunit *test)
> > @@ -842,8 +987,18 @@ static void ksize_unpoisons_memory(struct kunit *test)
> > /* These must trigger a KASAN report. */
> > if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> > KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> > +
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size + 5]);
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[real_size - 1]);
> > + if (!kasan_sync_fault_possible()) {
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5] = 0);
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1] = 0);
> > + }
> > + } else {
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> > + }
> >
> > kfree(ptr);
> > }
> > @@ -863,8 +1018,17 @@ static void ksize_uaf(struct kunit *test)
> >
> > OPTIMIZER_HIDE_VAR(ptr);
> > KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size]);
> > + if (!kasan_sync_fault_possible()) {
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size] = 0);
> > + }
> > + } else {
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> > + }
> > }
> >
> > /*
> > @@ -886,7 +1050,11 @@ static void rcu_uaf_reclaim(struct rcu_head *rp)
> > container_of(rp, struct kasan_rcu_info, rcu);
> >
> > kfree(fp);
> > - ((volatile struct kasan_rcu_info *)fp)->i;
> > +
> > + if (kasan_stonly_enabled() && !kasan_async_fault_possible())
> > + ((volatile struct kasan_rcu_info *)fp)->i = 0;
> > + else
> > + ((volatile struct kasan_rcu_info *)fp)->i;
> > }
> >
> > static void rcu_uaf(struct kunit *test)
> > @@ -899,9 +1067,14 @@ static void rcu_uaf(struct kunit *test)
> > global_rcu_ptr = rcu_dereference_protected(
> > (struct kasan_rcu_info __rcu *)ptr, NULL);
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> > - rcu_barrier());
> > + if (kasan_stonly_enabled() && kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> > + rcu_barrier());
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> > + rcu_barrier());
> > }
> >
> > static void workqueue_uaf_work(struct work_struct *work)
> > @@ -924,8 +1097,12 @@ static void workqueue_uaf(struct kunit *test)
> > queue_work(workqueue, work);
> > destroy_workqueue(workqueue);
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - ((volatile struct work_struct *)work)->data);
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + ((volatile struct work_struct *)work)->data);
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + ((volatile struct work_struct *)work)->data);
> > }
> >
> > static void kfree_via_page(struct kunit *test)
> > @@ -972,7 +1149,12 @@ static void kmem_cache_oob(struct kunit *test)
> > return;
> > }
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, *p = p[size + OOB_TAG_OFF]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, p[size + OOB_TAG_OFF] = *p);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
> >
> > kmem_cache_free(cache, p);
> > kmem_cache_destroy(cache);
> > @@ -1068,7 +1250,12 @@ static void kmem_cache_rcu_uaf(struct kunit *test)
> > */
> > rcu_barrier();
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*p));
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*p, 0));
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
> >
> > kmem_cache_destroy(cache);
> > }
> > @@ -1206,7 +1393,13 @@ static void mempool_oob_right_helper(struct kunit *test, mempool_t *pool, size_t
> > if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> > KUNIT_EXPECT_KASAN_FAIL(test,
> > ((volatile char *)&elem[size])[0]);
> > - else
> > + else if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0] = 0);
> > + } else
> > KUNIT_EXPECT_KASAN_FAIL(test,
> > ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
> >
> > @@ -1273,7 +1466,13 @@ static void mempool_uaf_helper(struct kunit *test, mempool_t *pool, bool page)
> > mempool_free(elem, pool);
> >
> > ptr = page ? page_address((struct page *)elem) : elem;
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > +
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0] = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> > }
> >
> > static void mempool_kmalloc_uaf(struct kunit *test)
> > @@ -1532,8 +1731,13 @@ static void kasan_memchr(struct kunit *test)
> >
> > OPTIMIZER_HIDE_VAR(ptr);
> > OPTIMIZER_HIDE_VAR(size);
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - kasan_ptr_result = memchr(ptr, '1', size + 1));
> > +
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + kasan_ptr_result = memchr(ptr, '1', size + 1));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + kasan_ptr_result = memchr(ptr, '1', size + 1));
> >
> > kfree(ptr);
> > }
> > @@ -1559,8 +1763,14 @@ static void kasan_memcmp(struct kunit *test)
> >
> > OPTIMIZER_HIDE_VAR(ptr);
> > OPTIMIZER_HIDE_VAR(size);
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - kasan_int_result = memcmp(ptr, arr, size+1));
> > +
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + kasan_int_result = memcmp(ptr, arr, size+1));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + kasan_int_result = memcmp(ptr, arr, size+1));
> > +
> > kfree(ptr);
> > }
> >
> > @@ -1593,9 +1803,16 @@ static void kasan_strings(struct kunit *test)
> > KUNIT_EXPECT_EQ(test, KASAN_GRANULE_SIZE - 2,
> > strscpy(ptr, src + 1, KASAN_GRANULE_SIZE));
> >
> > - /* strscpy should fail if the first byte is unreadable. */
> > - KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> > - KASAN_GRANULE_SIZE));
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> > + KASAN_GRANULE_SIZE));
> > + if (!kasan_async_fault_possible())
> > + /* strscpy should fail when the first byte is to be written. */
> > + KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr + size, src, KASAN_GRANULE_SIZE));
> > + } else
> > + /* strscpy should fail if the first byte is unreadable. */
> > + KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> > + KASAN_GRANULE_SIZE));
> >
> > kfree(src);
> > kfree(ptr);
> > @@ -1607,17 +1824,22 @@ static void kasan_strings(struct kunit *test)
> > * will likely point to zeroed byte.
> > */
> > ptr += 16;
> > - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
> >
> > - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
> > -
> > - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
> > -
> > - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
> > -
> > - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
> > -
> > - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strchr(ptr, '1'));
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strrchr(ptr, '1'));
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strcmp(ptr, "2"));
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strncmp(ptr, "2", 1));
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strlen(ptr));
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strnlen(ptr, 1));
> > + } else {
> > + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
> > + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
> > + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
> > + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
> > + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
> > + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
> > + }
> > }
> >
> > static void kasan_bitops_modify(struct kunit *test, int nr, void *addr)
> > @@ -1636,12 +1858,27 @@ static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)
> > {
> > KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr));
> > KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr));
> > - KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
> > +
> > + /*
> > + * When KASAN is running in store-only mode,
> > + * a fault won't occur even if the bit is set.
> > + * Therefore, skip the test_and_set_bit_lock test in store-only mode.
> > + */
> > + if (!kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr));
> > KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr));
> > KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr));
> > KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));
> > - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
> > +
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = test_bit(nr, addr));
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, set_bit(nr, addr));
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
> > +
> > if (nr < 7)
> > KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
> > xor_unlock_is_negative_byte(1 << nr, addr));
> > @@ -1765,7 +2002,12 @@ static void vmalloc_oob(struct kunit *test)
> > KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
> >
> > /* An aligned access into the first out-of-bounds granule. */
> > - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
> > + if (kasan_stonly_enabled()) {
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)v_ptr)[size + 5]);
> > + if (!kasan_async_fault_possible())
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5] = 0);
> > + } else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
> >
> > /* Check that in-bounds accesses to the physical page are valid. */
> > page = vmalloc_to_page(v_ptr);
> > @@ -2042,16 +2284,33 @@ static void copy_user_test_oob(struct kunit *test)
> >
> > KUNIT_EXPECT_KASAN_FAIL(test,
> > unused = copy_from_user(kmem, usermem, size + 1));
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - unused = copy_to_user(usermem, kmem, size + 1));
> > +
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + unused = copy_to_user(usermem, kmem, size + 1));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + unused = copy_to_user(usermem, kmem, size + 1));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test,
> > unused = __copy_from_user(kmem, usermem, size + 1));
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - unused = __copy_to_user(usermem, kmem, size + 1));
> > +
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + unused = __copy_to_user(usermem, kmem, size + 1));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + unused = __copy_to_user(usermem, kmem, size + 1));
> > +
> > KUNIT_EXPECT_KASAN_FAIL(test,
> > unused = __copy_from_user_inatomic(kmem, usermem, size + 1));
> > - KUNIT_EXPECT_KASAN_FAIL(test,
> > - unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
> > +
> > + if (kasan_stonly_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test,
> > + unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test,
> > + unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
> >
> > /*
> > * Prepare a long string in usermem to avoid the strncpy_from_user test
> > --
> > LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
> >
>
> This patch does not look good.
>
> Right now, KASAN tests are crafted to avoid/self-contain harmful
> memory corruptions that they do (e.g. make sure that OOB write
> accesses land in in-object kmalloc training space, etc.). If you turn
> read accesses in tests into write accesses, memory corruptions caused
> by the earlier tests will crash the kernel or the latter tests.
That's why I run the store-only test when this mode is "sync"
In case of "async/asymm" as you mention since it reports "after",
there will be memory corruption.
But in case of sync, when the MTE fault happens, it doesn't
write to memory so, I think it's fine.
>
> The easiest thing to do for now is to disable the tests that check bad
> read accesses when store-only is enabled.
>
> If we want to convert tests into doing write accesses instead of
> reads, this needs to be done separately for each test (i.e. via a
> separate patch) with an explanation why doing this is safe (and
> adjustments whenever it's not). And we need a better way to code this
> instead of the horrifying number of if/else checks.
>
> Thank you!
Hmm, as I mention above, the testcase with store-only/sync mode seems to fine.
But, If the "testcase" is failed, as you mention it makes a memory
corruption.
If success case is fine,
Please let me make all related story-only case be seperated
to each function (but almost simliar to pre-exist testcase) with
sync mode otherwise, let me seperate them just checking it whether
it success when it accesses to invalid memory with read/fetch.
Thanks :)
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-12 16:56 ` Yeoreum Yun
@ 2025-08-12 17:58 ` Andrey Konovalov
2025-08-12 21:27 ` Yeoreum Yun
0 siblings, 1 reply; 11+ messages in thread
From: Andrey Konovalov @ 2025-08-12 17:58 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
On Tue, Aug 12, 2025 at 6:57 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> > Right now, KASAN tests are crafted to avoid/self-contain harmful
> > memory corruptions that they do (e.g. make sure that OOB write
> > accesses land in in-object kmalloc training space, etc.). If you turn
> > read accesses in tests into write accesses, memory corruptions caused
> > by the earlier tests will crash the kernel or the latter tests.
>
> That's why I run the store-only test when this mode is "sync"
> In case of "async/asymm" as you mention since it reports "after",
> there will be memory corruption.
>
> But in case of sync, when the MTE fault happens, it doesn't
> write to memory so, I think it's fine.
Does it not? I thought MTE gets disabled and we return from the fault
handler and let the write instruction execute. But my memory on this
is foggy. And I don't have a setup right now to test.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-12 17:58 ` Andrey Konovalov
@ 2025-08-12 21:27 ` Yeoreum Yun
2025-08-13 2:45 ` Andrey Konovalov
0 siblings, 1 reply; 11+ messages in thread
From: Yeoreum Yun @ 2025-08-12 21:27 UTC (permalink / raw)
To: Andrey Konovalov
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
Hi Andrey,
> >
> > > Right now, KASAN tests are crafted to avoid/self-contain harmful
> > > memory corruptions that they do (e.g. make sure that OOB write
> > > accesses land in in-object kmalloc training space, etc.). If you turn
> > > read accesses in tests into write accesses, memory corruptions caused
> > > by the earlier tests will crash the kernel or the latter tests.
> >
> > That's why I run the store-only test when this mode is "sync"
> > In case of "async/asymm" as you mention since it reports "after",
> > there will be memory corruption.
> >
> > But in case of sync, when the MTE fault happens, it doesn't
> > write to memory so, I think it's fine.
>
> Does it not? I thought MTE gets disabled and we return from the fault
> handler and let the write instruction execute. But my memory on this
> is foggy. And I don't have a setup right now to test.
Right. when fault is hit the MTE gets disabled.
But in kasan_test_c.c -- See the KUNIT_EXPECT_KASAN_FAIL,
It re-enables for next test by calling kasan_enable_hw_tags().
So, the store-only with sync mode seems fine unless we wouldn't care
about failure (no fault happen) which makes memory corruption.
However, I'm not sure writing the seperate testcases for store-only
is right or now since
same tests which only are different of return value check will be
duplicate and half of these always skipped (when duplicate for
store-only, former should be skip and vice versa).
Thanks.
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-12 21:27 ` Yeoreum Yun
@ 2025-08-13 2:45 ` Andrey Konovalov
2025-08-13 6:20 ` Yeoreum Yun
0 siblings, 1 reply; 11+ messages in thread
From: Andrey Konovalov @ 2025-08-13 2:45 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
On Tue, Aug 12, 2025 at 11:28 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> > > But in case of sync, when the MTE fault happens, it doesn't
> > > write to memory so, I think it's fine.
> >
> > Does it not? I thought MTE gets disabled and we return from the fault
> > handler and let the write instruction execute. But my memory on this
> > is foggy. And I don't have a setup right now to test.
>
> Right. when fault is hit the MTE gets disabled.
> But in kasan_test_c.c -- See the KUNIT_EXPECT_KASAN_FAIL,
> It re-enables for next test by calling kasan_enable_hw_tags().
But before that, does the faulting instruction get executed? After MTE
gets disabled in the fault handler.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-13 2:45 ` Andrey Konovalov
@ 2025-08-13 6:20 ` Yeoreum Yun
0 siblings, 0 replies; 11+ messages in thread
From: Yeoreum Yun @ 2025-08-13 6:20 UTC (permalink / raw)
To: Andrey Konovalov
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
Hi Andrey,
> > > > But in case of sync, when the MTE fault happens, it doesn't
> > > > write to memory so, I think it's fine.
> > >
> > > Does it not? I thought MTE gets disabled and we return from the fault
> > > handler and let the write instruction execute. But my memory on this
> > > is foggy. And I don't have a setup right now to test.
> >
> > Right. when fault is hit the MTE gets disabled.
> > But in kasan_test_c.c -- See the KUNIT_EXPECT_KASAN_FAIL,
> > It re-enables for next test by calling kasan_enable_hw_tags().
>
> But before that, does the faulting instruction get executed? After MTE
> gets disabled in the fault handler.
Right. in case of tag check fault, the preferred excecption return
address is the instruction where TCF happen.
I was lucky when running the test :\
Okay then I'll remove the invalid write in test,
But I want to keep the if/else on each case with the reason I said.
Thank you.
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 1/2] kasan/hw-tags: introduce store only mode
2025-08-12 16:25 ` Andrey Konovalov
@ 2025-08-13 6:26 ` Yeoreum Yun
0 siblings, 0 replies; 11+ messages in thread
From: Yeoreum Yun @ 2025-08-13 6:26 UTC (permalink / raw)
To: Andrey Konovalov
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
Hi Andrey,
> On Mon, Aug 11, 2025 at 7:36 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> >
> > Since Armv8.9, FEATURE_MTE_STORE_ONLY feature is introduced to restrict
> > raise of tag check fault on store operation only.
>
> To clarify: this feature is independent on the sync/async/asymm modes?
> So any mode can be used together with FEATURE_MTE_STORE_ONLY?
Yes it is. the ARM64_MTE_STORE_ONLY is separate SYSTEM_FEATURE then
ARM64_MTE and ARM64_MTE_ASYMM.
0 So any mode can be used together with ARM64_MTE_STORE_ONLY.
>
> > Introcude KASAN store only mode based on this feature.
> >
> > KASAN store only mode restricts KASAN checks operation for store only and
> > omits the checks for fetch/read operation when accessing memory.
> > So it might be used not only debugging enviroment but also normal
> > enviroment to check memory safty.
> >
> > This features can be controlled with "kasan.stonly" arguments.
> > When "kasan.stonly=on", KASAN checks store only mode otherwise
> > KASAN checks all operations.
>
> "stonly" looks cryptic, how about "kasan.store_only"?
Okay.
>
> Also, are there any existing/planned modes/extensions of the feature?
> E.g. read only? Knowing this will allow to better plan the
> command-line parameter format.
AFAIK, there will be no plan for new feature like "read only"
and any other modes to be added.
Also "store only" feature can be used with all mode
currently, I seems good to leave it as it is.
>
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> > Documentation/dev-tools/kasan.rst | 3 ++
> > arch/arm64/include/asm/memory.h | 1 +
> > arch/arm64/include/asm/mte-kasan.h | 6 +++
> > arch/arm64/kernel/cpufeature.c | 6 +++
> > arch/arm64/kernel/mte.c | 14 ++++++
> > include/linux/kasan.h | 2 +
> > mm/kasan/hw_tags.c | 76 +++++++++++++++++++++++++++++-
> > mm/kasan/kasan.h | 10 ++++
> > 8 files changed, 116 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> > index 0a1418ab72fd..7567a2ca0e39 100644
> > --- a/Documentation/dev-tools/kasan.rst
> > +++ b/Documentation/dev-tools/kasan.rst
> > @@ -163,6 +163,9 @@ disabling KASAN altogether or controlling its features:
> > This parameter is intended to allow sampling only large page_alloc
> > allocations, which is the biggest source of the performance overhead.
> >
> > +- ``kasan.stonly=off`` or ``kasan.stonly=on`` controls whether KASAN checks
> > + store operation only or all operation.
>
> How about:
>
> ``kasan.store_only=off`` or ``=on`` controls whether KASAN checks only
> the store (write) accesses only or all accesses (default: ``off``).
>
> And let's put this next to kasan.mode, as the new parameter is related.
Thanks for your suggetion. I'll change it.
>
> > +
> > Error reports
> > ~~~~~~~~~~~~~
> >
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 5213248e081b..9d8c72c9c91f 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -308,6 +308,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> > #define arch_enable_tag_checks_sync() mte_enable_kernel_sync()
> > #define arch_enable_tag_checks_async() mte_enable_kernel_async()
> > #define arch_enable_tag_checks_asymm() mte_enable_kernel_asymm()
> > +#define arch_enable_tag_checks_stonly() mte_enable_kernel_stonly()
> > #define arch_suppress_tag_checks_start() mte_enable_tco()
> > #define arch_suppress_tag_checks_stop() mte_disable_tco()
> > #define arch_force_async_tag_fault() mte_check_tfsr_exit()
> > diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> > index 2e98028c1965..d75908ed9d0f 100644
> > --- a/arch/arm64/include/asm/mte-kasan.h
> > +++ b/arch/arm64/include/asm/mte-kasan.h
> > @@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
> > void mte_enable_kernel_sync(void);
> > void mte_enable_kernel_async(void);
> > void mte_enable_kernel_asymm(void);
> > +int mte_enable_kernel_stonly(void);
> >
> > #else /* CONFIG_ARM64_MTE */
> >
> > @@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
> > {
> > }
> >
> > +static inline int mte_enable_kenrel_stonly(void)
> > +{
> > + return -EINVAL;
> > +}
> > +
> > #endif /* CONFIG_ARM64_MTE */
> >
> > #endif /* __ASSEMBLY__ */
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index 9ad065f15f1d..fdc510fe0187 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -2404,6 +2404,11 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
> >
> > kasan_init_hw_tags_cpu();
> > }
> > +
> > +static void cpu_enable_mte_stonly(struct arm64_cpu_capabilities const *cap)
> > +{
> > + kasan_late_init_hw_tags_cpu();
> > +}
> > #endif /* CONFIG_ARM64_MTE */
> >
> > static void user_feature_fixup(void)
> > @@ -2922,6 +2927,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> > .capability = ARM64_MTE_STORE_ONLY,
> > .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> > .matches = has_cpuid_feature,
> > + .cpu_enable = cpu_enable_mte_stonly,
> > ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
> > },
> > #endif /* CONFIG_ARM64_MTE */
> > diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> > index e5e773844889..a1cb2a8a79a1 100644
> > --- a/arch/arm64/kernel/mte.c
> > +++ b/arch/arm64/kernel/mte.c
> > @@ -157,6 +157,20 @@ void mte_enable_kernel_asymm(void)
> > mte_enable_kernel_sync();
> > }
> > }
> > +
> > +int mte_enable_kernel_stonly(void)
> > +{
> > + if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
> > + return -EINVAL;
> > +
> > + sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
> > + SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
> > + isb();
> > +
> > + pr_info_once("MTE: enabled stonly mode at EL1\n");
> > +
> > + return 0;
> > +}
> > #endif
> >
> > #ifdef CONFIG_KASAN_HW_TAGS
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index 890011071f2b..28951b29c593 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -552,9 +552,11 @@ static inline void kasan_init_sw_tags(void) { }
> > #ifdef CONFIG_KASAN_HW_TAGS
> > void kasan_init_hw_tags_cpu(void);
> > void __init kasan_init_hw_tags(void);
> > +void kasan_late_init_hw_tags_cpu(void);
>
> Why do we need a separate late init function? Can we not enable
> store-only at the same place where we enable async/asymm?
It couldn't since the ARM64_MTE_ASYMM and ARM64_MTE are boot feature
So the kasan_init_hw_tags() is called by boot cpu before other cpus're on.
But, ARM64_MTE_STORE_ONLY is SYSTEM_FEATURE so this feature is enabled
only when all cpus're on and they can use this system feature.
[...]
Thanks!
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 11+ messages in thread
end of thread, other threads:[~2025-08-13 6:27 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-08-11 17:36 [PATCH 0/2] introduce kasan stonly-mode in hw-tags Yeoreum Yun
2025-08-11 17:36 ` [PATCH 1/2] kasan/hw-tags: introduce store only mode Yeoreum Yun
2025-08-12 16:25 ` Andrey Konovalov
2025-08-13 6:26 ` Yeoreum Yun
2025-08-11 17:36 ` [PATCH 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
2025-08-12 16:28 ` Andrey Konovalov
2025-08-12 16:56 ` Yeoreum Yun
2025-08-12 17:58 ` Andrey Konovalov
2025-08-12 21:27 ` Yeoreum Yun
2025-08-13 2:45 ` Andrey Konovalov
2025-08-13 6:20 ` Yeoreum Yun
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).