* [PATCH v2 0/2] introduce kasan.store_only option in hw-tags
@ 2025-08-13 17:53 Yeoreum Yun
2025-08-13 17:53 ` [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option Yeoreum Yun
2025-08-13 17:53 ` [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
0 siblings, 2 replies; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-13 17:53 UTC (permalink / raw)
To: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, catalin.marinas, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang
Cc: kasan-dev, workflows, linux-doc, linux-kernel, linux-arm-kernel,
linux-mm, Yeoreum Yun
Hardware tag based KASAN is implemented using the Memory Tagging Extension
(MTE) feature.
MTE is built on top of the ARMv8.0 virtual address tagging TBI
(Top Byte Ignore) feature and allows software to access a 4-bit
allocation tag for each 16-byte granule in the physical address space.
A logical tag is derived from bits 59-56 of the virtual
address used for the memory access. A CPU with MTE enabled will compare
the logical tag against the allocation tag and potentially raise a
tag check fault on mismatch, subject to system register configuration.
Since Armv8.9, FEAT_MTE_STORE_ONLY can be used to restrict tag check
faults to store operations only.
Using this feature, introduce a KASAN store-only mode which restricts
KASAN checks to store operations and omits the checks for fetch/load
operations.
Because load checks are skipped, this mode might be useful not only for
debugging but also in normal production environments.
Patch History
=============
from v1 to v2:
- rename the cryptic "stonly" option to "store_only"
- remove some TCF checks using stores that could cause memory corruption
- https://lore.kernel.org/all/20250811173626.1878783-1-yeoreum.yun@arm.com/
Yeoreum Yun (2):
kasan/hw-tags: introduce kasan.store_only option
kasan: apply store-only mode in kasan kunit testcases
Documentation/dev-tools/kasan.rst | 3 +
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 +
arch/arm64/kernel/cpufeature.c | 6 +
arch/arm64/kernel/mte.c | 14 ++
include/linux/kasan.h | 2 +
mm/kasan/hw_tags.c | 76 +++++-
mm/kasan/kasan.h | 10 +
mm/kasan/kasan_test_c.c | 366 ++++++++++++++++++++++-------
9 files changed, 402 insertions(+), 82 deletions(-)
base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
--
* [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-13 17:53 [PATCH v2 0/2] introduce kasan.store_only option in hw-tags Yeoreum Yun
@ 2025-08-13 17:53 ` Yeoreum Yun
2025-08-14 5:03 ` Andrey Konovalov
2025-08-15 11:13 ` Catalin Marinas
2025-08-13 17:53 ` [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
1 sibling, 2 replies; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-13 17:53 UTC (permalink / raw)
To: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, catalin.marinas, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang
Cc: kasan-dev, workflows, linux-doc, linux-kernel, linux-arm-kernel,
linux-mm, Yeoreum Yun
Since Armv8.9, the FEAT_MTE_STORE_ONLY feature can be used to restrict
tag check faults to store operations only.
Introduce a KASAN store-only mode based on this feature.
KASAN store-only mode restricts KASAN checks to store operations and
omits the checks for fetch/load operations when accessing memory.
Because load checks are skipped, it might be used not only in debugging
environments but also in normal environments to check memory safety.
This feature can be controlled with the "kasan.store_only" boot argument.
When "kasan.store_only=on", KASAN checks store operations only; otherwise
KASAN checks all operations.
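For illustration, with this patch applied the new option would be combined with the existing KASAN boot parameters roughly as below; whether store-only checking actually takes effect depends on FEAT_MTE_STORE_ONLY support on the booting CPUs:

```text
# Kernel command line: HW-tag KASAN in sync mode, checking stores only.
# Falls back to checking all accesses if the CPU lacks
# FEAT_MTE_STORE_ONLY (see kasan_enable_store_only() in this patch).
kasan=on kasan.mode=sync kasan.store_only=on
```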
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
Documentation/dev-tools/kasan.rst | 3 ++
arch/arm64/include/asm/memory.h | 1 +
arch/arm64/include/asm/mte-kasan.h | 6 +++
arch/arm64/kernel/cpufeature.c | 6 +++
arch/arm64/kernel/mte.c | 14 ++++++
include/linux/kasan.h | 2 +
mm/kasan/hw_tags.c | 76 +++++++++++++++++++++++++++++-
mm/kasan/kasan.h | 10 ++++
8 files changed, 116 insertions(+), 2 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 0a1418ab72fd..fcb70dd821ec 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -143,6 +143,9 @@ disabling KASAN altogether or controlling its features:
Asymmetric mode: a bad access is detected synchronously on reads and
asynchronously on writes.
+- ``kasan.store_only=off`` or ``=on`` controls whether KASAN checks only store
+  (write) accesses or all accesses (default: ``off``).
+
- ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
allocations (default: ``on``).
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 5213248e081b..ae29cd3db78d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -308,6 +308,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
#define arch_enable_tag_checks_sync() mte_enable_kernel_sync()
#define arch_enable_tag_checks_async() mte_enable_kernel_async()
#define arch_enable_tag_checks_asymm() mte_enable_kernel_asymm()
+#define arch_enable_tag_checks_store_only() mte_enable_kernel_store_only()
#define arch_suppress_tag_checks_start() mte_enable_tco()
#define arch_suppress_tag_checks_stop() mte_disable_tco()
#define arch_force_async_tag_fault() mte_check_tfsr_exit()
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 2e98028c1965..3e1cc341d47a 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
void mte_enable_kernel_sync(void);
void mte_enable_kernel_async(void);
void mte_enable_kernel_asymm(void);
+int mte_enable_kernel_store_only(void);
#else /* CONFIG_ARM64_MTE */
@@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
{
}
+static inline int mte_enable_kernel_store_only(void)
+{
+ return -EINVAL;
+}
+
#endif /* CONFIG_ARM64_MTE */
#endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d..7b724fcf20a7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2404,6 +2404,11 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
kasan_init_hw_tags_cpu();
}
+
+static void cpu_enable_mte_store_only(struct arm64_cpu_capabilities const *cap)
+{
+ kasan_late_init_hw_tags_cpu();
+}
#endif /* CONFIG_ARM64_MTE */
static void user_feature_fixup(void)
@@ -2922,6 +2927,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_MTE_STORE_ONLY,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
+ .cpu_enable = cpu_enable_mte_store_only,
ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
},
#endif /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index e5e773844889..8eb1f66f2ccd 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -157,6 +157,20 @@ void mte_enable_kernel_asymm(void)
mte_enable_kernel_sync();
}
}
+
+int mte_enable_kernel_store_only(void)
+{
+ if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
+ return -EINVAL;
+
+ sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
+ SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
+ isb();
+
+ pr_info_once("MTE: enabled store-only mode at EL1\n");
+
+ return 0;
+}
#endif
#ifdef CONFIG_KASAN_HW_TAGS
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 890011071f2b..28951b29c593 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -552,9 +552,11 @@ static inline void kasan_init_sw_tags(void) { }
#ifdef CONFIG_KASAN_HW_TAGS
void kasan_init_hw_tags_cpu(void);
void __init kasan_init_hw_tags(void);
+void kasan_late_init_hw_tags_cpu(void);
#else
static inline void kasan_init_hw_tags_cpu(void) { }
static inline void kasan_init_hw_tags(void) { }
+static inline void kasan_late_init_hw_tags_cpu(void) { }
#endif
#ifdef CONFIG_KASAN_VMALLOC
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9a6927394b54..c2f90c06076e 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -41,9 +41,16 @@ enum kasan_arg_vmalloc {
KASAN_ARG_VMALLOC_ON,
};
+enum kasan_arg_store_only {
+ KASAN_ARG_STORE_ONLY_DEFAULT,
+ KASAN_ARG_STORE_ONLY_OFF,
+ KASAN_ARG_STORE_ONLY_ON,
+};
+
static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
+static enum kasan_arg_store_only kasan_arg_store_only __ro_after_init;
/*
* Whether KASAN is enabled at all.
@@ -67,6 +74,9 @@ DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
#endif
EXPORT_SYMBOL_GPL(kasan_flag_vmalloc);
+DEFINE_STATIC_KEY_FALSE(kasan_flag_store_only);
+EXPORT_SYMBOL_GPL(kasan_flag_store_only);
+
#define PAGE_ALLOC_SAMPLE_DEFAULT 1
#define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT 3
@@ -141,6 +151,23 @@ static int __init early_kasan_flag_vmalloc(char *arg)
}
early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
+/* kasan.store_only=off/on */
+static int __init early_kasan_flag_store_only(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_store_only = KASAN_ARG_STORE_ONLY_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_store_only = KASAN_ARG_STORE_ONLY_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.store_only", early_kasan_flag_store_only);
+
static inline const char *kasan_mode_info(void)
{
if (kasan_mode == KASAN_MODE_ASYNC)
@@ -219,6 +246,20 @@ void kasan_init_hw_tags_cpu(void)
kasan_enable_hw_tags();
}
+/*
+ * kasan_late_init_hw_tags_cpu() is called for each CPU after
+ * all CPUs have been brought up at boot.
+ * Not marked as __init as a CPU can be hot-plugged after boot.
+ */
+void kasan_late_init_hw_tags_cpu(void)
+{
+ /*
+ * Enable store-only mode only when explicitly requested through the
+ * command line. If the system doesn't support it, KASAN checks all
+ * operations.
+ */
+ kasan_enable_store_only();
+}
+
/* kasan_init_hw_tags() is called once on boot CPU. */
void __init kasan_init_hw_tags(void)
{
@@ -257,15 +298,28 @@ void __init kasan_init_hw_tags(void)
break;
}
+ switch (kasan_arg_store_only) {
+ case KASAN_ARG_STORE_ONLY_DEFAULT:
+ /* Default is specified by kasan_flag_store_only definition. */
+ break;
+ case KASAN_ARG_STORE_ONLY_OFF:
+ static_branch_disable(&kasan_flag_store_only);
+ break;
+ case KASAN_ARG_STORE_ONLY_ON:
+ static_branch_enable(&kasan_flag_store_only);
+ break;
+ }
+
kasan_init_tags();
/* KASAN is now initialized, enable it. */
static_branch_enable(&kasan_flag_enabled);
- pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
+ pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s, store_only=%s)\n",
kasan_mode_info(),
str_on_off(kasan_vmalloc_enabled()),
- str_on_off(kasan_stack_collection_enabled()));
+ str_on_off(kasan_stack_collection_enabled()),
+ str_on_off(kasan_store_only_enabled()));
}
#ifdef CONFIG_KASAN_VMALLOC
@@ -394,6 +448,22 @@ void kasan_enable_hw_tags(void)
hw_enable_tag_checks_sync();
}
+void kasan_enable_store_only(void)
+{
+ if (kasan_arg_store_only == KASAN_ARG_STORE_ONLY_ON) {
+ if (hw_enable_tag_checks_store_only()) {
+ static_branch_disable(&kasan_flag_store_only);
+ kasan_arg_store_only = KASAN_ARG_STORE_ONLY_OFF;
+ pr_warn_once("KernelAddressSanitizer: store-only mode isn't supported (hw-tags)\n");
+ }
+ }
+}
+
+bool kasan_store_only_enabled(void)
+{
+ return static_branch_unlikely(&kasan_flag_store_only);
+}
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
EXPORT_SYMBOL_IF_KUNIT(kasan_enable_hw_tags);
@@ -404,4 +474,6 @@ VISIBLE_IF_KUNIT void kasan_force_async_fault(void)
}
EXPORT_SYMBOL_IF_KUNIT(kasan_force_async_fault);
+EXPORT_SYMBOL_IF_KUNIT(kasan_store_only_enabled);
+
#endif
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 129178be5e64..1d853de1c499 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -33,6 +33,7 @@ static inline bool kasan_stack_collection_enabled(void)
#include "../slab.h"
DECLARE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
+DECLARE_STATIC_KEY_FALSE(kasan_flag_store_only);
enum kasan_mode {
KASAN_MODE_SYNC,
@@ -428,6 +429,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
#define hw_enable_tag_checks_sync() arch_enable_tag_checks_sync()
#define hw_enable_tag_checks_async() arch_enable_tag_checks_async()
#define hw_enable_tag_checks_asymm() arch_enable_tag_checks_asymm()
+#define hw_enable_tag_checks_store_only() arch_enable_tag_checks_store_only()
#define hw_suppress_tag_checks_start() arch_suppress_tag_checks_start()
#define hw_suppress_tag_checks_stop() arch_suppress_tag_checks_stop()
#define hw_force_async_tag_fault() arch_force_async_tag_fault()
@@ -437,10 +439,18 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
arch_set_mem_tag_range((addr), (size), (tag), (init))
void kasan_enable_hw_tags(void);
+void kasan_enable_store_only(void);
+bool kasan_store_only_enabled(void);
#else /* CONFIG_KASAN_HW_TAGS */
static inline void kasan_enable_hw_tags(void) { }
+static inline void kasan_enable_store_only(void) { }
+
+static inline bool kasan_store_only_enabled(void)
+{
+ return false;
+}
#endif /* CONFIG_KASAN_HW_TAGS */
--
* [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-13 17:53 [PATCH v2 0/2] introduce kasan.store_only option in hw-tags Yeoreum Yun
2025-08-13 17:53 ` [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option Yeoreum Yun
@ 2025-08-13 17:53 ` Yeoreum Yun
2025-08-14 5:04 ` Andrey Konovalov
1 sibling, 1 reply; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-13 17:53 UTC (permalink / raw)
To: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, catalin.marinas, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang
Cc: kasan-dev, workflows, linux-doc, linux-kernel, linux-arm-kernel,
linux-mm, Yeoreum Yun
When KASAN is configured in store-only mode,
fetch/load operations do not trigger tag check faults.
As a result, the outcome of some testcases may differ
from when KASAN is configured without store-only mode.
Therefore, modify the pre-existing testcases so that, in store-only
mode, they check that a store triggers a tag check fault (TCF) when
writing to allocated memory with an invalid tag
(e.g. a redzone write in the atomic_set() testcases),
and otherwise check that an invalid fetch/read does not generate a TCF.
Also, skip some testcases whose outcome depends on the initial memory
contents.
For example, the atomic_cmpxchg() testcase may succeed if
it is passed a valid atomic_t address and an invalid oldval address:
if the invalid atomic_t doesn't hold the same oldval,
no store is performed, so the test would pass.
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
---
mm/kasan/kasan_test_c.c | 366 +++++++++++++++++++++++++++++++---------
1 file changed, 286 insertions(+), 80 deletions(-)
diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
index 2aa12dfa427a..e5d08a6ee3a2 100644
--- a/mm/kasan/kasan_test_c.c
+++ b/mm/kasan/kasan_test_c.c
@@ -94,11 +94,13 @@ static void kasan_test_exit(struct kunit *test)
}
/**
- * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
- * KASAN report; causes a KUnit test failure otherwise.
+ * _KUNIT_EXPECT_KASAN_TEMPLATE - check whether the executed expression
+ * produces a KASAN report; causes a KUnit test failure when the outcome
+ * differs from @produce.
*
* @test: Currently executing KUnit test.
- * @expression: Expression that must produce a KASAN report.
+ * @expr: Expression that may or may not produce a KASAN report.
+ * @expr_str: String representation of @expr.
+ * @produce: Whether @expr is expected to produce a KASAN report.
*
* For hardware tag-based KASAN, when a synchronous tag fault happens, tag
* checking is auto-disabled. When this happens, this test handler reenables
@@ -110,25 +112,29 @@ static void kasan_test_exit(struct kunit *test)
* Use READ/WRITE_ONCE() for the accesses and compiler barriers around the
* expression to prevent that.
*
- * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept
+ * In between _KUNIT_EXPECT_KASAN_TEMPLATE checks, test_status.report_found is kept
* as false. This allows detecting KASAN reports that happen outside of the
* checks by asserting !test_status.report_found at the start of
- * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit.
+ * _KUNIT_EXPECT_KASAN_TEMPLATE and in kasan_test_exit.
*/
-#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \
+#define _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, expr_str, produce) \
+do { \
if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
kasan_sync_fault_possible()) \
migrate_disable(); \
KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \
barrier(); \
- expression; \
+ expr; \
barrier(); \
if (kasan_async_fault_possible()) \
kasan_force_async_fault(); \
- if (!READ_ONCE(test_status.report_found)) { \
- KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \
- "expected in \"" #expression \
- "\", but none occurred"); \
+ if (READ_ONCE(test_status.report_found) != produce) { \
+ KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN %s " \
+ "expected in \"" expr_str \
+ "\", but %soccurred", \
+ (produce ? "failure" : "success"), \
+ (test_status.report_found ? \
+ "" : "none ")); \
} \
if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
kasan_sync_fault_possible()) { \
@@ -141,6 +147,26 @@ static void kasan_test_exit(struct kunit *test)
WRITE_ONCE(test_status.async_fault, false); \
} while (0)
+/*
+ * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
+ * KASAN report; causes a KUnit test failure otherwise.
+ *
+ * @test: Currently executing KUnit test.
+ * @expr: Expression that must produce a KASAN report.
+ */
+#define KUNIT_EXPECT_KASAN_FAIL(test, expr) \
+ _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, true)
+
+/*
+ * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
+ * produce a KASAN report; causes a KUnit test failure otherwise.
+ *
+ * @test: Currently executing KUnit test.
+ * @expr: Expression that must not produce a KASAN report.
+ */
+#define KUNIT_EXPECT_KASAN_SUCCESS(test, expr) \
+ _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, false)
+
#define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \
if (!IS_ENABLED(config)) \
kunit_skip((test), "Test requires " #config "=y"); \
@@ -183,8 +209,12 @@ static void kmalloc_oob_right(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y');
/* Out-of-bounds access past the aligned kmalloc object. */
- KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
- ptr[size + KASAN_GRANULE_SIZE + 5]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] =
+ ptr[size + KASAN_GRANULE_SIZE + 5]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
+ ptr[size + KASAN_GRANULE_SIZE + 5]);
kfree(ptr);
}
@@ -198,7 +228,11 @@ static void kmalloc_oob_left(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
OPTIMIZER_HIDE_VAR(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr = *(ptr - 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
+
kfree(ptr);
}
@@ -211,7 +245,11 @@ static void kmalloc_node_oob_right(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
OPTIMIZER_HIDE_VAR(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+
kfree(ptr);
}
@@ -291,7 +329,10 @@ static void kmalloc_large_uaf(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
kfree(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void kmalloc_large_invalid_free(struct kunit *test)
@@ -323,7 +364,11 @@ static void page_alloc_oob_right(struct kunit *test)
ptr = page_address(pages);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
+
free_pages((unsigned long)ptr, order);
}
@@ -338,7 +383,10 @@ static void page_alloc_uaf(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
free_pages((unsigned long)ptr, order);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void krealloc_more_oob_helper(struct kunit *test,
@@ -455,10 +503,13 @@ static void krealloc_uaf(struct kunit *test)
ptr1 = kmalloc(size1, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
kfree(ptr1);
-
KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
KUNIT_ASSERT_NULL(test, ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *(volatile char *)ptr1);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
}
static void kmalloc_oob_16(struct kunit *test)
@@ -501,7 +552,11 @@ static void kmalloc_uaf_16(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
kfree(ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr1 = *ptr2);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
+
kfree(ptr1);
}
@@ -640,8 +695,14 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
memset((char *)ptr, 0, 64);
OPTIMIZER_HIDE_VAR(ptr);
OPTIMIZER_HIDE_VAR(invalid_size);
- KUNIT_EXPECT_KASAN_FAIL(test,
- memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ memmove((char *)ptr, (char *)ptr + 4, invalid_size));
+
kfree(ptr);
}
@@ -654,7 +715,11 @@ static void kmalloc_uaf(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
kfree(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[8]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
}
static void kmalloc_uaf_memset(struct kunit *test)
@@ -701,7 +766,11 @@ static void kmalloc_uaf2(struct kunit *test)
goto again;
}
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[40]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
+
KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2);
kfree(ptr2);
@@ -727,19 +796,33 @@ static void kmalloc_uaf3(struct kunit *test)
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
kfree(ptr2);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[8]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
}
static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
{
int *i_unsafe = unsafe;
- KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*i_unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+
KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, smp_load_acquire(i_unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_read(unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
@@ -752,18 +835,38 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+
+ /*
+ * The result of the test below may vary due to garbage values of unsafe in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
+ /*
+ * The result of the test below may vary due to garbage values of unsafe in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_store_only_enabled()) {
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
+ }
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_long_read(unsafe));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
@@ -776,16 +879,32 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
+
+ /*
+ * The result of the test below may vary due to garbage values in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
+
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
- KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
+
+ /*
+ * The result of the test below may vary due to garbage values in
+ * store-only mode. Therefore, skip this test when KASAN is configured
+ * in store-only mode.
+ */
+ if (!kasan_store_only_enabled()) {
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
+ }
}
static void kasan_atomics(struct kunit *test)
@@ -842,8 +961,14 @@ static void ksize_unpoisons_memory(struct kunit *test)
/* These must trigger a KASAN report. */
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
+
+ if (kasan_store_only_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size + 5]);
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[real_size - 1]);
+ } else {
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
+ }
kfree(ptr);
}
@@ -863,8 +988,13 @@ static void ksize_uaf(struct kunit *test)
OPTIMIZER_HIDE_VAR(ptr);
KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
+ if (kasan_store_only_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size]);
+ } else {
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
+ }
}
/*
@@ -886,6 +1016,7 @@ static void rcu_uaf_reclaim(struct rcu_head *rp)
container_of(rp, struct kasan_rcu_info, rcu);
kfree(fp);
+
((volatile struct kasan_rcu_info *)fp)->i;
}
@@ -899,9 +1030,14 @@ static void rcu_uaf(struct kunit *test)
global_rcu_ptr = rcu_dereference_protected(
(struct kasan_rcu_info __rcu *)ptr, NULL);
- KUNIT_EXPECT_KASAN_FAIL(test,
- call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
- rcu_barrier());
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
+ rcu_barrier());
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
+ rcu_barrier());
}
static void workqueue_uaf_work(struct work_struct *work)
@@ -924,8 +1060,12 @@ static void workqueue_uaf(struct kunit *test)
queue_work(workqueue, work);
destroy_workqueue(workqueue);
- KUNIT_EXPECT_KASAN_FAIL(test,
- ((volatile struct work_struct *)work)->data);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ ((volatile struct work_struct *)work)->data);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ ((volatile struct work_struct *)work)->data);
}
static void kfree_via_page(struct kunit *test)
@@ -972,7 +1112,10 @@ static void kmem_cache_oob(struct kunit *test)
return;
}
- KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, *p = p[size + OOB_TAG_OFF]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
kmem_cache_free(cache, p);
kmem_cache_destroy(cache);
@@ -1068,7 +1211,10 @@ static void kmem_cache_rcu_uaf(struct kunit *test)
*/
rcu_barrier();
- KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*p));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
kmem_cache_destroy(cache);
}
@@ -1206,6 +1352,9 @@ static void mempool_oob_right_helper(struct kunit *test, mempool_t *pool, size_t
if (IS_ENABLED(CONFIG_KASAN_GENERIC))
KUNIT_EXPECT_KASAN_FAIL(test,
((volatile char *)&elem[size])[0]);
+ else if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
else
KUNIT_EXPECT_KASAN_FAIL(test,
((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
@@ -1273,7 +1422,11 @@ static void mempool_uaf_helper(struct kunit *test, mempool_t *pool, bool page)
mempool_free(elem, pool);
ptr = page ? page_address((struct page *)elem) : elem;
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}
static void mempool_kmalloc_uaf(struct kunit *test)
@@ -1532,8 +1685,13 @@ static void kasan_memchr(struct kunit *test)
OPTIMIZER_HIDE_VAR(ptr);
OPTIMIZER_HIDE_VAR(size);
- KUNIT_EXPECT_KASAN_FAIL(test,
- kasan_ptr_result = memchr(ptr, '1', size + 1));
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ kasan_ptr_result = memchr(ptr, '1', size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ kasan_ptr_result = memchr(ptr, '1', size + 1));
kfree(ptr);
}
@@ -1559,8 +1717,14 @@ static void kasan_memcmp(struct kunit *test)
OPTIMIZER_HIDE_VAR(ptr);
OPTIMIZER_HIDE_VAR(size);
- KUNIT_EXPECT_KASAN_FAIL(test,
- kasan_int_result = memcmp(ptr, arr, size+1));
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ kasan_int_result = memcmp(ptr, arr, size+1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ kasan_int_result = memcmp(ptr, arr, size+1));
+
kfree(ptr);
}
@@ -1593,9 +1757,13 @@ static void kasan_strings(struct kunit *test)
KUNIT_EXPECT_EQ(test, KASAN_GRANULE_SIZE - 2,
strscpy(ptr, src + 1, KASAN_GRANULE_SIZE));
- /* strscpy should fail if the first byte is unreadable. */
- KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
- KASAN_GRANULE_SIZE));
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
+ KASAN_GRANULE_SIZE));
+ else
+ /* strscpy should fail if the first byte is unreadable. */
+ KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
+ KASAN_GRANULE_SIZE));
kfree(src);
kfree(ptr);
@@ -1607,17 +1775,22 @@ static void kasan_strings(struct kunit *test)
* will likely point to zeroed byte.
*/
ptr += 16;
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
-
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
+ if (kasan_store_only_enabled()) {
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strrchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strcmp(ptr, "2"));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strncmp(ptr, "2", 1));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strlen(ptr));
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strnlen(ptr, 1));
+ } else {
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
+ }
}
static void kasan_bitops_modify(struct kunit *test, int nr, void *addr)
@@ -1636,12 +1809,25 @@ static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)
{
KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr));
- KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
+
+ /*
+ * In store-only mode, test_and_set_bit_lock() does not perform the
+ * store when the bit reads as already set, so a tag check fault may
+ * not occur. Therefore, skip this check in store-only mode.
+ */
+ if (!kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
+
KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr));
KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));
- KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = test_bit(nr, addr));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
+
if (nr < 7)
KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
xor_unlock_is_negative_byte(1 << nr, addr));
@@ -1765,7 +1951,10 @@ static void vmalloc_oob(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
/* An aligned access into the first out-of-bounds granule. */
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)v_ptr)[size + 5]);
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
/* Check that in-bounds accesses to the physical page are valid. */
page = vmalloc_to_page(v_ptr);
@@ -2042,16 +2231,33 @@ static void copy_user_test_oob(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test,
unused = copy_from_user(kmem, usermem, size + 1));
- KUNIT_EXPECT_KASAN_FAIL(test,
- unused = copy_to_user(usermem, kmem, size + 1));
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ unused = copy_to_user(usermem, kmem, size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ unused = copy_to_user(usermem, kmem, size + 1));
+
KUNIT_EXPECT_KASAN_FAIL(test,
unused = __copy_from_user(kmem, usermem, size + 1));
- KUNIT_EXPECT_KASAN_FAIL(test,
- unused = __copy_to_user(usermem, kmem, size + 1));
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ unused = __copy_to_user(usermem, kmem, size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ unused = __copy_to_user(usermem, kmem, size + 1));
+
KUNIT_EXPECT_KASAN_FAIL(test,
unused = __copy_from_user_inatomic(kmem, usermem, size + 1));
- KUNIT_EXPECT_KASAN_FAIL(test,
- unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
+
+ if (kasan_store_only_enabled())
+ KUNIT_EXPECT_KASAN_SUCCESS(test,
+ unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
/*
* Prepare a long string in usermem to avoid the strncpy_from_user test
--
LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-13 17:53 ` [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option Yeoreum Yun
@ 2025-08-14 5:03 ` Andrey Konovalov
2025-08-14 8:51 ` Yeoreum Yun
2025-08-15 11:19 ` Catalin Marinas
2025-08-15 11:13 ` Catalin Marinas
1 sibling, 2 replies; 16+ messages in thread
From: Andrey Konovalov @ 2025-08-14 5:03 UTC (permalink / raw)
To: Yeoreum Yun, glider, Marco Elver
Cc: ryabinin.a.a, dvyukov, vincenzo.frascino, corbet, catalin.marinas,
will, akpm, scott, jhubbard, pankaj.gupta, leitao, kaleshsingh,
maz, broonie, oliver.upton, james.morse, ardb,
hardevsinh.palaniya, david, yang, kasan-dev, workflows, linux-doc,
linux-kernel, linux-arm-kernel, linux-mm
On Wed, Aug 13, 2025 at 7:53 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> Since Armv8.9, FEATURE_MTE_STORE_ONLY feature is introduced to restrict
> raise of tag check fault on store operation only.
> Introcude KASAN store only mode based on this feature.
>
> KASAN store only mode restricts KASAN checks operation for store only and
> omits the checks for fetch/read operation when accessing memory.
> So it might be used not only debugging enviroment but also normal
> enviroment to check memory safty.
>
> This features can be controlled with "kasan.store_only" arguments.
> When "kasan.store_only=on", KASAN checks store only mode otherwise
> KASAN checks all operations.
I'm wondering whether we should name this "kasan.write_only" instead of
"kasan.store_only". This would align the terms with the
"kasan.fault=panic_on_write" parameter we already have. But then it
would be different from "FEATURE_MTE_STORE_ONLY", which is what Arm
documentation uses (right?).
Marco, Alexander, any opinion?
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
> Documentation/dev-tools/kasan.rst | 3 ++
> arch/arm64/include/asm/memory.h | 1 +
> arch/arm64/include/asm/mte-kasan.h | 6 +++
> arch/arm64/kernel/cpufeature.c | 6 +++
> arch/arm64/kernel/mte.c | 14 ++++++
> include/linux/kasan.h | 2 +
> mm/kasan/hw_tags.c | 76 +++++++++++++++++++++++++++++-
> mm/kasan/kasan.h | 10 ++++
> 8 files changed, 116 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> index 0a1418ab72fd..fcb70dd821ec 100644
> --- a/Documentation/dev-tools/kasan.rst
> +++ b/Documentation/dev-tools/kasan.rst
> @@ -143,6 +143,9 @@ disabling KASAN altogether or controlling its features:
> Asymmetric mode: a bad access is detected synchronously on reads and
> asynchronously on writes.
>
> +- ``kasan.store_only=off`` or ``kasan.store_only=on`` controls whether KASAN
> + checks the store (write) accesses only or all accesses (default: ``off``)
> +
> - ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
> allocations (default: ``on``).
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 5213248e081b..ae29cd3db78d 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -308,6 +308,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> #define arch_enable_tag_checks_sync() mte_enable_kernel_sync()
> #define arch_enable_tag_checks_async() mte_enable_kernel_async()
> #define arch_enable_tag_checks_asymm() mte_enable_kernel_asymm()
> +#define arch_enable_tag_checks_store_only() mte_enable_kernel_store_only()
> #define arch_suppress_tag_checks_start() mte_enable_tco()
> #define arch_suppress_tag_checks_stop() mte_disable_tco()
> #define arch_force_async_tag_fault() mte_check_tfsr_exit()
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index 2e98028c1965..3e1cc341d47a 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
> void mte_enable_kernel_sync(void);
> void mte_enable_kernel_async(void);
> void mte_enable_kernel_asymm(void);
> +int mte_enable_kernel_store_only(void);
>
> #else /* CONFIG_ARM64_MTE */
>
> @@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
> {
> }
>
> +static inline int mte_enable_kenrel_store_only(void)
Typo in the function name. Please build/boot test without MTE/KASAN enabled.
> +{
> + return -EINVAL;
> +}
> +
> #endif /* CONFIG_ARM64_MTE */
>
> #endif /* __ASSEMBLY__ */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9ad065f15f1d..7b724fcf20a7 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2404,6 +2404,11 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>
> kasan_init_hw_tags_cpu();
> }
> +
> +static void cpu_enable_mte_store_only(struct arm64_cpu_capabilities const *cap)
> +{
> + kasan_late_init_hw_tags_cpu();
> +}
> #endif /* CONFIG_ARM64_MTE */
>
> static void user_feature_fixup(void)
> @@ -2922,6 +2927,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> .capability = ARM64_MTE_STORE_ONLY,
> .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> .matches = has_cpuid_feature,
> + .cpu_enable = cpu_enable_mte_store_only,
> ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
> },
> #endif /* CONFIG_ARM64_MTE */
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index e5e773844889..8eb1f66f2ccd 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -157,6 +157,20 @@ void mte_enable_kernel_asymm(void)
> mte_enable_kernel_sync();
> }
> }
> +
> +int mte_enable_kernel_store_only(void)
> +{
> + if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
> + return -EINVAL;
> +
> + sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
> + SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
> + isb();
> +
> + pr_info_once("MTE: enabled stonly mode at EL1\n");
> +
> + return 0;
> +}
> #endif
>
> #ifdef CONFIG_KASAN_HW_TAGS
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 890011071f2b..28951b29c593 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -552,9 +552,11 @@ static inline void kasan_init_sw_tags(void) { }
> #ifdef CONFIG_KASAN_HW_TAGS
> void kasan_init_hw_tags_cpu(void);
> void __init kasan_init_hw_tags(void);
> +void kasan_late_init_hw_tags_cpu(void);
> #else
> static inline void kasan_init_hw_tags_cpu(void) { }
> static inline void kasan_init_hw_tags(void) { }
> +static inline void kasan_late_init_hw_tags_cpu(void) { }
> #endif
>
> #ifdef CONFIG_KASAN_VMALLOC
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 9a6927394b54..c2f90c06076e 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -41,9 +41,16 @@ enum kasan_arg_vmalloc {
> KASAN_ARG_VMALLOC_ON,
> };
>
> +enum kasan_arg_store_only {
> + KASAN_ARG_STORE_ONLY_DEFAULT,
> + KASAN_ARG_STORE_ONLY_OFF,
> + KASAN_ARG_STORE_ONLY_ON,
> +};
> +
> static enum kasan_arg kasan_arg __ro_after_init;
> static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
> +static enum kasan_arg_store_only kasan_arg_store_only __ro_after_init;
>
> /*
> * Whether KASAN is enabled at all.
> @@ -67,6 +74,9 @@ DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
> #endif
> EXPORT_SYMBOL_GPL(kasan_flag_vmalloc);
>
> +DEFINE_STATIC_KEY_FALSE(kasan_flag_store_only);
Is there a reason to have this as a static key? I think a normal
global bool would work, just as a normal variable works for
kasan_mode.
> +EXPORT_SYMBOL_GPL(kasan_flag_store_only);
> +
> #define PAGE_ALLOC_SAMPLE_DEFAULT 1
> #define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT 3
>
> @@ -141,6 +151,23 @@ static int __init early_kasan_flag_vmalloc(char *arg)
> }
> early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
>
> +/* kasan.store_only=off/on */
> +static int __init early_kasan_flag_store_only(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + if (!strcmp(arg, "off"))
> + kasan_arg_store_only = KASAN_ARG_STORE_ONLY_OFF;
> + else if (!strcmp(arg, "on"))
> + kasan_arg_store_only = KASAN_ARG_STORE_ONLY_ON;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +early_param("kasan.store_only", early_kasan_flag_store_only);
> +
> static inline const char *kasan_mode_info(void)
> {
> if (kasan_mode == KASAN_MODE_ASYNC)
> @@ -219,6 +246,20 @@ void kasan_init_hw_tags_cpu(void)
> kasan_enable_hw_tags();
> }
>
> +/*
> + * kasan_late_init_hw_tags_cpu_post() is called for each CPU after
> + * all cpus are bring-up at boot.
"CPUs"
"brought up"
And please spell-check other comments.
> + * Not marked as __init as a CPU can be hot-plugged after boot.
> + */
> +void kasan_late_init_hw_tags_cpu(void)
> +{
> + /*
> + * Enable stonly mode only when explicitly requested through the command line.
"store-only"
> + * If system doesn't support, kasan checks all operation.
"If the system doesn't support this mode, KASAN will check both load
and store operations."
> + */
> + kasan_enable_store_only();
> +}
> +
> /* kasan_init_hw_tags() is called once on boot CPU. */
> void __init kasan_init_hw_tags(void)
> {
> @@ -257,15 +298,28 @@ void __init kasan_init_hw_tags(void)
> break;
> }
>
> + switch (kasan_arg_store_only) {
> + case KASAN_ARG_STORE_ONLY_DEFAULT:
> + /* Default is specified by kasan_flag_store_only definition. */
> + break;
> + case KASAN_ARG_STORE_ONLY_OFF:
> + static_branch_disable(&kasan_flag_store_only);
> + break;
> + case KASAN_ARG_STORE_ONLY_ON:
> + static_branch_enable(&kasan_flag_store_only);
> + break;
> + }
Let's move this part to kasan_late_init_hw_tags_cpu. Since that's
where the final decision of whether the store-only mode is enabled is
taken, we should just set the global flag there.
> +
> kasan_init_tags();
>
> /* KASAN is now initialized, enable it. */
> static_branch_enable(&kasan_flag_enabled);
>
> - pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
> + pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s store_only=%s\n",
Let's put "store_only" here next to "mode".
You're also missing a comma.
> kasan_mode_info(),
> str_on_off(kasan_vmalloc_enabled()),
> - str_on_off(kasan_stack_collection_enabled()));
> + str_on_off(kasan_stack_collection_enabled()),
> + str_on_off(kasan_store_only_enabled()));
> }
>
> #ifdef CONFIG_KASAN_VMALLOC
> @@ -394,6 +448,22 @@ void kasan_enable_hw_tags(void)
> hw_enable_tag_checks_sync();
> }
>
> +void kasan_enable_store_only(void)
Do we need this as a separate function? I think we can just move the
code to kasan_late_init_hw_tags_cpu.
> +{
> + if (kasan_arg_store_only == KASAN_ARG_STORE_ONLY_ON) {
> + if (hw_enable_tag_checks_store_only()) {
> + static_branch_disable(&kasan_flag_store_only);
> + kasan_arg_store_only = KASAN_ARG_STORE_ONLY_OFF;
> + pr_warn_once("KernelAddressSanitizer: store only mode isn't supported (hw-tags)\n");
No need for the "KernelAddressSanitizer" prefix, it's already defined
via pr_fmt().
> + }
> + }
> +}
> +
> +bool kasan_store_only_enabled(void)
> +{
> + return static_branch_unlikely(&kasan_flag_store_only);
> +}
> +
> #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
>
> EXPORT_SYMBOL_IF_KUNIT(kasan_enable_hw_tags);
> @@ -404,4 +474,6 @@ VISIBLE_IF_KUNIT void kasan_force_async_fault(void)
> }
> EXPORT_SYMBOL_IF_KUNIT(kasan_force_async_fault);
>
> +EXPORT_SYMBOL_IF_KUNIT(kasan_store_only_enabled);
> +
> #endif
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 129178be5e64..1d853de1c499 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -33,6 +33,7 @@ static inline bool kasan_stack_collection_enabled(void)
> #include "../slab.h"
>
> DECLARE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
> +DECLARE_STATIC_KEY_FALSE(kasan_flag_stonly);
kasan_flag_store_only
Did you build test this at all?
>
> enum kasan_mode {
> KASAN_MODE_SYNC,
> @@ -428,6 +429,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> #define hw_enable_tag_checks_sync() arch_enable_tag_checks_sync()
> #define hw_enable_tag_checks_async() arch_enable_tag_checks_async()
> #define hw_enable_tag_checks_asymm() arch_enable_tag_checks_asymm()
> +#define hw_enable_tag_checks_store_only() arch_enable_tag_checks_store_only()
> #define hw_suppress_tag_checks_start() arch_suppress_tag_checks_start()
> #define hw_suppress_tag_checks_stop() arch_suppress_tag_checks_stop()
> #define hw_force_async_tag_fault() arch_force_async_tag_fault()
> @@ -437,10 +439,18 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> arch_set_mem_tag_range((addr), (size), (tag), (init))
>
> void kasan_enable_hw_tags(void);
> +void kasan_enable_store_only(void);
> +bool kasan_store_only_enabled(void);
>
> #else /* CONFIG_KASAN_HW_TAGS */
>
> static inline void kasan_enable_hw_tags(void) { }
> +static inline void kasan_enable_store_only(void) { }
> +
> +static inline bool kasan_store_only_enabled(void)
> +{
> + return false;
> +}
>
> #endif /* CONFIG_KASAN_HW_TAGS */
>
> --
> LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/kasan-dev/20250813175335.3980268-2-yeoreum.yun%40arm.com.
* Re: [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-13 17:53 ` [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
@ 2025-08-14 5:04 ` Andrey Konovalov
2025-08-14 11:13 ` Yeoreum Yun
0 siblings, 1 reply; 16+ messages in thread
From: Andrey Konovalov @ 2025-08-14 5:04 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
On Wed, Aug 13, 2025 at 7:53 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> When KASAN is configured in store-only mode,
> fetch/load operations do not trigger tag check faults.
>
> As a result, the outcome of some test cases may differ
> compared to when KASAN is configured without store-only mode.
>
> Therefore, by modifying pre-exist testcases
> check the store only makes tag check fault (TCF) where
> writing is perform in "allocated memory" but tag is invalid
> (i.e) redzone write in atomic_set() testcases.
> Otherwise check the invalid fetch/read doesn't generate TCF.
>
> Also, skip some testcases affected by initial value
> (i.e) atomic_cmpxchg() testcase maybe successd if
> it passes valid atomic_t address and invalid oldaval address.
> In this case, if invalid atomic_t doesn't have the same oldval,
> it won't trigger store operation so the test will pass.
>
> Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> ---
> mm/kasan/kasan_test_c.c | 366 +++++++++++++++++++++++++++++++---------
> 1 file changed, 286 insertions(+), 80 deletions(-)
>
> diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
> index 2aa12dfa427a..e5d08a6ee3a2 100644
> --- a/mm/kasan/kasan_test_c.c
> +++ b/mm/kasan/kasan_test_c.c
> @@ -94,11 +94,13 @@ static void kasan_test_exit(struct kunit *test)
> }
>
> /**
> - * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> - * KASAN report; causes a KUnit test failure otherwise.
> + * _KUNIT_EXPECT_KASAN_TEMPLATE - check that the executed expression produces
> + * a KASAN report or not; a KUnit test failure when it's different from @produce.
> *
> * @test: Currently executing KUnit test.
> - * @expression: Expression that must produce a KASAN report.
> + * @expr: Expression produce a KASAN report or not.
> + * @expr_str: Expression string
> + * @produce: expression should produce a KASAN report.
> *
> * For hardware tag-based KASAN, when a synchronous tag fault happens, tag
> * checking is auto-disabled. When this happens, this test handler reenables
> @@ -110,25 +112,29 @@ static void kasan_test_exit(struct kunit *test)
> * Use READ/WRITE_ONCE() for the accesses and compiler barriers around the
> * expression to prevent that.
> *
> - * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept
> + * In between _KUNIT_EXPECT_KASAN_TEMPLATE checks, test_status.report_found is kept
> * as false. This allows detecting KASAN reports that happen outside of the
> * checks by asserting !test_status.report_found at the start of
> - * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit.
> + * _KUNIT_EXPECT_KASAN_TEMPLATE and in kasan_test_exit.
> */
> -#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \
> +#define _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, expr_str, produce) \
> +do { \
> if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> kasan_sync_fault_possible()) \
> migrate_disable(); \
> KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \
> barrier(); \
> - expression; \
> + expr; \
> barrier(); \
> if (kasan_async_fault_possible()) \
> kasan_force_async_fault(); \
> - if (!READ_ONCE(test_status.report_found)) { \
> - KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \
> - "expected in \"" #expression \
> - "\", but none occurred"); \
> + if (READ_ONCE(test_status.report_found) != produce) { \
> + KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN %s " \
> + "expected in \"" expr_str \
> + "\", but %soccurred", \
> + (produce ? "failure" : "success"), \
> + (test_status.report_found ? \
> + "" : "none ")); \
> } \
> if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> kasan_sync_fault_possible()) { \
> @@ -141,6 +147,26 @@ static void kasan_test_exit(struct kunit *test)
> WRITE_ONCE(test_status.async_fault, false); \
> } while (0)
>
> +/*
> + * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> + * KASAN report; causes a KUnit test failure otherwise.
> + *
> + * @test: Currently executing KUnit test.
> + * @expr: Expression produce a KASAN report.
> + */
> +#define KUNIT_EXPECT_KASAN_FAIL(test, expr) \
> + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, true)
> +
> +/*
> + * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
> + * produces a KASAN report; causes a KUnit test failure otherwise.
There should be no need for this; the existing functionality already checks
that there are no reports outside of KUNIT_EXPECT_KASAN_FAIL().
> + *
> + * @test: Currently executing KUnit test.
> + * @expr: Expression doesn't produce a KASAN report.
> + */
> +#define KUNIT_EXPECT_KASAN_SUCCESS(test, expr) \
> + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, false)
> +
> #define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \
> if (!IS_ENABLED(config)) \
> kunit_skip((test), "Test requires " #config "=y"); \
> @@ -183,8 +209,12 @@ static void kmalloc_oob_right(struct kunit *test)
> KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y');
>
> /* Out-of-bounds access past the aligned kmalloc object. */
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> - ptr[size + KASAN_GRANULE_SIZE + 5]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] =
> + ptr[size + KASAN_GRANULE_SIZE + 5]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> + ptr[size + KASAN_GRANULE_SIZE + 5]);
Let's instead add KUNIT_EXPECT_KASAN_FAIL_READ() that only expects a
KASAN report when the store-only mode is not enabled. And use that for
the bad read accesses done in tests.
>
> kfree(ptr);
> }
> @@ -198,7 +228,11 @@ static void kmalloc_oob_left(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> OPTIMIZER_HIDE_VAR(ptr);
> - KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr = *(ptr - 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, *ptr = *(ptr - 1));
> +
> kfree(ptr);
> }
>
> @@ -211,7 +245,11 @@ static void kmalloc_node_oob_right(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> OPTIMIZER_HIDE_VAR(ptr);
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> +
> kfree(ptr);
> }
>
> @@ -291,7 +329,10 @@ static void kmalloc_large_uaf(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> kfree(ptr);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> }
>
> static void kmalloc_large_invalid_free(struct kunit *test)
> @@ -323,7 +364,11 @@ static void page_alloc_oob_right(struct kunit *test)
> ptr = page_address(pages);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] = ptr[size]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = ptr[size]);
> +
> free_pages((unsigned long)ptr, order);
> }
>
> @@ -338,7 +383,10 @@ static void page_alloc_uaf(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> free_pages((unsigned long)ptr, order);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> }
>
> static void krealloc_more_oob_helper(struct kunit *test,
> @@ -455,10 +503,13 @@ static void krealloc_uaf(struct kunit *test)
> ptr1 = kmalloc(size1, GFP_KERNEL);
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr1);
> kfree(ptr1);
> -
> KUNIT_EXPECT_KASAN_FAIL(test, ptr2 = krealloc(ptr1, size2, GFP_KERNEL));
> KUNIT_ASSERT_NULL(test, ptr2);
> - KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *(volatile char *)ptr1);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)ptr1);
> }
>
> static void kmalloc_oob_16(struct kunit *test)
> @@ -501,7 +552,11 @@ static void kmalloc_uaf_16(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
> kfree(ptr2);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *ptr1 = *ptr2);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, *ptr1 = *ptr2);
> +
> kfree(ptr1);
> }
>
> @@ -640,8 +695,14 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
> memset((char *)ptr, 0, 64);
> OPTIMIZER_HIDE_VAR(ptr);
> OPTIMIZER_HIDE_VAR(invalid_size);
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +
> kfree(ptr);
> }
>
> @@ -654,7 +715,11 @@ static void kmalloc_uaf(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>
> kfree(ptr);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[8]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[8]);
> }
>
> static void kmalloc_uaf_memset(struct kunit *test)
> @@ -701,7 +766,11 @@ static void kmalloc_uaf2(struct kunit *test)
> goto again;
> }
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[40]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[40]);
> +
> KUNIT_EXPECT_PTR_NE(test, ptr1, ptr2);
>
> kfree(ptr2);
> @@ -727,19 +796,33 @@ static void kmalloc_uaf3(struct kunit *test)
> KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr2);
> kfree(ptr2);
>
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr1)[8]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
> }
>
> static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> {
> int *i_unsafe = unsafe;
>
> - KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*i_unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, smp_load_acquire(i_unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
>
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_read(unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> @@ -752,18 +835,38 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> +
> + /*
> + * The result of the test below may vary due to garbage values of unsafe in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
>
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> + /*
> + * The result of the test below may vary due to garbage values of unsafe in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_store_only_enabled()) {
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> + }
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, atomic_long_read(unsafe));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> @@ -776,16 +879,32 @@ static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> +
> + /*
> + * The result of the test below may vary due to garbage values in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> - KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> +
> + /*
> + * The result of the test below may vary due to garbage values in
> + * store-only mode. Therefore, skip this test when KASAN is configured
> + * in store-only mode.
> + */
> + if (!kasan_store_only_enabled()) {
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> + }
> }
>
> static void kasan_atomics(struct kunit *test)
> @@ -842,8 +961,14 @@ static void ksize_unpoisons_memory(struct kunit *test)
> /* These must trigger a KASAN report. */
> if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> +
> + if (kasan_store_only_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size + 5]);
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[real_size - 1]);
> + } else {
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
> + }
>
> kfree(ptr);
> }
> @@ -863,8 +988,13 @@ static void ksize_uaf(struct kunit *test)
>
> OPTIMIZER_HIDE_VAR(ptr);
> KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> + if (kasan_store_only_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[size]);
> + } else {
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
> + }
> }
>
> /*
> @@ -886,6 +1016,7 @@ static void rcu_uaf_reclaim(struct rcu_head *rp)
> container_of(rp, struct kasan_rcu_info, rcu);
>
> kfree(fp);
> +
> ((volatile struct kasan_rcu_info *)fp)->i;
> }
>
> @@ -899,9 +1030,14 @@ static void rcu_uaf(struct kunit *test)
> global_rcu_ptr = rcu_dereference_protected(
> (struct kasan_rcu_info __rcu *)ptr, NULL);
>
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> - rcu_barrier());
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> + rcu_barrier());
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
> + rcu_barrier());
> }
>
> static void workqueue_uaf_work(struct work_struct *work)
> @@ -924,8 +1060,12 @@ static void workqueue_uaf(struct kunit *test)
> queue_work(workqueue, work);
> destroy_workqueue(workqueue);
>
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - ((volatile struct work_struct *)work)->data);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + ((volatile struct work_struct *)work)->data);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + ((volatile struct work_struct *)work)->data);
> }
>
> static void kfree_via_page(struct kunit *test)
> @@ -972,7 +1112,10 @@ static void kmem_cache_oob(struct kunit *test)
> return;
> }
>
> - KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, *p = p[size + OOB_TAG_OFF]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, *p = p[size + OOB_TAG_OFF]);
>
> kmem_cache_free(cache, p);
> kmem_cache_destroy(cache);
> @@ -1068,7 +1211,10 @@ static void kmem_cache_rcu_uaf(struct kunit *test)
> */
> rcu_barrier();
>
> - KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, READ_ONCE(*p));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
>
> kmem_cache_destroy(cache);
> }
> @@ -1206,6 +1352,9 @@ static void mempool_oob_right_helper(struct kunit *test, mempool_t *pool, size_t
> if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> KUNIT_EXPECT_KASAN_FAIL(test,
> ((volatile char *)&elem[size])[0]);
> + else if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
> else
> KUNIT_EXPECT_KASAN_FAIL(test,
> ((volatile char *)&elem[round_up(size, KASAN_GRANULE_SIZE)])[0]);
> @@ -1273,7 +1422,11 @@ static void mempool_uaf_helper(struct kunit *test, mempool_t *pool, bool page)
> mempool_free(elem, pool);
>
> ptr = page ? page_address((struct page *)elem) : elem;
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)ptr)[0]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
> }
>
> static void mempool_kmalloc_uaf(struct kunit *test)
> @@ -1532,8 +1685,13 @@ static void kasan_memchr(struct kunit *test)
>
> OPTIMIZER_HIDE_VAR(ptr);
> OPTIMIZER_HIDE_VAR(size);
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - kasan_ptr_result = memchr(ptr, '1', size + 1));
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + kasan_ptr_result = memchr(ptr, '1', size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + kasan_ptr_result = memchr(ptr, '1', size + 1));
>
> kfree(ptr);
> }
> @@ -1559,8 +1717,14 @@ static void kasan_memcmp(struct kunit *test)
>
> OPTIMIZER_HIDE_VAR(ptr);
> OPTIMIZER_HIDE_VAR(size);
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - kasan_int_result = memcmp(ptr, arr, size+1));
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + kasan_int_result = memcmp(ptr, arr, size+1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + kasan_int_result = memcmp(ptr, arr, size+1));
> +
> kfree(ptr);
> }
>
> @@ -1593,9 +1757,13 @@ static void kasan_strings(struct kunit *test)
> KUNIT_EXPECT_EQ(test, KASAN_GRANULE_SIZE - 2,
> strscpy(ptr, src + 1, KASAN_GRANULE_SIZE));
>
> - /* strscpy should fail if the first byte is unreadable. */
> - KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> - KASAN_GRANULE_SIZE));
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> + KASAN_GRANULE_SIZE));
> + else
> + /* strscpy should fail if the first byte is unreadable. */
> + KUNIT_EXPECT_KASAN_FAIL(test, strscpy(ptr, src + KASAN_GRANULE_SIZE,
> + KASAN_GRANULE_SIZE));
>
> kfree(src);
> kfree(ptr);
> @@ -1607,17 +1775,22 @@ static void kasan_strings(struct kunit *test)
> * will likely point to zeroed byte.
> */
> ptr += 16;
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
>
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
> -
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
> + if (kasan_store_only_enabled()) {
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_ptr_result = strrchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strcmp(ptr, "2"));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strncmp(ptr, "2", 1));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strlen(ptr));
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = strnlen(ptr, 1));
> + } else {
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_ptr_result = strrchr(ptr, '1'));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strcmp(ptr, "2"));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strncmp(ptr, "2", 1));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strlen(ptr));
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = strnlen(ptr, 1));
> + }
> }
>
> static void kasan_bitops_modify(struct kunit *test, int nr, void *addr)
> @@ -1636,12 +1809,25 @@ static void kasan_bitops_test_and_modify(struct kunit *test, int nr, void *addr)
> {
> KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, __test_and_set_bit(nr, addr));
> - KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
> +
> + /*
> + * When KASAN is running in store-only mode,
> + * a fault won't occur even if the bit is set.
> + * Therefore, skip the test_and_set_bit_lock test in store-only mode.
> + */
> + if (!kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_FAIL(test, test_and_set_bit_lock(nr, addr));
> +
> KUNIT_EXPECT_KASAN_FAIL(test, test_and_clear_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, __test_and_clear_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, test_and_change_bit(nr, addr));
> KUNIT_EXPECT_KASAN_FAIL(test, __test_and_change_bit(nr, addr));
> - KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, kasan_int_result = test_bit(nr, addr));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result = test_bit(nr, addr));
> +
> if (nr < 7)
> KUNIT_EXPECT_KASAN_FAIL(test, kasan_int_result =
> xor_unlock_is_negative_byte(1 << nr, addr));
> @@ -1765,7 +1951,10 @@ static void vmalloc_oob(struct kunit *test)
> KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
>
> /* An aligned access into the first out-of-bounds granule. */
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test, ((volatile char *)v_ptr)[size + 5]);
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size + 5]);
>
> /* Check that in-bounds accesses to the physical page are valid. */
> page = vmalloc_to_page(v_ptr);
> @@ -2042,16 +2231,33 @@ static void copy_user_test_oob(struct kunit *test)
>
> KUNIT_EXPECT_KASAN_FAIL(test,
> unused = copy_from_user(kmem, usermem, size + 1));
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - unused = copy_to_user(usermem, kmem, size + 1));
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + unused = copy_to_user(usermem, kmem, size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + unused = copy_to_user(usermem, kmem, size + 1));
> +
> KUNIT_EXPECT_KASAN_FAIL(test,
> unused = __copy_from_user(kmem, usermem, size + 1));
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - unused = __copy_to_user(usermem, kmem, size + 1));
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + unused = __copy_to_user(usermem, kmem, size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + unused = __copy_to_user(usermem, kmem, size + 1));
> +
> KUNIT_EXPECT_KASAN_FAIL(test,
> unused = __copy_from_user_inatomic(kmem, usermem, size + 1));
> - KUNIT_EXPECT_KASAN_FAIL(test,
> - unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
> +
> + if (kasan_store_only_enabled())
> + KUNIT_EXPECT_KASAN_SUCCESS(test,
> + unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
> + else
> + KUNIT_EXPECT_KASAN_FAIL(test,
> + unused = __copy_to_user_inatomic(usermem, kmem, size + 1));
>
> /*
> * Prepare a long string in usermem to avoid the strncpy_from_user test
> --
> LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-14 5:03 ` Andrey Konovalov
@ 2025-08-14 8:51 ` Yeoreum Yun
2025-08-15 11:19 ` Catalin Marinas
1 sibling, 0 replies; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-14 8:51 UTC (permalink / raw)
To: Andrey Konovalov
Cc: glider, Marco Elver, ryabinin.a.a, dvyukov, vincenzo.frascino,
corbet, catalin.marinas, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang, kasan-dev,
workflows, linux-doc, linux-kernel, linux-arm-kernel, linux-mm
Hi Andrey,
> On Wed, Aug 13, 2025 at 7:53 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> >
> > Since Armv8.9, the FEAT_MTE_STORE_ONLY feature has been introduced to
> > restrict raising tag check faults to store operations only.
> > Introduce a KASAN store-only mode based on this feature.
> >
> > KASAN store-only mode restricts KASAN checks to store operations only and
> > omits the checks for fetch/read operations when accessing memory.
> > So it can be used not only in debugging environments but also in normal
> > environments to check memory safety.
> >
> > This feature can be controlled with the "kasan.store_only" argument.
> > When "kasan.store_only=on", KASAN checks store operations only; otherwise
> > KASAN checks all operations.
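
As a concrete illustration of the proposed interface (a sketch based only on the parameter name introduced by this patch; `kasan.mode` is an existing hw-tags option, and the exact option set depends on the kernel configuration):

```shell
# Hypothetical kernel command line for this patch (sketch, not a tested
# configuration): boot with HW-tags KASAN and restrict checks to stores.
#
#   kasan=on kasan.mode=sync kasan.store_only=on
#
# With kasan.store_only=on, invalid loads no longer raise tag check faults;
# with kasan.store_only=off (the default), all accesses are checked.
```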
>
> I'm wondering whether we should name this "kasan.write_only" instead of
> "kasan.store_only". This would align the terms with the
> "kasan.fault=panic_on_write" parameter we already have. But then it
> would be different from "FEAT_MTE_STORE_ONLY", which is what the Arm
> documentation uses (right?).
Yes, it uses "MTE_STORE_ONLY", but "write" seems fine to me too.
>
> Marco, Alexander, any opinion?
>
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> > Documentation/dev-tools/kasan.rst | 3 ++
> > arch/arm64/include/asm/memory.h | 1 +
> > arch/arm64/include/asm/mte-kasan.h | 6 +++
> > arch/arm64/kernel/cpufeature.c | 6 +++
> > arch/arm64/kernel/mte.c | 14 ++++++
> > include/linux/kasan.h | 2 +
> > mm/kasan/hw_tags.c | 76 +++++++++++++++++++++++++++++-
> > mm/kasan/kasan.h | 10 ++++
> > 8 files changed, 116 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
> > index 0a1418ab72fd..fcb70dd821ec 100644
> > --- a/Documentation/dev-tools/kasan.rst
> > +++ b/Documentation/dev-tools/kasan.rst
> > @@ -143,6 +143,9 @@ disabling KASAN altogether or controlling its features:
> > Asymmetric mode: a bad access is detected synchronously on reads and
> > asynchronously on writes.
> >
> > +- ``kasan.store_only=off`` or ``kasan.store_only=on`` controls whether KASAN
> > + checks the store (write) accesses only or all accesses (default: ``off``)
> > +
> > - ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
> > allocations (default: ``on``).
> >
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 5213248e081b..ae29cd3db78d 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -308,6 +308,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
> > #define arch_enable_tag_checks_sync() mte_enable_kernel_sync()
> > #define arch_enable_tag_checks_async() mte_enable_kernel_async()
> > #define arch_enable_tag_checks_asymm() mte_enable_kernel_asymm()
> > +#define arch_enable_tag_checks_store_only() mte_enable_kernel_store_only()
> > #define arch_suppress_tag_checks_start() mte_enable_tco()
> > #define arch_suppress_tag_checks_stop() mte_disable_tco()
> > #define arch_force_async_tag_fault() mte_check_tfsr_exit()
> > diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> > index 2e98028c1965..3e1cc341d47a 100644
> > --- a/arch/arm64/include/asm/mte-kasan.h
> > +++ b/arch/arm64/include/asm/mte-kasan.h
> > @@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
> > void mte_enable_kernel_sync(void);
> > void mte_enable_kernel_async(void);
> > void mte_enable_kernel_asymm(void);
> > +int mte_enable_kernel_store_only(void);
> >
> > #else /* CONFIG_ARM64_MTE */
> >
> > @@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
> > {
> > }
> >
> > +static inline int mte_enable_kenrel_store_only(void)
>
> Typo in the function name. Please build/boot test without MTE/KASAN enabled.
Oops... Sorry for the mistake :\
>
> > +{
> > + return -EINVAL;
> > +}
> > +
> > #endif /* CONFIG_ARM64_MTE */
> >
> > #endif /* __ASSEMBLY__ */
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index 9ad065f15f1d..7b724fcf20a7 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -2404,6 +2404,11 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
> >
> > kasan_init_hw_tags_cpu();
> > }
> > +
> > +static void cpu_enable_mte_store_only(struct arm64_cpu_capabilities const *cap)
> > +{
> > + kasan_late_init_hw_tags_cpu();
> > +}
> > #endif /* CONFIG_ARM64_MTE */
> >
> > static void user_feature_fixup(void)
> > @@ -2922,6 +2927,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> > .capability = ARM64_MTE_STORE_ONLY,
> > .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> > .matches = has_cpuid_feature,
> > + .cpu_enable = cpu_enable_mte_store_only,
> > ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
> > },
> > #endif /* CONFIG_ARM64_MTE */
> > diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> > index e5e773844889..8eb1f66f2ccd 100644
> > --- a/arch/arm64/kernel/mte.c
> > +++ b/arch/arm64/kernel/mte.c
> > @@ -157,6 +157,20 @@ void mte_enable_kernel_asymm(void)
> > mte_enable_kernel_sync();
> > }
> > }
> > +
> > +int mte_enable_kernel_store_only(void)
> > +{
> > + if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
> > + return -EINVAL;
> > +
> > + sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
> > + SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
> > + isb();
> > +
> > + pr_info_once("MTE: enabled stonly mode at EL1\n");
> > +
> > + return 0;
> > +}
> > #endif
> >
> > #ifdef CONFIG_KASAN_HW_TAGS
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index 890011071f2b..28951b29c593 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -552,9 +552,11 @@ static inline void kasan_init_sw_tags(void) { }
> > #ifdef CONFIG_KASAN_HW_TAGS
> > void kasan_init_hw_tags_cpu(void);
> > void __init kasan_init_hw_tags(void);
> > +void kasan_late_init_hw_tags_cpu(void);
> > #else
> > static inline void kasan_init_hw_tags_cpu(void) { }
> > static inline void kasan_init_hw_tags(void) { }
> > +static inline void kasan_late_init_hw_tags_cpu(void) { }
> > #endif
> >
> > #ifdef CONFIG_KASAN_VMALLOC
> > diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> > index 9a6927394b54..c2f90c06076e 100644
> > --- a/mm/kasan/hw_tags.c
> > +++ b/mm/kasan/hw_tags.c
> > @@ -41,9 +41,16 @@ enum kasan_arg_vmalloc {
> > KASAN_ARG_VMALLOC_ON,
> > };
> >
> > +enum kasan_arg_store_only {
> > + KASAN_ARG_STORE_ONLY_DEFAULT,
> > + KASAN_ARG_STORE_ONLY_OFF,
> > + KASAN_ARG_STORE_ONLY_ON,
> > +};
> > +
> > static enum kasan_arg kasan_arg __ro_after_init;
> > static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
> > static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
> > +static enum kasan_arg_store_only kasan_arg_store_only __ro_after_init;
> >
> > /*
> > * Whether KASAN is enabled at all.
> > @@ -67,6 +74,9 @@ DEFINE_STATIC_KEY_FALSE(kasan_flag_vmalloc);
> > #endif
> > EXPORT_SYMBOL_GPL(kasan_flag_vmalloc);
> >
> > +DEFINE_STATIC_KEY_FALSE(kasan_flag_store_only);
>
> Is there a reason to have this as a static key? I think a normal
> global bool would work, just as a normal variable works for
> kasan_mode.
Just to align with the other arguments.
Since kasan_flag_store_only is used only by the KUnit tests and
not called anywhere else, this optimisation is meaningless.
It's fine to change it to a global bool.
>
> > +EXPORT_SYMBOL_GPL(kasan_flag_store_only);
> > +
> > #define PAGE_ALLOC_SAMPLE_DEFAULT 1
> > #define PAGE_ALLOC_SAMPLE_ORDER_DEFAULT 3
> >
> > @@ -141,6 +151,23 @@ static int __init early_kasan_flag_vmalloc(char *arg)
> > }
> > early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
> >
> > +/* kasan.store_only=off/on */
> > +static int __init early_kasan_flag_store_only(char *arg)
> > +{
> > + if (!arg)
> > + return -EINVAL;
> > +
> > + if (!strcmp(arg, "off"))
> > + kasan_arg_store_only = KASAN_ARG_STORE_ONLY_OFF;
> > + else if (!strcmp(arg, "on"))
> > + kasan_arg_store_only = KASAN_ARG_STORE_ONLY_ON;
> > + else
> > + return -EINVAL;
> > +
> > + return 0;
> > +}
> > +early_param("kasan.store_only", early_kasan_flag_store_only);
> > +
> > static inline const char *kasan_mode_info(void)
> > {
> > if (kasan_mode == KASAN_MODE_ASYNC)
> > @@ -219,6 +246,20 @@ void kasan_init_hw_tags_cpu(void)
> > kasan_enable_hw_tags();
> > }
> >
> > +/*
> > + * kasan_late_init_hw_tags_cpu_post() is called for each CPU after
> > + * all cpus are bring-up at boot.
>
> "CPUs"
> "brought up"
>
> And please spell-check other comments.
Thanks.
>
> > + * Not marked as __init as a CPU can be hot-plugged after boot.
> > + */
> > +void kasan_late_init_hw_tags_cpu(void)
> > +{
> > + /*
> > + * Enable stonly mode only when explicitly requested through the command line.
>
> "store-only"
>
> > + * If system doesn't support, kasan checks all operation.
>
> "If the system doesn't support this mode, KASAN will check both load
> and store operations."
Thanks for the suggestion :)
>
> > + */
> > + kasan_enable_store_only();
> > +}
> > +
> > /* kasan_init_hw_tags() is called once on boot CPU. */
> > void __init kasan_init_hw_tags(void)
> > {
> > @@ -257,15 +298,28 @@ void __init kasan_init_hw_tags(void)
> > break;
> > }
> >
> > + switch (kasan_arg_store_only) {
> > + case KASAN_ARG_STORE_ONLY_DEFAULT:
> > + /* Default is specified by kasan_flag_store_only definition. */
> > + break;
> > + case KASAN_ARG_STORE_ONLY_OFF:
> > + static_branch_disable(&kasan_flag_store_only);
> > + break;
> > + case KASAN_ARG_STORE_ONLY_ON:
> > + static_branch_enable(&kasan_flag_store_only);
> > + break;
> > + }
>
> Let's move this part to kasan_late_init_hw_tags_cpu. Since that's
> where the final decision of whether the store-only mode is enabled is
> taken, we should just set the global flag there.
Okay.
>
> > +
> > kasan_init_tags();
> >
> > /* KASAN is now initialized, enable it. */
> > static_branch_enable(&kasan_flag_enabled);
> >
> > - pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s)\n",
> > + pr_info("KernelAddressSanitizer initialized (hw-tags, mode=%s, vmalloc=%s, stacktrace=%s store_only=%s\n",
>
> Let's put "store_only" here next to "mode".
Hmm, I don't think this is the proper place to print store_only.
I think it would be better to print a store_only-related log in
kasan_late_init_hw_tags_cpu(), like:
pr_info("KernelAddressSanitizer checks store (write) accesses only.\n");
when store_only=on.
>
> You're also missing a comma.
>
> > kasan_mode_info(),
> > str_on_off(kasan_vmalloc_enabled()),
> > - str_on_off(kasan_stack_collection_enabled()));
> > + str_on_off(kasan_stack_collection_enabled()),
> > + str_on_off(kasan_store_only_enabled()));
> > }
> >
> > #ifdef CONFIG_KASAN_VMALLOC
> > @@ -394,6 +448,22 @@ void kasan_enable_hw_tags(void)
> > hw_enable_tag_checks_sync();
> > }
> >
> > +void kasan_enable_store_only(void)
>
> Do we need this as a separate function? I think we can just move the
> code to kasan_late_init_hw_tags_cpu.
>
> > +{
> > + if (kasan_arg_store_only == KASAN_ARG_STORE_ONLY_ON) {
> > + if (hw_enable_tag_checks_store_only()) {
> > + static_branch_disable(&kasan_flag_store_only);
> > + kasan_arg_store_only = KASAN_ARG_STORE_ONLY_OFF;
> > + pr_warn_once("KernelAddressSanitizer: store only mode isn't supported (hw-tags)\n");
>
> No need for the "KernelAddressSanitizer" prefix, it's already defined
> via pr_fmt().
Okay.
>
> > + }
> > + }
> > +}
> > +
> > +bool kasan_store_only_enabled(void)
> > +{
> > + return static_branch_unlikely(&kasan_flag_store_only);
> > +}
> > +
> > #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
> >
> > EXPORT_SYMBOL_IF_KUNIT(kasan_enable_hw_tags);
> > @@ -404,4 +474,6 @@ VISIBLE_IF_KUNIT void kasan_force_async_fault(void)
> > }
> > EXPORT_SYMBOL_IF_KUNIT(kasan_force_async_fault);
> >
> > +EXPORT_SYMBOL_IF_KUNIT(kasan_store_only_enabled);
> > +
> > #endif
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index 129178be5e64..1d853de1c499 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -33,6 +33,7 @@ static inline bool kasan_stack_collection_enabled(void)
> > #include "../slab.h"
> >
> > DECLARE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
> > +DECLARE_STATIC_KEY_FALSE(kasan_flag_stonly);
>
> kasan_flag_store_only
>
> Did you build test this at all?
Yes, but since there is no place that directly uses kasan_flag_stonly,
I think I missed it. Thanks!
>
>
> >
> > enum kasan_mode {
> > KASAN_MODE_SYNC,
> > @@ -428,6 +429,7 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> > #define hw_enable_tag_checks_sync() arch_enable_tag_checks_sync()
> > #define hw_enable_tag_checks_async() arch_enable_tag_checks_async()
> > #define hw_enable_tag_checks_asymm() arch_enable_tag_checks_asymm()
> > +#define hw_enable_tag_checks_store_only() arch_enable_tag_checks_store_only()
> > #define hw_suppress_tag_checks_start() arch_suppress_tag_checks_start()
> > #define hw_suppress_tag_checks_stop() arch_suppress_tag_checks_stop()
> > #define hw_force_async_tag_fault() arch_force_async_tag_fault()
> > @@ -437,10 +439,18 @@ static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
> > arch_set_mem_tag_range((addr), (size), (tag), (init))
> >
> > void kasan_enable_hw_tags(void);
> > +void kasan_enable_store_only(void);
> > +bool kasan_store_only_enabled(void);
> >
> > #else /* CONFIG_KASAN_HW_TAGS */
> >
> > static inline void kasan_enable_hw_tags(void) { }
> > +static inline void kasan_enable_store_only(void) { }
> > +
> > +static inline bool kasan_store_only_enabled(void)
> > +{
> > + return false;
> > +}
> >
> > #endif /* CONFIG_KASAN_HW_TAGS */
> >
> > --
> > LEVI:{C3F47F37-75D8-414A-A8BA-3980EC8A46D7}
> >
> > --
> > You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> > To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@googlegroups.com.
> > To view this discussion visit https://groups.google.com/d/msgid/kasan-dev/20250813175335.3980268-2-yeoreum.yun%40arm.com.
--
Sincerely,
Yeoreum Yun
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-14 5:04 ` Andrey Konovalov
@ 2025-08-14 11:13 ` Yeoreum Yun
2025-08-15 6:14 ` Andrey Konovalov
0 siblings, 1 reply; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-14 11:13 UTC (permalink / raw)
To: Andrey Konovalov
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
Hi Andrey,
> >
> > When KASAN is configured in store-only mode,
> > fetch/load operations do not trigger tag check faults.
> >
> > As a result, the outcome of some test cases may differ
> > compared to when KASAN is configured without store-only mode.
> >
> > Therefore, modify the pre-existing testcases to
> > check that only stores raise a tag check fault (TCF) when
> > a write is performed on "allocated memory" with an invalid tag
> > (e.g. a redzone write in the atomic_set() testcases).
> > Otherwise, check that an invalid fetch/read doesn't generate a TCF.
> >
> > Also, skip some testcases affected by the initial value:
> > e.g. the atomic_cmpxchg() testcase may succeed if
> > it passes a valid atomic_t address and an invalid oldval address.
> > In this case, if the invalid atomic_t doesn't hold the same oldval,
> > it won't trigger a store operation, so the test will pass.
> >
> > Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
> > ---
> > mm/kasan/kasan_test_c.c | 366 +++++++++++++++++++++++++++++++---------
> > 1 file changed, 286 insertions(+), 80 deletions(-)
> >
> > diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
> > index 2aa12dfa427a..e5d08a6ee3a2 100644
> > --- a/mm/kasan/kasan_test_c.c
> > +++ b/mm/kasan/kasan_test_c.c
> > @@ -94,11 +94,13 @@ static void kasan_test_exit(struct kunit *test)
> > }
> >
> > /**
> > - * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> > - * KASAN report; causes a KUnit test failure otherwise.
> > + * _KUNIT_EXPECT_KASAN_TEMPLATE - check that the executed expression produces
> > + * a KASAN report or not; a KUnit test failure when it's different from @produce.
> > *
> > * @test: Currently executing KUnit test.
> > - * @expression: Expression that must produce a KASAN report.
> > + * @expr: Expression produce a KASAN report or not.
> > + * @expr_str: Expression string
> > + * @produce: expression should produce a KASAN report.
> > *
> > * For hardware tag-based KASAN, when a synchronous tag fault happens, tag
> > * checking is auto-disabled. When this happens, this test handler reenables
> > @@ -110,25 +112,29 @@ static void kasan_test_exit(struct kunit *test)
> > * Use READ/WRITE_ONCE() for the accesses and compiler barriers around the
> > * expression to prevent that.
> > *
> > - * In between KUNIT_EXPECT_KASAN_FAIL checks, test_status.report_found is kept
> > + * In between _KUNIT_EXPECT_KASAN_TEMPLATE checks, test_status.report_found is kept
> > * as false. This allows detecting KASAN reports that happen outside of the
> > * checks by asserting !test_status.report_found at the start of
> > - * KUNIT_EXPECT_KASAN_FAIL and in kasan_test_exit.
> > + * _KUNIT_EXPECT_KASAN_TEMPLATE and in kasan_test_exit.
> > */
> > -#define KUNIT_EXPECT_KASAN_FAIL(test, expression) do { \
> > +#define _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, expr_str, produce) \
> > +do { \
> > if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> > kasan_sync_fault_possible()) \
> > migrate_disable(); \
> > KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \
> > barrier(); \
> > - expression; \
> > + expr; \
> > barrier(); \
> > if (kasan_async_fault_possible()) \
> > kasan_force_async_fault(); \
> > - if (!READ_ONCE(test_status.report_found)) { \
> > - KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN failure " \
> > - "expected in \"" #expression \
> > - "\", but none occurred"); \
> > + if (READ_ONCE(test_status.report_found) != produce) { \
> > + KUNIT_FAIL(test, KUNIT_SUBTEST_INDENT "KASAN %s " \
> > + "expected in \"" expr_str \
> > + "\", but %soccurred", \
> > + (produce ? "failure" : "success"), \
> > + (test_status.report_found ? \
> > + "" : "none ")); \
> > } \
> > if (IS_ENABLED(CONFIG_KASAN_HW_TAGS) && \
> > kasan_sync_fault_possible()) { \
> > @@ -141,6 +147,26 @@ static void kasan_test_exit(struct kunit *test)
> > WRITE_ONCE(test_status.async_fault, false); \
> > } while (0)
> >
> > +/*
> > + * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
> > + * KASAN report; causes a KUnit test failure otherwise.
> > + *
> > + * @test: Currently executing KUnit test.
> > + * @expr: Expression produce a KASAN report.
> > + */
> > +#define KUNIT_EXPECT_KASAN_FAIL(test, expr) \
> > + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, true)
> > +
> > +/*
> > + * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
> > + * produces a KASAN report; causes a KUnit test failure otherwise.
>
> Should be no need for this, the existing functionality already checks
> that there are no reports outside of KUNIT_EXPECT_KASAN_FAIL().
The purpose of this macro is to print both failure situations:
- KASAN should report, but no report is found.
- KASAN shouldn't report, but a report is found.
To print the second error, the "TEMPLATE" macro is added:
it doesn't just check that there is no report, but checks whether a
report was generated as expected.
>
> > + *
> > + * @test: Currently executing KUnit test.
> > + * @expr: Expression doesn't produce a KASAN report.
> > + */
> > +#define KUNIT_EXPECT_KASAN_SUCCESS(test, expr) \
> > + _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, false)
> > +
> > #define KASAN_TEST_NEEDS_CONFIG_ON(test, config) do { \
> > if (!IS_ENABLED(config)) \
> > kunit_skip((test), "Test requires " #config "=y"); \
> > @@ -183,8 +209,12 @@ static void kmalloc_oob_right(struct kunit *test)
> > KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + 5] = 'y');
> >
> > /* Out-of-bounds access past the aligned kmalloc object. */
> > - KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> > - ptr[size + KASAN_GRANULE_SIZE + 5]);
> > + if (kasan_store_only_enabled())
> > + KUNIT_EXPECT_KASAN_SUCCESS(test, ptr[0] =
> > + ptr[size + KASAN_GRANULE_SIZE + 5]);
> > + else
> > + KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] =
> > + ptr[size + KASAN_GRANULE_SIZE + 5]);
>
> Let's instead add KUNIT_EXPECT_KASAN_FAIL_READ() that only expects a
> KASAN report when the store-only mode is not enabled. And use that for
> the bad read accesses done in tests.
Okay. I'll rename KUNIT_EXPECT_KASAN_SUCCESS and integrate it
into the macro. Thanks!
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-14 11:13 ` Yeoreum Yun
@ 2025-08-15 6:14 ` Andrey Konovalov
2025-08-15 8:06 ` Yeoreum Yun
0 siblings, 1 reply; 16+ messages in thread
From: Andrey Konovalov @ 2025-08-15 6:14 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
On Thu, Aug 14, 2025 at 1:14 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
>
> > > +/*
> > > + * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
> > > + * produces a KASAN report; causes a KUnit test failure otherwise.
> >
> > Should be no need for this, the existing functionality already checks
> > that there are no reports outside of KUNIT_EXPECT_KASAN_FAIL().
>
> This is function's purpose is to print failure situtations:
> - KASAN should reports but no report is found.
> - KASAN shouldn't report but there report is found.
>
> To print the second error, the "TEMPLATE" macro is added.
> not just checking the no report but to check whether report was
> generated as expected.
There's no need for an explicit wrapper for detecting the second case.
If there's a KASAN report printed outside of
KUNIT_EXPECT_KASAN_FAIL(), either the next KUNIT_EXPECT_KASAN_FAIL()
or kasan_test_exit() will detect this.
* Re: [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases
2025-08-15 6:14 ` Andrey Konovalov
@ 2025-08-15 8:06 ` Yeoreum Yun
0 siblings, 0 replies; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-15 8:06 UTC (permalink / raw)
To: Andrey Konovalov
Cc: ryabinin.a.a, glider, dvyukov, vincenzo.frascino, corbet,
catalin.marinas, will, akpm, scott, jhubbard, pankaj.gupta,
leitao, kaleshsingh, maz, broonie, oliver.upton, james.morse,
ardb, hardevsinh.palaniya, david, yang, kasan-dev, workflows,
linux-doc, linux-kernel, linux-arm-kernel, linux-mm
Hi Andrey,
> > > > +/*
> > > > + * KUNIT_EXPECT_KASAN_SUCCESS - check that the executed expression doesn't
> > > > + * produces a KASAN report; causes a KUnit test failure otherwise.
> > >
> > > Should be no need for this, the existing functionality already checks
> > > that there are no reports outside of KUNIT_EXPECT_KASAN_FAIL().
> >
> > This is function's purpose is to print failure situtations:
> > - KASAN should reports but no report is found.
> > - KASAN shouldn't report but there report is found.
> >
> > To print the second error, the "TEMPLATE" macro is added.
> > not just checking the no report but to check whether report was
> > generated as expected.
>
> There's no need to an explicit wrapper for detecting the second case.
> If there's a KASAN report printed outside of
> KUNIT_EXPECT_KASAN_FAIL(), either the next KUNIT_EXPECT_KASAN_FAIL()
> or kasan_test_exit() will detect this.
Sorry for bothering you, but I'm not sure I've understood your
suggestion correctly. It sounds like an implementation along these lines:
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KUNIT_EXPECT_KASAN_FAIL_READ(test, expression) do { \
+ if (!kasan_store_only_enabled()) { \
+ KUNIT_EXPECT_KASAN_FAIL(test, expression); \
+ goto ___skip; \
+ } \
+ if (kasan_sync_fault_possible()) \
+ migrate_disable(); \
+ KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found)); \
+ barrier(); \
+ expression; \
+ barrier(); \
+ if (kasan_sync_fault_possible()) \
+ migrate_enable(); \
+___skip: \
+} while (0)
+#else
+#define KUNIT_EXPECT_KASAN_FAIL_READ(test, expression) \
+ KUNIT_EXPECT_KASAN_FAIL(test, expression)
+#endif
and you expect the error to be printed by the next
KUNIT_EXPECT_KASAN_FAIL()'s
KUNIT_EXPECT_FALSE(test, READ_ONCE(test_status.report_found));
or by kasan_test_exit().
This may work, but it wouldn't print the proper "expression", and it
reports the problem in a different place from where it happens
(at least the printed source line differs:
the failure happens in KUNIT_EXPECT_KASAN_FAIL_READ() but
is reported in KUNIT_EXPECT_FALSE()).
Also, for some test cases that use atomics,
kasan_store_only_enabled() still needs to be combined with
KUNIT_EXPECT_KASAN_FAIL(),
e.g. atomic_set() on an object allocated with size 42 (writing into
the redzone).
That's why I think it would be better to keep
_KUNIT_EXPECT_KASAN_TEMPLATE and build on it:
+/*
+ * KUNIT_EXPECT_KASAN_FAIL - check that the executed expression produces a
+ * KASAN report; causes a KUnit test failure otherwise.
+ *
+ * @test: Currently executing KUnit test.
+ * @expr: Expression that must produce a KASAN report.
+ */
+#define KUNIT_EXPECT_KASAN_FAIL(test, expr) \
+ _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, true)
+
+/*
+ * KUNIT_EXPECT_KASAN_FAIL_READ - check that the executed expression produces
+ * a KASAN report for a read access; causes a KUnit test failure if no
+ * report is produced.
+ * When store-only mode is enabled, it instead causes a KUnit test failure
+ * if a KASAN report is produced.
+ *
+ * @test: Currently executing KUnit test.
+ * @expr: Expression performing a read access.
+ */
+#define KUNIT_EXPECT_KASAN_FAIL_READ(test, expr) \
+ _KUNIT_EXPECT_KASAN_TEMPLATE(test, expr, #expr, \
+ !kasan_store_only_enabled()) \
Am I misunderstanding something?
Thanks.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-13 17:53 ` [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option Yeoreum Yun
2025-08-14 5:03 ` Andrey Konovalov
@ 2025-08-15 11:13 ` Catalin Marinas
2025-08-15 13:51 ` Yeoreum Yun
2025-08-15 14:47 ` Yeoreum Yun
1 sibling, 2 replies; 16+ messages in thread
From: Catalin Marinas @ 2025-08-15 11:13 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, will, akpm, scott, jhubbard, pankaj.gupta, leitao,
kaleshsingh, maz, broonie, oliver.upton, james.morse, ardb,
hardevsinh.palaniya, david, yang, kasan-dev, workflows, linux-doc,
linux-kernel, linux-arm-kernel, linux-mm
On Wed, Aug 13, 2025 at 06:53:34PM +0100, Yeoreum Yun wrote:
> diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> index 2e98028c1965..3e1cc341d47a 100644
> --- a/arch/arm64/include/asm/mte-kasan.h
> +++ b/arch/arm64/include/asm/mte-kasan.h
> @@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
> void mte_enable_kernel_sync(void);
> void mte_enable_kernel_async(void);
> void mte_enable_kernel_asymm(void);
> +int mte_enable_kernel_store_only(void);
>
> #else /* CONFIG_ARM64_MTE */
>
> @@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
> {
> }
>
> +static inline int mte_enable_kenrel_store_only(void)
^^^^^^
This won't build with MTE disabled (check spelling).
> +{
> + return -EINVAL;
> +}
> +
> #endif /* CONFIG_ARM64_MTE */
>
> #endif /* __ASSEMBLY__ */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9ad065f15f1d..7b724fcf20a7 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -2404,6 +2404,11 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>
> kasan_init_hw_tags_cpu();
> }
> +
> +static void cpu_enable_mte_store_only(struct arm64_cpu_capabilities const *cap)
> +{
> + kasan_late_init_hw_tags_cpu();
> +}
> #endif /* CONFIG_ARM64_MTE */
>
> static void user_feature_fixup(void)
> @@ -2922,6 +2927,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
> .capability = ARM64_MTE_STORE_ONLY,
> .type = ARM64_CPUCAP_SYSTEM_FEATURE,
> .matches = has_cpuid_feature,
> + .cpu_enable = cpu_enable_mte_store_only,
I don't think we should add this, see below.
> ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, MTESTOREONLY, IMP)
> },
> #endif /* CONFIG_ARM64_MTE */
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index e5e773844889..8eb1f66f2ccd 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -157,6 +157,20 @@ void mte_enable_kernel_asymm(void)
> mte_enable_kernel_sync();
> }
> }
> +
> +int mte_enable_kernel_store_only(void)
> +{
> + if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
> + return -EINVAL;
> +
> + sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
> + SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
> + isb();
> +
> + pr_info_once("MTE: enabled stonly mode at EL1\n");
> +
> + return 0;
> +}
> #endif
If we do something like mte_enable_kernel_asymm(), that one doesn't
return any error, just fall back to the default mode.
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 9a6927394b54..c2f90c06076e 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -219,6 +246,20 @@ void kasan_init_hw_tags_cpu(void)
> kasan_enable_hw_tags();
> }
>
> +/*
> + * kasan_late_init_hw_tags_cpu_post() is called for each CPU after
> + * all cpus are bring-up at boot.
Nit: s/bring-up/brought up/
> + * Not marked as __init as a CPU can be hot-plugged after boot.
> + */
> +void kasan_late_init_hw_tags_cpu(void)
> +{
> + /*
> + * Enable stonly mode only when explicitly requested through the command line.
> + * If system doesn't support, kasan checks all operation.
> + */
> + kasan_enable_store_only();
> +}
There's nothing late about this. We have kasan_init_hw_tags_cpu()
already and I'd rather have it all handled via this function. It's not
that different from how we added asymmetric support, though store-only
is complementary to the sync vs async checking.
Like we do in mte_enable_kernel_asymm(), if the feature is not available
just fall back to checking both reads and writes in the chosen
async/sync/asymm way. You can add some pr_info() to inform the user of
the chosen kasan mode. It's really mostly a performance choice.
--
Catalin
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-14 5:03 ` Andrey Konovalov
2025-08-14 8:51 ` Yeoreum Yun
@ 2025-08-15 11:19 ` Catalin Marinas
1 sibling, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2025-08-15 11:19 UTC (permalink / raw)
To: Andrey Konovalov
Cc: Yeoreum Yun, glider, Marco Elver, ryabinin.a.a, dvyukov,
vincenzo.frascino, corbet, will, akpm, scott, jhubbard,
pankaj.gupta, leitao, kaleshsingh, maz, broonie, oliver.upton,
james.morse, ardb, hardevsinh.palaniya, david, yang, kasan-dev,
workflows, linux-doc, linux-kernel, linux-arm-kernel, linux-mm
On Thu, Aug 14, 2025 at 07:03:35AM +0200, Andrey Konovalov wrote:
> On Wed, Aug 13, 2025 at 7:53 PM Yeoreum Yun <yeoreum.yun@arm.com> wrote:
> > Since Armv8.9, FEATURE_MTE_STORE_ONLY feature is introduced to restrict
> > raise of tag check fault on store operation only.
> > Introcude KASAN store only mode based on this feature.
> >
> > KASAN store only mode restricts KASAN checks operation for store only and
> > omits the checks for fetch/read operation when accessing memory.
> > So it might be used not only debugging enviroment but also normal
> > enviroment to check memory safty.
> >
> > This features can be controlled with "kasan.store_only" arguments.
> > When "kasan.store_only=on", KASAN checks store only mode otherwise
> > KASAN checks all operations.
>
> I'm thinking if we should name this "kasan.write_only" instead of
> "kasan.store_only". This would align the terms with the
> "kasan.fault=panic_on_write" parameter we already have. But then it
> would be different from "FEATURE_MTE_STORE_ONLY", which is what Arm
> documentation uses (right?).
"write_only" works for me, kasan is meant to be generic even though it
currently closely follows the arm nomenclature.
--
Catalin
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-15 11:13 ` Catalin Marinas
@ 2025-08-15 13:51 ` Yeoreum Yun
2025-08-15 15:10 ` Yeoreum Yun
2025-08-15 14:47 ` Yeoreum Yun
1 sibling, 1 reply; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-15 13:51 UTC (permalink / raw)
To: Catalin Marinas
Cc: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, will, akpm, scott, jhubbard, pankaj.gupta, leitao,
kaleshsingh, maz, broonie, oliver.upton, james.morse, ardb,
hardevsinh.palaniya, david, yang, kasan-dev, workflows, linux-doc,
linux-kernel, linux-arm-kernel, linux-mm
Hi Catalin,
> > diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
> > index 2e98028c1965..3e1cc341d47a 100644
> > --- a/arch/arm64/include/asm/mte-kasan.h
> > +++ b/arch/arm64/include/asm/mte-kasan.h
> > @@ -200,6 +200,7 @@ static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
> > void mte_enable_kernel_sync(void);
> > void mte_enable_kernel_async(void);
> > void mte_enable_kernel_asymm(void);
> > +int mte_enable_kernel_store_only(void);
> >
> > #else /* CONFIG_ARM64_MTE */
> >
> > @@ -251,6 +252,11 @@ static inline void mte_enable_kernel_asymm(void)
> > {
> > }
> >
> > +static inline int mte_enable_kenrel_store_only(void)
> ^^^^^^
> This won't build with MTE disabled (check spelling).
Yes, this is my mistake. I'll fix it.
[...]
> > +int mte_enable_kernel_store_only(void)
> > +{
> > + if (!cpus_have_cap(ARM64_MTE_STORE_ONLY))
> > + return -EINVAL;
> > +
> > + sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCSO_MASK,
> > + SYS_FIELD_PREP(SCTLR_EL1, TCSO, 1));
> > + isb();
> > +
> > + pr_info_once("MTE: enabled stonly mode at EL1\n");
> > +
> > + return 0;
> > +}
> > #endif
>
> If we do something like mte_enable_kernel_asymm(), that one doesn't
> return any error, just fall back to the default mode.
Yes. I'll change this.
>
> > diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> > index 9a6927394b54..c2f90c06076e 100644
> > --- a/mm/kasan/hw_tags.c
> > +++ b/mm/kasan/hw_tags.c
> > @@ -219,6 +246,20 @@ void kasan_init_hw_tags_cpu(void)
> > kasan_enable_hw_tags();
> > }
> >
> > +/*
> > + * kasan_late_init_hw_tags_cpu_post() is called for each CPU after
> > + * all cpus are bring-up at boot.
>
> Nit: s/bring-up/brought up/
Thanks. I'll fix it.
>
> > + * Not marked as __init as a CPU can be hot-plugged after boot.
> > + */
> > +void kasan_late_init_hw_tags_cpu(void)
> > +{
> > + /*
> > + * Enable stonly mode only when explicitly requested through the command line.
> > + * If system doesn't support, kasan checks all operation.
> > + */
> > + kasan_enable_store_only();
> > +}
>
> There's nothing late about this. We have kasan_init_hw_tags_cpu()
> already and I'd rather have it all handled via this function. It's not
> that different from how we added asymmetric support, though store-only
> is complementary to the sync vs async checking.
>
> Like we do in mte_enable_kernel_asymm(), if the feature is not available
> just fall back to checking both reads and writes in the chosen
> async/sync/asymm way. You can add some pr_info() to inform the user of
> the chosen kasan mode. It's really mostly an performance choice.
But MTE_STORE_ONLY is defined as a SYSTEM_FEATURE.
This means that at the time kasan_init_hw_tags_cpu() is called,
the MTE_STORE_ONLY capability has not yet been set in the system
capabilities, so it cannot be checked using cpus_have_cap().
Although the MTE_STORE_ONLY capability could be verified by directly
reading the ID register (which seems ugly),
my concern is the potential for an inconsistent state across CPUs.
For example, in the case of ASYMM, which is a BOOT_CPU_FEATURE,
all CPUs operate in the same mode:
if ASYMM is not supported, all CPUs run in synchronous mode;
otherwise, all CPUs run in asymmetric mode.
However, for MTE_STORE_ONLY, CPUs that support the feature would run
in store-only mode,
while those that do not would run with full checking for all operations.
If we want to enable MTE_STORE_ONLY in kasan_init_hw_tags_cpu(),
I believe it should be reclassified as a BOOT_CPU_FEATURE.
Otherwise, the cpu_enable_mte_store_only() function should still be called
as the enable callback for the MTE_STORE_ONLY feature.
In that case, kasan_enable_store_only() should be invoked (remove late init),
and if it returns an error, stop_machine() should be called to disable
the STORE_ONLY feature on all other CPUs
if any CPU is found to lack support for MTE_STORE_ONLY.
Am I missing something?
Thanks
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-15 11:13 ` Catalin Marinas
2025-08-15 13:51 ` Yeoreum Yun
@ 2025-08-15 14:47 ` Yeoreum Yun
2025-08-15 17:46 ` Catalin Marinas
1 sibling, 1 reply; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-15 14:47 UTC (permalink / raw)
To: Catalin Marinas
Cc: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, will, akpm, scott, jhubbard, pankaj.gupta, leitao,
kaleshsingh, maz, broonie, oliver.upton, james.morse, ardb,
hardevsinh.palaniya, david, yang, kasan-dev, workflows, linux-doc,
linux-kernel, linux-arm-kernel, linux-mm
[...]
> If we do something like mte_enable_kernel_asymm(), that one doesn't
> return any error, just fall back to the default mode.
But unlike mte_enable_kernel_asymm(), this function needs a return
value so that kasan_flag_write_only can be set back to false when the
hardware doesn't support the feature; that flag is used by the KASAN
KUnit tests.
If we don't return anything and the user sets write_only on hardware
that doesn't support it, the KUnit tests fail, since
kasan_write_only_enabled() returns true even though the hardware
doesn't support it.
[...]
Thanks.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-15 13:51 ` Yeoreum Yun
@ 2025-08-15 15:10 ` Yeoreum Yun
2025-08-15 17:44 ` Catalin Marinas
0 siblings, 1 reply; 16+ messages in thread
From: Yeoreum Yun @ 2025-08-15 15:10 UTC (permalink / raw)
To: Catalin Marinas
Cc: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, will, akpm, scott, jhubbard, pankaj.gupta, leitao,
kaleshsingh, maz, broonie, oliver.upton, james.morse, ardb,
hardevsinh.palaniya, david, yang, kasan-dev, workflows, linux-doc,
linux-kernel, linux-arm-kernel, linux-mm
[...]
> >
> > > + * Not marked as __init as a CPU can be hot-plugged after boot.
> > > + */
> > > +void kasan_late_init_hw_tags_cpu(void)
> > > +{
> > > + /*
> > > + * Enable stonly mode only when explicitly requested through the command line.
> > > + * If system doesn't support, kasan checks all operation.
> > > + */
> > > + kasan_enable_store_only();
> > > +}
> >
> > There's nothing late about this. We have kasan_init_hw_tags_cpu()
> > already and I'd rather have it all handled via this function. It's not
> > that different from how we added asymmetric support, though store-only
> > is complementary to the sync vs async checking.
> >
> > Like we do in mte_enable_kernel_asymm(), if the feature is not available
> > just fall back to checking both reads and writes in the chosen
> > async/sync/asymm way. You can add some pr_info() to inform the user of
> > the chosen kasan mode. It's really mostly an performance choice.
>
> But MTE_STORE_ONLY is defined as a SYSTEM_FEATURE.
> This means that when it is called from kasan_init_hw_tags_cpu(),
> the store_only mode is never set in system_capability,
> so it cannot be checked using cpus_have_cap().
>
> Although the MTE_STORE_ONLY capability is verified by
> directly reading the ID register (seems ugly),
> my concern is the potential for an inconsistent state across CPUs.
>
> For example, in the case of ASYMM, which is a BOOT_CPU_FEATURE,
> all CPUs operate in the same mode —
> if ASYMM is not supported, either
> all CPUs run in synchronous mode, or all run in asymmetric mode.
>
> However, for MTE_STORE_ONLY, CPUs that support the feature will run in store-only mode,
> while those that do not will run with full checking for all operations.
>
> If we want to enable MTE_STORE_ONLY in kasan_init_hw_tags_cpu(),
> I believe it should be reclassified as a BOOT_CPU_FEATURE.x
> Otherwise, the cpu_enable_mte_store_only() function should still be called
> as the enable callback for the MTE_STORE_ONLY feature.
> In that case, kasan_enable_store_only() should be invoked (remove late init),
> and if it returns an error, stop_machine() should be called to disable
> the STORE_ONLY feature on all other CPUs
> if any CPU is found to lack support for MTE_STORE_ONLY.
>
> Am I missing something?
So, IMHO, like the ASYMM feature, it would be good to change
MTE_STORE_ONLY to a BOOT_CPU_FEATURE.
That would make everything easier and clearer.
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-15 15:10 ` Yeoreum Yun
@ 2025-08-15 17:44 ` Catalin Marinas
0 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2025-08-15 17:44 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, will, akpm, scott, jhubbard, pankaj.gupta, leitao,
kaleshsingh, maz, broonie, oliver.upton, james.morse, ardb,
hardevsinh.palaniya, david, yang, kasan-dev, workflows, linux-doc,
linux-kernel, linux-arm-kernel, linux-mm
On Fri, Aug 15, 2025 at 04:10:59PM +0100, Yeoreum Yun wrote:
> > > Like we do in mte_enable_kernel_asymm(), if the feature is not available
> > > just fall back to checking both reads and writes in the chosen
> > > async/sync/asymm way. You can add some pr_info() to inform the user of
> > > the chosen kasan mode. It's really mostly an performance choice.
> >
> > But MTE_STORE_ONLY is defined as a SYSTEM_FEATURE.
> > This means that when it is called from kasan_init_hw_tags_cpu(),
> > the store_only mode is never set in system_capability,
> > so it cannot be checked using cpus_have_cap().
> >
> > Although the MTE_STORE_ONLY capability is verified by
> > directly reading the ID register (seems ugly),
> > my concern is the potential for an inconsistent state across CPUs.
> >
> > For example, in the case of ASYMM, which is a BOOT_CPU_FEATURE,
> > all CPUs operate in the same mode —
> > if ASYMM is not supported, either
> > all CPUs run in synchronous mode, or all run in asymmetric mode.
> >
> > However, for MTE_STORE_ONLY, CPUs that support the feature will run in store-only mode,
> > while those that do not will run with full checking for all operations.
> >
> > If we want to enable MTE_STORE_ONLY in kasan_init_hw_tags_cpu(),
> > I believe it should be reclassified as a BOOT_CPU_FEATURE.x
> > Otherwise, the cpu_enable_mte_store_only() function should still be called
> > as the enable callback for the MTE_STORE_ONLY feature.
> > In that case, kasan_enable_store_only() should be invoked (remove late init),
> > and if it returns an error, stop_machine() should be called to disable
> > the STORE_ONLY feature on all other CPUs
> > if any CPU is found to lack support for MTE_STORE_ONLY.
> >
> > Am I missing something?
Good point.
> So, IMHO like the ASYMM feature, it would be good to change
> MTE_STORE_ONLY as BOOT_CPU_FEATURE.
> That would makes everything as easiler and clear.
Yeah, let's do this. If people mix different features, we'll revisit at
that time. The asymmetric tag checking is also a boot CPU feature.
--
Catalin
* Re: [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option
2025-08-15 14:47 ` Yeoreum Yun
@ 2025-08-15 17:46 ` Catalin Marinas
0 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2025-08-15 17:46 UTC (permalink / raw)
To: Yeoreum Yun
Cc: ryabinin.a.a, glider, andreyknvl, dvyukov, vincenzo.frascino,
corbet, will, akpm, scott, jhubbard, pankaj.gupta, leitao,
kaleshsingh, maz, broonie, oliver.upton, james.morse, ardb,
hardevsinh.palaniya, david, yang, kasan-dev, workflows, linux-doc,
linux-kernel, linux-arm-kernel, linux-mm
On Fri, Aug 15, 2025 at 03:47:17PM +0100, Yeoreum Yun wrote:
> > If we do something like mte_enable_kernel_asymm(), that one doesn't
> > return any error, just fall back to the default mode.
>
> But, in case of mte_enable_kernel_asymm() need return to
> change kasan_flag_write_only = false when it doesn't support which
> used in KASAN Kunit test.
>
> If we don't return anything, when user set the write_only but HW doesn't
> support it, KUNIT test get failure since kasan_write_only_enabled()
> return true thou HW doesn't support it.
Ah, ok, if we need this for the kunit test. I haven't checked the last
patch.
--
Catalin
end of thread, other threads:[~2025-08-15 17:46 UTC | newest]
Thread overview: 16+ messages
2025-08-13 17:53 [PATCH v2 0/2] introduce kasan.store_only option in hw-tags Yeoreum Yun
2025-08-13 17:53 ` [PATCH v2 1/2] kasan/hw-tags: introduce kasan.store_only option Yeoreum Yun
2025-08-14 5:03 ` Andrey Konovalov
2025-08-14 8:51 ` Yeoreum Yun
2025-08-15 11:19 ` Catalin Marinas
2025-08-15 11:13 ` Catalin Marinas
2025-08-15 13:51 ` Yeoreum Yun
2025-08-15 15:10 ` Yeoreum Yun
2025-08-15 17:44 ` Catalin Marinas
2025-08-15 14:47 ` Yeoreum Yun
2025-08-15 17:46 ` Catalin Marinas
2025-08-13 17:53 ` [PATCH v2 2/2] kasan: apply store-only mode in kasan kunit testcases Yeoreum Yun
2025-08-14 5:04 ` Andrey Konovalov
2025-08-14 11:13 ` Yeoreum Yun
2025-08-15 6:14 ` Andrey Konovalov
2025-08-15 8:06 ` Yeoreum Yun