* [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables
@ 2026-03-19 11:49 Muhammad Usama Anjum
2026-03-19 11:49 ` [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support Muhammad Usama Anjum
` (4 more replies)
0 siblings, 5 replies; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-19 11:49 UTC (permalink / raw)
To: Arnd Bergmann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Kees Cook, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Uladzislau Rezki, linux-arch, linux-kernel, linux-mm,
Andrey Konovalov, Marco Elver, Vincenzo Frascino,
Peter Collingbourne, Catalin Marinas, Will Deacon, Ryan.Roberts,
david.hildenbrand
Cc: Muhammad Usama Anjum
Stacks and page tables are always accessed through the match-all tag,
so assigning a new random tag at every allocation and setting an
invalid tag at deallocation just adds overhead without improving
detection.
With __GFP_SKIP_KASAN, the page keeps its poison tag in hardware while
KASAN_TAG_KERNEL (the match-all tag) is stored in the page flags. The
benefit is that the 256 tag-setting instructions per 4 kB page are not
needed at allocation or deallocation time.
Thus match-all pointers still work, while accesses with non-matching
tags (other than the poison tag) still fault.
__GFP_SKIP_KASAN only takes effect in KASAN_HW_TAGS mode, so coverage
in the other modes is unchanged.
Benchmark:
The benchmark has two modes. In thread mode, a child process is forked
and creates N threads. In pgtable mode, the parent maps and faults a
specified memory size and then forks repeatedly, with children exiting
immediately.
Thread benchmark:
2000 iterations, 2000 threads: 2.575 s → 2.229 s (~13.4% faster)
Pgtable benchmark:
- 2048 MB, 2000 iters 19.08 s → 17.62 s (~7.6% faster)
Muhammad Usama Anjum (3):
vmalloc: add __GFP_SKIP_KASAN support
fork: skip MTE tagging for kernel stacks
mm: SKIP KASAN for page table allocations
include/asm-generic/pgalloc.h | 2 +-
kernel/fork.c | 8 +++++---
mm/vmalloc.c | 8 ++++++--
3 files changed, 12 insertions(+), 6 deletions(-)
--
2.47.3
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support
2026-03-19 11:49 [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Muhammad Usama Anjum
@ 2026-03-19 11:49 ` Muhammad Usama Anjum
2026-03-19 12:22 ` Ryan Roberts
2026-03-19 11:49 ` [PATCH 2/3] fork: skip MTE tagging for kernel stacks Muhammad Usama Anjum
` (3 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-19 11:49 UTC (permalink / raw)
To: Arnd Bergmann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Kees Cook, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Uladzislau Rezki, linux-arch, linux-kernel, linux-mm,
Andrey Konovalov, Marco Elver, Vincenzo Frascino,
Peter Collingbourne, Catalin Marinas, Will Deacon, Ryan.Roberts,
david.hildenbrand
Cc: Muhammad Usama Anjum
For allocations that will be accessed only through match-all pointers
(e.g., kernel stacks), setting tags is wasted work. If the caller
passed __GFP_SKIP_KASAN, don't skip zeroing the pages and don't set
KASAN_VMALLOC_PROT_NORMAL, so that kasan_unpoison_vmalloc() returns
early without setting tags.
Before this patch, __GFP_SKIP_KASAN wasn't used with the vmalloc APIs,
so it was never checked there. Now it is checked and acted upon. Other
KASAN modes are unchanged because __GFP_SKIP_KASAN is defined as zero
for them.
This is a preparatory patch for optimizing kernel stack allocations.
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
mm/vmalloc.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c607307c657a6..1baa602a0b9bb 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4041,7 +4041,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
* kasan_unpoison_vmalloc().
*/
if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
- if (kasan_hw_tags_enabled()) {
+ bool skip_kasan = kasan_hw_tags_enabled() &&
+ (gfp_mask & __GFP_SKIP_KASAN);
+
+ if (kasan_hw_tags_enabled() && !skip_kasan) {
/*
* Modify protection bits to allow tagging.
* This must be done before mapping.
@@ -4057,7 +4060,8 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
}
/* Take note that the mapping is PAGE_KERNEL. */
- kasan_flags |= KASAN_VMALLOC_PROT_NORMAL;
+ if (!skip_kasan)
+ kasan_flags |= KASAN_VMALLOC_PROT_NORMAL;
}
/* Allocate physical pages and map them into vmalloc space. */
--
2.47.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 2/3] fork: skip MTE tagging for kernel stacks
2026-03-19 11:49 [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Muhammad Usama Anjum
2026-03-19 11:49 ` [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support Muhammad Usama Anjum
@ 2026-03-19 11:49 ` Muhammad Usama Anjum
2026-03-19 12:09 ` Ryan Roberts
2026-03-19 11:49 ` [PATCH 3/3] mm: SKIP KASAN for page table allocations Muhammad Usama Anjum
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-19 11:49 UTC (permalink / raw)
To: Arnd Bergmann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Kees Cook, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Uladzislau Rezki, linux-arch, linux-kernel, linux-mm,
Andrey Konovalov, Marco Elver, Vincenzo Frascino,
Peter Collingbourne, Catalin Marinas, Will Deacon, Ryan.Roberts,
david.hildenbrand
Cc: Muhammad Usama Anjum
The stack pointer always uses the match-all tag, so MTE never checks
tags on stack accesses. Tagging stack memory on every thread creation
is pure overhead.
- Pass __GFP_SKIP_KASAN in gfp_mask for vmalloc-backed stacks so the
vmalloc path skips HW tag setup (see previous patch).
- For the cached VMAP reuse path, skip kasan_unpoison_range() when HW
tags are enabled since the memory will only be accessed through the
match-all tagged SP.
- For the normal page allocator path, pass __GFP_SKIP_KASAN directly
to the page allocator.
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
kernel/fork.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/fork.c b/kernel/fork.c
index bb0c2613a5604..2baf4db39b5a4 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -345,7 +345,8 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
}
/* Reset stack metadata. */
- kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
+ if (!kasan_hw_tags_enabled())
+ kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
stack = kasan_reset_tag(vm_area->addr);
@@ -358,7 +359,7 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
}
stack = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN,
- GFP_VMAP_STACK,
+ GFP_VMAP_STACK | __GFP_SKIP_KASAN,
node, __builtin_return_address(0));
if (!stack)
return -ENOMEM;
@@ -410,7 +411,8 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
static int alloc_thread_stack_node(struct task_struct *tsk, int node)
{
- struct page *page = alloc_pages_node(node, THREADINFO_GFP,
+ struct page *page = alloc_pages_node(node,
+ THREADINFO_GFP | __GFP_SKIP_KASAN,
THREAD_SIZE_ORDER);
if (likely(page)) {
--
2.47.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 3/3] mm: SKIP KASAN for page table allocations
2026-03-19 11:49 [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Muhammad Usama Anjum
2026-03-19 11:49 ` [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support Muhammad Usama Anjum
2026-03-19 11:49 ` [PATCH 2/3] fork: skip MTE tagging for kernel stacks Muhammad Usama Anjum
@ 2026-03-19 11:49 ` Muhammad Usama Anjum
2026-03-19 12:09 ` Ryan Roberts
2026-03-20 3:10 ` [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Andrew Morton
2026-03-20 8:53 ` David Hildenbrand (Arm)
4 siblings, 1 reply; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-19 11:49 UTC (permalink / raw)
To: Arnd Bergmann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Kees Cook, Andrew Morton,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Uladzislau Rezki, linux-arch, linux-kernel, linux-mm,
Andrey Konovalov, Marco Elver, Vincenzo Frascino,
Peter Collingbourne, Catalin Marinas, Will Deacon, Ryan.Roberts,
david.hildenbrand
Cc: Muhammad Usama Anjum
Page tables are always accessed via __va(phys) / phys_to_virt(phys).
With a match-all tag in the pointer, MTE never checks memory tags on
access. Therefore, KASAN HW tags are set during page table allocation
but never checked during use, and KASAN poisoning on free likewise
provides no value for these pages. It's pure overhead at both
allocation and free time. Hence, skip tag setting for all page tables.
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
include/asm-generic/pgalloc.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 57137d3ac1592..051aa1331051c 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -4,7 +4,7 @@
#ifdef CONFIG_MMU
-#define GFP_PGTABLE_KERNEL (GFP_KERNEL | __GFP_ZERO)
+#define GFP_PGTABLE_KERNEL (GFP_KERNEL | __GFP_ZERO | __GFP_SKIP_KASAN)
#define GFP_PGTABLE_USER (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)
/**
--
2.47.3
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 2/3] fork: skip MTE tagging for kernel stacks
2026-03-19 11:49 ` [PATCH 2/3] fork: skip MTE tagging for kernel stacks Muhammad Usama Anjum
@ 2026-03-19 12:09 ` Ryan Roberts
2026-03-19 12:29 ` Muhammad Usama Anjum
0 siblings, 1 reply; 14+ messages in thread
From: Ryan Roberts @ 2026-03-19 12:09 UTC (permalink / raw)
To: Muhammad Usama Anjum, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki, linux-arch,
linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, david.hildenbrand
On 19/03/2026 11:49, Muhammad Usama Anjum wrote:
> The stack pointer always uses the match-all tag, so MTE never checks
> tags on stack accesses. Tagging stack memory on every thread creation
> is pure overhead.
>
> - Pass __GFP_SKIP_KASAN in gfp_mask for vmalloc-backed stacks so the
> vmalloc path skips HW tag setup (see previous patch).
> - For the cached VMAP reuse path, skip kasan_unpoison_range() when HW
> tags are enabled since the memory will only be accessed through the
> match-all tagged SP.
> - For the normal page allocator path, pass __GFP_SKIP_KASAN directly
> to the page allocator.
>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
> ---
> kernel/fork.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index bb0c2613a5604..2baf4db39b5a4 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -345,7 +345,8 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
> }
>
> /* Reset stack metadata. */
> - kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
> + if (!kasan_hw_tags_enabled())
> + kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
>
> stack = kasan_reset_tag(vm_area->addr);
>
> @@ -358,7 +359,7 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
> }
>
> stack = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN,
> - GFP_VMAP_STACK,
> + GFP_VMAP_STACK | __GFP_SKIP_KASAN,
Perhaps cleaner to include __GFP_SKIP_KASAN in GFP_VMAP_STACK?
> node, __builtin_return_address(0));
> if (!stack)
> return -ENOMEM;
> @@ -410,7 +411,8 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
>
> static int alloc_thread_stack_node(struct task_struct *tsk, int node)
> {
> - struct page *page = alloc_pages_node(node, THREADINFO_GFP,
> + struct page *page = alloc_pages_node(node,
> + THREADINFO_GFP | __GFP_SKIP_KASAN,
I think there are some other places that could benefit from __GFP_SKIP_KASAN;
see arm64's arch_alloc_vmap_stack(), which allocates stacks for efi, irq and
sdei. I think these are allocated at boot, so not really performance sensitive,
but we might as well be consistent?
You've also missed the alloc_thread_stack_node() implementation for !VMAP when
PAGE_SIZE > THREAD_SIZE.
All of these sites use THREADINFO_GFP so perhaps it is better to just define
THREADINFO_GFP to include __GFP_SKIP_KASAN ?
Thanks,
Ryan
> THREAD_SIZE_ORDER);
>
> if (likely(page)) {
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 3/3] mm: SKIP KASAN for page table allocations
2026-03-19 11:49 ` [PATCH 3/3] mm: SKIP KASAN for page table allocations Muhammad Usama Anjum
@ 2026-03-19 12:09 ` Ryan Roberts
0 siblings, 0 replies; 14+ messages in thread
From: Ryan Roberts @ 2026-03-19 12:09 UTC (permalink / raw)
To: Muhammad Usama Anjum, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki, linux-arch,
linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, david.hildenbrand
On 19/03/2026 11:49, Muhammad Usama Anjum wrote:
> Page tables are always accessed via __va(phys) / phys_to_virt(phys).
> With a match-all tag in the pointer, MTE never checks memory tags on
> access. Therefore, KASAN HW tags are set during page table allocation
> but never checked during use, and KASAN poisoning on free likewise
> provides no value for these pages. It's pure overhead at both
> allocation and free time. Hence, skip tag setting for all page tables.
>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> include/asm-generic/pgalloc.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
> index 57137d3ac1592..051aa1331051c 100644
> --- a/include/asm-generic/pgalloc.h
> +++ b/include/asm-generic/pgalloc.h
> @@ -4,7 +4,7 @@
>
> #ifdef CONFIG_MMU
>
> -#define GFP_PGTABLE_KERNEL (GFP_KERNEL | __GFP_ZERO)
> +#define GFP_PGTABLE_KERNEL (GFP_KERNEL | __GFP_ZERO | __GFP_SKIP_KASAN)
> #define GFP_PGTABLE_USER (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)
>
> /**
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support
2026-03-19 11:49 ` [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support Muhammad Usama Anjum
@ 2026-03-19 12:22 ` Ryan Roberts
2026-03-19 12:57 ` Muhammad Usama Anjum
0 siblings, 1 reply; 14+ messages in thread
From: Ryan Roberts @ 2026-03-19 12:22 UTC (permalink / raw)
To: Muhammad Usama Anjum, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki, linux-arch,
linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, david.hildenbrand
On 19/03/2026 11:49, Muhammad Usama Anjum wrote:
> For allocations that will be accessed only through match-all pointers
> (e.g., kernel stacks), setting tags is wasted work. If the caller
> passed __GFP_SKIP_KASAN, don't skip zeroing the pages and don't set
> KASAN_VMALLOC_PROT_NORMAL, so that kasan_unpoison_vmalloc() returns
> early without setting tags.
>
> Before this patch, __GFP_SKIP_KASAN wasn't used with the vmalloc APIs,
> so it was never checked there. Now it is checked and acted upon. Other
> KASAN modes are unchanged because __GFP_SKIP_KASAN is defined as zero
> for them.
>
> This is a preparatory patch for optimizing kernel stack allocations.
>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
> ---
> mm/vmalloc.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index c607307c657a6..1baa602a0b9bb 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4041,7 +4041,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
> * kasan_unpoison_vmalloc().
> */
> if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
> - if (kasan_hw_tags_enabled()) {
> + bool skip_kasan = kasan_hw_tags_enabled() &&
> + (gfp_mask & __GFP_SKIP_KASAN);
> +
> + if (kasan_hw_tags_enabled() && !skip_kasan) {
It's unfortunate that kasan_hw_tags_enabled() is involved twice in this expression.
> /*
> * Modify protection bits to allow tagging.
> * This must be done before mapping.
> @@ -4057,7 +4060,8 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
> }
>
> /* Take note that the mapping is PAGE_KERNEL. */
> - kasan_flags |= KASAN_VMALLOC_PROT_NORMAL;
> + if (!skip_kasan)
> + kasan_flags |= KASAN_VMALLOC_PROT_NORMAL;
I wonder if it would be clearer to just not call kasan_unpoison_vmalloc() below
if the user passed in __GFP_SKIP_KASAN? It's really just an implementation
detail that kasan_unpoison_vmalloc() skips unpoisoning if
KASAN_VMALLOC_PROT_NORMAL is not provided.
Thanks,
Ryan
> }
>
> /* Allocate physical pages and map them into vmalloc space. */
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/3] fork: skip MTE tagging for kernel stacks
2026-03-19 12:09 ` Ryan Roberts
@ 2026-03-19 12:29 ` Muhammad Usama Anjum
0 siblings, 0 replies; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-19 12:29 UTC (permalink / raw)
To: Ryan Roberts
Cc: usama.anjum, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki, linux-arch,
linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, david.hildenbrand
Hi Ryan,
Thank you for the review.
On 19/03/2026 12:09 pm, Ryan Roberts wrote:
> On 19/03/2026 11:49, Muhammad Usama Anjum wrote:
>> The stack pointer always uses the match-all tag, so MTE never checks
>> tags on stack accesses. Tagging stack memory on every thread creation
>> is pure overhead.
>>
>> - Pass __GFP_SKIP_KASAN in gfp_mask for vmalloc-backed stacks so the
>> vmalloc path skips HW tag setup (see previous patch).
>> - For the cached VMAP reuse path, skip kasan_unpoison_range() when HW
>> tags are enabled since the memory will only be accessed through the
>> match-all tagged SP.
>> - For the normal page allocator path, pass __GFP_SKIP_KASAN directly
>> to the page allocator.
>>
>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
>> ---
>> kernel/fork.c | 8 +++++---
>> 1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/fork.c b/kernel/fork.c
>> index bb0c2613a5604..2baf4db39b5a4 100644
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -345,7 +345,8 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
>> }
>>
>> /* Reset stack metadata. */
>> - kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
>> + if (!kasan_hw_tags_enabled())
>> + kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
>>
>> stack = kasan_reset_tag(vm_area->addr);
>>
>> @@ -358,7 +359,7 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
>> }
>>
>> stack = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN,
>> - GFP_VMAP_STACK,
>> + GFP_VMAP_STACK | __GFP_SKIP_KASAN,
>
> Perhaps cleaner to include __GFP_SKIP_KASAN in GFP_VMAP_STACK ?
Yes, it would be much better and the correct way. I'll add it in the next version.
>
>> node, __builtin_return_address(0));
>> if (!stack)
>> return -ENOMEM;
>> @@ -410,7 +411,8 @@ static void thread_stack_delayed_free(struct task_struct *tsk)
>>
>> static int alloc_thread_stack_node(struct task_struct *tsk, int node)
>> {
>> - struct page *page = alloc_pages_node(node, THREADINFO_GFP,
>> + struct page *page = alloc_pages_node(node,
>> + THREADINFO_GFP | __GFP_SKIP_KASAN,
>
> I think there are some other places that could benefit from __GFP_SKIP_KASAN;
> see arm64's arch_alloc_vmap_stack(), which allocates stacks for efi, irq and
> sdei. I think these are allocated at boot, so not really performance sensitive,
> but we might as well be consistent?
>
> You've also missed the alloc_thread_stack_node() implementation for !VMAP when
> PAGE_SIZE > STACK_SIZE.
>
> All of these sites use THREADINFO_GFP so perhaps it is better to just define
> THREADINFO_GFP to include __GFP_SKIP_KASAN ?
Yes, that will be the straightforward and clean approach. I'll update.
>
> Thanks,
> Ryan
>
>
>> THREAD_SIZE_ORDER);
>>
>> if (likely(page)) {
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support
2026-03-19 12:22 ` Ryan Roberts
@ 2026-03-19 12:57 ` Muhammad Usama Anjum
0 siblings, 0 replies; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-19 12:57 UTC (permalink / raw)
To: Ryan Roberts, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki, linux-arch,
linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, david.hildenbrand
Cc: usama.anjum
On 19/03/2026 12:22 pm, Ryan Roberts wrote:
> On 19/03/2026 11:49, Muhammad Usama Anjum wrote:
>> For allocations that will be accessed only through match-all pointers
>> (e.g., kernel stacks), setting tags is wasted work. If the caller
>> passed __GFP_SKIP_KASAN, don't skip zeroing the pages and don't set
>> KASAN_VMALLOC_PROT_NORMAL, so that kasan_unpoison_vmalloc() returns
>> early without setting tags.
>>
>> Before this patch, __GFP_SKIP_KASAN wasn't used with the vmalloc APIs,
>> so it was never checked there. Now it is checked and acted upon. Other
>> KASAN modes are unchanged because __GFP_SKIP_KASAN is defined as zero
>> for them.
>>
>> This is a preparatory patch for optimizing kernel stack allocations.
>>
>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
>> ---
>> mm/vmalloc.c | 8 ++++++--
>> 1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index c607307c657a6..1baa602a0b9bb 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -4041,7 +4041,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>> * kasan_unpoison_vmalloc().
>> */
>> if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
>> - if (kasan_hw_tags_enabled()) {
>> + bool skip_kasan = kasan_hw_tags_enabled() &&
>> + (gfp_mask & __GFP_SKIP_KASAN);
>> +
>> + if (kasan_hw_tags_enabled() && !skip_kasan) {
>
> It's unfortunate that kasan_hw_tags_enabled() is involved twice in this expression.
I've looked at this again and simplified it based on the fact that
__GFP_SKIP_KASAN is zero in all modes other than hw-tags.
>
>> /*
>> * Modify protection bits to allow tagging.
>> * This must be done before mapping.
>> @@ -4057,7 +4060,8 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>> }
>>
>> /* Take note that the mapping is PAGE_KERNEL. */
>> - kasan_flags |= KASAN_VMALLOC_PROT_NORMAL;
>> + if (!skip_kasan)
>> + kasan_flags |= KASAN_VMALLOC_PROT_NORMAL;
>
> I wonder if it would be clearer to just not call kasan_unpoison_vmalloc() below
> if the user passed in __GFP_SKIP_KASAN? It's really just an implementation
> detail that kasan_unpoison_vmalloc() skips unpoisoning if
> KASAN_VMALLOC_PROT_NORMAL is not provided.
Then it would be confusing to set KASAN_VMALLOC_PROT_NORMAL in
kasan_flags and never use it later. I've found this a good way of doing it.
Thanks,
Usama
>
> Thanks,
> Ryan
>
>
>> }
>>
>> /* Allocate physical pages and map them into vmalloc space. */
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables
2026-03-19 11:49 [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Muhammad Usama Anjum
` (2 preceding siblings ...)
2026-03-19 11:49 ` [PATCH 3/3] mm: SKIP KASAN for page table allocations Muhammad Usama Anjum
@ 2026-03-20 3:10 ` Andrew Morton
2026-03-23 14:53 ` Muhammad Usama Anjum
2026-03-20 8:53 ` David Hildenbrand (Arm)
4 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2026-03-20 3:10 UTC (permalink / raw)
To: Muhammad Usama Anjum
Cc: Arnd Bergmann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Kees Cook, David Hildenbrand,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki, linux-arch,
linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, Ryan.Roberts, david.hildenbrand
On Thu, 19 Mar 2026 11:49:43 +0000 Muhammad Usama Anjum <usama.anjum@arm.com> wrote:
> Stacks and page tables are always accessed through the match-all tag,
> so assigning a new random tag at every allocation and setting an
> invalid tag at deallocation just adds overhead without improving
> detection.
>
> With __GFP_SKIP_KASAN, the page keeps its poison tag in hardware while
> KASAN_TAG_KERNEL (the match-all tag) is stored in the page flags. The
> benefit is that the 256 tag-setting instructions per 4 kB page are not
> needed at allocation or deallocation time.
>
> Thus match-all pointers still work, while accesses with non-matching
> tags (other than the poison tag) still fault.
>
> __GFP_SKIP_KASAN only takes effect in KASAN_HW_TAGS mode, so coverage
> in the other modes is unchanged.
>
Some questions from Sashiko:
https://sashiko.dev/#/patchset/20260319114952.3241359-1-usama.anjum%40arm.com
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables
2026-03-19 11:49 [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Muhammad Usama Anjum
` (3 preceding siblings ...)
2026-03-20 3:10 ` [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Andrew Morton
@ 2026-03-20 8:53 ` David Hildenbrand (Arm)
2026-03-23 15:06 ` Muhammad Usama Anjum
4 siblings, 1 reply; 14+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-20 8:53 UTC (permalink / raw)
To: Muhammad Usama Anjum, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki,
linux-arch, linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, Ryan.Roberts, david.hildenbrand
On 3/19/26 12:49, Muhammad Usama Anjum wrote:
> Stacks and page tables are always accessed through the match-all tag,
> so assigning a new random tag at every allocation and setting an
> invalid tag at deallocation just adds overhead without improving
> detection.
>
> With __GFP_SKIP_KASAN, the page keeps its poison tag in hardware while
> KASAN_TAG_KERNEL (the match-all tag) is stored in the page flags. The
> benefit is that the 256 tag-setting instructions per 4 kB page are not
> needed at allocation or deallocation time.
>
> Thus match-all pointers still work, while accesses with non-matching
> tags (other than the poison tag) still fault.
>
> __GFP_SKIP_KASAN only takes effect in KASAN_HW_TAGS mode, so coverage
> in the other modes is unchanged.
>
> Benchmark:
> The benchmark has two modes. In thread mode, a child process is forked
> and creates N threads. In pgtable mode, the parent maps and faults a
> specified memory size and then forks repeatedly, with children exiting
> immediately.
>
> Thread benchmark:
> 2000 iterations, 2000 threads: 2.575 s → 2.229 s (~13.4% faster)
>
> Pgtable benchmark:
> - 2048 MB, 2000 iters 19.08 s → 17.62 s (~7.6% faster)
As discussed offline, I think we should look into finding a better name
for __GFP_SKIP_KASAN now that we are using it more broadly. It's confusing.
The semantics are:
* Only applies to HW KASAN right now. Otherwise it's ignored. So it
doesn't give any guarantees.
* Will currently leave memory tagged with some tag (poisoned), but
tag checks will be disabled by using the match-all pointer.
After pondering that for a while, I realized that today all memory is
tagged by default, and __GFP_SKIP_KASAN is our mechanism to request
memory that will not be tag-checked (close to as if it were not
tagged).
Is there a real difference from getting untagged memory, if supported
by the architecture?
So I was wondering if
__GFP_UNTAGGED: if possible, return memory that is either
untagged or that is tagged but has tag checks
disabled when accessed through page_address().
Using this flag can speed up page allocation
and freeing, and can reduce runtime overhead
by not performing tag checking. For now,
only considered with HW-tag based KASAN.
Would be the right thing to do.
Assuming we could/would ever change the default from "all memory is
tagged" to "all memory is untagged", we could similarly introduce:
__GFP_TAGGED: if possible, return memory that is tagged and
and has tag checks enabled.
We could make it clearer that there are no guarantees, e.g. by calling
them __GFP_PREF_UNTAGGED / __GFP_PREF_TAGGED.
(__GFP_TAGGED would obviously be something for the future)
--
Cheers,
David
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables
2026-03-20 3:10 ` [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Andrew Morton
@ 2026-03-23 14:53 ` Muhammad Usama Anjum
0 siblings, 0 replies; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-23 14:53 UTC (permalink / raw)
To: Andrew Morton
Cc: usama.anjum, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Uladzislau Rezki, linux-arch, linux-kernel, linux-mm,
Andrey Konovalov, Marco Elver, Vincenzo Frascino,
Peter Collingbourne, Catalin Marinas, Will Deacon, Ryan.Roberts,
david.hildenbrand
On 20/03/2026 3:10 am, Andrew Morton wrote:
> On Thu, 19 Mar 2026 11:49:43 +0000 Muhammad Usama Anjum <usama.anjum@arm.com> wrote:
>
>> Stacks and page tables are always accessed with the match‑all tag,
>> so assigning a new random tag at allocation time and setting an
>> invalid tag at deallocation time just adds overhead without improving
>> detection.
>>
>> With __GFP_SKIP_KASAN the page keeps its poison tag and KASAN_TAG_KERNEL
>> (match-all tag) is stored in the page flags while keeping the poison tag
>> in the hardware. The benefit is that 256 tag-setting instructions
>> per 4 kB page aren't needed at allocation and deallocation time.
>>
>> Thus match‑all pointers still work, while non‑match tags (other than
>> poison tag) still fault.
>>
>> __GFP_SKIP_KASAN only skips for KASAN_HW_TAGS mode, so coverage is
>> unchanged.
>>
>
> Some questions from Sashiko:
> https://uk01.z.antigena.com/l/sS6fsklhbbK-vAbd4-t3S20GiqcWENbKuEm9JdfcHhXGvSkAuP_tTYRVNNEFkNyqNy6Th_W67uq4HpyPCykcGaYKaeMj7OPiFdbYLta2AQ6H4~yy59q32QAKn-zpc1DtUKnRNXkTGRIvJMOH217hIWTkitNDDPLzALLhD6vG1MnteYIid8KfwK4pfDahLHbmvBU1WWp6d3BG53WUdBJ4ONjb2PDTe4JdIvW0uWnju-HL5hb
>
I've updated descriptions/patches in answer to those concerns.
Thanks,
Usama
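
[Editor's note: a rough sketch of the allocation-time behavior described in
the quoted cover letter, for readers unfamiliar with the HW_TAGS hooks. This
is simplified kernel-style pseudocode; the function name approximates, rather
than copies, the real mm/ allocation hook.]

/* Hypothetical, simplified sketch -- not the actual mm/page_alloc.c code. */
static void post_alloc_hook_sketch(struct page *page, unsigned int order,
				   gfp_t gfp_flags)
{
	if (gfp_flags & __GFP_SKIP_KASAN) {
		/*
		 * Leave the hardware poison tag in memory untouched and
		 * only record the match-all tag (0xFF) in page->flags, so
		 * pointers from page_address() carry the match-all tag and
		 * never fault, while any other non-poison tag still does.
		 */
		page_kasan_tag_set(page, KASAN_TAG_KERNEL);
		return;
	}
	/*
	 * Default path: pick a random tag and write it into every
	 * 16-byte granule -- 256 tag-setting operations per 4 kB page,
	 * repeated again when the page is freed.
	 */
	kasan_unpoison_pages(page, order, false);
}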
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables
2026-03-20 8:53 ` David Hildenbrand (Arm)
@ 2026-03-23 15:06 ` Muhammad Usama Anjum
2026-03-26 13:40 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 14+ messages in thread
From: Muhammad Usama Anjum @ 2026-03-23 15:06 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: usama.anjum, Arnd Bergmann, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki,
linux-arch, linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, Ryan.Roberts, david.hildenbrand
On 20/03/2026 8:53 am, David Hildenbrand (Arm) wrote:
> On 3/19/26 12:49, Muhammad Usama Anjum wrote:
>> Stacks and page tables are always accessed with the match‑all tag,
>> so assigning a new random tag at allocation time and setting an
>> invalid tag at deallocation time just adds overhead without improving
>> detection.
>>
>> With __GFP_SKIP_KASAN the page keeps its poison tag and KASAN_TAG_KERNEL
>> (match-all tag) is stored in the page flags while keeping the poison tag
>> in the hardware. The benefit is that 256 tag-setting instructions
>> per 4 kB page aren't needed at allocation and deallocation time.
>>
>> Thus match‑all pointers still work, while non‑match tags (other than
>> poison tag) still fault.
>>
>> __GFP_SKIP_KASAN only skips for KASAN_HW_TAGS mode, so coverage is
>> unchanged.
>>
>> Benchmark:
>> The benchmark has two modes. In thread mode, the child process forks
>> and creates N threads. In pgtable mode, the parent maps and faults a
>> specified memory size and then forks repeatedly with children exiting
>> immediately.
>>
>> Thread benchmark:
>> 2000 iterations, 2000 threads: 2.575 s → 2.229 s (~13.4% faster)
>>
>> The pgtable samples:
>> - 2048 MB, 2000 iters 19.08 s → 17.62 s (~7.6% faster)
>
> As discussed offline, I think we should look into finding a better name
> for __GFP_SKIP_KASAN now that we are using it more broadly. It's confusing.
Agreed that it's confusing and the name doesn't convey its under-the-hood usage.
>
> The semantics are:
> * Only applies to HW KASAN right now. Otherwise it's ignored. So it
> doesn't give any guarantees.
> * Will currently leave memory tagged with some tag (poisoned), but
> tag checks will be disabled by using the match-all pointer.
>
> After pondering about that for a while, I realized that today, all
> memory is tagged by default, and __GFP_SKIP_KASAN is our mechanism to
> request memory that will not be tag-checked (close to if it would be not
> tagged).
KASAN uses the poisoning and un-poisoning terminology. How
poisoning/unpoisoning is done depends on the type of KASAN enabled.
>
> Is there a real difference compared to getting untagged memory, if
> supported by the architecture?
>
> So I was wondering if
>
> __GFP_UNTAGGED: if possible, return memory that is either
> untagged or that is tagged but has tag checks
> disabled when accessed through page_address().
> Using this flag can speed up page allocation
> and freeing, and can reduce runtime overhead
> by not performing page checking. For now,
> only considered with HW-tag based KASAN.
It's again confusing, as __GFP_UNTAGGED will not return untagged memory
in the case of KASAN_SW_TAGS.
As __GFP_SKIP_KASAN skips only for HW_TAGS mode, the more appropriate name
may be:
__GFP_SKIP_HW_POISON
No matter the final name, it may be worth the effort to rename / do better
handling of this in the code. Let's keep it separate from this series.
>
> Would be the right thing to do.
>
> Assuming we could/would ever change the default from "all memory is
> tagged" to "all memory is untagged", we could similarly introduce:
>
> __GFP_TAGGED: if possible, return memory that is tagged
> and has tag checks enabled.
>
> We could make it clearer that there are no guarantees. Like calling it
> __GFP_PREF_UNTAGGED / __GFP_PREF_TAGGED.
>
>
> (__GFP_TAGGED would obviously be something for the future)
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables
2026-03-23 15:06 ` Muhammad Usama Anjum
@ 2026-03-26 13:40 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 14+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-26 13:40 UTC (permalink / raw)
To: Muhammad Usama Anjum, David Hildenbrand (Arm)
Cc: Arnd Bergmann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Kees Cook, Andrew Morton,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Uladzislau Rezki, linux-arch,
linux-kernel, linux-mm, Andrey Konovalov, Marco Elver,
Vincenzo Frascino, Peter Collingbourne, Catalin Marinas,
Will Deacon, Ryan.Roberts
On 3/23/26 16:06, Muhammad Usama Anjum wrote:
> On 20/03/2026 8:53 am, David Hildenbrand (Arm) wrote:
>> On 3/19/26 12:49, Muhammad Usama Anjum wrote:
>>> Stacks and page tables are always accessed with the match‑all tag,
>>> so assigning a new random tag at allocation time and setting an
>>> invalid tag at deallocation time just adds overhead without improving
>>> detection.
>>>
>>> With __GFP_SKIP_KASAN the page keeps its poison tag and KASAN_TAG_KERNEL
>>> (match-all tag) is stored in the page flags while keeping the poison tag
>>> in the hardware. The benefit is that 256 tag-setting instructions
>>> per 4 kB page aren't needed at allocation and deallocation time.
>>>
>>> Thus match‑all pointers still work, while non‑match tags (other than
>>> poison tag) still fault.
>>>
>>> __GFP_SKIP_KASAN only skips for KASAN_HW_TAGS mode, so coverage is
>>> unchanged.
>>>
>>> Benchmark:
>>> The benchmark has two modes. In thread mode, the child process forks
>>> and creates N threads. In pgtable mode, the parent maps and faults a
>>> specified memory size and then forks repeatedly with children exiting
>>> immediately.
>>>
>>> Thread benchmark:
>>> 2000 iterations, 2000 threads: 2.575 s → 2.229 s (~13.4% faster)
>>>
>>> The pgtable samples:
>>> - 2048 MB, 2000 iters 19.08 s → 17.62 s (~7.6% faster)
>>
>> As discussed offline, I think we should look into finding a better name
>> for __GFP_SKIP_KASAN now that we are using it more broadly. It's confusing.
> Agreed that it's confusing and the name doesn't convey its under-the-hood usage.
>
And I think I finally realized that __GFP_SKIP_KASAN is used for two
independent use cases, something that really must be sorted out.
>>
>> The semantics are:
>> * Only applies to HW KASAN right now. Otherwise it's ignored. So it
>> doesn't give any guarantees.
>> * Will currently leave memory tagged with some tag (poisoned), but
>> tag checks will be disabled by using the match-all pointer.
>>
>> After pondering about that for a while, I realized that today, all
>> memory is tagged by default, and __GFP_SKIP_KASAN is our mechanism to
>> request memory that will not be tag-checked (close to if it would be not
>> tagged).
> KASAN uses the poisoning and un-poisoning terminology. How
> poisoning/unpoisoning is done depends on the type of KASAN enabled.
And that's an implementation detail. A random memory allocation
shouldn't have to know what KASAN or POISONING is. :)
>
>>
>> Is there a real difference compared to getting untagged memory, if
>> supported by the architecture?
>>
>> So I was wondering if
>>
>> __GFP_UNTAGGED: if possible, return memory that is either
>> untagged or that is tagged but has tag checks
>> disabled when accessed through page_address().
>> Using this flag can speed up page allocation
>> and freeing, and can reduce runtime overhead
>> by not performing page checking. For now,
>> only considered with HW-tag based KASAN.
> It's again confusing, as __GFP_UNTAGGED will not return untagged memory
> in the case of KASAN_SW_TAGS.
>
> As __GFP_SKIP_KASAN skips only for HW_TAGS mode, the more appropriate name
> may be:
> __GFP_SKIP_HW_POISON
Also not really the right fit I think.
>
> No matter the final name, it may be worth the effort to rename / do better
> handling of this in the code. Let's keep it separate from this series.
Well, the point I am making is that
(1) you are adding more users of __GFP_SKIP_KASAN
(2) __GFP_SKIP_KASAN is a mess
I'll try to sort that out, but be prepared that the flag name might
change underneath your feet :)
--
Cheers,
David
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2026-03-26 13:40 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-19 11:49 [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Muhammad Usama Anjum
2026-03-19 11:49 ` [PATCH 1/3] vmalloc: add __GFP_SKIP_KASAN support Muhammad Usama Anjum
2026-03-19 12:22 ` Ryan Roberts
2026-03-19 12:57 ` Muhammad Usama Anjum
2026-03-19 11:49 ` [PATCH 2/3] fork: skip MTE tagging for kernel stacks Muhammad Usama Anjum
2026-03-19 12:09 ` Ryan Roberts
2026-03-19 12:29 ` Muhammad Usama Anjum
2026-03-19 11:49 ` [PATCH 3/3] mm: SKIP KASAN for page table allocations Muhammad Usama Anjum
2026-03-19 12:09 ` Ryan Roberts
2026-03-20 3:10 ` [PATCH 0/3] KASAN: HW_TAGS: Disable tagging for stack and page-tables Andrew Morton
2026-03-23 14:53 ` Muhammad Usama Anjum
2026-03-20 8:53 ` David Hildenbrand (Arm)
2026-03-23 15:06 ` Muhammad Usama Anjum
2026-03-26 13:40 ` David Hildenbrand (Arm)
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox