public inbox for linux-kernel@vger.kernel.org
* [PATCH v3 0/3] kasan: hw_tags: Disable tagging for stack and page-tables
@ 2026-04-24 13:01 Dev Jain
  2026-04-24 13:01 ` [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support Dev Jain
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Dev Jain @ 2026-04-24 13:01 UTC (permalink / raw)
  To: arnd, kees, mingo, peterz, juri.lelli, vincent.guittot, akpm,
	david, urezki
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid, ljs,
	Liam.Howlett, vbabka, rppt, surenb, mhocko, tglx, usama.anjum,
	mathieu.desnoyers, linux-arch, linux-kernel, linux-mm, Dev Jain

Stacks and page tables are always accessed with the match-all tag,
so assigning a new random tag at every allocation and setting an
invalid tag at deallocation just adds overhead without improving
detection.

With __GFP_SKIP_KASAN, the page keeps its poison tag in hardware while
KASAN_TAG_KERNEL (the match-all tag) is stored in the page flags. The
benefit is that the 256 tag-setting instructions per 4 KB page are not
needed at allocation and deallocation time.

Thus match-all pointers still work, while accesses with non-matching
tags (other than the poison tag) still fault.

__GFP_SKIP_KASAN only takes effect in KASAN_HW_TAGS mode, so coverage
under the other KASAN modes is unchanged.

Benchmark:
The benchmark has two modes. In thread mode, the child process forks
and creates N threads. In pgtable mode, the parent maps and faults a
specified memory size and then forks repeatedly with children exiting
immediately.

Thread benchmark:
2000 iterations, 2000 threads:	2.575 s → 2.229 s (~13.4% faster)

Pgtable benchmark:
- 2048 MB, 2000 iters		19.08 s → 17.62 s (~7.6% faster)
---

Changes since v2:
- Directly skip kasan_unpoison_vmalloc() for GFP_SKIP_KASAN in patch 1

Changes since v1:
- Update description/title
- Patch 1: Simplify skip conditions based on the fact that __GFP_SKIP_KASAN
  is only defined for KASAN_HW_TAGS
- Patch 2: Specify __GFP_SKIP_KASAN in THREADINFO_GFP and GFP_VMAP_STACK

Muhammad Usama Anjum (3):
  vmalloc: add __GFP_SKIP_KASAN support
  kasan: skip HW tagging for all kernel thread stacks
  mm: skip KASAN tagging for page-allocated page tables

 include/asm-generic/pgalloc.h |  2 +-
 include/linux/thread_info.h   |  2 +-
 kernel/fork.c                 |  5 +++--
 mm/vmalloc.c                  | 20 +++++++++++++++++---
 4 files changed, 22 insertions(+), 7 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support
  2026-04-24 13:01 [PATCH v3 0/3] kasan: hw_tags: Disable tagging for stack and page-tables Dev Jain
@ 2026-04-24 13:01 ` Dev Jain
  2026-04-24 18:32   ` Catalin Marinas
  2026-04-25  9:14   ` Catalin Marinas
  2026-04-24 13:01 ` [PATCH v3 2/3] kasan: skip HW tagging for all kernel thread stacks Dev Jain
  2026-04-24 13:01 ` [PATCH v3 3/3] mm: skip KASAN tagging for page-allocated page tables Dev Jain
  2 siblings, 2 replies; 7+ messages in thread
From: Dev Jain @ 2026-04-24 13:01 UTC (permalink / raw)
  To: arnd, kees, mingo, peterz, juri.lelli, vincent.guittot, akpm,
	david, urezki
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid, ljs,
	Liam.Howlett, vbabka, rppt, surenb, mhocko, tglx, usama.anjum,
	mathieu.desnoyers, linux-arch, linux-kernel, linux-mm,
	Ryan Roberts, Dev Jain

From: Muhammad Usama Anjum <usama.anjum@arm.com>

For allocations that will be accessed only with match-all pointers
(e.g., kernel stacks), setting tags is wasted work. If the caller
already set __GFP_SKIP_KASAN, skip tag setting of vmalloc pages.

Before this patch, __GFP_SKIP_KASAN wasn't used with the vmalloc APIs,
so it wasn't checked there. Now it is checked and acted upon. Other
KASAN modes are unchanged because __GFP_SKIP_KASAN isn't defined for
them.

This is a preparatory patch for optimizing kernel stack allocations.

Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
Co-developed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
 mm/vmalloc.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b31b208f6ecb3..c94fcb2725b6b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3939,7 +3939,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				__GFP_NOFAIL | __GFP_ZERO |\
 				__GFP_NORETRY | __GFP_RETRY_MAYFAIL |\
 				GFP_NOFS | GFP_NOIO | GFP_KERNEL_ACCOUNT |\
-				GFP_USER | __GFP_NOLOCKDEP)
+				GFP_USER | __GFP_NOLOCKDEP | __GFP_SKIP_KASAN)
 
 static gfp_t vmalloc_fix_flags(gfp_t flags)
 {
@@ -3980,6 +3980,9 @@ static gfp_t vmalloc_fix_flags(gfp_t flags)
  *
  * %__GFP_NOWARN can be used to suppress failure messages.
  *
+ * %__GFP_SKIP_KASAN can be used to skip unpoisoning of mapped pages
+ * (when prot=%PAGE_KERNEL).
+ *
  * Can not be called from interrupt nor NMI contexts.
  * Return: the address of the area or %NULL on failure
  */
@@ -3993,6 +3996,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_NONE;
 	unsigned long original_align = align;
 	unsigned int shift = PAGE_SHIFT;
+	bool skip_vmalloc_kasan = gfp_mask & __GFP_SKIP_KASAN;
+
+	/* Don't skip metadata kasan unpoisoning */
+	gfp_mask &= ~__GFP_SKIP_KASAN;
 
 	if (WARN_ON_ONCE(!size))
 		return NULL;
@@ -4041,7 +4048,7 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	 * kasan_unpoison_vmalloc().
 	 */
 	if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
-		if (kasan_hw_tags_enabled()) {
+		if (kasan_hw_tags_enabled() && !skip_vmalloc_kasan) {
 			/*
 			 * Modify protection bits to allow tagging.
 			 * This must be done before mapping.
@@ -4054,6 +4061,12 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 			 * poisoned and zeroed by kasan_unpoison_vmalloc().
 			 */
 			gfp_mask |= __GFP_SKIP_KASAN | __GFP_SKIP_ZERO;
+		} else if (skip_vmalloc_kasan) {
+			/*
+			 * Skip page_alloc unpoisoning physical pages backing
+			 * VM_ALLOC mapping, as requested by caller.
+			 */
+			gfp_mask |= __GFP_SKIP_KASAN;
 		}
 
 		/* Take note that the mapping is PAGE_KERNEL. */
@@ -4078,7 +4091,8 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	    (gfp_mask & __GFP_SKIP_ZERO))
 		kasan_flags |= KASAN_VMALLOC_INIT;
 	/* KASAN_VMALLOC_PROT_NORMAL already set if required. */
-	area->addr = kasan_unpoison_vmalloc(area->addr, size, kasan_flags);
+	if (!skip_vmalloc_kasan)
+		area->addr = kasan_unpoison_vmalloc(area->addr, size, kasan_flags);
 
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v3 2/3] kasan: skip HW tagging for all kernel thread stacks
  2026-04-24 13:01 [PATCH v3 0/3] kasan: hw_tags: Disable tagging for stack and page-tables Dev Jain
  2026-04-24 13:01 ` [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support Dev Jain
@ 2026-04-24 13:01 ` Dev Jain
  2026-04-24 13:01 ` [PATCH v3 3/3] mm: skip KASAN tagging for page-allocated page tables Dev Jain
  2 siblings, 0 replies; 7+ messages in thread
From: Dev Jain @ 2026-04-24 13:01 UTC (permalink / raw)
  To: arnd, kees, mingo, peterz, juri.lelli, vincent.guittot, akpm,
	david, urezki
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid, ljs,
	Liam.Howlett, vbabka, rppt, surenb, mhocko, tglx, usama.anjum,
	mathieu.desnoyers, linux-arch, linux-kernel, linux-mm

From: Muhammad Usama Anjum <usama.anjum@arm.com>

HW-tag KASAN never checks kernel stacks because stack pointers carry the
match-all tag, so setting/poisoning tags is pure overhead.

- Add __GFP_SKIP_KASAN to THREADINFO_GFP so every stack allocator that
  uses it skips tagging (fork path plus arch users).
- Add __GFP_SKIP_KASAN to GFP_VMAP_STACK for the fork-specific vmap
  stacks.
- When reusing cached vmap stacks, skip kasan_unpoison_range() if HW tags
  are enabled.

Software KASAN modes are unchanged; this only affects HW tag-based KASAN.

Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
 include/linux/thread_info.h | 2 +-
 kernel/fork.c               | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 051e429026904..307b8390fc670 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -92,7 +92,7 @@ static inline long set_restart_fn(struct restart_block *restart,
 #define THREAD_ALIGN	THREAD_SIZE
 #endif
 
-#define THREADINFO_GFP		(GFP_KERNEL_ACCOUNT | __GFP_ZERO)
+#define THREADINFO_GFP		(GFP_KERNEL_ACCOUNT | __GFP_ZERO | __GFP_SKIP_KASAN)
 
 /*
  * flag set/clear/test wrappers
diff --git a/kernel/fork.c b/kernel/fork.c
index bc2bf58b93b65..2fc3b121962cb 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -202,7 +202,7 @@ static DEFINE_PER_CPU(struct vm_struct *, cached_stacks[NR_CACHED_STACKS]);
  * accounting is performed by the code assigning/releasing stacks to tasks.
  * We need a zeroed memory without __GFP_ACCOUNT.
  */
-#define GFP_VMAP_STACK (GFP_KERNEL | __GFP_ZERO)
+#define GFP_VMAP_STACK (GFP_KERNEL | __GFP_ZERO | __GFP_SKIP_KASAN)
 
 struct vm_stack {
 	struct rcu_head rcu;
@@ -340,7 +340,8 @@ static int alloc_thread_stack_node(struct task_struct *tsk, int node)
 		}
 
 		/* Reset stack metadata. */
-		kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
+		if (!kasan_hw_tags_enabled())
+			kasan_unpoison_range(vm_area->addr, THREAD_SIZE);
 
 		stack = kasan_reset_tag(vm_area->addr);
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v3 3/3] mm: skip KASAN tagging for page-allocated page tables
  2026-04-24 13:01 [PATCH v3 0/3] kasan: hw_tags: Disable tagging for stack and page-tables Dev Jain
  2026-04-24 13:01 ` [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support Dev Jain
  2026-04-24 13:01 ` [PATCH v3 2/3] kasan: skip HW tagging for all kernel thread stacks Dev Jain
@ 2026-04-24 13:01 ` Dev Jain
  2026-04-24 17:41   ` Catalin Marinas
  2 siblings, 1 reply; 7+ messages in thread
From: Dev Jain @ 2026-04-24 13:01 UTC (permalink / raw)
  To: arnd, kees, mingo, peterz, juri.lelli, vincent.guittot, akpm,
	david, urezki
  Cc: dietmar.eggemann, rostedt, bsegall, mgorman, vschneid, ljs,
	Liam.Howlett, vbabka, rppt, surenb, mhocko, tglx, usama.anjum,
	mathieu.desnoyers, linux-arch, linux-kernel, linux-mm,
	Ryan Roberts, Catalin Marinas

From: Muhammad Usama Anjum <usama.anjum@arm.com>

Page tables are always accessed via the linear mapping with a match-all
tag, so HW-tag KASAN never checks them. For page-allocated tables (PTEs,
PGDs, etc.), avoid the tag setup and poisoning overhead by using
__GFP_SKIP_KASAN. SLUB-backed page tables are unchanged for now; they
aren't widely used and would require more SLUB-related skip logic, which
is left for later.

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 include/asm-generic/pgalloc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 57137d3ac1592..051aa1331051c 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -4,7 +4,7 @@
 
 #ifdef CONFIG_MMU
 
-#define GFP_PGTABLE_KERNEL	(GFP_KERNEL | __GFP_ZERO)
+#define GFP_PGTABLE_KERNEL	(GFP_KERNEL | __GFP_ZERO | __GFP_SKIP_KASAN)
 #define GFP_PGTABLE_USER	(GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)
 
 /**
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 3/3] mm: skip KASAN tagging for page-allocated page tables
  2026-04-24 13:01 ` [PATCH v3 3/3] mm: skip KASAN tagging for page-allocated page tables Dev Jain
@ 2026-04-24 17:41   ` Catalin Marinas
  0 siblings, 0 replies; 7+ messages in thread
From: Catalin Marinas @ 2026-04-24 17:41 UTC (permalink / raw)
  To: Dev Jain
  Cc: arnd, kees, mingo, peterz, juri.lelli, vincent.guittot, akpm,
	david, urezki, dietmar.eggemann, rostedt, bsegall, mgorman,
	vschneid, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, tglx,
	usama.anjum, mathieu.desnoyers, linux-arch, linux-kernel,
	linux-mm, Ryan Roberts

On Fri, Apr 24, 2026 at 06:31:57PM +0530, Dev Jain wrote:
> From: Muhammad Usama Anjum <usama.anjum@arm.com>
> 
> Page tables are always accessed via the linear mapping with a match-all
> tag, so HW-tag KASAN never checks them. For page-allocated tables (PTEs
> and PGDs etc), avoid the tag setup and poisoning overhead by using
> __GFP_SKIP_KASAN. SLUB-backed page tables are unchanged for now. (They
> aren't widely used and require more SLUB related skip logic. Leave it
> later.)
> 
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Your signed-off-by is missing. You must add it if you are reposting
someone else's patches.

Also I only got cc'ed on patch 3. You should normally cc all people on
all patches. Maybe you could skip this if the patches are some
independent cleanups but even so, I'd still cc all the others at least
on the cover letter.

-- 
Catalin

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support
  2026-04-24 13:01 ` [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support Dev Jain
@ 2026-04-24 18:32   ` Catalin Marinas
  2026-04-25  9:14   ` Catalin Marinas
  1 sibling, 0 replies; 7+ messages in thread
From: Catalin Marinas @ 2026-04-24 18:32 UTC (permalink / raw)
  To: Dev Jain
  Cc: arnd, kees, mingo, peterz, juri.lelli, vincent.guittot, akpm,
	david, urezki, dietmar.eggemann, rostedt, bsegall, mgorman,
	vschneid, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, tglx,
	usama.anjum, mathieu.desnoyers, linux-arch, linux-kernel,
	linux-mm, Ryan Roberts

On Fri, Apr 24, 2026 at 06:31:55PM +0530, Dev Jain wrote:
> From: Muhammad Usama Anjum <usama.anjum@arm.com>
> 
> For allocations that will be accessed only with match-all pointers
> (e.g., kernel stacks), setting tags is wasted work. If the caller
> already set __GFP_SKIP_KASAN, skip tag setting of vmalloc pages.
> 
> Before this patch, __GFP_SKIP_KASAN wasn't being used with vmalloc
> APIs. So it wasn't being checked. Now its being checked and acted
> upon. Other KASAN modes are unchanged because __GFP_SKIP_KASAN isn't
> defined there.
> 
> This is a preparatory patch for optimizing kernel stack allocations.
> 
> Co-developed-by: Ryan Roberts <ryan.roberts@arm.com>
> Co-developed-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>

Co-developers need to sign off as well. See submitting-patches.rst. Same
comment about your SoB as on patch 3.

> ---
>  mm/vmalloc.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b31b208f6ecb3..c94fcb2725b6b 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3939,7 +3939,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  				__GFP_NOFAIL | __GFP_ZERO |\
>  				__GFP_NORETRY | __GFP_RETRY_MAYFAIL |\
>  				GFP_NOFS | GFP_NOIO | GFP_KERNEL_ACCOUNT |\
> -				GFP_USER | __GFP_NOLOCKDEP)
> +				GFP_USER | __GFP_NOLOCKDEP | __GFP_SKIP_KASAN)
>  
>  static gfp_t vmalloc_fix_flags(gfp_t flags)
>  {
> @@ -3980,6 +3980,9 @@ static gfp_t vmalloc_fix_flags(gfp_t flags)
>   *
>   * %__GFP_NOWARN can be used to suppress failure messages.
>   *
> + * %__GFP_SKIP_KASAN can be used to skip unpoisoning of mapped pages
> + * (when prot=%PAGE_KERNEL).
> + *
>   * Can not be called from interrupt nor NMI contexts.
>   * Return: the address of the area or %NULL on failure
>   */
> @@ -3993,6 +3996,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>  	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_NONE;
>  	unsigned long original_align = align;
>  	unsigned int shift = PAGE_SHIFT;
> +	bool skip_vmalloc_kasan = gfp_mask & __GFP_SKIP_KASAN;
> +
> +	/* Don't skip metadata kasan unpoisoning */
> +	gfp_mask &= ~__GFP_SKIP_KASAN;
>  
>  	if (WARN_ON_ONCE(!size))
>  		return NULL;
> @@ -4041,7 +4048,7 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>  	 * kasan_unpoison_vmalloc().
>  	 */
>  	if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
> -		if (kasan_hw_tags_enabled()) {
> +		if (kasan_hw_tags_enabled() && !skip_vmalloc_kasan) {
>  			/*
>  			 * Modify protection bits to allow tagging.
>  			 * This must be done before mapping.
> @@ -4054,6 +4061,12 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>  			 * poisoned and zeroed by kasan_unpoison_vmalloc().
>  			 */
>  			gfp_mask |= __GFP_SKIP_KASAN | __GFP_SKIP_ZERO;
> +		} else if (skip_vmalloc_kasan) {
> +			/*
> +			 * Skip page_alloc unpoisoning physical pages backing
> +			 * VM_ALLOC mapping, as requested by caller.
> +			 */
> +			gfp_mask |= __GFP_SKIP_KASAN;
>  		}

This playing around with some of the GFP flags meant for metadata and
the actual page allocation gets confusing. You remove __GFP_SKIP_KASAN
early from gfp_mask, add it back here. You might as well just remove it
when calling __get_vm_area_node() and we won't have to figure out why
it's added back above.

The __GFP_SKIP_ZERO flag is meant for the page allocator and used in
this function later to actually tell kasan to initialise the memory (not
skip this). __GFP_SKIP_KASAN, OTOH, is used to actually tell both
vmalloc() and the underlying page allocator to avoid tagging. I wonder
whether it would be better to have a VM_SKIP_KASAN flag instead and
leave the GFP flags alone.

-- 
Catalin

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support
  2026-04-24 13:01 ` [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support Dev Jain
  2026-04-24 18:32   ` Catalin Marinas
@ 2026-04-25  9:14   ` Catalin Marinas
  1 sibling, 0 replies; 7+ messages in thread
From: Catalin Marinas @ 2026-04-25  9:14 UTC (permalink / raw)
  To: Dev Jain
  Cc: arnd, kees, mingo, peterz, juri.lelli, vincent.guittot, akpm,
	david, urezki, dietmar.eggemann, rostedt, bsegall, mgorman,
	vschneid, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko, tglx,
	usama.anjum, mathieu.desnoyers, linux-arch, linux-kernel,
	linux-mm, Ryan Roberts

On Fri, Apr 24, 2026 at 06:31:55PM +0530, Dev Jain wrote:
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b31b208f6ecb3..c94fcb2725b6b 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3939,7 +3939,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  				__GFP_NOFAIL | __GFP_ZERO |\
>  				__GFP_NORETRY | __GFP_RETRY_MAYFAIL |\
>  				GFP_NOFS | GFP_NOIO | GFP_KERNEL_ACCOUNT |\
> -				GFP_USER | __GFP_NOLOCKDEP)
> +				GFP_USER | __GFP_NOLOCKDEP | __GFP_SKIP_KASAN)
>  
>  static gfp_t vmalloc_fix_flags(gfp_t flags)
>  {
> @@ -3980,6 +3980,9 @@ static gfp_t vmalloc_fix_flags(gfp_t flags)
>   *
>   * %__GFP_NOWARN can be used to suppress failure messages.
>   *
> + * %__GFP_SKIP_KASAN can be used to skip unpoisoning of mapped pages
> + * (when prot=%PAGE_KERNEL).

I just realised, if we go with this flag for vmalloc(), there's also a
comment in gfp_types.h implying that pages are unpoisoned by
kasan_unpoison_vmalloc() instead. This is no longer the case with this
patch.

A VM_SKIP_KASAN flag may have been nicer but we already have
THREADINFO_GFP and GFP_VMAP_STACK, so all those call sites would have to
be moved to call the lower-level __vmalloc_node_range().

-- 
Catalin

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2026-04-25  9:14 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-04-24 13:01 [PATCH v3 0/3] kasan: hw_tags: Disable tagging for stack and page-tables Dev Jain
2026-04-24 13:01 ` [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support Dev Jain
2026-04-24 18:32   ` Catalin Marinas
2026-04-25  9:14   ` Catalin Marinas
2026-04-24 13:01 ` [PATCH v3 2/3] kasan: skip HW tagging for all kernel thread stacks Dev Jain
2026-04-24 13:01 ` [PATCH v3 3/3] mm: skip KASAN tagging for page-allocated page tables Dev Jain
2026-04-24 17:41   ` Catalin Marinas

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox