Linux-mm Archive on lore.kernel.org
* [PATCH 0/3] kasan: hw_tags: some micro-optimizations
@ 2026-05-13 10:57 Dev Jain
  2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

Patch 1 uses __GFP_SKIP_KASAN to skip unpoisoning of a slab page in the
page allocator, since the slab allocator itself poisons the slab page
immediately afterwards.

Patches 2 and 3 remove wasted work when poisoning the tail end of
vmalloc/slab allocations.

---
Based on 7.1-rc2.

Dev Jain (3):
  mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
  kasan: avoid re-poisoning tag-based kmalloc redzones
  vmalloc: hw_tags: optimize vmalloc redzoning

 include/linux/kasan.h | 17 +++++++++----
 mm/kasan/common.c     | 55 +++++++++++++++++++++++++++++++++----------
 mm/kasan/hw_tags.c    | 13 ++++++----
 mm/page_alloc.c       |  2 +-
 mm/slub.c             | 22 ++++++++++++-----
 5 files changed, 79 insertions(+), 30 deletions(-)

-- 
2.43.0




* [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
  2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
@ 2026-05-13 10:57 ` Dev Jain
  2026-05-14 12:11   ` Ryan Roberts
  2026-05-13 10:57 ` [PATCH 2/3] kasan: avoid re-poisoning tag-based kmalloc redzones Dev Jain
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

When a new slab page is allocated, the buddy allocator unpoisons the
page, and slab then immediately poisons it again via
kasan_poison_slab(). This is wasted work.

Similar to what vmalloc does today, use __GFP_SKIP_KASAN (a HW_TAGS-only
flag) to skip unpoisoning of the slab page.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/page_alloc.c |  2 +-
 mm/slub.c       | 11 +++++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 227d58dc3de6..c3a69913aaa9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7723,7 +7723,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
 	struct alloc_context ac = { };
 	struct page *page;
 
-	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
+	VM_WARN_ON_ONCE(gfp_flags & ~(__GFP_ACCOUNT | __GFP_SKIP_KASAN));
 	/*
 	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
 	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
diff --git a/mm/slub.c b/mm/slub.c
index 0baa906f39ab..da3520769d1f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3269,9 +3269,16 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
+	/*
+	 * New slab pages are immediately poisoned by kasan_poison_slab()
+	 * before any object is handed out, so page allocator unpoisoning
+	 * is wasted work for HW_TAGS KASAN.
+	 */
+	flags |= __GFP_SKIP_KASAN;
+
 	if (unlikely(!allow_spin))
-		page = alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,
-								  node, order);
+		page = alloc_frozen_pages_nolock(__GFP_SKIP_KASAN,
+						 node, order);
 	else if (node == NUMA_NO_NODE)
 		page = alloc_frozen_pages(flags, order);
 	else
-- 
2.43.0
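The double shadow pass that this patch eliminates can be sketched with a
small userspace model (illustrative only: the tag values, granule size,
and helper names below are stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define GRANULE     16
#define PAGE_BYTES  4096
#define NGRANULES   (PAGE_BYTES / GRANULE)
#define TAG_INVALID 0xFE   /* stand-in for KASAN_TAG_INVALID */
#define TAG_RANDOM  0x42   /* stand-in for kasan_random_tag() */

static uint8_t shadow[NGRANULES];
static unsigned long shadow_writes;

/* Retag every granule of the page, counting shadow-memory writes. */
static void set_all(uint8_t tag)
{
	for (size_t i = 0; i < NGRANULES; i++) {
		shadow[i] = tag;
		shadow_writes++;
	}
}

/* Page allocator: unpoisons on allocation unless __GFP_SKIP_KASAN is set. */
static void page_alloc_model(int skip_kasan)
{
	if (!skip_kasan)
		set_all(TAG_RANDOM);
}

/* Slab: kasan_poison_slab() poisons the whole page before any object is
 * handed out, overwriting whatever the page allocator just wrote. */
static void kasan_poison_slab_model(void)
{
	set_all(TAG_INVALID);
}

/* Shadow writes needed to set up one slab page. */
static unsigned long writes_for(int skip_kasan)
{
	shadow_writes = 0;
	page_alloc_model(skip_kasan);
	kasan_poison_slab_model();
	return shadow_writes;
}
```

With the skip flag set, only kasan_poison_slab()'s pass touches the
shadow, halving the writes while the final page state is identical.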




* [PATCH 2/3] kasan: avoid re-poisoning tag-based kmalloc redzones
  2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
  2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
@ 2026-05-13 10:57 ` Dev Jain
  2026-05-13 10:57 ` [PATCH 3/3] vmalloc: hw_tags: optimize vmalloc redzoning Dev Jain
  2026-05-14  9:56 ` [PATCH 0/3] kasan: hw_tags: some micro-optimizations Harry Yoo (Oracle)
  3 siblings, 0 replies; 7+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

When we allocate an object from the slab, KASAN unpoisons the entire
object. For allocations from kmalloc caches, the requested allocation
size can be smaller than the size of the kmalloc cache; KASAN then
poisons the bytes from the allocation size up to the object size to
catch OOB accesses.

We can do this in one shot: when unpoisoning the object upon
allocation, only unpoison up to the allocation size, so that the bytes
from there up to the object size remain poisoned.

Currently, when we free an object back into the slab, we poison it with
KASAN_SLAB_FREE and poison the tail end with KASAN_SLAB_REDZONE. For
tag-based KASAN these two values are equal, as opposed to generic
KASAN, so make this optimization only for tag-based KASAN.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 include/linux/kasan.h | 17 +++++++++----
 mm/kasan/common.c     | 55 +++++++++++++++++++++++++++++++++----------
 mm/slub.c             | 11 +++++----
 3 files changed, 61 insertions(+), 22 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index bf233bde68c7..fd7c1f5f9fd6 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -102,6 +102,12 @@ static inline bool kasan_has_integrated_init(void)
 	return kasan_hw_tags_enabled();
 }
 
+static inline bool kasan_has_tag_based_kmalloc_redzones(void)
+{
+	return kasan_enabled() &&
+	       (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || kasan_hw_tags_enabled());
+}
+
 #ifdef CONFIG_KASAN
 void __kasan_unpoison_range(const void *addr, size_t size);
 static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
@@ -244,13 +250,14 @@ static __always_inline void kasan_kfree_large(void *ptr)
 		__kasan_kfree_large(ptr, _RET_IP_);
 }
 
-void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
-				       void *object, gfp_t flags, bool init);
+void * __must_check __kasan_slab_alloc(struct kmem_cache *s, void *object,
+				       size_t size, gfp_t flags, bool init);
 static __always_inline void * __must_check kasan_slab_alloc(
-		struct kmem_cache *s, void *object, gfp_t flags, bool init)
+		struct kmem_cache *s, void *object, size_t size,
+		gfp_t flags, bool init)
 {
 	if (kasan_enabled())
-		return __kasan_slab_alloc(s, object, flags, init);
+		return __kasan_slab_alloc(s, object, size, flags, init);
 	return object;
 }
 
@@ -437,7 +444,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
 }
 static inline void kasan_kfree_large(void *ptr) {}
 static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
-				   gfp_t flags, bool init)
+				   size_t size, gfp_t flags, bool init)
 {
 	return object;
 }
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index b7d05c2a6d93..9a4db9c21aaf 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -326,14 +326,25 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)
 	/* The object will be poisoned by kasan_poison_pages(). */
 }
 
+static inline size_t slab_unpoison_size(struct kmem_cache *cache, size_t size)
+{
+	if (kasan_has_tag_based_kmalloc_redzones() && is_kmalloc_cache(cache))
+		return min_t(size_t, size, cache->object_size);
+
+	return cache->object_size;
+}
+
 static inline void unpoison_slab_object(struct kmem_cache *cache, void *object,
-					gfp_t flags, bool init)
+					size_t size, gfp_t flags, bool init)
 {
 	/*
-	 * Unpoison the whole object. For kmalloc() allocations,
-	 * poison_kmalloc_redzone() will do precise poisoning.
+	 * For tag-based modes, kmalloc redzones all use the same invalid tag.
+	 * Keep the tail poisoned and only unpoison the requested allocation
+	 * size. Generic KASAN keeps distinct shadow values for free objects and
+	 * redzones, so it still unpoisons the whole object and later poisons
+	 * the precise redzone.
 	 */
-	kasan_unpoison(object, cache->object_size, init);
+	kasan_unpoison(object, slab_unpoison_size(cache, size), init);
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
 	if (kasan_stack_collection_enabled() && !is_kmalloc_cache(cache))
@@ -341,7 +352,8 @@ static inline void unpoison_slab_object(struct kmem_cache *cache, void *object,
 }
 
 void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
-					void *object, gfp_t flags, bool init)
+					void *object, size_t size,
+					gfp_t flags, bool init)
 {
 	u8 tag;
 	void *tagged_object;
@@ -363,11 +375,18 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 	tagged_object = set_tag(object, tag);
 
 	/* Unpoison the object and save alloc info for non-kmalloc() allocations. */
-	unpoison_slab_object(cache, tagged_object, flags, init);
+	unpoison_slab_object(cache, tagged_object, size, flags, init);
 
 	return tagged_object;
 }
 
+static inline void save_kmalloc_alloc_info(struct kmem_cache *cache,
+					   void *object, gfp_t flags)
+{
+	if (kasan_stack_collection_enabled() && is_kmalloc_cache(cache))
+		kasan_save_alloc_info(cache, object, flags);
+}
+
 static inline void poison_kmalloc_redzone(struct kmem_cache *cache,
 				const void *object, size_t size, gfp_t flags)
 {
@@ -394,8 +413,7 @@ static inline void poison_kmalloc_redzone(struct kmem_cache *cache,
 	 * Save alloc info (if possible) for kmalloc() allocations.
 	 * This also rewrites the alloc info when called from kasan_krealloc().
 	 */
-	if (kasan_stack_collection_enabled() && is_kmalloc_cache(cache))
-		kasan_save_alloc_info(cache, (void *)object, flags);
+	save_kmalloc_alloc_info(cache, (void *)object, flags);
 
 }
 
@@ -411,8 +429,14 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
 	if (is_kfence_address(object))
 		return (void *)object;
 
-	/* The object has already been unpoisoned by kasan_slab_alloc(). */
-	poison_kmalloc_redzone(cache, object, size, flags);
+	/*
+	 * For tag-based modes, the object has already been precisely
+	 * unpoisoned by kasan_slab_alloc(). The tail remains poisoned.
+	 */
+	if (kasan_has_tag_based_kmalloc_redzones())
+		save_kmalloc_alloc_info(cache, (void *)object, flags);
+	else
+		poison_kmalloc_redzone(cache, object, size, flags);
 
 	/* Keep the tag that was set by kasan_slab_alloc(). */
 	return (void *)object;
@@ -561,11 +585,16 @@ void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
 		return;
 
 	/* Unpoison the object and save alloc info for non-kmalloc() allocations. */
-	unpoison_slab_object(slab->slab_cache, ptr, flags, false);
+	unpoison_slab_object(slab->slab_cache, ptr, size, flags, false);
 
 	/* Poison the redzone and save alloc info for kmalloc() allocations. */
-	if (is_kmalloc_cache(slab->slab_cache))
-		poison_kmalloc_redzone(slab->slab_cache, ptr, size, flags);
+	if (is_kmalloc_cache(slab->slab_cache)) {
+		if (kasan_has_tag_based_kmalloc_redzones())
+			save_kmalloc_alloc_info(slab->slab_cache, ptr, flags);
+		else
+			poison_kmalloc_redzone(slab->slab_cache, ptr, size,
+					       flags);
+	}
 }
 
 bool __kasan_check_byte(const void *address, unsigned long ip)
diff --git a/mm/slub.c b/mm/slub.c
index da3520769d1f..15144b2e078c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4550,8 +4550,9 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	 * replacement of current poisoning under certain debug option, and
 	 * won't break other sanity checks.
 	 */
-	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
-	    (s->flags & SLAB_KMALLOC))
+	if ((s->flags & SLAB_KMALLOC) &&
+	    (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) ||
+	     kasan_has_tag_based_kmalloc_redzones()))
 		zero_size = orig_size;
 
 	/*
@@ -4573,7 +4574,8 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], init_flags, kasan_init);
+		p[i] = kasan_slab_alloc(s, p[i], orig_size, init_flags,
+					kasan_init);
 		if (p[i] && init && (!kasan_init ||
 				     !kasan_has_integrated_init()))
 			memset(p[i], 0, zero_size);
@@ -7615,7 +7617,8 @@ static void early_kmem_cache_node_alloc(int node)
 #ifdef CONFIG_SLUB_DEBUG
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 #endif
-	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
+	n = kasan_slab_alloc(kmem_cache_node, n, kmem_cache_node->object_size,
+			     GFP_KERNEL, false);
 	slab->freelist = get_freepointer(kmem_cache_node, n);
 	slab->inuse = 1;
 	kmem_cache_node->per_node[node].node = n;
-- 
2.43.0
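A userspace sketch of the two paths (the names and tag values are
illustrative stand-ins; it models the tag-based case, where the free tag
and the redzone tag are the same KASAN_TAG_INVALID, so the tail left by
the previous free can simply be kept):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define GRANULE     16
#define TAG_ALLOC   0x42   /* stand-in for the object's random allocation tag */
#define TAG_INVALID 0xFE   /* tag-based modes: free and redzone share this tag */

static size_t granules(size_t bytes)
{
	return (bytes + GRANULE - 1) / GRANULE;
}

/* Old path: unpoison the whole object, then re-poison the redzone tail. */
static size_t old_path(uint8_t *shadow, size_t object_size, size_t alloc_size)
{
	size_t i, writes = 0;

	for (i = 0; i < granules(object_size); i++, writes++)
		shadow[i] = TAG_ALLOC;
	for (i = granules(alloc_size); i < granules(object_size); i++, writes++)
		shadow[i] = TAG_INVALID;
	return writes;
}

/* New path: unpoison only up to the requested size; the tail keeps the
 * invalid tag left behind by the previous free, so no second pass. */
static size_t new_path(uint8_t *shadow, size_t object_size, size_t alloc_size)
{
	size_t i, writes = 0;

	(void)object_size;
	for (i = 0; i < granules(alloc_size); i++, writes++)
		shadow[i] = TAG_ALLOC;
	return writes;
}
```

For a 128-byte kmalloc cache and a 50-byte request, the old path touches
12 granules and the new path 4, with identical final shadow contents.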




* [PATCH 3/3] vmalloc: hw_tags: optimize vmalloc redzoning
  2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
  2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
  2026-05-13 10:57 ` [PATCH 2/3] kasan: avoid re-poisoning tag-based kmalloc redzones Dev Jain
@ 2026-05-13 10:57 ` Dev Jain
  2026-05-14  9:56 ` [PATCH 0/3] kasan: hw_tags: some micro-optimizations Harry Yoo (Oracle)
  3 siblings, 0 replies; 7+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

If the allocation size is less than a page, vmalloc first unpoisons the
entire page and then poisons the tail with KASAN_TAG_INVALID (for
HW_TAGS) to catch OOB accesses.

Instead, unpoison only up to the start of the redzone and then poison
the tail, saving some work.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/kasan/hw_tags.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index cbef5e450954..7c94f71b5f12 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -364,9 +364,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
 	tag = (flags & KASAN_VMALLOC_KEEP_TAG) ? get_tag(start) : kasan_random_tag();
 	start = set_tag(start, tag);
 
-	/* Unpoison and initialize memory up to size. */
-	kasan_unpoison(start, size, flags & KASAN_VMALLOC_INIT);
-
 	/*
 	 * Explicitly poison and initialize the in-page vmalloc() redzone.
 	 * Unlike software KASAN modes, hardware tag-based KASAN doesn't
@@ -375,8 +372,14 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
 	redzone_start = round_up((unsigned long)start + size,
 				 KASAN_GRANULE_SIZE);
 	redzone_size = round_up(redzone_start, PAGE_SIZE) - redzone_start;
-	kasan_poison((void *)redzone_start, redzone_size, KASAN_TAG_INVALID,
-		     flags & KASAN_VMALLOC_INIT);
+
+	/* Unpoison and initialize memory before the redzone. */
+	kasan_unpoison(start, redzone_start - (unsigned long)start,
+		       flags & KASAN_VMALLOC_INIT);
+
+	if (redzone_size)
+		kasan_poison((void *)redzone_start, redzone_size,
+			     KASAN_TAG_INVALID, flags & KASAN_VMALLOC_INIT);
 
 	/*
 	 * Set per-page tag flags to allow accessing physical memory for the
-- 
2.43.0
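The address arithmetic in the reordered __kasan_unpoison_vmalloc() flow
can be checked with a small userspace sketch (round_up_to() and plan()
are illustrative helpers, not kernel API):

```c
#include <assert.h>

#define KASAN_GRANULE 16UL
#define PAGE_BYTES    4096UL

static unsigned long round_up_to(unsigned long x, unsigned long align)
{
	return (x + align - 1) & ~(align - 1);
}

struct vmalloc_shadow_plan {
	unsigned long unpoison_len;   /* bytes unpoisoned from start */
	unsigned long redzone_start;  /* first redzoned address */
	unsigned long redzone_size;   /* bytes poisoned KASAN_TAG_INVALID */
};

/* Mirror the arithmetic after the patch: compute the redzone first,
 * then unpoison only the bytes before it. */
static struct vmalloc_shadow_plan plan(unsigned long start, unsigned long size)
{
	struct vmalloc_shadow_plan p;

	p.redzone_start = round_up_to(start + size, KASAN_GRANULE);
	p.redzone_size  = round_up_to(p.redzone_start, PAGE_BYTES) -
			  p.redzone_start;
	p.unpoison_len  = p.redzone_start - start;
	return p;
}
```

For a page-aligned allocation of 100 bytes, 112 bytes are unpoisoned and
3984 redzoned; the two regions exactly tile the page, and a
page-multiple size yields an empty redzone so kasan_poison() can be
skipped entirely.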




* Re: [PATCH 0/3] kasan: hw_tags: some micro-optimizations
  2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
                   ` (2 preceding siblings ...)
  2026-05-13 10:57 ` [PATCH 3/3] vmalloc: hw_tags: optimize vmalloc redzoning Dev Jain
@ 2026-05-14  9:56 ` Harry Yoo (Oracle)
  2026-05-14 10:22   ` Dev Jain
  3 siblings, 1 reply; 7+ messages in thread
From: Harry Yoo (Oracle) @ 2026-05-14  9:56 UTC (permalink / raw)
  To: Dev Jain
  Cc: akpm, vbabka, ryabinin.a.a, surenb, mhocko, jackmanb, hannes, ziy,
	hao.li, cl, rientjes, roman.gushchin, linux-mm, linux-kernel,
	glider, andreyknvl, dvyukov, vincenzo.frascino, kasan-dev,
	ryan.roberts, anshuman.khandual, catalin.marinas

I have a few questions...

Does the performance of KASAN_HW_TAGS matter?

Yes, the performance of Hardware Tag-Based KASAN matters because
it is meant to be used in production on mobile devices as it utilizes
cutting-edge (to me) hardware support instead of compiler
instrumentation.

Do you have data or are you planning to measure the performance
impact of this work?

On Wed, May 13, 2026 at 04:27:31PM +0530, Dev Jain wrote:
> Patch 1 uses GFP_SKIP_KASAN to skip unpoisoning of a slab page in the page
> allocator, since slab allocator itself poisons the slab page immediately.

If __GFP_SKIP_KASAN skips unpoisoning, why should the slab allocator
poison the whole page (kasan_poison_slab())? It's already poisoned.

> Patch 2 and 3 remove wasted work while poisoning the tail end of the
> vmalloc/slab allocation.

> ---
> Based on 7.1-rc2.
> 
> Dev Jain (3):
>   mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
>   kasan: avoid re-poisoning tag-based kmalloc redzones
>   vmalloc: hw_tags: optimize vmalloc redzoning
> 
>  include/linux/kasan.h | 17 +++++++++----
>  mm/kasan/common.c     | 55 +++++++++++++++++++++++++++++++++----------
>  mm/kasan/hw_tags.c    | 13 ++++++----
>  mm/page_alloc.c       |  2 +-
>  mm/slub.c             | 22 ++++++++++++-----
>  5 files changed, 79 insertions(+), 30 deletions(-)
> 
> -- 
> 2.43.0
> 

-- 
Cheers,
Harry / Hyeonggon



* Re: [PATCH 0/3] kasan: hw_tags: some micro-optimizations
  2026-05-14  9:56 ` [PATCH 0/3] kasan: hw_tags: some micro-optimizations Harry Yoo (Oracle)
@ 2026-05-14 10:22   ` Dev Jain
  0 siblings, 0 replies; 7+ messages in thread
From: Dev Jain @ 2026-05-14 10:22 UTC (permalink / raw)
  To: Harry Yoo (Oracle)
  Cc: akpm, vbabka, ryabinin.a.a, surenb, mhocko, jackmanb, hannes, ziy,
	hao.li, cl, rientjes, roman.gushchin, linux-mm, linux-kernel,
	glider, andreyknvl, dvyukov, vincenzo.frascino, kasan-dev,
	ryan.roberts, anshuman.khandual, catalin.marinas



On 14/05/26 3:26 pm, Harry Yoo (Oracle) wrote:
> I have a few questions...
> 
> Does the performance of KASAN_HW_TAGS matter?
> 
> Yes, the performance of Hardware Tag-Based KASAN matters because
> it is meant to be used in production on mobile devices as it utilizes
> cutting-edge (to me) hardware support instead of compiler
> instrumentation.
> 
> Do you have data or are you planning to measure the performance
> impact of this work?

I will get back on this.

> 
> On Wed, May 13, 2026 at 04:27:31PM +0530, Dev Jain wrote:
>> Patch 1 uses GFP_SKIP_KASAN to skip unpoisoning of a slab page in the page
>> allocator, since slab allocator itself poisons the slab page immediately.
> 
> If __GFP_SKIP_KASAN skips unpoisoning, why should the slab allocator
> poison the whole page (kasan_poison_slab())? It's already poisoned.

Nice observation! It may not always be the case, though, that the slab
page is already poisoned: if should_skip_kasan_poison() in the buddy
code triggers, a freed page is not poisoned.

But the point of kasan_poison_slab() is still served without it: an OOB
on a slab object is still caught probabilistically, because each
allocation carries a random tag.
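The probabilistic detection described here can be illustrated with a toy
calculation (userspace C, not kernel code; it assumes 4-bit MTE-style
tags and ignores the tag values HW_TAGS reserves, so the real figure
differs slightly):

```c
#include <assert.h>

#define NTAGS 16   /* arm64 MTE allocation tags are 4 bits */

/* Count ordered pairs of independently chosen tags that differ; an OOB
 * from an object into its neighbour is caught exactly when the
 * pointer's tag and the neighbour's memory tag differ. */
static int mismatch_pairs(void)
{
	int caught = 0;

	for (int a = 0; a < NTAGS; a++)
		for (int b = 0; b < NTAGS; b++)
			if (a != b)
				caught++;
	return caught;
}
```

With 16 possible tags, 240 of the 256 ordered pairs differ, so a
neighbouring-object OOB is caught roughly 15 times out of 16 in this
simplified model.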
> 
>> Patch 2 and 3 remove wasted work while poisoning the tail end of the
>> vmalloc/slab allocation.
> 
>> ---
>> Based on 7.1-rc2.
>>
>> Dev Jain (3):
>>   mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
>>   kasan: avoid re-poisoning tag-based kmalloc redzones
>>   vmalloc: hw_tags: optimize vmalloc redzoning
>>
>>  include/linux/kasan.h | 17 +++++++++----
>>  mm/kasan/common.c     | 55 +++++++++++++++++++++++++++++++++----------
>>  mm/kasan/hw_tags.c    | 13 ++++++----
>>  mm/page_alloc.c       |  2 +-
>>  mm/slub.c             | 22 ++++++++++++-----
>>  5 files changed, 79 insertions(+), 30 deletions(-)
>>
>> -- 
>> 2.43.0
>>
> 




* Re: [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
  2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
@ 2026-05-14 12:11   ` Ryan Roberts
  0 siblings, 0 replies; 7+ messages in thread
From: Ryan Roberts @ 2026-05-14 12:11 UTC (permalink / raw)
  To: Dev Jain, akpm, vbabka, harry, ryabinin.a.a
  Cc: surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl, rientjes,
	roman.gushchin, linux-mm, linux-kernel, glider, andreyknvl,
	dvyukov, vincenzo.frascino, kasan-dev, anshuman.khandual,
	catalin.marinas

On 13/05/2026 11:57, Dev Jain wrote:
> When a new slab page is allocated, the buddy will unpoison the page.
> Then slab immediately poisons the page via kasan_poison_slab(). This
> is wasted work.
> 
> Similar to what is done in vmalloc currently, use GFP_SKIP_KASAN
> (hw tags flag only) to skip unpoisoning of the slab page.
> 
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
>  mm/page_alloc.c |  2 +-
>  mm/slub.c       | 11 +++++++++--
>  2 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 227d58dc3de6..c3a69913aaa9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7723,7 +7723,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
>  	struct alloc_context ac = { };
>  	struct page *page;
>  
> -	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
> +	VM_WARN_ON_ONCE(gfp_flags & ~(__GFP_ACCOUNT | __GFP_SKIP_KASAN));
>  	/*
>  	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
>  	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
> diff --git a/mm/slub.c b/mm/slub.c
> index 0baa906f39ab..da3520769d1f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3269,9 +3269,16 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>  	struct slab *slab;
>  	unsigned int order = oo_order(oo);
>  
> +	/*
> +	 * New slab pages are immediately poisoned by kasan_poison_slab()
> +	 * before any object is handed out, so page allocator unpoisoning
> +	 * is wasted work for HW_TAGS KASAN.
> +	 */
> +	flags |= __GFP_SKIP_KASAN;

You will also want to elide kasan_poison_slab(), right? In that case, it
might be better to handle __GFP_SKIP_KASAN in allocate_slab() (which
calls alloc_slab_page()), since that is where kasan_poison_slab()
currently lives and it is probably better to keep the logic together.

Note that there is a wrinkle though: logically, there are different
types of memory poison (KASAN_PAGE_FREE, KASAN_PAGE_REDZONE,
KASAN_SLAB_REDZONE, etc.). Memory returned by the page allocator with
__GFP_SKIP_KASAN set will have KASAN_PAGE_FREE poison (I think), but
kasan_poison_slab() sets KASAN_SLAB_REDZONE poison.

However, this is only distinguished in practice for KASAN_GENERIC. For
KASAN_SW_TAGS and KASAN_HW_TAGS these distinct logical types all map to
the same KASAN_TAG_INVALID tag, so this optimization can only be safely
applied to KASAN_SW_TAGS and KASAN_HW_TAGS.

It would be nice if this could be abstracted away somehow...

> +
>  	if (unlikely(!allow_spin))
> -		page = alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,

nit: you may want to keep this comment around?

> -								  node, order);
> +		page = alloc_frozen_pages_nolock(__GFP_SKIP_KASAN,
> +						 node, order);
>  	else if (node == NUMA_NO_NODE)
>  		page = alloc_frozen_pages(flags, order);
>  	else




