Linux-mm Archive on lore.kernel.org
* [PATCH 0/3] kasan: hw_tags: some micro-optimizations
@ 2026-05-13 10:57 Dev Jain
  2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

Patch 1 uses __GFP_SKIP_KASAN to skip unpoisoning of a slab page in the
page allocator, since the slab allocator itself poisons the page
immediately afterwards.

Patches 2 and 3 remove wasted work when poisoning the tail end of slab
and vmalloc allocations.

---
Based on 7.1-rc2.

Dev Jain (3):
  mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
  kasan: avoid re-poisoning tag-based kmalloc redzones
  vmalloc: hw_tags: optimize vmalloc redzoning

 include/linux/kasan.h | 17 +++++++++----
 mm/kasan/common.c     | 55 +++++++++++++++++++++++++++++++++----------
 mm/kasan/hw_tags.c    | 13 ++++++----
 mm/page_alloc.c       |  2 +-
 mm/slub.c             | 22 ++++++++++++-----
 5 files changed, 79 insertions(+), 30 deletions(-)

-- 
2.43.0




* [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
  2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
@ 2026-05-13 10:57 ` Dev Jain
  2026-05-13 10:57 ` [PATCH 2/3] kasan: avoid re-poisoning tag-based kmalloc redzones Dev Jain
  2026-05-13 10:57 ` [PATCH 3/3] vmalloc: hw_tags: optimize vmalloc redzoning Dev Jain
  2 siblings, 0 replies; 4+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

When a new slab page is allocated, the buddy allocator unpoisons the
page. SLUB then immediately re-poisons it via kasan_poison_slab(), so
the unpoisoning is wasted work.

Similar to what vmalloc already does, pass __GFP_SKIP_KASAN (which only
has an effect for HW_TAGS) to skip unpoisoning of the slab page.
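
A simplified sketch of the redundant work being eliminated (call-chain
names taken from current mainline; an illustration, not necessarily the
exact path in this tree):

	allocate_slab()
	  alloc_slab_page()
	    alloc_frozen_pages()
	      post_alloc_hook()
	        kasan_unpoison_pages()	/* tags every granule of the page */
	  kasan_poison_slab()		/* re-tags the whole page with
					   KASAN_TAG_INVALID */

With __GFP_SKIP_KASAN set, kasan_unpoison_pages() backs off and only
the kasan_poison_slab() pass touches the page's tags.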

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/page_alloc.c |  2 +-
 mm/slub.c       | 11 +++++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 227d58dc3de6..c3a69913aaa9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7723,7 +7723,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
 	struct alloc_context ac = { };
 	struct page *page;
 
-	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
+	VM_WARN_ON_ONCE(gfp_flags & ~(__GFP_ACCOUNT | __GFP_SKIP_KASAN));
 	/*
 	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
 	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
diff --git a/mm/slub.c b/mm/slub.c
index 0baa906f39ab..da3520769d1f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3269,9 +3269,16 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
+	/*
+	 * New slab pages are immediately poisoned by kasan_poison_slab()
+	 * before any object is handed out, so page allocator unpoisoning
+	 * is wasted work for HW_TAGS KASAN.
+	 */
+	flags |= __GFP_SKIP_KASAN;
+
 	if (unlikely(!allow_spin))
-		page = alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,
-								  node, order);
+		page = alloc_frozen_pages_nolock(__GFP_SKIP_KASAN,
+						 node, order);
 	else if (node == NUMA_NO_NODE)
 		page = alloc_frozen_pages(flags, order);
 	else
-- 
2.43.0




* [PATCH 2/3] kasan: avoid re-poisoning tag-based kmalloc redzones
  2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
  2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
@ 2026-05-13 10:57 ` Dev Jain
  2026-05-13 10:57 ` [PATCH 3/3] vmalloc: hw_tags: optimize vmalloc redzoning Dev Jain
  2 siblings, 0 replies; 4+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

When an object is allocated from slab, KASAN unpoisons the entire
object. For allocations from kmalloc caches, the requested size can be
smaller than the object size of the cache; KASAN then poisons the bytes
from the requested size up to the object size to catch OOB accesses.

These two passes can be merged into one: when unpoisoning the object on
allocation, only unpoison up to the requested size, so that the bytes
from there up to the object size simply remain poisoned.
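
Worked example, assuming 16-byte KASAN granules (illustrative):

	ptr = kmalloc(40);	/* served from the kmalloc-64 cache */

	Before: kasan_slab_alloc() unpoisons bytes 0..63, then
		poison_kmalloc_redzone() re-poisons bytes 48..63.
	After:  kasan_slab_alloc() unpoisons bytes 0..47 only (40
		rounded up to the granule); bytes 48..63 keep the
		invalid tag they already carry.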

When an object is freed back to the slab, the object is poisoned with
KASAN_SLAB_FREE while the tail redzone uses KASAN_SLAB_REDZONE. For
tag-based KASAN these two values are equal, unlike in generic KASAN, so
a freed tail already carries the correct redzone tag. Hence, make this
optimization only for tag-based KASAN.
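
For reference, the relevant poison values (per mm/kasan/kasan.h in
current mainline; worth double-checking against this tree):

	Generic:	KASAN_SLAB_FREE    = 0xFB
			KASAN_SLAB_REDZONE = 0xFC	(distinct)
	SW/HW tags:	KASAN_SLAB_FREE    = KASAN_TAG_INVALID (0xFE)
			KASAN_SLAB_REDZONE = KASAN_TAG_INVALID (0xFE)

Since a freed object's tail already carries KASAN_TAG_INVALID in the
tag-based modes, leaving it untouched on allocation yields exactly the
value that poison_kmalloc_redzone() would have written.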

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 include/linux/kasan.h | 17 +++++++++----
 mm/kasan/common.c     | 55 +++++++++++++++++++++++++++++++++----------
 mm/slub.c             | 11 +++++----
 3 files changed, 61 insertions(+), 22 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index bf233bde68c7..fd7c1f5f9fd6 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -102,6 +102,12 @@ static inline bool kasan_has_integrated_init(void)
 	return kasan_hw_tags_enabled();
 }
 
+static inline bool kasan_has_tag_based_kmalloc_redzones(void)
+{
+	return kasan_enabled() &&
+	       (IS_ENABLED(CONFIG_KASAN_SW_TAGS) || kasan_hw_tags_enabled());
+}
+
 #ifdef CONFIG_KASAN
 void __kasan_unpoison_range(const void *addr, size_t size);
 static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
@@ -244,13 +250,14 @@ static __always_inline void kasan_kfree_large(void *ptr)
 		__kasan_kfree_large(ptr, _RET_IP_);
 }
 
-void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
-				       void *object, gfp_t flags, bool init);
+void * __must_check __kasan_slab_alloc(struct kmem_cache *s, void *object,
+				       size_t size, gfp_t flags, bool init);
 static __always_inline void * __must_check kasan_slab_alloc(
-		struct kmem_cache *s, void *object, gfp_t flags, bool init)
+		struct kmem_cache *s, void *object, size_t size,
+		gfp_t flags, bool init)
 {
 	if (kasan_enabled())
-		return __kasan_slab_alloc(s, object, flags, init);
+		return __kasan_slab_alloc(s, object, size, flags, init);
 	return object;
 }
 
@@ -437,7 +444,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
 }
 static inline void kasan_kfree_large(void *ptr) {}
 static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
-				   gfp_t flags, bool init)
+				   size_t size, gfp_t flags, bool init)
 {
 	return object;
 }
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index b7d05c2a6d93..9a4db9c21aaf 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -326,14 +326,25 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)
 	/* The object will be poisoned by kasan_poison_pages(). */
 }
 
+static inline size_t slab_unpoison_size(struct kmem_cache *cache, size_t size)
+{
+	if (kasan_has_tag_based_kmalloc_redzones() && is_kmalloc_cache(cache))
+		return min_t(size_t, size, cache->object_size);
+
+	return cache->object_size;
+}
+
 static inline void unpoison_slab_object(struct kmem_cache *cache, void *object,
-					gfp_t flags, bool init)
+					size_t size, gfp_t flags, bool init)
 {
 	/*
-	 * Unpoison the whole object. For kmalloc() allocations,
-	 * poison_kmalloc_redzone() will do precise poisoning.
+	 * For tag-based modes, kmalloc redzones all use the same invalid tag.
+	 * Keep the tail poisoned and only unpoison the requested allocation
+	 * size. Generic KASAN keeps distinct shadow values for free objects and
+	 * redzones, so it still unpoisons the whole object and later poisons
+	 * the precise redzone.
 	 */
-	kasan_unpoison(object, cache->object_size, init);
+	kasan_unpoison(object, slab_unpoison_size(cache, size), init);
 
 	/* Save alloc info (if possible) for non-kmalloc() allocations. */
 	if (kasan_stack_collection_enabled() && !is_kmalloc_cache(cache))
@@ -341,7 +352,8 @@ static inline void unpoison_slab_object(struct kmem_cache *cache, void *object,
 }
 
 void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
-					void *object, gfp_t flags, bool init)
+					void *object, size_t size,
+					gfp_t flags, bool init)
 {
 	u8 tag;
 	void *tagged_object;
@@ -363,11 +375,18 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
 	tagged_object = set_tag(object, tag);
 
 	/* Unpoison the object and save alloc info for non-kmalloc() allocations. */
-	unpoison_slab_object(cache, tagged_object, flags, init);
+	unpoison_slab_object(cache, tagged_object, size, flags, init);
 
 	return tagged_object;
 }
 
+static inline void save_kmalloc_alloc_info(struct kmem_cache *cache,
+					   void *object, gfp_t flags)
+{
+	if (kasan_stack_collection_enabled() && is_kmalloc_cache(cache))
+		kasan_save_alloc_info(cache, object, flags);
+}
+
 static inline void poison_kmalloc_redzone(struct kmem_cache *cache,
 				const void *object, size_t size, gfp_t flags)
 {
@@ -394,8 +413,7 @@ static inline void poison_kmalloc_redzone(struct kmem_cache *cache,
 	 * Save alloc info (if possible) for kmalloc() allocations.
 	 * This also rewrites the alloc info when called from kasan_krealloc().
 	 */
-	if (kasan_stack_collection_enabled() && is_kmalloc_cache(cache))
-		kasan_save_alloc_info(cache, (void *)object, flags);
+	save_kmalloc_alloc_info(cache, (void *)object, flags);
 
 }
 
@@ -411,8 +429,14 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
 	if (is_kfence_address(object))
 		return (void *)object;
 
-	/* The object has already been unpoisoned by kasan_slab_alloc(). */
-	poison_kmalloc_redzone(cache, object, size, flags);
+	/*
+	 * For tag-based modes, the object has already been precisely
+	 * unpoisoned by kasan_slab_alloc(). The tail remains poisoned.
+	 */
+	if (kasan_has_tag_based_kmalloc_redzones())
+		save_kmalloc_alloc_info(cache, (void *)object, flags);
+	else
+		poison_kmalloc_redzone(cache, object, size, flags);
 
 	/* Keep the tag that was set by kasan_slab_alloc(). */
 	return (void *)object;
@@ -561,11 +585,16 @@ void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
 		return;
 
 	/* Unpoison the object and save alloc info for non-kmalloc() allocations. */
-	unpoison_slab_object(slab->slab_cache, ptr, flags, false);
+	unpoison_slab_object(slab->slab_cache, ptr, size, flags, false);
 
 	/* Poison the redzone and save alloc info for kmalloc() allocations. */
-	if (is_kmalloc_cache(slab->slab_cache))
-		poison_kmalloc_redzone(slab->slab_cache, ptr, size, flags);
+	if (is_kmalloc_cache(slab->slab_cache)) {
+		if (kasan_has_tag_based_kmalloc_redzones())
+			save_kmalloc_alloc_info(slab->slab_cache, ptr, flags);
+		else
+			poison_kmalloc_redzone(slab->slab_cache, ptr, size,
+					       flags);
+	}
 }
 
 bool __kasan_check_byte(const void *address, unsigned long ip)
diff --git a/mm/slub.c b/mm/slub.c
index da3520769d1f..15144b2e078c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4550,8 +4550,9 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	 * replacement of current poisoning under certain debug option, and
 	 * won't break other sanity checks.
 	 */
-	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
-	    (s->flags & SLAB_KMALLOC))
+	if ((s->flags & SLAB_KMALLOC) &&
+	    (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) ||
+	     kasan_has_tag_based_kmalloc_redzones()))
 		zero_size = orig_size;
 
 	/*
@@ -4573,7 +4574,8 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], init_flags, kasan_init);
+		p[i] = kasan_slab_alloc(s, p[i], orig_size, init_flags,
+					kasan_init);
 		if (p[i] && init && (!kasan_init ||
 				     !kasan_has_integrated_init()))
 			memset(p[i], 0, zero_size);
@@ -7615,7 +7617,8 @@ static void early_kmem_cache_node_alloc(int node)
 #ifdef CONFIG_SLUB_DEBUG
 	init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
 #endif
-	n = kasan_slab_alloc(kmem_cache_node, n, GFP_KERNEL, false);
+	n = kasan_slab_alloc(kmem_cache_node, n, kmem_cache_node->object_size,
+			     GFP_KERNEL, false);
 	slab->freelist = get_freepointer(kmem_cache_node, n);
 	slab->inuse = 1;
 	kmem_cache_node->per_node[node].node = n;
-- 
2.43.0




* [PATCH 3/3] vmalloc: hw_tags: optimize vmalloc redzoning
  2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
  2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
  2026-05-13 10:57 ` [PATCH 2/3] kasan: avoid re-poisoning tag-based kmalloc redzones Dev Jain
@ 2026-05-13 10:57 ` Dev Jain
  2 siblings, 0 replies; 4+ messages in thread
From: Dev Jain @ 2026-05-13 10:57 UTC (permalink / raw)
  To: akpm, vbabka, harry, ryabinin.a.a
  Cc: Dev Jain, surenb, mhocko, jackmanb, hannes, ziy, hao.li, cl,
	rientjes, roman.gushchin, linux-mm, linux-kernel, glider,
	andreyknvl, dvyukov, vincenzo.frascino, kasan-dev, ryan.roberts,
	anshuman.khandual, catalin.marinas

If the allocation size is less than a page, vmalloc first unpoisons the
memory and then, in a second pass, poisons the in-page tail with
KASAN_TAG_INVALID (for HW_TAGS) to catch OOB accesses.

Instead, unpoison only up to the start of the redzone, and poison the
tail only when a redzone actually exists, saving some work.
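
Worked example for a 4K page and 16-byte granules (illustrative):
for vmalloc(100),

	redzone_start = round_up(start + 100, 16)		    = start + 112
	redzone_size  = round_up(redzone_start, 4K) - redzone_start = 3984

so bytes 0..111 are unpoisoned and bytes 112..4095 poisoned, each
exactly once; for page-aligned sizes redzone_size is 0 and the
kasan_poison() call is skipped altogether.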

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/kasan/hw_tags.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index cbef5e450954..7c94f71b5f12 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -364,9 +364,6 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
 	tag = (flags & KASAN_VMALLOC_KEEP_TAG) ? get_tag(start) : kasan_random_tag();
 	start = set_tag(start, tag);
 
-	/* Unpoison and initialize memory up to size. */
-	kasan_unpoison(start, size, flags & KASAN_VMALLOC_INIT);
-
 	/*
 	 * Explicitly poison and initialize the in-page vmalloc() redzone.
 	 * Unlike software KASAN modes, hardware tag-based KASAN doesn't
@@ -375,8 +372,14 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
 	redzone_start = round_up((unsigned long)start + size,
 				 KASAN_GRANULE_SIZE);
 	redzone_size = round_up(redzone_start, PAGE_SIZE) - redzone_start;
-	kasan_poison((void *)redzone_start, redzone_size, KASAN_TAG_INVALID,
-		     flags & KASAN_VMALLOC_INIT);
+
+	/* Unpoison and initialize memory before the redzone. */
+	kasan_unpoison(start, redzone_start - (unsigned long)start,
+		       flags & KASAN_VMALLOC_INIT);
+
+	if (redzone_size)
+		kasan_poison((void *)redzone_start, redzone_size,
+			     KASAN_TAG_INVALID, flags & KASAN_VMALLOC_INIT);
 
 	/*
 	 * Set per-page tag flags to allow accessing physical memory for the
-- 
2.43.0


