Linux-mm Archive on lore.kernel.org
From: Ryan Roberts <ryan.roberts@arm.com>
To: Dev Jain <dev.jain@arm.com>,
	akpm@linux-foundation.org, vbabka@kernel.org, harry@kernel.org,
	ryabinin.a.a@gmail.com
Cc: surenb@google.com, mhocko@suse.com, jackmanb@google.com,
	hannes@cmpxchg.org, ziy@nvidia.com, hao.li@linux.dev,
	cl@gentwo.org, rientjes@google.com, roman.gushchin@linux.dev,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	glider@google.com, andreyknvl@gmail.com, dvyukov@google.com,
	vincenzo.frascino@arm.com, kasan-dev@googlegroups.com,
	anshuman.khandual@arm.com, catalin.marinas@arm.com
Subject: Re: [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
Date: Thu, 14 May 2026 13:11:15 +0100	[thread overview]
Message-ID: <83aca9c5-6bd0-438e-b571-c4d0335c9901@arm.com> (raw)
In-Reply-To: <20260513105734.3380544-2-dev.jain@arm.com>

On 13/05/2026 11:57, Dev Jain wrote:
> When a new slab page is allocated, the buddy will unpoison the page.
> Then slab immediately poisons the page via kasan_poison_slab(). This
> is wasted work.
> 
> Similar to what is done in vmalloc currently, use GFP_SKIP_KASAN
> (hw tags flag only) to skip unpoisoning of the slab page.
> 
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
>  mm/page_alloc.c |  2 +-
>  mm/slub.c       | 11 +++++++++--
>  2 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 227d58dc3de6..c3a69913aaa9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7723,7 +7723,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
>  	struct alloc_context ac = { };
>  	struct page *page;
>  
> -	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
> +	VM_WARN_ON_ONCE(gfp_flags & ~(__GFP_ACCOUNT | __GFP_SKIP_KASAN));
>  	/*
>  	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
>  	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
> diff --git a/mm/slub.c b/mm/slub.c
> index 0baa906f39ab..da3520769d1f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3269,9 +3269,16 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>  	struct slab *slab;
>  	unsigned int order = oo_order(oo);
>  
> +	/*
> +	 * New slab pages are immediately poisoned by kasan_poison_slab()
> +	 * before any object is handed out, so page allocator unpoisoning
> +	 * is wasted work for HW_TAGS KASAN.
> +	 */
> +	flags |= __GFP_SKIP_KASAN;

You will also want to elide kasan_poison_slab() right? In which case, it might
be better to handle __GFP_SKIP_KASAN in allocate_slab() (which calls
alloc_slab_page()), because that's the place where kasan_poison_slab() currently
is and it's probably better to keep the logic together.

Note that there is a wrinkle though; logically, there are different types of
memory poison; KASAN_PAGE_FREE, KASAN_PAGE_REDZONE, KASAN_SLAB_REDZONE, etc.
Memory returned by the page allocator with __GFP_SKIP_KASAN set will have
KASAN_PAGE_FREE poison (I think). But kasan_poison_slab() sets
KASAN_SLAB_REDZONE poison.

However, these poison types are only distinguished in practice by
KASAN_GENERIC. For KASAN_SW_TAGS and KASAN_HW_TAGS, the distinct logical
types all map to the same KASAN_TAG_INVALID tag. So this optimization can
only be safely applied for KASAN_SW_TAGS and KASAN_HW_TAGS.

It would be nice if this could be abstracted away somehow...
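e.g. perhaps a helper along these lines, so the mode-specific reasoning
lives in KASAN rather than in slub (kasan_skip_unpoison_gfp() is a name
invented here for illustration, not an existing API):

```c
/*
 * Hypothetical helper: return the gfp flag to skip page-allocator
 * unpoisoning, but only for modes where that is safe.
 */
static inline gfp_t kasan_skip_unpoison_gfp(void)
{
	/*
	 * Generic KASAN tracks distinct poison values (KASAN_PAGE_FREE
	 * vs KASAN_SLAB_REDZONE), so the skip is only safe for the
	 * tag-based modes, where both map to KASAN_TAG_INVALID.
	 */
	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
		return 0;
	return __GFP_SKIP_KASAN;
}
```

Then callers like slub would just do `flags |= kasan_skip_unpoison_gfp();`
without needing to know about the poison-type distinction.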

> +
>  	if (unlikely(!allow_spin))
> -		page = alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,

nit: you may want to keep this comment around?

> -								  node, order);
> +		page = alloc_frozen_pages_nolock(__GFP_SKIP_KASAN,
> +						 node, order);
>  	else if (node == NUMA_NO_NODE)
>  		page = alloc_frozen_pages(flags, order);
>  	else
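For the nit above, the comment could simply be folded into the new call,
something like:

```c
	page = alloc_frozen_pages_nolock(__GFP_SKIP_KASAN /* __GFP_COMP is implied */,
					 node, order);
```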



Thread overview: 7+ messages
2026-05-13 10:57 [PATCH 0/3] kasan: hw_tags: some micro-optimizations Dev Jain
2026-05-13 10:57 ` [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation Dev Jain
2026-05-14 12:11   ` Ryan Roberts [this message]
2026-05-13 10:57 ` [PATCH 2/3] kasan: avoid re-poisoning tag-based kmalloc redzones Dev Jain
2026-05-13 10:57 ` [PATCH 3/3] vmalloc: hw_tags: optimize vmalloc redzoning Dev Jain
2026-05-14  9:56 ` [PATCH 0/3] kasan: hw_tags: some micro-optimizations Harry Yoo (Oracle)
2026-05-14 10:22   ` Dev Jain
