From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 15 May 2026 12:34:17 +0530
Subject: Re: [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
From: Dev Jain
To: Ryan Roberts , akpm@linux-foundation.org, vbabka@kernel.org, harry@kernel.org, ryabinin.a.a@gmail.com
Cc: surenb@google.com, mhocko@suse.com, jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com, hao.li@linux.dev, cl@gentwo.org, rientjes@google.com, roman.gushchin@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, glider@google.com, andreyknvl@gmail.com, dvyukov@google.com, vincenzo.frascino@arm.com, kasan-dev@googlegroups.com, anshuman.khandual@arm.com, catalin.marinas@arm.com
References: <20260513105734.3380544-1-dev.jain@arm.com> <20260513105734.3380544-2-dev.jain@arm.com> <83aca9c5-6bd0-438e-b571-c4d0335c9901@arm.com>
In-Reply-To: <83aca9c5-6bd0-438e-b571-c4d0335c9901@arm.com>

On 14/05/26 5:41 pm, Ryan Roberts wrote:
> On 13/05/2026 11:57, Dev Jain wrote:
>> When a new slab page is allocated, the buddy will unpoison the page.
>> Then slab immediately poisons the page via kasan_poison_slab(). This
>> is wasted work.
>>
>> Similar to what is done in vmalloc currently, use GFP_SKIP_KASAN
>> (hw tags flag only) to skip unpoisoning of the slab page.
>>
>> Signed-off-by: Dev Jain
>> ---
>>  mm/page_alloc.c |  2 +-
>>  mm/slub.c       | 11 +++++++++--
>>  2 files changed, 10 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 227d58dc3de6..c3a69913aaa9 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -7723,7 +7723,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
>>  	struct alloc_context ac = { };
>>  	struct page *page;
>>
>> -	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
>> +	VM_WARN_ON_ONCE(gfp_flags & ~(__GFP_ACCOUNT | __GFP_SKIP_KASAN));
>>  	/*
>>  	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
>>  	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 0baa906f39ab..da3520769d1f 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3269,9 +3269,16 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>>  	struct slab *slab;
>>  	unsigned int order = oo_order(oo);
>>
>> +	/*
>> +	 * New slab pages are immediately poisoned by kasan_poison_slab()
>> +	 * before any object is handed out, so page allocator unpoisoning
>> +	 * is wasted work for HW_TAGS KASAN.
>> +	 */
>> +	flags |= __GFP_SKIP_KASAN;
>
> You will also want to elide kasan_poison_slab() right? In which case, it might
> be better to handle __GFP_SKIP_KASAN in allocate_slab() (which calls
> alloc_slab_page()), because that's the place where kasan_poison_slab() currently
> is and it's probably better to keep the logic together.

Okay.

>
> Note that there is a wrinkle though; logically, there are different types of
> memory poison; KASAN_PAGE_FREE, KASAN_PAGE_REDZONE, KASAN_SLAB_REDZONE, etc.
> Memory returned by the page allocator with __GFP_SKIP_KASAN set will have
> KASAN_PAGE_FREE poison (I think). But kasan_poison_slab() sets
> KASAN_SLAB_REDZONE poison.
>
> However, this is only distinguished in practice for KASAN_GENERIC. For
> KASAN_SW_TAGS and KASAN_HW_TAGS these distinct logical types all map to the same
> KASAN_TAG_INVALID tag. So this optimization can only be safely applied to
> KASAN_SW_TAGS and KASAN_HW_TAGS.

__GFP_SKIP_KASAN is only used for HW_TAGS. I will mention in the description
that, for hw tags, KASAN_SLAB_REDZONE and KASAN_PAGE_FREE map to the same
poison.

>
> It would be nice if this could be abstracted away somehow...
>
>> +
>>  	if (unlikely(!allow_spin))
>> -		page = alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,
>
> nit: you may want to keep this comment around?

Yes, I will keep this.

>
>> -					node, order);
>> +		page = alloc_frozen_pages_nolock(__GFP_SKIP_KASAN,
>> +					node, order);
>>  	else if (node == NUMA_NO_NODE)
>>  		page = alloc_frozen_pages(flags, order);
>>  	else
>
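
For v2, here is a rough, untested sketch of what I understand the suggestion
to be, i.e. keeping the skip logic in allocate_slab() next to
kasan_poison_slab(). The surrounding allocate_slab() code is elided, it
assumes kasan_hw_tags_enabled() is the right gate, and it relies on the
point discussed above that, with hw tags, a freed page already carries
KASAN_TAG_INVALID so kasan_poison_slab() would only re-apply the same tag:

	/* In allocate_slab(), before calling alloc_slab_page(): */
	if (kasan_hw_tags_enabled())
		/*
		 * Freed pages already carry KASAN_TAG_INVALID, so skip the
		 * page allocator's unpoisoning for this allocation.
		 */
		flags |= __GFP_SKIP_KASAN;

	...

	/* And where kasan_poison_slab() is currently called: */
	if (!kasan_hw_tags_enabled())
		/* Only needed when the page allocator unpoisoned the pages. */
		kasan_poison_slab(slab);

Does something along those lines match what you had in mind?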