From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dev Jain
To: akpm@linux-foundation.org, vbabka@kernel.org, harry@kernel.org, ryabinin.a.a@gmail.com
Cc: Dev Jain, surenb@google.com, mhocko@suse.com, jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com, hao.li@linux.dev, cl@gentwo.org, rientjes@google.com, roman.gushchin@linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org, glider@google.com, andreyknvl@gmail.com, dvyukov@google.com, vincenzo.frascino@arm.com, kasan-dev@googlegroups.com, ryan.roberts@arm.com, anshuman.khandual@arm.com
Subject: [PATCH 1/3] mm/slub: hw_tags: skip page-allocator unpoisoning on slab allocation
Date: Wed, 13 May 2026 16:27:32 +0530
Message-Id: <20260513105734.3380544-2-dev.jain@arm.com>
In-Reply-To: <20260513105734.3380544-1-dev.jain@arm.com>
References: <20260513105734.3380544-1-dev.jain@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a new slab page is allocated, the buddy allocator unpoisons the page,
and slab then immediately re-poisons it via kasan_poison_slab(). This is
wasted work. Similar to what vmalloc already does, pass __GFP_SKIP_KASAN
(an HW_TAGS-only flag) to skip page-allocator unpoisoning of the slab page.
Signed-off-by: Dev Jain
---
 mm/page_alloc.c |  2 +-
 mm/slub.c       | 11 +++++++++--
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 227d58dc3de6..c3a69913aaa9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7723,7 +7723,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
 	struct alloc_context ac = { };
 	struct page *page;
 
-	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
+	VM_WARN_ON_ONCE(gfp_flags & ~(__GFP_ACCOUNT | __GFP_SKIP_KASAN));
 	/*
 	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
 	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
diff --git a/mm/slub.c b/mm/slub.c
index 0baa906f39ab..da3520769d1f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3269,9 +3269,16 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
+	/*
+	 * New slab pages are immediately poisoned by kasan_poison_slab()
+	 * before any object is handed out, so page allocator unpoisoning
+	 * is wasted work for HW_TAGS KASAN.
+	 */
+	flags |= __GFP_SKIP_KASAN;
+
 	if (unlikely(!allow_spin))
-		page = alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,
-						 node, order);
+		page = alloc_frozen_pages_nolock(__GFP_SKIP_KASAN,
+						 node, order);
 	else if (node == NUMA_NO_NODE)
 		page = alloc_frozen_pages(flags, order);
 	else
-- 
2.43.0