From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 24 Apr 2026 19:32:13 +0100
From: Catalin Marinas
To: Dev Jain
Cc: arnd@arndb.de, kees@kernel.org, mingo@redhat.com, peterz@infradead.org,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	akpm@linux-foundation.org, david@kernel.org, urezki@gmail.com,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, vschneid@redhat.com, ljs@kernel.org,
	Liam.Howlett@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, tglx@kernel.org,
	usama.anjum@arm.com, mathieu.desnoyers@efficios.com,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Ryan Roberts
Subject: Re: [PATCH v3 1/3] vmalloc: add __GFP_SKIP_KASAN support
Message-ID:
References: <20260424130157.3163009-1-dev.jain@arm.com>
 <20260424130157.3163009-2-dev.jain@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <20260424130157.3163009-2-dev.jain@arm.com>

On Fri, Apr 24, 2026 at 06:31:55PM +0530, Dev Jain wrote:
> From: Muhammad Usama Anjum
> 
> For allocations that will be accessed only with match-all pointers
> (e.g., kernel stacks), setting tags is wasted work. If the caller
> already set __GFP_SKIP_KASAN, skip tag setting of vmalloc pages.
> 
> Before this patch, __GFP_SKIP_KASAN wasn't being used with vmalloc
> APIs, so it wasn't being checked. Now it's being checked and acted
> upon. Other KASAN modes are unchanged because __GFP_SKIP_KASAN isn't
> defined there.
> 
> This is a preparatory patch for optimizing kernel stack allocations.
> 
> Co-developed-by: Ryan Roberts
> Co-developed-by: Dev Jain
> Signed-off-by: Muhammad Usama Anjum

Co-developers need to sign off as well. See submitting-patches.rst.
Same comment about your SoB as on patch 3.

> ---
>  mm/vmalloc.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b31b208f6ecb3..c94fcb2725b6b 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3939,7 +3939,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  			__GFP_NOFAIL | __GFP_ZERO |\
>  			__GFP_NORETRY | __GFP_RETRY_MAYFAIL |\
>  			GFP_NOFS | GFP_NOIO | GFP_KERNEL_ACCOUNT |\
> -			GFP_USER | __GFP_NOLOCKDEP)
> +			GFP_USER | __GFP_NOLOCKDEP | __GFP_SKIP_KASAN)
>  
>  static gfp_t vmalloc_fix_flags(gfp_t flags)
>  {
> @@ -3980,6 +3980,9 @@ static gfp_t vmalloc_fix_flags(gfp_t flags)
>   *
>   * %__GFP_NOWARN can be used to suppress failure messages.
>   *
> + * %__GFP_SKIP_KASAN can be used to skip unpoisoning of mapped pages
> + * (when prot=%PAGE_KERNEL).
> + *
>   * Can not be called from interrupt nor NMI contexts.
>   * Return: the address of the area or %NULL on failure
>   */
> @@ -3993,6 +3996,10 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>  	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_NONE;
>  	unsigned long original_align = align;
>  	unsigned int shift = PAGE_SHIFT;
> +	bool skip_vmalloc_kasan = gfp_mask & __GFP_SKIP_KASAN;
> +
> +	/* Don't skip metadata kasan unpoisoning */
> +	gfp_mask &= ~__GFP_SKIP_KASAN;
>  
>  	if (WARN_ON_ONCE(!size))
>  		return NULL;
> @@ -4041,7 +4048,7 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>  	 * kasan_unpoison_vmalloc().
>  	 */
>  	if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
> -		if (kasan_hw_tags_enabled()) {
> +		if (kasan_hw_tags_enabled() && !skip_vmalloc_kasan) {
>  			/*
>  			 * Modify protection bits to allow tagging.
>  			 * This must be done before mapping.
> @@ -4054,6 +4061,12 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
>  			 * poisoned and zeroed by kasan_unpoison_vmalloc().
>  			 */
>  			gfp_mask |= __GFP_SKIP_KASAN | __GFP_SKIP_ZERO;
> +		} else if (skip_vmalloc_kasan) {
> +			/*
> +			 * Skip page_alloc unpoisoning physical pages backing
> +			 * VM_ALLOC mapping, as requested by caller.
> +			 */
> +			gfp_mask |= __GFP_SKIP_KASAN;
>  		}

This juggling of GFP flags between the metadata allocation and the
actual page allocation gets confusing. You remove __GFP_SKIP_KASAN from
gfp_mask early on, then add it back here. You might as well just mask it
out when calling __get_vm_area_node(), and then we won't have to figure
out why it's added back above.

The __GFP_SKIP_ZERO flag is meant for the page allocator and is used
later in this function to tell kasan to actually initialise the memory
(not to skip it). __GFP_SKIP_KASAN, OTOH, is used to tell both vmalloc()
and the underlying page allocator to avoid tagging.

I wonder whether it would be better to have a VM_SKIP_KASAN flag instead
and leave the GFP flags alone.

-- 
Catalin