Date: Mon, 5 Jan 2026 11:03:34 +0530
From: Linu Cherian
To: Ryan Roberts
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
 Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 02/13] arm64: mm: Introduce a C wrapper for by-range
 TLB invalidation
References: <20251216144601.2106412-1-ryan.roberts@arm.com>
 <20251216144601.2106412-3-ryan.roberts@arm.com>
In-Reply-To: <20251216144601.2106412-3-ryan.roberts@arm.com>

Ryan,

On Tue, Dec 16, 2025 at 02:45:47PM +0000, Ryan Roberts wrote:
> As part of efforts to reduce our reliance on complex preprocessor macros
> for TLB invalidation routines, introduce a new C wrapper for by-range
> TLB invalidation which can be used instead of the __tlbi() macro and can
> additionally be called from C code.
>
> Each specific tlbi range op is implemented as a C function and the
> appropriate function pointer is passed to __tlbi_range(). Since
> everything is declared inline and is statically resolvable, the
> compiler will convert the indirect function call into a direct,
> inlined call.
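
For anyone skimming this, a minimal standalone sketch of the pattern the
commit message describes (the names, plain `inline`, and the printf
stand-in for the TLBI instruction are all illustrative, not taken from
the patch): because the wrapper's function-pointer argument is a
compile-time constant, the compiler resolves the indirect call into a
direct, inlinable one.

  #include <stdint.h>
  #include <stdio.h>

  typedef void (*range_op)(uint64_t arg);

  /* Stand-in for a per-op helper such as rvae1is(); prints instead
   * of issuing a TLBI instruction. */
  static inline void op_rvae1is(uint64_t arg)
  {
          printf("rvae1is <- 0x%llx\n", (unsigned long long)arg);
  }

  static inline void do_range(range_op op, uint64_t arg)
  {
          op(arg);        /* op is statically known: direct call */
  }

  int main(void)
  {
          do_range(op_rvae1is, 0x1000);
          return 0;
  }
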
>
> Suggested-by: Linus Torvalds
> Signed-off-by: Ryan Roberts
> ---
>  arch/arm64/include/asm/tlbflush.h | 33 ++++++++++++++++++++++++++++++-
>  1 file changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 13a59cf28943..c5111d2afc66 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -459,6 +459,37 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>   * operations can only span an even number of pages. We save this for last to
>   * ensure 64KB start alignment is maintained for the LPA2 case.
>   */
> +static __always_inline void rvae1is(u64 arg)
> +{
> +	__tlbi(rvae1is, arg);
> +}
> +
> +static __always_inline void rvale1(u64 arg)
> +{
> +	__tlbi(rvale1, arg);
> +	__tlbi_user(rvale1, arg);

Should this __tlbi_user be added as part of patch 3?

> +}
> +
> +static __always_inline void rvale1is(u64 arg)
> +{
> +	__tlbi(rvale1is, arg);
> +}
> +
> +static __always_inline void rvaale1is(u64 arg)
> +{
> +	__tlbi(rvaale1is, arg);
> +}
> +
> +static __always_inline void ripas2e1is(u64 arg)
> +{
> +	__tlbi(ripas2e1is, arg);
> +}
> +
> +static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
> +{
> +	op(arg);
> +}
> +
>  #define __flush_tlb_range_op(op, start, pages, stride,		\
>  			     asid, tlb_level, tlbi_user, lpa2)		\
>  do {									\
> @@ -486,7 +517,7 @@ do {								\
>  		if (num >= 0) {						\
>  			addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid,	\
>  						  scale, num, tlb_level);	\
> -			__tlbi(r##op, addr);				\
> +			__tlbi_range(r##op, addr);			\
>  			if (tlbi_user)					\
>  				__tlbi_user(r##op, addr);		\
>  			__flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;	\
> --
> 2.43.0
>
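
And for completeness, a rough sketch of how the r##op paste in
__flush_tlb_range_op() ends up selecting one of the new C functions
(again, the names below are illustrative stand-ins rather than the
kernel's; the real tlbi_op type and __tlbi_range() live in
tlbflush.h): with op = vale1, r##op pastes to the identifier rvale1,
so the wrapper receives a pointer to the matching inline function.

  #include <stdint.h>
  #include <stdio.h>

  typedef void (*tlbi_op)(uint64_t arg);

  /* Stand-in for one of the per-op inline functions, e.g. rvale1(). */
  static inline void rvale1(uint64_t arg)
  {
          printf("rvale1(0x%llx)\n", (unsigned long long)arg);
  }

  static inline void tlbi_range(tlbi_op op, uint64_t arg)
  {
          op(arg);
  }

  /* Mirrors __tlbi_range(r##op, addr) in __flush_tlb_range_op(). */
  #define flush_range(op, addr)   tlbi_range(r##op, addr)

  int main(void)
  {
          flush_range(vale1, 0x1000); /* -> tlbi_range(rvale1, 0x1000) */
          return 0;
  }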