From: Ryan Roberts <ryan.roberts@arm.com>
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
	Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
	Linu Cherian, Jonathan Cameron
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/13] arm64: mm: Introduce a C wrapper for by-range TLB invalidation
Date: Mon, 2 Mar 2026 13:55:49 +0000
Message-ID: <20260302135602.3716920-3-ryan.roberts@arm.com>
In-Reply-To: <20260302135602.3716920-1-ryan.roberts@arm.com>
References: <20260302135602.3716920-1-ryan.roberts@arm.com>

As part of efforts to reduce our reliance on complex preprocessor macros
for TLB invalidation routines, introduce a new C wrapper for by-range
TLB invalidation which can be used instead of the __tlbi() macro and can
additionally be called from C code. Each specific tlbi range op is
implemented as a C function and the appropriate function pointer is
passed to __tlbi_range().
Since everything is declared inline and is statically resolvable, the
compiler will convert the indirect function call into a direct, inlined
one.

Suggested-by: Linus Torvalds
Reviewed-by: Jonathan Cameron
Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/tlbflush.h | 32 ++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a0e3ebe299864..b3b86e5f7034e 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -468,6 +468,36 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
  * operations can only span an even number of pages. We save this for last to
  * ensure 64KB start alignment is maintained for the LPA2 case.
  */
+static __always_inline void rvae1is(u64 arg)
+{
+	__tlbi(rvae1is, arg);
+}
+
+static __always_inline void rvale1(u64 arg)
+{
+	__tlbi(rvale1, arg);
+}
+
+static __always_inline void rvale1is(u64 arg)
+{
+	__tlbi(rvale1is, arg);
+}
+
+static __always_inline void rvaale1is(u64 arg)
+{
+	__tlbi(rvaale1is, arg);
+}
+
+static __always_inline void ripas2e1is(u64 arg)
+{
+	__tlbi(ripas2e1is, arg);
+}
+
+static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
+{
+	op(arg);
+}
+
 #define __flush_tlb_range_op(op, start, pages, stride,			\
 				asid, tlb_level, tlbi_user, lpa2)	\
 do {									\
@@ -495,7 +525,7 @@ do {									\
 		if (num >= 0) {						\
 			addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid,	\
 						  scale, num, tlb_level); \
-			__tlbi(r##op, addr);				\
+			__tlbi_range(r##op, addr);			\
 			if (tlbi_user)					\
 				__tlbi_user(r##op, addr);		\
 			__flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
-- 
2.43.0
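
As a quick illustration of the devirtualisation claim in the commit
message, here is a minimal userspace sketch (not kernel code). The names
tlbi_op_fn, demo_op and demo_tlbi_range are hypothetical stand-ins for
the patch's tlbi_op, rvae1is() and __tlbi_range(), and the TLBI
instruction is replaced by a plain store so the program runs anywhere:

/*
 * Userspace sketch: an always_inline wrapper taking a function pointer.
 * When the pointer is a compile-time constant, the compiler resolves
 * the indirect call statically and inlines the op's body.
 */
#include <stdio.h>

typedef void (*tlbi_op_fn)(unsigned long arg);

static unsigned long last_arg;

/* Stand-in for one specific range op such as rvae1is(). */
static inline __attribute__((always_inline)) void demo_op(unsigned long arg)
{
	last_arg = arg;
}

/* Stand-in for __tlbi_range(): forwards to the op through a pointer. */
static inline __attribute__((always_inline)) void demo_tlbi_range(tlbi_op_fn op,
								   unsigned long arg)
{
	op(arg);
}

int main(void)
{
	demo_tlbi_range(demo_op, 0x1234);	/* no indirect branch at -O2 */
	printf("last_arg = %#lx\n", last_arg);
	return 0;
}

Building this with gcc or clang at -O2 and inspecting the disassembly
should show a direct store in main() rather than a branch through a
function pointer, which is the property that keeps __tlbi_range()
zero-cost relative to invoking __tlbi() directly.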