Subject: Re: [PATCH 03/10] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
From: Ryan Roberts
To: Will Deacon, linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, Catalin Marinas, Mark Rutland, Linus Torvalds, Oliver Upton, Marc Zyngier
Date: Mon, 14 Jul 2025 10:46:42 +0100
References: <20250711161732.384-1-will@kernel.org> <20250711161732.384-4-will@kernel.org> <9dccf004-1ac4-45ae-9098-69fcad7107a8@arm.com>
In-Reply-To: <9dccf004-1ac4-45ae-9098-69fcad7107a8@arm.com>

On 14/07/2025 09:44, Ryan Roberts wrote:
> On 11/07/2025 17:17, Will Deacon wrote:
>> When kpti is enabled, separate ASIDs are used for userspace and
>> kernelspace, requiring ASID-qualified TLB invalidation by virtual
>> address to invalidate both of them.
>>
>> Push the logic for invalidating the two ASIDs down into the low-level
>> __tlbi_level_op() function based on the TLBI operation and remove the
>> burden from the caller to handle the kpti-specific behaviour.
>>
>> Signed-off-by: Will Deacon
>> ---
>>  arch/arm64/include/asm/tlbflush.h | 45 ++++++++++++++++++-------------
>>  1 file changed, 26 insertions(+), 19 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index 4408aeebf4d5..08e509f37b28 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -115,17 +115,25 @@ enum tlbi_op {
>>  
>>  #define TLBI_TTL_UNKNOWN	INT_MAX
>>  
>> -#define __GEN_TLBI_OP_CASE(op)					\
>> +#define ___GEN_TLBI_OP_CASE(op)					\
>>  	case op:						\
>> -		__tlbi(op, arg);				\
>> +		__tlbi(op, arg)
>> +
>> +#define __GEN_TLBI_OP_ASID_CASE(op)				\
>> +	___GEN_TLBI_OP_CASE(op);				\
>> +	__tlbi_user(op, arg);					\
>> +	break
>> +
>> +#define __GEN_TLBI_OP_CASE(op)					\
>> +	___GEN_TLBI_OP_CASE(op);				\
>>  	break
>>  
>>  static __always_inline void __tlbi_level_op(const enum tlbi_op op, u64 arg)
>>  {
>>  	switch (op) {
>> -	__GEN_TLBI_OP_CASE(vae1is);
>> +	__GEN_TLBI_OP_ASID_CASE(vae1is);
>>  	__GEN_TLBI_OP_CASE(vae2is);
>> -	__GEN_TLBI_OP_CASE(vale1is);
>> +	__GEN_TLBI_OP_ASID_CASE(vale1is);
>>  	__GEN_TLBI_OP_CASE(vale2is);
>>  	__GEN_TLBI_OP_CASE(vaale1is);
>>  	__GEN_TLBI_OP_CASE(ipas2e1);
>> @@ -134,7 +142,8 @@ static __always_inline void __tlbi_level_op(const enum tlbi_op op, u64 arg)
>>  		BUILD_BUG();
>>  	}
>>  }
>> -#undef __GEN_TLBI_OP_CASE
>> +#undef __GEN_TLBI_OP_ASID_CASE
>> +#undef ___GEN_TLBI_OP_CASE
>>  
>>  #define __tlbi_level(op, addr, level) do {			\
>>  	u64 arg = addr;						\
>> @@ -150,11 +159,6 @@ static __always_inline void __tlbi_level_op(const enum tlbi_op op, u64 arg)
>>  	__tlbi_level_op(op, arg);				\
>>  } while(0)
>>  
>> -#define __tlbi_user_level(op, arg, level) do {			\
>> -	if (arm64_kernel_unmapped_at_el0())			\
>> -		__tlbi_level(op, (arg | USER_ASID_FLAG), level);	\
>> -} while (0)
>> -
>>  /*
>>   * This macro creates a properly formatted VA operand for the TLB RANGE. The
>>   * value bit assignments are:
>> @@ -418,22 +422,28 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>>   * operations can only span an even number of pages. We save this for last to
>>   * ensure 64KB start alignment is maintained for the LPA2 case.
>>   */
>> -#define __GEN_TLBI_OP_CASE(op)					\
>> +#define ___GEN_TLBI_OP_CASE(op)					\
>>  	case op:						\
>> -		__tlbi(r ## op, arg);				\
>> +		__tlbi(r ## op, arg)
>> +
>> +#define __GEN_TLBI_OP_ASID_CASE(op)				\
>> +	___GEN_TLBI_OP_CASE(op);				\
>> +	__tlbi_user(r ## op, arg);				\
>>  	break
>>  
>>  static __always_inline void __tlbi_range(const enum tlbi_op op, u64 arg)
>>  {
>>  	switch (op) {
>> -	__GEN_TLBI_OP_CASE(vae1is);
>> -	__GEN_TLBI_OP_CASE(vale1is);
>> +	__GEN_TLBI_OP_ASID_CASE(vae1is);
>> +	__GEN_TLBI_OP_ASID_CASE(vale1is);
>>  	__GEN_TLBI_OP_CASE(vaale1is);
>>  	__GEN_TLBI_OP_CASE(ipas2e1is);
> 
> Bug? This 2 underscore version is still defined from the level case above. So
> this is no longer issuing a range-based tlbi? (i.e. you're no longer prepending
> the "r" here.)

Do these __GEN_TLBI_*() macros really help that much? I think I'd prefer to see
the case statement just written out long hand. It will make things much clearer
for not that many more lines, and if I'm right about that bug, it would have
prevented it.
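FWIW, what the stale two-underscore macro actually expands to can be checked
outside the kernel: the preprocessor rescans a macro's replacement list at the
point of use (C11 6.10.3.4), so a leftover outer macro picks up whatever the
inner helper is defined as at that point. A minimal standalone sketch of the
same nesting (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

/* Stringize the final expansion so we can inspect it. */
#define STR_(x) #x
#define STR(x) STR_(x)

/* "Level" section: inner helper with no prefix, outer macro that calls it. */
#define ___OP_CASE(op)	STR(op)
#define __OP_CASE(op)	___OP_CASE(op)		/* outer macro, never #undef'd */

static const char *level_case(void) { return __OP_CASE(vae1is); }

/* "Range" section: inner helper is #undef'd and redefined to paste an 'r',
 * but the outer two-underscore macro is left over from the section above. */
#undef ___OP_CASE
#define ___OP_CASE(op)	STR(r ## op)

/* The stale outer macro's body is rescanned here, so it picks up the new
 * inner definition and the 'r' prefix is applied anyway. */
static const char *range_case(void) { return __OP_CASE(vae1is); }
```

If that rescanning behaviour holds in the kernel build too, the range switch
would still emit the r-prefixed TLBI, but by accident rather than by design,
which is arguably another reason to write the cases out long hand.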
Thanks,
Ryan

> 
>>  	default:
>>  		BUILD_BUG();
>>  	}
>>  }
>> +#undef __GEN_TLBI_OP_ASID_CASE
>> +#undef ___GEN_TLBI_OP_CASE
>>  #undef __GEN_TLBI_OP_CASE
>>  
>>  #define __flush_tlb_range_op(op, start, pages, stride,		\
>> @@ -452,8 +462,6 @@ do {						 			\
>>  		    (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) {	\
>>  			addr = __TLBI_VADDR(__flush_start, asid);	\
>>  			__tlbi_level(op, addr, tlb_level);		\
>> -			if (tlbi_user)					\
>> -				__tlbi_user_level(op, addr, tlb_level);	\
>>  			__flush_start += stride;			\
>>  			__flush_pages -= stride >> PAGE_SHIFT;		\
>>  			continue;					\
>> @@ -464,8 +472,6 @@
>>  			addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid,	\
>>  						  scale, num, tlb_level);	\
>>  			__tlbi_range(op, addr);				\
>> -			if (tlbi_user)					\
>> -				__tlbi_user(r##op, addr);		\
>>  			__flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;	\
>>  			__flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
>>  		}							\
>> @@ -584,6 +590,7 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>>  {
>>  	__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
>>  }
>> -#endif
>>  
>> +#undef __tlbi_user
>> +#endif
>>  #endif
> 