Date: Fri, 5 Apr 2024 18:10:14 +0100
From: Catalin Marinas
To: Gavin Shan
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	will@kernel.org, akpm@linux-foundation.org, maz@kernel.org,
	oliver.upton@linux.dev, ryan.roberts@arm.com, apopple@nvidia.com,
	rananta@google.com, mark.rutland@arm.com, v-songbaohua@oppo.com,
	yangyicong@hisilicon.com, shahuang@redhat.com, yihyu@redhat.com,
	shan.gavin@gmail.com
Subject: Re: [PATCH v3 1/3] arm64: tlb: Fix TLBI RANGE operand
References: <20240405035852.1532010-1-gshan@redhat.com>
	<20240405035852.1532010-2-gshan@redhat.com>
In-Reply-To: <20240405035852.1532010-2-gshan@redhat.com>

On Fri, Apr 05, 2024 at 01:58:50PM +1000, Gavin Shan wrote:
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 3b0e8248e1a4..a75de2665d84 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -161,12 +161,18 @@ static inline unsigned long get_trans_granule(void)
>  #define MAX_TLBI_RANGE_PAGES	__TLBI_RANGE_PAGES(31, 3)
>  
>  /*
> - * Generate 'num' values from -1 to 30 with -1 rejected by the
> - * __flush_tlb_range() loop below.
> + * Generate 'num' values from -1 to 31 with -1 rejected by the
> + * __flush_tlb_range() loop below. Its return value is only
> + * significant for a maximum of MAX_TLBI_RANGE_PAGES pages. If
> + * 'pages' is more than that, you must iterate over the overall
> + * range.
>   */
> -#define TLBI_RANGE_MASK		GENMASK_ULL(4, 0)
> -#define __TLBI_RANGE_NUM(pages, scale)	\
> -	((((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
> +#define __TLBI_RANGE_NUM(pages, scale)					\
> +	({								\
> +		int __pages = min((pages),				\
> +				  __TLBI_RANGE_PAGES(31, (scale)));	\
> +		(__pages >> (5 * (scale) + 1)) - 1;			\
> +	})

Reviewed-by: Catalin Marinas

This looks correct to me as well. I spent a bit of time updating an old
CBMC model I had around. With the original __TLBI_RANGE_NUM, it indeed
shows 'scale' becoming negative on the kvm_tlb_flush_vmid_range() path.
The patch above fixes it and it also allows the non-KVM path to use the
range TLBI for MAX_TLBI_RANGE_PAGES (as per patch 3).
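To make the arithmetic concrete, here is a small user-space sketch (an
editorial illustration, not part of the patch or of the model below; the
_OLD/_NEW macro name suffixes exist only for this side-by-side comparison).
It evaluates both versions of __TLBI_RANGE_NUM for
pages == MAX_TLBI_RANGE_PAGES, i.e. the largest chunk that
kvm_tlb_flush_vmid_range() passes down: the original macro masks the
intermediate value and returns -1 at every scale, so the caller's loop keeps
decrementing 'scale' past zero, while the fixed macro clamps 'pages' first and
returns 31 at scale 3, covering the whole chunk with a single range TLBI.

#include <stdio.h>

#define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)

/* original definition, renamed here only for the comparison */
#define TLBI_RANGE_MASK			0x1fUL
#define __TLBI_RANGE_NUM_OLD(pages, scale) \
	((((int)(pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)

/* fixed definition from the patch, renamed likewise */
#define min(x, y)			((x) <= (y) ? (x) : (y))
#define __TLBI_RANGE_NUM_NEW(pages, scale)				\
	({								\
		int __pages = min((pages),				\
				  __TLBI_RANGE_PAGES(31, (scale)));	\
		(__pages >> (5 * (scale) + 1)) - 1;			\
	})

int main(void)
{
	/* 32 << 16 == 2^21 pages, i.e. 8GB with 4K pages */
	unsigned long pages = MAX_TLBI_RANGE_PAGES;
	int scale;

	for (scale = 3; scale >= 0; scale--) {
		/* as in the kernel, the result is consumed as an int */
		int old_num = __TLBI_RANGE_NUM_OLD(pages, scale);
		int new_num = __TLBI_RANGE_NUM_NEW(pages, scale);

		printf("scale %d: old num = %d, new num = %d\n",
		       scale, old_num, new_num);
	}
	return 0;
}

Built with gcc (the statement expression is a GNU extension), this prints
old num = -1 and new num = 31 for every scale, matching the 'scale'
underflow described above.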
FWIW, here's the model:

-----------------------8<--------------------------------------
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Check with:
 *   cbmc --unwind 6 tlbinval.c
 */

#define PAGE_SHIFT	(12)
#define PAGE_SIZE	(1 << PAGE_SHIFT)
#define VA_RANGE	(1UL << 48)
#define SZ_64K		0x00010000

#define __round_mask(x, y)	((__typeof__(x))((y)-1))
#define round_up(x, y)		((((x)-1) | __round_mask(x, y))+1)
#define round_down(x, y)	((x) & ~__round_mask(x, y))
#define min(x, y)		(x <= y ? x : y)

#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (__typeof__(x))(a) - 1)
#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
#define ALIGN(x, a)			__ALIGN_KERNEL((x), (a))

/* only masking out irrelevant bits */
#define __TLBI_RANGE_VADDR(addr, shift)	((addr) & ~((1UL << shift) - 1))
#define __TLBI_VADDR(addr)		__TLBI_RANGE_VADDR(addr, PAGE_SHIFT)

#define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)

#if 0
/* original code */
#define TLBI_RANGE_MASK			0x1fUL
#define __TLBI_RANGE_NUM(pages, scale)	\
	((((int)(pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
#else
#define __TLBI_RANGE_NUM(pages, scale)					\
	({								\
		int __pages = min((pages),				\
				  __TLBI_RANGE_PAGES(31, (scale)));	\
		(__pages >> (5 * (scale) + 1)) - 1;			\
	})
#endif

const static _Bool lpa2 = 1;
const static _Bool kvm = 1;

static unsigned long inval_start;
static unsigned long inval_end;

static void tlbi(unsigned long start, unsigned long size)
{
	unsigned long end = start + size;

	if (inval_end == 0) {
		inval_start = start;
		inval_end = end;
		return;
	}

	/* optimal invalidation */
	__CPROVER_assert(start >= inval_end || end <= inval_start,
			 "No overlapping TLBI range");

	if (start < inval_start) {
		__CPROVER_assert(end >= inval_start, "No TLBI range gaps");
		inval_start = start;
	}
	if (end > inval_end) {
		__CPROVER_assert(start <= inval_end, "No TLBI range gaps");
		inval_end = end;
	}
}

static void tlbi_range(unsigned long start, int num, int scale)
{
	unsigned long size = __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;

	tlbi(start, size);
}

static void __flush_tlb_range_op(unsigned long start, unsigned long pages,
				 unsigned long stride)
{
	int num = 0;
	int scale = 3;
	int shift = lpa2 ? 16 : PAGE_SHIFT;
	unsigned long addr;

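	/*
	 * Editorial note, not in the original model: the loop below
	 * mirrors the kernel's __flush_tlb_range_op(). Single pages (or,
	 * with LPA2, start addresses not aligned to 64K) are invalidated
	 * one stride at a time; otherwise the remaining 'pages' are
	 * decomposed into (num, scale) chunks starting from scale 3, each
	 * chunk emitting one range TLBI covering
	 * __TLBI_RANGE_PAGES(num, scale) pages.
	 */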
	while (pages > 0) {
		if (pages == 1 ||
		    (lpa2 && start != ALIGN(start, SZ_64K))) {
			addr = __TLBI_VADDR(start);
			tlbi(addr, stride);
			start += stride;
			pages -= stride >> PAGE_SHIFT;
			continue;
		}

		__CPROVER_assert(scale >= 0 && scale <= 3, "Scale in range");

		num = __TLBI_RANGE_NUM(pages, scale);
		__CPROVER_assert(num <= 31, "Num in range");

		if (num >= 0) {
			addr = __TLBI_RANGE_VADDR(start, shift);
			tlbi_range(addr, num, scale);
			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
			pages -= __TLBI_RANGE_PAGES(num, scale);
		}

		scale--;
	}
}

static void __flush_tlb_range(unsigned long start, unsigned long pages,
			      unsigned long stride)
{
	if (pages > MAX_TLBI_RANGE_PAGES) {
		tlbi(0, VA_RANGE);
		return;
	}

	__flush_tlb_range_op(start, pages, stride);
}

void __kvm_tlb_flush_vmid_range(unsigned long start, unsigned long pages)
{
	unsigned long stride;

	stride = PAGE_SIZE;
	start = round_down(start, stride);

	__flush_tlb_range_op(start, pages, stride);
}

static void kvm_tlb_flush_vmid_range(unsigned long addr, unsigned long size)
{
	unsigned long pages, inval_pages;

	pages = size >> PAGE_SHIFT;
	while (pages > 0) {
		inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
		__kvm_tlb_flush_vmid_range(addr, inval_pages);

		addr += inval_pages << PAGE_SHIFT;
		pages -= inval_pages;
	}
}

static unsigned long nondet_ulong(void);

int main(void)
{
	unsigned long stride = nondet_ulong();
	unsigned long start = round_down(nondet_ulong(), stride);
	unsigned long end = round_up(nondet_ulong(), stride);
	unsigned long pages = (end - start) >> PAGE_SHIFT;

	__CPROVER_assume(stride == PAGE_SIZE ||
			 stride == PAGE_SIZE << (PAGE_SHIFT - 3) ||
			 stride == PAGE_SIZE << (2 * (PAGE_SHIFT - 3)));
	__CPROVER_assume(start < end);
	__CPROVER_assume(end <= VA_RANGE);

	if (kvm)
		kvm_tlb_flush_vmid_range(start, pages << PAGE_SHIFT);
	else
		__flush_tlb_range(start, pages, stride);

	__CPROVER_assert((inval_start == 0 && inval_end == VA_RANGE) ||
			 (inval_start == start && inval_end == end),
			 "Correct invalidation");

	return 0;
}
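In case anyone wants to play with the model (my assumption about the
workflow, based on the comment at the top and the "#if 0" block; adjust as
needed): switching the "#if 0" in front of the original TLBI_RANGE_MASK
definition to "#if 1" selects the old macro, and then

    cbmc --unwind 6 tlbinval.c

should report the "Scale in range" assertion failing on the
kvm_tlb_flush_vmid_range() path, i.e. 'scale' going negative as described
above. With the patched macro left selected, all assertions are expected to
pass.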