public inbox for linux-arm-kernel@lists.infradead.org
From: Ryan Roberts <ryan.roberts@arm.com>
To: Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Oliver Upton <oliver.upton@linux.dev>,
	Marc Zyngier <maz@kernel.org>, Dev Jain <dev.jain@arm.com>,
	Linu Cherian <Linu.Cherian@arm.com>,
	Jonathan Cameron <jonathan.cameron@huawei.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v3 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess()
Date: Mon,  2 Mar 2026 13:55:55 +0000	[thread overview]
Message-ID: <20260302135602.3716920-9-ryan.roberts@arm.com> (raw)
In-Reply-To: <20260302135602.3716920-1-ryan.roberts@arm.com>

From: Will Deacon <will@kernel.org>

__flush_tlb_range_limit_excess() is unnecessarily complicated:

  - It takes a 'start', 'end' and 'pages' argument, whereas it only
    needs 'pages' (which the caller has computed from the other two
    arguments!).

  - It erroneously compares 'pages' with MAX_TLBI_RANGE_PAGES when
    the system doesn't support range-based invalidation but the range to
    be invalidated would result in fewer than MAX_DVM_OPS invalidations.

Simplify the function so that it no longer takes the 'start' and 'end'
arguments and only considers the MAX_TLBI_RANGE_PAGES threshold on
systems that implement range-based invalidation.

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/tlbflush.h | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index fb7e541cfdfd9..fd86647db24bb 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -537,21 +537,19 @@ static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
 #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
 	__flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
 
-static inline bool __flush_tlb_range_limit_excess(unsigned long start,
-		unsigned long end, unsigned long pages, unsigned long stride)
+static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
+						  unsigned long stride)
 {
 	/*
-	 * When the system does not support TLB range based flush
-	 * operation, (MAX_DVM_OPS - 1) pages can be handled. But
-	 * with TLB range based operation, MAX_TLBI_RANGE_PAGES
-	 * pages can be handled.
+	 * Assume that the worst case number of DVM ops required to flush a
+	 * given range on a system that supports tlb-range is 20 (4 scales, 1
+	 * final page, 15 for alignment on LPA2 systems), which is much smaller
+	 * than MAX_DVM_OPS.
 	 */
-	if ((!system_supports_tlb_range() &&
-	     (end - start) >= (MAX_DVM_OPS * stride)) ||
-	    pages > MAX_TLBI_RANGE_PAGES)
-		return true;
+	if (system_supports_tlb_range())
+		return pages > MAX_TLBI_RANGE_PAGES;
 
-	return false;
+	return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
 }
 
 static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
@@ -565,7 +563,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
 	end = round_up(end, stride);
 	pages = (end - start) >> PAGE_SHIFT;
 
-	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+	if (__flush_tlb_range_limit_excess(pages, stride)) {
 		flush_tlb_mm(mm);
 		return;
 	}
@@ -629,7 +627,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	end = round_up(end, stride);
 	pages = (end - start) >> PAGE_SHIFT;
 
-	if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+	if (__flush_tlb_range_limit_excess(pages, stride)) {
 		flush_tlb_all();
 		return;
 	}
-- 
2.43.0
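
[Editorial note: the snippet below is an illustrative, standalone sketch of the
threshold logic, not the kernel code itself. PAGE_SHIFT, PUD_SIZE, MAX_DVM_OPS
and MAX_TLBI_RANGE_PAGES are stand-in 4K-granule values, and the helper names
limit_excess_old()/limit_excess_new() are hypothetical. It reproduces the case
the commit message calls out: on a system without range-based invalidation, a
large range walked at a coarse stride needs far fewer than MAX_DVM_OPS
invalidations, yet the old check still forced a full flush because 'pages'
exceeded MAX_TLBI_RANGE_PAGES; the new check does not.]

  /*
   * Standalone sketch of the old vs. new __flush_tlb_range_limit_excess()
   * threshold logic. Constants are illustrative 4K-granule values; the real
   * definitions live in arch/arm64/include/asm/tlbflush.h. Assumes an LP64
   * host (64-bit unsigned long), as on arm64.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define PAGE_SHIFT            12
  #define PAGE_SIZE             (1UL << PAGE_SHIFT)
  #define PUD_SIZE              (1UL << 30)     /* 1GiB stride */
  #define MAX_DVM_OPS           512UL           /* illustrative */
  #define MAX_TLBI_RANGE_PAGES  (32UL << 16)    /* ~8GiB of 4KiB pages, illustrative */

  /* New behaviour: MAX_TLBI_RANGE_PAGES only matters when tlb-range exists. */
  static bool limit_excess_new(bool has_range, unsigned long pages,
                               unsigned long stride)
  {
          if (has_range)
                  return pages > MAX_TLBI_RANGE_PAGES;

          return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
  }

  /* Old behaviour, for comparison. */
  static bool limit_excess_old(bool has_range, unsigned long start,
                               unsigned long end, unsigned long pages,
                               unsigned long stride)
  {
          return (!has_range && (end - start) >= (MAX_DVM_OPS * stride)) ||
                 pages > MAX_TLBI_RANGE_PAGES;
  }

  int main(void)
  {
          /* 16GiB range at PUD stride: only 16 invalidations are needed. */
          unsigned long start = 0, end = 16UL << 30;
          unsigned long pages = (end - start) >> PAGE_SHIFT;

          /* Old code forces a full flush on a !tlb-range system (prints 1)... */
          printf("old: %d\n", limit_excess_old(false, start, end, pages, PUD_SIZE));
          /* ...whereas the new code keeps per-entry invalidation (prints 0). */
          printf("new: %d\n", limit_excess_new(false, pages, PUD_SIZE));

          /* stride == PAGE_SIZE: old and new agree; full flush at MAX_DVM_OPS pages. */
          end = MAX_DVM_OPS * PAGE_SIZE;
          pages = (end - start) >> PAGE_SHIFT;
          printf("old: %d, new: %d\n",
                 limit_excess_old(false, start, end, pages, PAGE_SIZE),
                 limit_excess_new(false, pages, PAGE_SIZE));
          return 0;
  }

Note that for the common stride == PAGE_SIZE case the new expression reduces to
'pages >= MAX_DVM_OPS', which is exactly what the old '(end - start) >=
(MAX_DVM_OPS * stride)' test computed, so behaviour there is unchanged.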



Thread overview: 22+ messages
2026-03-02 13:55 [PATCH v3 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 02/13] arm64: mm: Introduce a C wrapper for by-range TLB invalidation Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 04/13] arm64: mm: Push __TLBI_VADDR() into __tlbi_level() Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 05/13] arm64: mm: Inline __TLBI_VADDR_RANGE() into __tlbi_range() Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 06/13] arm64: mm: Re-implement the __flush_tlb_range_op macro in C Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 07/13] arm64: mm: Simplify __TLBI_RANGE_NUM() macro Ryan Roberts
2026-03-02 13:55 ` Ryan Roberts [this message]
2026-03-02 13:55 ` [PATCH v3 09/13] arm64: mm: Refactor flush_tlb_page() to use __tlbi_level_asid() Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 10/13] arm64: mm: Refactor __flush_tlb_range() to take flags Ryan Roberts
2026-03-02 13:55 ` [PATCH v3 11/13] arm64: mm: More flags for __flush_tlb_range() Ryan Roberts
2026-03-03  9:57   ` Jonathan Cameron
2026-03-03 13:54     ` Ryan Roberts
2026-03-03 17:34       ` Jonathan Cameron
2026-03-02 13:55 ` [PATCH v3 12/13] arm64: mm: Wrap flush_tlb_page() around __do_flush_tlb_range() Ryan Roberts
2026-03-03  9:59   ` Jonathan Cameron
2026-03-02 13:56 ` [PATCH v3 13/13] arm64: mm: Provide level hint for flush_tlb_page() Ryan Roberts
2026-03-02 14:42   ` Mark Rutland
2026-03-02 17:39     ` Ryan Roberts
2026-03-02 17:56       ` Mark Rutland
2026-03-13 19:43 ` [PATCH v3 00/13] arm64: Refactor TLB invalidation API and implementation Catalin Marinas
