public inbox for linux-riscv@lists.infradead.org
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton <oliver.upton@linux.dev>,
	Marc Zyngier <maz@kernel.org>,  James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	 Huacai Chen <chenhuacai@kernel.org>,
	Zenghui Yu <yuzenghui@huawei.com>,
	 Anup Patel <anup@brainfault.org>,
	Atish Patra <atishp@atishpatra.org>,
	 Jing Zhang <jingzhangos@google.com>,
	Colton Lewis <coltonlewis@google.com>,
	 Raghavendra Rao Ananta <rananta@google.com>,
	David Matlack <dmatlack@google.com>,
	 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	 linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
	 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	 kvm@vger.kernel.org, Catalin Marinas <catalin.marinas@arm.com>,
	 Gavin Shan <gshan@redhat.com>
Subject: [PATCH v6 05/11] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
Date: Sat, 15 Jul 2023 00:53:59 +0000	[thread overview]
Message-ID: <20230715005405.3689586-6-rananta@google.com> (raw)
In-Reply-To: <20230715005405.3689586-1-rananta@google.com>

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code reuses this core algorithm with
ipas2e1is for range-based TLB invalidations by IPA. Hence, extract
the core flush functionality of __flush_tlb_range() into its own
macro that accepts an 'op' argument to pass any TLBI operation, so
that other callers (KVM) can benefit.

No functional changes intended.
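
To preview the intended reuse, a rough sketch follows of how a caller
flushing stage-2 entries by IPA could invoke the new macro. This is
illustrative only: the function name, barrier placement, and argument
choices here are assumptions, not the actual KVM call site (which
arrives later in this series).

/*
 * Illustrative sketch only, not the real KVM code: reuse the same
 * page walk with an IPA-based TLBI operation. IPA-based ops take no
 * ASID and have no "user" variant, hence asid = 0 and tlbi_user =
 * false.
 */
static void example_s2_range_flush(phys_addr_t start, unsigned long pages,
				   int tlb_level)
{
	dsb(ishst);
	__flush_tlb_range_op(ipas2e1is, start, pages, PAGE_SIZE, 0,
			     tlb_level, false);
	dsb(ish);
}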

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/include/asm/tlbflush.h | 109 +++++++++++++++---------------
 1 file changed, 56 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25..f7fafba25add 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,62 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,			\
+				asid, tlb_level, tlbi_user)		\
+do {									\
+	int num = 0;							\
+	int scale = 0;							\
+	unsigned long addr;						\
+									\
+	while (pages > 0) {						\
+		if (!system_supports_tlb_range() ||			\
+		    pages % 2 == 1) {					\
+			addr = __TLBI_VADDR(start, asid);		\
+			__tlbi_level(op, addr, tlb_level);		\
+			if (tlbi_user)					\
+				__tlbi_user_level(op, addr, tlb_level);	\
+			start += stride;				\
+			pages -= stride >> PAGE_SHIFT;			\
+			continue;					\
+		}							\
+									\
+		num = __TLBI_RANGE_NUM(pages, scale);			\
+		if (num >= 0) {						\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
+						  num, tlb_level);	\
+			__tlbi(r##op, addr);				\
+			if (tlbi_user)					\
+				__tlbi_user(r##op, addr);		\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+			pages -= __TLBI_RANGE_PAGES(num, scale);	\
+		}							\
+		scale++;						\
+	}								\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +355,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
-	 *    Start from scale = 0, flush the corresponding number of pages
-	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 *    until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						  num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
 
-- 
2.41.0.455.g037347b96a-goog
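
For readers tracing the scale/num arithmetic described in the comment
added above, here is a minimal userspace sketch. The macro bodies are
simplified copies of the kernel's definitions (with an explicit cast
added for clarity), and the printf calls stand in for the actual TLBI
instructions; this illustrates the encoding and is not kernel code.

#include <stdio.h>

/* Simplified copies of the kernel macros, for illustration only. */
#define TLBI_RANGE_MASK			0x1fUL
#define __TLBI_RANGE_NUM(pages, scale)	\
	((int)((((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1))
#define __TLBI_RANGE_PAGES(num, scale)	\
	((unsigned long)((num) + 1) << (5 * (scale) + 1))

int main(void)
{
	unsigned long pages = 7;	/* odd count: one single-page op first */
	int scale = 0;

	while (pages > 0) {
		if (pages % 2 == 1) {
			/* non-range op, like vae1is in the kernel */
			printf("single-page op, 1 page\n");
			pages -= 1;
			continue;
		}

		int num = __TLBI_RANGE_NUM(pages, scale);
		if (num >= 0) {
			printf("range op: scale=%d num=%d -> %lu pages\n",
			       scale, num, __TLBI_RANGE_PAGES(num, scale));
			pages -= __TLBI_RANGE_PAGES(num, scale);
		}
		scale++;
	}
	return 0;
}

For pages = 7 this prints one single-page op followed by one range op
covering 6 pages (scale = 0, num = 2, since (2+1)*2^1 = 6). For
pages = 64, the scale = 0 pass yields num = -1 and the loop retries at
scale = 1 with num = 0 (1*2^6 = 64 pages), which is exactly the
"num = 0 and scale + 1" preference the comment describes.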



Thread overview: 30+ messages
2023-07-15  0:53 [PATCH v6 00/11] KVM: arm64: Add support for FEAT_TLBIRANGE Raghavendra Rao Ananta
2023-07-15  0:53 ` [PATCH v6 01/11] KVM: Rename kvm_arch_flush_remote_tlb() to kvm_arch_flush_remote_tlbs() Raghavendra Rao Ananta
2023-07-17  8:03   ` Philippe Mathieu-Daudé
2023-07-17 11:06   ` Shaoqin Huang
2023-07-15  0:53 ` [PATCH v6 02/11] KVM: arm64: Use kvm_arch_flush_remote_tlbs() Raghavendra Rao Ananta
2023-07-17  8:12   ` Philippe Mathieu-Daudé
2023-07-17 16:28     ` Raghavendra Rao Ananta
2023-07-17 11:27   ` Shaoqin Huang
2023-07-15  0:53 ` [PATCH v6 03/11] KVM: Allow range-based TLB invalidation from common code Raghavendra Rao Ananta
2023-07-17 11:40   ` Shaoqin Huang
2023-07-17 16:37     ` Raghavendra Rao Ananta
2023-07-18  2:49       ` Shaoqin Huang
2023-07-18 16:31         ` Raghavendra Rao Ananta
2023-07-15  0:53 ` [PATCH v6 04/11] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to " Raghavendra Rao Ananta
2023-07-18  2:47   ` Shaoqin Huang
2023-07-15  0:53 ` Raghavendra Rao Ananta [this message]
2023-07-18  6:21   ` [PATCH v6 05/11] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range Shaoqin Huang
2023-07-15  0:54 ` [PATCH v6 06/11] KVM: arm64: Implement __kvm_tlb_flush_vmid_range() Raghavendra Rao Ananta
2023-07-18  7:50   ` Shaoqin Huang
2023-07-18 16:39     ` Raghavendra Rao Ananta
2023-07-15  0:54 ` [PATCH v6 07/11] KVM: arm64: Define kvm_tlb_flush_vmid_range() Raghavendra Rao Ananta
2023-07-18  8:39   ` Shaoqin Huang
2023-07-15  0:54 ` [PATCH v6 08/11] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range() Raghavendra Rao Ananta
2023-07-18 11:15   ` Shaoqin Huang
2023-07-15  0:54 ` [PATCH v6 09/11] KVM: arm64: Flush only the memslot after write-protect Raghavendra Rao Ananta
2023-07-18 11:17   ` Shaoqin Huang
2023-07-15  0:54 ` [PATCH v6 10/11] KVM: arm64: Invalidate the table entries upon a range Raghavendra Rao Ananta
2023-07-18 11:17   ` Shaoqin Huang
2023-07-15  0:54 ` [PATCH v6 11/11] KVM: arm64: Use TLBI range-based instructions for unmap Raghavendra Rao Ananta
2023-07-15  2:02 ` [PATCH v6 00/11] KVM: arm64: Add support for FEAT_TLBIRANGE Raghavendra Rao Ananta
