From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton <oliver.upton@linux.dev>,
Marc Zyngier <maz@kernel.org>, James Morse <james.morse@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Sean Christopherson <seanjc@google.com>,
Huacai Chen <chenhuacai@kernel.org>,
Zenghui Yu <yuzenghui@huawei.com>,
Anup Patel <anup@brainfault.org>,
Atish Patra <atishp@atishpatra.org>,
Jing Zhang <jingzhangos@google.com>,
Reiji Watanabe <reijiw@google.com>,
Colton Lewis <coltonlewis@google.com>,
Raghavendra Rao Ananta <rananta@google.com>,
David Matlack <dmatlack@google.com>,
Fuad Tabba <tabba@google.com>,
linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org, Catalin Marinas <catalin.marinas@arm.com>,
Gavin Shan <gshan@redhat.com>,
Shaoqin Huang <shahuang@redhat.com>
Subject: [PATCH v9 07/14] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
Date: Fri, 11 Aug 2023 04:51:20 +0000
Message-ID: <20230811045127.3308641-8-rananta@google.com>
In-Reply-To: <20230811045127.3308641-1-rananta@google.com>
Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code will reuse this core algorithm with
ipas2e1is for range-based TLB invalidations keyed by the IPA.
Hence, extract the core flush functionality of __flush_tlb_range()
into its own macro that accepts an 'op' argument for any TLBI
operation, so that other callers (such as KVM) can benefit.
No functional changes intended.
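
As a rough illustration of the intended reuse: a stage-2 (IPA) caller
could layer a thin wrapper on top of the new macro, passing an
IPA-based range instruction, a zero ASID and no user-ASID companion.
The sketch below is illustrative only; the wrapper name and the call
are assumptions about what a later patch in this series could add, and
are not part of this patch:

  /* Illustrative sketch: reuse the core loop for stage-2 (IPA) flushes */
  #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level)	\
	__flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false)

  /* e.g., a range-based invalidation by IPA (hypothetical caller): */
  /* __flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0); */
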
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
---
arch/arm64/include/asm/tlbflush.h | 121 +++++++++++++++++-------------
1 file changed, 68 insertions(+), 53 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..b9475a852d5be 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,74 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
*/
#define MAX_TLBI_OPS PTRS_PER_PTE
+/*
+ * __flush_tlb_range_op - Perform TLBI operation upon a range
+ *
+ * @op: TLBI instruction that operates on a range (has 'r' prefix)
+ * @start: The start address of the range
+ * @pages: Range as the number of pages from 'start'
+ * @stride: Flush granularity
+ * @asid: The ASID of the task (0 for IPA instructions)
+ * @tlb_level: Translation Table level hint, if known
+ * @tlbi_user: If 'true', call an additional __tlbi_user()
+ * (typically for user ASIDs). 'false' for IPA instructions
+ *
+ * When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ * operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ * by 'scale', so multiple range TLBI operations may be required.
+ * Start from scale = 0, flush the corresponding number of pages
+ * ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ * until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride, \
+ asid, tlb_level, tlbi_user) \
+do { \
+ int num = 0; \
+ int scale = 0; \
+ unsigned long addr; \
+ \
+ while (pages > 0) { \
+ if (!system_supports_tlb_range() || \
+ pages % 2 == 1) { \
+ addr = __TLBI_VADDR(start, asid); \
+ __tlbi_level(op, addr, tlb_level); \
+ if (tlbi_user) \
+ __tlbi_user_level(op, addr, tlb_level); \
+ start += stride; \
+ pages -= stride >> PAGE_SHIFT; \
+ continue; \
+ } \
+ \
+ num = __TLBI_RANGE_NUM(pages, scale); \
+ if (num >= 0) { \
+ addr = __TLBI_VADDR_RANGE(start, asid, scale, \
+ num, tlb_level); \
+ __tlbi(r##op, addr); \
+ if (tlbi_user) \
+ __tlbi_user(r##op, addr); \
+ start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+ pages -= __TLBI_RANGE_PAGES(num, scale); \
+ } \
+ scale++; \
+ } \
+} while (0)
+
static inline void __flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end,
unsigned long stride, bool last_level,
int tlb_level)
{
- int num = 0;
- int scale = 0;
- unsigned long asid, addr, pages;
+ unsigned long asid, pages;
start = round_down(start, stride);
end = round_up(end, stride);
@@ -307,56 +367,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
dsb(ishst);
asid = ASID(vma->vm_mm);
- /*
- * When the CPU does not support TLB range operations, flush the TLB
- * entries one by one at the granularity of 'stride'. If the TLB
- * range ops are supported, then:
- *
- * 1. If 'pages' is odd, flush the first page through non-range
- * operations;
- *
- * 2. For remaining pages: the minimum range granularity is decided
- * by 'scale', so multiple range TLBI operations may be required.
- * Start from scale = 0, flush the corresponding number of pages
- * ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
- * until no pages left.
- *
- * Note that certain ranges can be represented by either num = 31 and
- * scale or num = 0 and scale + 1. The loop below favours the latter
- * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
- */
- while (pages > 0) {
- if (!system_supports_tlb_range() ||
- pages % 2 == 1) {
- addr = __TLBI_VADDR(start, asid);
- if (last_level) {
- __tlbi_level(vale1is, addr, tlb_level);
- __tlbi_user_level(vale1is, addr, tlb_level);
- } else {
- __tlbi_level(vae1is, addr, tlb_level);
- __tlbi_user_level(vae1is, addr, tlb_level);
- }
- start += stride;
- pages -= stride >> PAGE_SHIFT;
- continue;
- }
-
- num = __TLBI_RANGE_NUM(pages, scale);
- if (num >= 0) {
- addr = __TLBI_VADDR_RANGE(start, asid, scale,
- num, tlb_level);
- if (last_level) {
- __tlbi(rvale1is, addr);
- __tlbi_user(rvale1is, addr);
- } else {
- __tlbi(rvae1is, addr);
- __tlbi_user(rvae1is, addr);
- }
- start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
- pages -= __TLBI_RANGE_PAGES(num, scale);
- }
- scale++;
- }
+ if (last_level)
+ __flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+ else
+ __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
dsb(ish);
}
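
(Not part of the patch.) To make the scale/num encoding described in
the comment above concrete, here is a small standalone C sketch of the
page-count arithmetic, (num + 1) * 2^(5*scale + 1), mirroring what
__TLBI_RANGE_PAGES() computes; it is a userspace illustration, not
kernel code:

  /* Standalone sketch: pages covered by one range TLBI for a given
   * (num, scale), i.e. (num + 1) << (5 * scale + 1).
   */
  #include <stdio.h>

  static unsigned long range_pages(int num, int scale)
  {
	return (unsigned long)(num + 1) << (5 * scale + 1);
  }

  int main(void)
  {
	/* scale = 0 covers 2..62 pages (num capped at 30); scale = 1 covers 64..1984 */
	printf("num=0,  scale=0 -> %lu pages\n", range_pages(0, 0));   /* 2    */
	printf("num=30, scale=0 -> %lu pages\n", range_pages(30, 0));  /* 62   */
	printf("num=0,  scale=1 -> %lu pages\n", range_pages(0, 1));   /* 64   */
	printf("num=30, scale=1 -> %lu pages\n", range_pages(30, 1));  /* 1984 */
	return 0;
  }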
--
2.41.0.640.ga95def55d0-goog