* [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation
@ 2025-12-16 14:45 Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function Ryan Roberts
` (12 more replies)
0 siblings, 13 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
Hi All,
This series refactors the TLB invalidation API to make it more general and
flexible, and refactors the implementation, aiming to make it more robust,
easier to understand and easier to add new features in future.
It is heavily based on the series posted by Will back in July at [1]; I've
attempted to maintain correct authorship and tags - apologies if I got any of
the etiquette wrong.
The first 8 patches reimplement the full scope of Will's series, fixed up to use
function pointers instead of the enum, as per Linus's suggestion. Patches 9-12
then reformulate the API for the range- and page-based functions to remove all
the "nosync", "nonotify" and "local" function variants and replace them with a
set of flags that modify the behaviour instead. This allows a single
implementation that can rely on constant folding. IMO it's much cleaner and more
flexible. Finally, patch 13 provides a minor theoretical performance improvement
by hinting the TTL for page-based invalidations (the preceding API improvements
made that pretty simple).
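For a rough feel of the resulting API shape, a last-level, deferred-sync range
invalidation ends up looking something like this (names as introduced in
patches 10-12):

    /* Invalidate last-level entries for [start, end); defer the trailing DSB. */
    __flush_tlb_range(vma, start, end, PAGE_SIZE, 3,
                      TLBF_NOWALKCACHE | TLBF_NOSYNC);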
We have a couple of other things in the queue to put on top of this series,
which these changes make simpler:
- Optimization to only do local TLBI when an mm is single-threaded
- Introduce TLBIP for use with D128 pgtables
The series applies on top of v6.19-rc1. I've compile-tested each patch and run
the mm selftests against the end result in a VM on Apple M2; all tests pass.
I've run an earlier version of this code through our performance benchmarking
system and no regressions were found. I've looked at the generated instructions
and all the expected constant folding seems to be happening, and I've checked
code size before and after; there is no significant change.
[1] https://lore.kernel.org/linux-arm-kernel/20250711161732.384-1-will@kernel.org/
Thanks,
Ryan
Ryan Roberts (9):
arm64: mm: Re-implement the __tlbi_level macro as a C function
arm64: mm: Introduce a C wrapper for by-range TLB invalidation
arm64: mm: Implicitly invalidate user ASID based on TLBI operation
arm64: mm: Re-implement the __flush_tlb_range_op macro in C
arm64: mm: Refactor flush_tlb_page() to use __tlbi_level_asid()
arm64: mm: Refactor __flush_tlb_range() to take flags
arm64: mm: More flags for __flush_tlb_range()
arm64: mm: Wrap flush_tlb_page() around ___flush_tlb_range()
arm64: mm: Provide level hint for flush_tlb_page()
Will Deacon (4):
arm64: mm: Push __TLBI_VADDR() into __tlbi_level()
arm64: mm: Inline __TLBI_VADDR_RANGE() into __tlbi_range()
arm64: mm: Simplify __TLBI_RANGE_NUM() macro
arm64: mm: Simplify __flush_tlb_range_limit_excess()
arch/arm64/include/asm/hugetlb.h | 12 +-
arch/arm64/include/asm/pgtable.h | 13 +-
arch/arm64/include/asm/tlb.h | 6 +-
arch/arm64/include/asm/tlbflush.h | 461 +++++++++++++++++-------------
arch/arm64/kernel/sys_compat.c | 2 +-
arch/arm64/kvm/hyp/nvhe/mm.c | 2 +-
arch/arm64/kvm/hyp/pgtable.c | 4 +-
arch/arm64/mm/contpte.c | 12 +-
arch/arm64/mm/fault.c | 2 +-
arch/arm64/mm/hugetlbpage.c | 4 +-
arch/arm64/mm/mmu.c | 2 +-
11 files changed, 288 insertions(+), 232 deletions(-)
--
2.43.0
* [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 17:53 ` Jonathan Cameron
2025-12-16 14:45 ` [PATCH v1 02/13] arm64: mm: Introduce a C wrapper for by-range TLB invalidation Ryan Roberts
` (11 subsequent siblings)
12 siblings, 1 reply; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
As part of efforts to reduce our reliance on complex preprocessor macros
for TLB invalidation routines, convert the __tlbi_level macro to a C
function for by-level TLB invalidation.
Each specific tlbi level op is implemented as a C function and the
appropriate function pointer is passed to __tlbi_level(). Since
everything is declared inline and statically resolvable, the compiler
resolves the indirect call at compile time and inlines the operation directly.
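For example, an existing call site such as:

    __tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);

now passes the address of the inline vale2is() wrapper instead of pasting the
'vale2is' token into __tlbi(); the generated code should be unchanged because
the wrapper is inlined.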
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 69 +++++++++++++++++++++++++------
1 file changed, 56 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a2d65d7d6aae..13a59cf28943 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -105,19 +105,62 @@ static inline unsigned long get_trans_granule(void)
#define TLBI_TTL_UNKNOWN INT_MAX
-#define __tlbi_level(op, addr, level) do { \
- u64 arg = addr; \
- \
- if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) && \
- level >= 0 && level <= 3) { \
- u64 ttl = level & 3; \
- ttl |= get_trans_granule() << 2; \
- arg &= ~TLBI_TTL_MASK; \
- arg |= FIELD_PREP(TLBI_TTL_MASK, ttl); \
- } \
- \
- __tlbi(op, arg); \
-} while(0)
+typedef void (*tlbi_op)(u64 arg);
+
+static __always_inline void vae1is(u64 arg)
+{
+ __tlbi(vae1is, arg);
+}
+
+static __always_inline void vae2is(u64 arg)
+{
+ __tlbi(vae2is, arg);
+}
+
+static __always_inline void vale1(u64 arg)
+{
+ __tlbi(vale1, arg);
+ __tlbi_user(vale1, arg);
+}
+
+static __always_inline void vale1is(u64 arg)
+{
+ __tlbi(vale1is, arg);
+}
+
+static __always_inline void vale2is(u64 arg)
+{
+ __tlbi(vale2is, arg);
+}
+
+static __always_inline void vaale1is(u64 arg)
+{
+ __tlbi(vaale1is, arg);
+}
+
+static __always_inline void ipas2e1(u64 arg)
+{
+ __tlbi(ipas2e1, arg);
+}
+
+static __always_inline void ipas2e1is(u64 arg)
+{
+ __tlbi(ipas2e1is, arg);
+}
+
+static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
+{
+ u64 arg = addr;
+
+ if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) && level <= 3) {
+ u64 ttl = level | (get_trans_granule() << 2);
+
+ arg &= ~TLBI_TTL_MASK;
+ arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);
+ }
+
+ op(arg);
+}
#define __tlbi_user_level(op, arg, level) do { \
if (arm64_kernel_unmapped_at_el0()) \
--
2.43.0
* [PATCH v1 02/13] arm64: mm: Introduce a C wrapper for by-range TLB invalidation
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation Ryan Roberts
` (10 subsequent siblings)
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
As part of efforts to reduce our reliance on complex preprocessor macros
for TLB invalidation routines, introduce a new C wrapper for by-range
TLB invalidation which can be used instead of the __tlbi() macro and can
additionally be called from C code.
Each specific tlbi range op is implemented as a C function and the
appropriate function pointer is passed to __tlbi_range(). Since
everything is declared inline and statically resolvable, the compiler
resolves the indirect call at compile time and inlines the operation directly.
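For example, with op == vale1is the range branch of __flush_tlb_range_op() now
reads roughly:

    addr = __TLBI_VADDR_RANGE(start >> shift, asid, scale, num, tlb_level);
    __tlbi_range(rvale1is, addr);

where rvale1is() is one of the new inline wrappers.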
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 33 ++++++++++++++++++++++++++++++-
1 file changed, 32 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 13a59cf28943..c5111d2afc66 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -459,6 +459,37 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
* operations can only span an even number of pages. We save this for last to
* ensure 64KB start alignment is maintained for the LPA2 case.
*/
+static __always_inline void rvae1is(u64 arg)
+{
+ __tlbi(rvae1is, arg);
+}
+
+static __always_inline void rvale1(u64 arg)
+{
+ __tlbi(rvale1, arg);
+ __tlbi_user(rvale1, arg);
+}
+
+static __always_inline void rvale1is(u64 arg)
+{
+ __tlbi(rvale1is, arg);
+}
+
+static __always_inline void rvaale1is(u64 arg)
+{
+ __tlbi(rvaale1is, arg);
+}
+
+static __always_inline void ripas2e1is(u64 arg)
+{
+ __tlbi(ripas2e1is, arg);
+}
+
+static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
+{
+ op(arg);
+}
+
#define __flush_tlb_range_op(op, start, pages, stride, \
asid, tlb_level, tlbi_user, lpa2) \
do { \
@@ -486,7 +517,7 @@ do { \
if (num >= 0) { \
addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
scale, num, tlb_level); \
- __tlbi(r##op, addr); \
+ __tlbi_range(r##op, addr); \
if (tlbi_user) \
__tlbi_user(r##op, addr); \
__flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
--
2.43.0
* [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 02/13] arm64: mm: Introduce a C wrapper for by-range TLB invalidation Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 18:01 ` Jonathan Cameron
2025-12-18 6:30 ` Linu Cherian
2025-12-16 14:45 ` [PATCH v1 04/13] arm64: mm: Push __TLBI_VADDR() into __tlbi_level() Ryan Roberts
` (9 subsequent siblings)
12 siblings, 2 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
When kpti is enabled, separate ASIDs are used for userspace and
kernelspace, requiring ASID-qualified TLB invalidation by virtual
address to invalidate both of them.
Push the logic for invalidating the two ASIDs down into the low-level
tlbi-op-specific functions and remove the burden from the caller to
handle the kpti-specific behaviour.
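After this change, a user-facing wrapper such as vae1is() handles both ASIDs
itself:

    static __always_inline void vae1is(u64 arg)
    {
            __tlbi(vae1is, arg);
            __tlbi_user(vae1is, arg);       /* no-op unless kpti is enabled */
    }

so callers no longer need to pass a 'tlbi_user' flag around.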
Co-developed-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 27 ++++++++++-----------------
1 file changed, 10 insertions(+), 17 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index c5111d2afc66..31f43d953ce2 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -110,6 +110,7 @@ typedef void (*tlbi_op)(u64 arg);
static __always_inline void vae1is(u64 arg)
{
__tlbi(vae1is, arg);
+ __tlbi_user(vae1is, arg);
}
static __always_inline void vae2is(u64 arg)
@@ -126,6 +127,7 @@ static __always_inline void vale1(u64 arg)
static __always_inline void vale1is(u64 arg)
{
__tlbi(vale1is, arg);
+ __tlbi_user(vale1is, arg);
}
static __always_inline void vale2is(u64 arg)
@@ -162,11 +164,6 @@ static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
op(arg);
}
-#define __tlbi_user_level(op, arg, level) do { \
- if (arm64_kernel_unmapped_at_el0()) \
- __tlbi_level(op, (arg | USER_ASID_FLAG), level); \
-} while (0)
-
/*
* This macro creates a properly formatted VA operand for the TLB RANGE. The
* value bit assignments are:
@@ -435,8 +432,6 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
* @stride: Flush granularity
* @asid: The ASID of the task (0 for IPA instructions)
* @tlb_level: Translation Table level hint, if known
- * @tlbi_user: If 'true', call an additional __tlbi_user()
- * (typically for user ASIDs). 'flase' for IPA instructions
* @lpa2: If 'true', the lpa2 scheme is used as set out below
*
* When the CPU does not support TLB range operations, flush the TLB
@@ -462,6 +457,7 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
static __always_inline void rvae1is(u64 arg)
{
__tlbi(rvae1is, arg);
+ __tlbi_user(rvae1is, arg);
}
static __always_inline void rvale1(u64 arg)
@@ -473,6 +469,7 @@ static __always_inline void rvale1(u64 arg)
static __always_inline void rvale1is(u64 arg)
{
__tlbi(rvale1is, arg);
+ __tlbi_user(rvale1is, arg);
}
static __always_inline void rvaale1is(u64 arg)
@@ -491,7 +488,7 @@ static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
}
#define __flush_tlb_range_op(op, start, pages, stride, \
- asid, tlb_level, tlbi_user, lpa2) \
+ asid, tlb_level, lpa2) \
do { \
typeof(start) __flush_start = start; \
typeof(pages) __flush_pages = pages; \
@@ -506,8 +503,6 @@ do { \
(lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
addr = __TLBI_VADDR(__flush_start, asid); \
__tlbi_level(op, addr, tlb_level); \
- if (tlbi_user) \
- __tlbi_user_level(op, addr, tlb_level); \
__flush_start += stride; \
__flush_pages -= stride >> PAGE_SHIFT; \
continue; \
@@ -518,8 +513,6 @@ do { \
addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
scale, num, tlb_level); \
__tlbi_range(r##op, addr); \
- if (tlbi_user) \
- __tlbi_user(r##op, addr); \
__flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
__flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
} \
@@ -528,7 +521,7 @@ do { \
} while (0)
#define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
- __flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, false, kvm_lpa2_is_enabled());
+ __flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled());
static inline bool __flush_tlb_range_limit_excess(unsigned long start,
unsigned long end, unsigned long pages, unsigned long stride)
@@ -568,10 +561,10 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
if (last_level)
__flush_tlb_range_op(vale1is, start, pages, stride, asid,
- tlb_level, true, lpa2_is_enabled());
+ tlb_level, lpa2_is_enabled());
else
__flush_tlb_range_op(vae1is, start, pages, stride, asid,
- tlb_level, true, lpa2_is_enabled());
+ tlb_level, lpa2_is_enabled());
mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
}
@@ -630,7 +623,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
dsb(ishst);
__flush_tlb_range_op(vaale1is, start, pages, stride, 0,
- TLBI_TTL_UNKNOWN, false, lpa2_is_enabled());
+ TLBI_TTL_UNKNOWN, lpa2_is_enabled());
dsb(ish);
isb();
}
@@ -681,6 +674,6 @@ static inline bool huge_pmd_needs_flush(pmd_t oldpmd, pmd_t newpmd)
}
#define huge_pmd_needs_flush huge_pmd_needs_flush
+#undef __tlbi_user
#endif
-
#endif
--
2.43.0
* [PATCH v1 04/13] arm64: mm: Push __TLBI_VADDR() into __tlbi_level()
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (2 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 05/13] arm64: mm: Inline __TLBI_VADDR_RANGE() into __tlbi_range() Ryan Roberts
` (8 subsequent siblings)
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Will Deacon <will@kernel.org>
The __TLBI_VADDR() macro takes an ASID and an address and converts them
into a single argument formatted correctly for a TLB invalidation
instruction.
Rather than have callers worry about this (especially in the case where
the ASID is zero), push the macro down into __tlbi_level() via a new
__tlbi_level_asid() helper.
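For example, the hyp fixmap teardown in this patch goes from:

    __tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);

to:

    __tlbi_level(vale2is, addr, level);

with the zero ASID folded in by the new __tlbi_level_asid() helper.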
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 14 ++++++++++----
arch/arm64/kernel/sys_compat.c | 2 +-
arch/arm64/kvm/hyp/nvhe/mm.c | 2 +-
arch/arm64/kvm/hyp/pgtable.c | 4 ++--
4 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 31f43d953ce2..39717f98c31e 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -150,9 +150,10 @@ static __always_inline void ipas2e1is(u64 arg)
__tlbi(ipas2e1is, arg);
}
-static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
+static __always_inline void __tlbi_level_asid(tlbi_op op, u64 addr, u32 level,
+ u16 asid)
{
- u64 arg = addr;
+ u64 arg = __TLBI_VADDR(addr, asid);
if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) && level <= 3) {
u64 ttl = level | (get_trans_granule() << 2);
@@ -164,6 +165,11 @@ static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
op(arg);
}
+static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
+{
+ __tlbi_level_asid(op, addr, level, 0);
+}
+
/*
* This macro creates a properly formatted VA operand for the TLB RANGE. The
* value bit assignments are:
@@ -501,8 +507,7 @@ do { \
if (!system_supports_tlb_range() || \
__flush_pages == 1 || \
(lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
- addr = __TLBI_VADDR(__flush_start, asid); \
- __tlbi_level(op, addr, tlb_level); \
+ __tlbi_level_asid(op, __flush_start, tlb_level, asid); \
__flush_start += stride; \
__flush_pages -= stride >> PAGE_SHIFT; \
continue; \
@@ -675,5 +680,6 @@ static inline bool huge_pmd_needs_flush(pmd_t oldpmd, pmd_t newpmd)
#define huge_pmd_needs_flush huge_pmd_needs_flush
#undef __tlbi_user
+#undef __TLBI_VADDR
#endif
#endif
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index 4a609e9b65de..ad4857df4830 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -36,7 +36,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
* The workaround requires an inner-shareable tlbi.
* We pick the reserved-ASID to minimise the impact.
*/
- __tlbi(aside1is, __TLBI_VADDR(0, 0));
+ __tlbi(aside1is, 0UL);
dsb(ish);
}
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index ae8391baebc3..581385b21826 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -270,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
* https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
*/
dsb(ishst);
- __tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
+ __tlbi_level(vale2is, addr, level);
dsb(ish);
isb();
}
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 947ac1a951a5..9292c569afe6 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -472,14 +472,14 @@ static int hyp_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
kvm_clear_pte(ctx->ptep);
dsb(ishst);
- __tlbi_level(vae2is, __TLBI_VADDR(ctx->addr, 0), TLBI_TTL_UNKNOWN);
+ __tlbi_level(vae2is, ctx->addr, TLBI_TTL_UNKNOWN);
} else {
if (ctx->end - ctx->addr < granule)
return -EINVAL;
kvm_clear_pte(ctx->ptep);
dsb(ishst);
- __tlbi_level(vale2is, __TLBI_VADDR(ctx->addr, 0), ctx->level);
+ __tlbi_level(vale2is, ctx->addr, ctx->level);
*unmapped += granule;
}
--
2.43.0
* [PATCH v1 05/13] arm64: mm: Inline __TLBI_VADDR_RANGE() into __tlbi_range()
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (3 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 04/13] arm64: mm: Push __TLBI_VADDR() into __tlbi_level() Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 06/13] arm64: mm: Re-implement the __flush_tlb_range_op macro in C Ryan Roberts
` (7 subsequent siblings)
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Will Deacon <will@kernel.org>
The __TLBI_VADDR_RANGE() macro is only used in one place and isn't
something that's generally useful outside of the low-level range
invalidation gubbins.
Inline __TLBI_VADDR_RANGE() into the __tlbi_range() function so that the
macro can be removed entirely.
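The single call site in __flush_tlb_range_op() then passes the raw parameters
instead of a pre-built operand:

    __tlbi_range(r##op, __flush_start, asid, scale, num, tlb_level, lpa2);

and __tlbi_range() assembles the TLBIR_* fields itself.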
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 32 +++++++++++++------------------
1 file changed, 13 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 39717f98c31e..887dd1f05a89 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -195,19 +195,6 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
#define TLBIR_TTL_MASK GENMASK_ULL(38, 37)
#define TLBIR_BADDR_MASK GENMASK_ULL(36, 0)
-#define __TLBI_VADDR_RANGE(baddr, asid, scale, num, ttl) \
- ({ \
- unsigned long __ta = 0; \
- unsigned long __ttl = (ttl >= 1 && ttl <= 3) ? ttl : 0; \
- __ta |= FIELD_PREP(TLBIR_BADDR_MASK, baddr); \
- __ta |= FIELD_PREP(TLBIR_TTL_MASK, __ttl); \
- __ta |= FIELD_PREP(TLBIR_NUM_MASK, num); \
- __ta |= FIELD_PREP(TLBIR_SCALE_MASK, scale); \
- __ta |= FIELD_PREP(TLBIR_TG_MASK, get_trans_granule()); \
- __ta |= FIELD_PREP(TLBIR_ASID_MASK, asid); \
- __ta; \
- })
-
/* These macros are used by the TLBI RANGE feature. */
#define __TLBI_RANGE_PAGES(num, scale) \
((unsigned long)((num) + 1) << (5 * (scale) + 1))
@@ -488,8 +475,19 @@ static __always_inline void ripas2e1is(u64 arg)
__tlbi(ripas2e1is, arg);
}
-static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
+static __always_inline void __tlbi_range(tlbi_op op, u64 addr,
+ u16 asid, int scale, int num,
+ u32 level, bool lpa2)
{
+ u64 arg = 0;
+
+ arg |= FIELD_PREP(TLBIR_BADDR_MASK, addr >> (lpa2 ? 16 : PAGE_SHIFT));
+ arg |= FIELD_PREP(TLBIR_TTL_MASK, level > 3 ? 0 : level);
+ arg |= FIELD_PREP(TLBIR_NUM_MASK, num);
+ arg |= FIELD_PREP(TLBIR_SCALE_MASK, scale);
+ arg |= FIELD_PREP(TLBIR_TG_MASK, get_trans_granule());
+ arg |= FIELD_PREP(TLBIR_ASID_MASK, asid);
+
op(arg);
}
@@ -500,8 +498,6 @@ do { \
typeof(pages) __flush_pages = pages; \
int num = 0; \
int scale = 3; \
- int shift = lpa2 ? 16 : PAGE_SHIFT; \
- unsigned long addr; \
\
while (__flush_pages > 0) { \
if (!system_supports_tlb_range() || \
@@ -515,9 +511,7 @@ do { \
\
num = __TLBI_RANGE_NUM(__flush_pages, scale); \
if (num >= 0) { \
- addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
- scale, num, tlb_level); \
- __tlbi_range(r##op, addr); \
+ __tlbi_range(r##op, __flush_start, asid, scale, num, tlb_level, lpa2); \
__flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
__flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
} \
--
2.43.0
* [PATCH v1 06/13] arm64: mm: Re-implement the __flush_tlb_range_op macro in C
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (4 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 05/13] arm64: mm: Inline __TLBI_VADDR_RANGE() into __tlbi_range() Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 07/13] arm64: mm: Simplify __TLBI_RANGE_NUM() macro Ryan Roberts
` (6 subsequent siblings)
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
The __flush_tlb_range_op() macro is horrible and has been a previous
source of bugs thanks to multiple expansions of its arguments (see
commit f7edb07ad7c6 ("Fix mmu notifiers for range-based invalidates")).
Rewrite the thing in C.
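Callers then go through thin stage-1/stage-2 wrapper macros that pair up the
by-level and by-range ops, e.g.:

    __flush_s1_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level);

which expands to __flush_tlb_range_op(vale1is, rvale1is, ...) and lets the C
implementation pick between a single-entry and a range-based TLBI on each
iteration.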
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Co-developed-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 84 +++++++++++++++++--------------
1 file changed, 46 insertions(+), 38 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 887dd1f05a89..d2a144a09a8f 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -419,12 +419,13 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
/*
* __flush_tlb_range_op - Perform TLBI operation upon a range
*
- * @op: TLBI instruction that operates on a range (has 'r' prefix)
+ * @lop: TLBI level operation to perform
+ * @rop: TLBI range operation to perform
* @start: The start address of the range
* @pages: Range as the number of pages from 'start'
* @stride: Flush granularity
* @asid: The ASID of the task (0 for IPA instructions)
- * @tlb_level: Translation Table level hint, if known
+ * @level: Translation Table level hint, if known
* @lpa2: If 'true', the lpa2 scheme is used as set out below
*
* When the CPU does not support TLB range operations, flush the TLB
@@ -491,36 +492,44 @@ static __always_inline void __tlbi_range(tlbi_op op, u64 addr,
op(arg);
}
-#define __flush_tlb_range_op(op, start, pages, stride, \
- asid, tlb_level, lpa2) \
-do { \
- typeof(start) __flush_start = start; \
- typeof(pages) __flush_pages = pages; \
- int num = 0; \
- int scale = 3; \
- \
- while (__flush_pages > 0) { \
- if (!system_supports_tlb_range() || \
- __flush_pages == 1 || \
- (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
- __tlbi_level_asid(op, __flush_start, tlb_level, asid); \
- __flush_start += stride; \
- __flush_pages -= stride >> PAGE_SHIFT; \
- continue; \
- } \
- \
- num = __TLBI_RANGE_NUM(__flush_pages, scale); \
- if (num >= 0) { \
- __tlbi_range(r##op, __flush_start, asid, scale, num, tlb_level, lpa2); \
- __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
- __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
- } \
- scale--; \
- } \
-} while (0)
+static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
+ u64 start, size_t pages,
+ u64 stride, u16 asid,
+ u32 level, bool lpa2)
+{
+ u64 addr = start, end = start + pages * PAGE_SIZE;
+ int scale = 3;
+
+ while (addr != end) {
+ int num;
+
+ pages = (end - addr) >> PAGE_SHIFT;
+
+ if (!system_supports_tlb_range() || pages == 1)
+ goto invalidate_one;
+
+ if (lpa2 && !IS_ALIGNED(addr, SZ_64K))
+ goto invalidate_one;
+
+ num = __TLBI_RANGE_NUM(pages, scale);
+ if (num >= 0) {
+ __tlbi_range(rop, addr, asid, scale, num, level, lpa2);
+ addr += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
+ }
+
+ scale--;
+ continue;
+invalidate_one:
+ __tlbi_level_asid(lop, addr, level, asid);
+ addr += stride;
+ }
+}
+
+#define __flush_s1_tlb_range_op(op, start, pages, stride, asid, tlb_level) \
+ __flush_tlb_range_op(op, r##op, start, pages, stride, asid, tlb_level, lpa2_is_enabled())
#define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
- __flush_tlb_range_op(op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled());
+ __flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
static inline bool __flush_tlb_range_limit_excess(unsigned long start,
unsigned long end, unsigned long pages, unsigned long stride)
@@ -559,11 +568,11 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
asid = ASID(mm);
if (last_level)
- __flush_tlb_range_op(vale1is, start, pages, stride, asid,
- tlb_level, lpa2_is_enabled());
+ __flush_s1_tlb_range_op(vale1is, start, pages, stride,
+ asid, tlb_level);
else
- __flush_tlb_range_op(vae1is, start, pages, stride, asid,
- tlb_level, lpa2_is_enabled());
+ __flush_s1_tlb_range_op(vae1is, start, pages, stride,
+ asid, tlb_level);
mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
}
@@ -587,8 +596,7 @@ static inline void local_flush_tlb_contpte(struct vm_area_struct *vma,
dsb(nshst);
asid = ASID(vma->vm_mm);
- __flush_tlb_range_op(vale1, addr, CONT_PTES, PAGE_SIZE, asid,
- 3, true, lpa2_is_enabled());
+ __flush_s1_tlb_range_op(vale1, addr, CONT_PTES, PAGE_SIZE, asid, 3);
mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, addr,
addr + CONT_PTE_SIZE);
dsb(nsh);
@@ -621,8 +629,8 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
}
dsb(ishst);
- __flush_tlb_range_op(vaale1is, start, pages, stride, 0,
- TLBI_TTL_UNKNOWN, lpa2_is_enabled());
+ __flush_s1_tlb_range_op(vaale1is, start, pages, stride, 0,
+ TLBI_TTL_UNKNOWN);
dsb(ish);
isb();
}
--
2.43.0
* [PATCH v1 07/13] arm64: mm: Simplify __TLBI_RANGE_NUM() macro
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (5 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 06/13] arm64: mm: Re-implement the __flush_tlb_range_op macro in C Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess() Ryan Roberts
` (5 subsequent siblings)
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Will Deacon <will@kernel.org>
Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
decrement scale"), we don't need to clamp the 'pages' argument to fit
the range for the specified 'scale' as we know that the upper bits will
have been processed in a prior iteration.
Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.
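As a quick worked example using the existing __TLBI_RANGE_PAGES() definition:
for pages == 512, scales 3 and 2 yield a negative num and are skipped, then at
scale 1 the simplified macro gives (512 >> 6) - 1 == 7, and
__TLBI_RANGE_PAGES(7, 1) == 8 << 6 == 512, so the whole range is covered by a
single range-based TLBI without any clamping.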
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index d2a144a09a8f..0e1902f66e01 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -208,11 +208,7 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
* range.
*/
#define __TLBI_RANGE_NUM(pages, scale) \
- ({ \
- int __pages = min((pages), \
- __TLBI_RANGE_PAGES(31, (scale))); \
- (__pages >> (5 * (scale) + 1)) - 1; \
- })
+ (((pages) >> (5 * (scale) + 1)) - 1)
/*
* TLB Invalidation
--
2.43.0
* [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess()
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (6 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 07/13] arm64: mm: Simplify __TLBI_RANGE_NUM() macro Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-17 8:12 ` Dev Jain
2025-12-16 14:45 ` [PATCH v1 09/13] arm64: mm: Refactor flush_tlb_page() to use __tlbi_level_asid() Ryan Roberts
` (4 subsequent siblings)
12 siblings, 1 reply; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
From: Will Deacon <will@kernel.org>
__flush_tlb_range_limit_excess() is unnecessarily complicated:
- It takes a 'start', 'end' and 'pages' argument, whereas it only
needs 'pages' (which the caller has computed from the other two
arguments!).
- It erroneously compares 'pages' with MAX_TLBI_RANGE_PAGES when
the system doesn't support range-based invalidation but the range to
be invalidated would result in fewer than MAX_DVM_OPS invalidations.
Simplify the function so that it no longer takes the 'start' and 'end'
arguments and only considers the MAX_TLBI_RANGE_PAGES threshold on
systems that implement range-based invalidation.
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 0e1902f66e01..3b72a71feac0 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -527,21 +527,13 @@ static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
#define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
__flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
-static inline bool __flush_tlb_range_limit_excess(unsigned long start,
- unsigned long end, unsigned long pages, unsigned long stride)
+static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
+ unsigned long stride)
{
- /*
- * When the system does not support TLB range based flush
- * operation, (MAX_DVM_OPS - 1) pages can be handled. But
- * with TLB range based operation, MAX_TLBI_RANGE_PAGES
- * pages can be handled.
- */
- if ((!system_supports_tlb_range() &&
- (end - start) >= (MAX_DVM_OPS * stride)) ||
- pages > MAX_TLBI_RANGE_PAGES)
+ if (system_supports_tlb_range() && pages > MAX_TLBI_RANGE_PAGES)
return true;
- return false;
+ return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
}
static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
@@ -555,7 +547,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
end = round_up(end, stride);
pages = (end - start) >> PAGE_SHIFT;
- if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+ if (__flush_tlb_range_limit_excess(pages, stride)) {
flush_tlb_mm(mm);
return;
}
@@ -619,7 +611,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
end = round_up(end, stride);
pages = (end - start) >> PAGE_SHIFT;
- if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
+ if (__flush_tlb_range_limit_excess(pages, stride)) {
flush_tlb_all();
return;
}
--
2.43.0
* [PATCH v1 09/13] arm64: mm: Refactor flush_tlb_page() to use __tlbi_level_asid()
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (7 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess() Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 10/13] arm64: mm: Refactor __flush_tlb_range() to take flags Ryan Roberts
` (3 subsequent siblings)
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
Now that we have __tlbi_level_asid(), let's refactor the
*flush_tlb_page*() variants to use it rather than open coding.
The emitted TLBI instructions are intended to be exactly the same as before; no
TTL hint is provided. Although the spec for flush_tlb_page() allows for
setting the TTL hint to 3, it turns out that
flush_tlb_fix_spurious_fault_pmd() depends on
local_flush_tlb_page_nonotify() to invalidate the level 2 entry. This
will be fixed separately.
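For example, __flush_tlb_page_nosync() collapses to:

    dsb(ishst);
    __tlbi_level_asid(vale1is, uaddr, TLBI_TTL_UNKNOWN, ASID(mm));

where passing TLBI_TTL_UNKNOWN keeps the invalidation unhinted, as before.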
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3b72a71feac0..37c782ddc149 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -327,12 +327,8 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
static inline void __local_flush_tlb_page_nonotify_nosync(struct mm_struct *mm,
unsigned long uaddr)
{
- unsigned long addr;
-
dsb(nshst);
- addr = __TLBI_VADDR(uaddr, ASID(mm));
- __tlbi(vale1, addr);
- __tlbi_user(vale1, addr);
+ __tlbi_level_asid(vale1, uaddr, TLBI_TTL_UNKNOWN, ASID(mm));
}
static inline void local_flush_tlb_page_nonotify(struct vm_area_struct *vma,
@@ -354,12 +350,8 @@ static inline void local_flush_tlb_page(struct vm_area_struct *vma,
static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
unsigned long uaddr)
{
- unsigned long addr;
-
dsb(ishst);
- addr = __TLBI_VADDR(uaddr, ASID(mm));
- __tlbi(vale1is, addr);
- __tlbi_user(vale1is, addr);
+ __tlbi_level_asid(vale1is, uaddr, TLBI_TTL_UNKNOWN, ASID(mm));
mmu_notifier_arch_invalidate_secondary_tlbs(mm, uaddr & PAGE_MASK,
(uaddr & PAGE_MASK) + PAGE_SIZE);
}
--
2.43.0
* [PATCH v1 10/13] arm64: mm: Refactor __flush_tlb_range() to take flags
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (8 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 09/13] arm64: mm: Refactor flush_tlb_page() to use __tlbi_level_asid() Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 11/13] arm64: mm: More flags for __flush_tlb_range() Ryan Roberts
` (2 subsequent siblings)
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
We have function variants with "_nosync", "_local", "_nonotify" as well
as the "last_level" parameter. Let's generalize and simplify by using a
flags parameter to encode all these variants.
As a first step, convert the "last_level" boolean parameter to a flags
parameter and create the first flag, TLBF_NOWALKCACHE. When present,
walk cache entries are not evicted, which is the same as the old
last_level=true.
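For example, a last-level flush that used to be written:

    __flush_tlb_range(vma, start, end, PAGE_SIZE, true, 3);

becomes:

    __flush_tlb_range(vma, start, end, PAGE_SIZE, 3, TLBF_NOWALKCACHE);

and a full flush passes TLBF_NONE in place of the old 'false'.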
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/hugetlb.h | 12 ++++++------
arch/arm64/include/asm/pgtable.h | 4 ++--
arch/arm64/include/asm/tlb.h | 6 +++---
arch/arm64/include/asm/tlbflush.h | 28 ++++++++++++++++------------
arch/arm64/mm/contpte.c | 5 +++--
arch/arm64/mm/hugetlbpage.c | 4 ++--
arch/arm64/mm/mmu.c | 2 +-
7 files changed, 33 insertions(+), 28 deletions(-)
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 44c1f757bfcf..04af9499faf2 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -73,23 +73,23 @@ static inline void __flush_hugetlb_tlb_range(struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
unsigned long stride,
- bool last_level)
+ tlbf_t flags)
{
switch (stride) {
#ifndef __PAGETABLE_PMD_FOLDED
case PUD_SIZE:
- __flush_tlb_range(vma, start, end, PUD_SIZE, last_level, 1);
+ __flush_tlb_range(vma, start, end, PUD_SIZE, 1, flags);
break;
#endif
case CONT_PMD_SIZE:
case PMD_SIZE:
- __flush_tlb_range(vma, start, end, PMD_SIZE, last_level, 2);
+ __flush_tlb_range(vma, start, end, PMD_SIZE, 2, flags);
break;
case CONT_PTE_SIZE:
- __flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, 3);
+ __flush_tlb_range(vma, start, end, PAGE_SIZE, 3, flags);
break;
default:
- __flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, TLBI_TTL_UNKNOWN);
+ __flush_tlb_range(vma, start, end, PAGE_SIZE, TLBI_TTL_UNKNOWN, flags);
}
}
@@ -100,7 +100,7 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
{
unsigned long stride = huge_page_size(hstate_vma(vma));
- __flush_hugetlb_tlb_range(vma, start, end, stride, false);
+ __flush_hugetlb_tlb_range(vma, start, end, stride, TLBF_NONE);
}
#endif /* __ASM_HUGETLB_H */
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 64d5f1d9cce9..736747fbc843 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -124,9 +124,9 @@ static inline void arch_leave_lazy_mmu_mode(void)
/* Set stride and tlb_level in flush_*_tlb_range */
#define flush_pmd_tlb_range(vma, addr, end) \
- __flush_tlb_range(vma, addr, end, PMD_SIZE, false, 2)
+ __flush_tlb_range(vma, addr, end, PMD_SIZE, 2, TLBF_NONE)
#define flush_pud_tlb_range(vma, addr, end) \
- __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
+ __flush_tlb_range(vma, addr, end, PUD_SIZE, 1, TLBF_NONE)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
/*
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 8d762607285c..10869d7731b8 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -53,7 +53,7 @@ static inline int tlb_get_level(struct mmu_gather *tlb)
static inline void tlb_flush(struct mmu_gather *tlb)
{
struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
- bool last_level = !tlb->freed_tables;
+ tlbf_t flags = tlb->freed_tables ? TLBF_NONE : TLBF_NOWALKCACHE;
unsigned long stride = tlb_get_unmap_size(tlb);
int tlb_level = tlb_get_level(tlb);
@@ -63,13 +63,13 @@ static inline void tlb_flush(struct mmu_gather *tlb)
* reallocate our ASID without invalidating the entire TLB.
*/
if (tlb->fullmm) {
- if (!last_level)
+ if (tlb->freed_tables)
flush_tlb_mm(tlb->mm);
return;
}
__flush_tlb_range(&vma, tlb->start, tlb->end, stride,
- last_level, tlb_level);
+ tlb_level, flags);
}
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 37c782ddc149..9a37a6a014dc 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -267,16 +267,16 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
* CPUs, ensuring that any walk-cache entries associated with the
* translation are also invalidated.
*
- * __flush_tlb_range(vma, start, end, stride, last_level, tlb_level)
+ * __flush_tlb_range(vma, start, end, stride, last_level, tlb_level, flags)
* Invalidate the virtual-address range '[start, end)' on all
* CPUs for the user address space corresponding to 'vma->mm'.
* The invalidation operations are issued at a granularity
- * determined by 'stride' and only affect any walk-cache entries
- * if 'last_level' is equal to false. tlb_level is the level at
+ * determined by 'stride'. tlb_level is the level at
* which the invalidation must take place. If the level is wrong,
* no invalidation may take place. In the case where the level
* cannot be easily determined, the value TLBI_TTL_UNKNOWN will
- * perform a non-hinted invalidation.
+ * perform a non-hinted invalidation. flags may be TLBF_NONE (0) or
+ * TLBF_NOWALKCACHE (elide eviction of walk cache entries).
*
* local_flush_tlb_page(vma, addr)
* Local variant of flush_tlb_page(). Stale TLB entries may
@@ -528,10 +528,14 @@ static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
}
+typedef unsigned __bitwise tlbf_t;
+#define TLBF_NONE ((__force tlbf_t)0)
+#define TLBF_NOWALKCACHE ((__force tlbf_t)BIT(0))
+
static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
unsigned long start, unsigned long end,
- unsigned long stride, bool last_level,
- int tlb_level)
+ unsigned long stride, int tlb_level,
+ tlbf_t flags)
{
unsigned long asid, pages;
@@ -547,7 +551,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
dsb(ishst);
asid = ASID(mm);
- if (last_level)
+ if (flags & TLBF_NOWALKCACHE)
__flush_s1_tlb_range_op(vale1is, start, pages, stride,
asid, tlb_level);
else
@@ -559,11 +563,11 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
static inline void __flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end,
- unsigned long stride, bool last_level,
- int tlb_level)
+ unsigned long stride, int tlb_level,
+ tlbf_t flags)
{
__flush_tlb_range_nosync(vma->vm_mm, start, end, stride,
- last_level, tlb_level);
+ tlb_level, flags);
dsb(ish);
}
@@ -591,7 +595,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
* Set the tlb_level to TLBI_TTL_UNKNOWN because we can not get enough
* information here.
*/
- __flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+ __flush_tlb_range(vma, start, end, PAGE_SIZE, TLBI_TTL_UNKNOWN, TLBF_NONE);
}
static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
@@ -632,7 +636,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
struct mm_struct *mm, unsigned long start, unsigned long end)
{
- __flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
+ __flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, 3, TLBF_NOWALKCACHE);
}
static inline bool __pte_flags_need_flush(ptdesc_t oldval, ptdesc_t newval)
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 589bcf878938..1a12bb728ee1 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -205,7 +205,8 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
*/
if (!system_supports_bbml2_noabort())
- __flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+ __flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, 3,
+ TLBF_NOWALKCACHE);
__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
}
@@ -527,7 +528,7 @@ int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
*/
addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
__flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
- PAGE_SIZE, true, 3);
+ PAGE_SIZE, 3, TLBF_NOWALKCACHE);
}
return young;
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 1d90a7e75333..7b95663f8c76 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -184,7 +184,7 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
unsigned long end = addr + (pgsize * ncontig);
- __flush_hugetlb_tlb_range(&vma, addr, end, pgsize, true);
+ __flush_hugetlb_tlb_range(&vma, addr, end, pgsize, TLBF_NOWALKCACHE);
return orig_pte;
}
@@ -212,7 +212,7 @@ static void clear_flush(struct mm_struct *mm,
if (mm == &init_mm)
flush_tlb_kernel_range(saddr, addr);
else
- __flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
+ __flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, TLBF_NOWALKCACHE);
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9ae7ce00a7ef..a17d617a959a 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -2150,7 +2150,7 @@ pte_t modify_prot_start_ptes(struct vm_area_struct *vma, unsigned long addr,
*/
if (pte_accessible(vma->vm_mm, pte) && pte_user_exec(pte))
__flush_tlb_range(vma, addr, nr * PAGE_SIZE,
- PAGE_SIZE, true, 3);
+ PAGE_SIZE, 3, TLBF_NOWALKCACHE);
}
return pte;
--
2.43.0
* [PATCH v1 11/13] arm64: mm: More flags for __flush_tlb_range()
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (9 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 10/13] arm64: mm: Refactor __flush_tlb_range() to take flags Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 12/13] arm64: mm: Wrap flush_tlb_page() around ___flush_tlb_range() Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 13/13] arm64: mm: Provide level hint for flush_tlb_page() Ryan Roberts
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
Refactor function variants with "_nosync", "_local" and "_nonotify" into
a single __always_inline implementation that takes flags and rely on
constant folding to select the parts that are actually needed at any
given callsite, based on the provided flags.
Flags all live in the tlbf_t (TLB flags) type; TLBF_NONE (0) continues
to provide the strongest semantics (i.e. evict from walk cache,
broadcast, synchronise and notify). Each flag reduces the strength in
some way; TLBF_NONOTIFY, TLBF_NOSYNC and TLBF_NOBROADCAST are added to
complement the existing TLBF_NOWALKCACHE.
The result is a clearer, simpler, more powerful API.
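For example, the old local_flush_tlb_contpte() helper becomes an explicit call:

    __flush_tlb_range(vma, start_addr, start_addr + CONT_PTE_SIZE,
                      PAGE_SIZE, 3, TLBF_NOWALKCACHE | TLBF_NOBROADCAST);

which constant folding reduces to the same local dsb(nshst)/tlbi/dsb(nsh)
sequence as before.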
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/tlbflush.h | 101 ++++++++++++++++++------------
arch/arm64/mm/contpte.c | 9 ++-
2 files changed, 68 insertions(+), 42 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 9a37a6a014dc..ee747e66bbef 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -107,6 +107,12 @@ static inline unsigned long get_trans_granule(void)
typedef void (*tlbi_op)(u64 arg);
+static __always_inline void vae1(u64 arg)
+{
+ __tlbi(vae1, arg);
+ __tlbi_user(vae1, arg);
+}
+
static __always_inline void vae1is(u64 arg)
{
__tlbi(vae1is, arg);
@@ -276,7 +282,10 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
* no invalidation may take place. In the case where the level
* cannot be easily determined, the value TLBI_TTL_UNKNOWN will
* perform a non-hinted invalidation. flags may be TLBF_NONE (0) or
- * TLBF_NOWALKCACHE (elide eviction of walk cache entries).
+ * any combination of TLBF_NOWALKCACHE (elide eviction of walk
+ * cache entries), TLBF_NONOTIFY (don't call mmu notifiers),
+ * TLBF_NOSYNC (don't issue trailing dsb) and TLBF_NOBROADCAST
+ * (only perform the invalidation for the local cpu).
*
* local_flush_tlb_page(vma, addr)
* Local variant of flush_tlb_page(). Stale TLB entries may
@@ -286,12 +295,6 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
* Same as local_flush_tlb_page() except MMU notifier will not be
* called.
*
- * local_flush_tlb_contpte(vma, addr)
- * Invalidate the virtual-address range
- * '[addr, addr+CONT_PTE_SIZE)' mapped with contpte on local CPU
- * for the user address space corresponding to 'vma->mm'. Stale
- * TLB entries may remain in remote CPUs.
- *
* Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
* on top of these routines, since that is our interface to the mmu_gather
* API as used by munmap() and friends.
@@ -436,6 +439,12 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
* operations can only span an even number of pages. We save this for last to
* ensure 64KB start alignment is maintained for the LPA2 case.
*/
+static __always_inline void rvae1(u64 arg)
+{
+ __tlbi(rvae1, arg);
+ __tlbi_user(rvae1, arg);
+}
+
static __always_inline void rvae1is(u64 arg)
{
__tlbi(rvae1is, arg);
@@ -531,16 +540,18 @@ static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
typedef unsigned __bitwise tlbf_t;
#define TLBF_NONE ((__force tlbf_t)0)
#define TLBF_NOWALKCACHE ((__force tlbf_t)BIT(0))
+#define TLBF_NOSYNC ((__force tlbf_t)BIT(1))
+#define TLBF_NONOTIFY ((__force tlbf_t)BIT(2))
+#define TLBF_NOBROADCAST ((__force tlbf_t)BIT(3))
-static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
- unsigned long start, unsigned long end,
- unsigned long stride, int tlb_level,
- tlbf_t flags)
+static __always_inline void ___flush_tlb_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long stride, int tlb_level,
+ tlbf_t flags)
{
+ struct mm_struct *mm = vma->vm_mm;
unsigned long asid, pages;
- start = round_down(start, stride);
- end = round_up(end, stride);
pages = (end - start) >> PAGE_SHIFT;
if (__flush_tlb_range_limit_excess(pages, stride)) {
@@ -548,17 +559,41 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
return;
}
- dsb(ishst);
+ if (!(flags & TLBF_NOBROADCAST))
+ dsb(ishst);
+ else
+ dsb(nshst);
+
asid = ASID(mm);
- if (flags & TLBF_NOWALKCACHE)
- __flush_s1_tlb_range_op(vale1is, start, pages, stride,
- asid, tlb_level);
- else
+ switch (flags & (TLBF_NOWALKCACHE | TLBF_NOBROADCAST)) {
+ case TLBF_NONE:
__flush_s1_tlb_range_op(vae1is, start, pages, stride,
- asid, tlb_level);
+ asid, tlb_level);
+ break;
+ case TLBF_NOWALKCACHE:
+ __flush_s1_tlb_range_op(vale1is, start, pages, stride,
+ asid, tlb_level);
+ break;
+ case TLBF_NOBROADCAST:
+ __flush_s1_tlb_range_op(vae1, start, pages, stride,
+ asid, tlb_level);
+ break;
+ case TLBF_NOWALKCACHE | TLBF_NOBROADCAST:
+ __flush_s1_tlb_range_op(vale1, start, pages, stride,
+ asid, tlb_level);
+ break;
+ }
- mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
+ if (!(flags & TLBF_NONOTIFY))
+ mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
+
+ if (!(flags & TLBF_NOSYNC)) {
+ if (!(flags & TLBF_NOBROADCAST))
+ dsb(ish);
+ else
+ dsb(nsh);
+ }
}
static inline void __flush_tlb_range(struct vm_area_struct *vma,
@@ -566,24 +601,9 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
unsigned long stride, int tlb_level,
tlbf_t flags)
{
- __flush_tlb_range_nosync(vma->vm_mm, start, end, stride,
- tlb_level, flags);
- dsb(ish);
-}
-
-static inline void local_flush_tlb_contpte(struct vm_area_struct *vma,
- unsigned long addr)
-{
- unsigned long asid;
-
- addr = round_down(addr, CONT_PTE_SIZE);
-
- dsb(nshst);
- asid = ASID(vma->vm_mm);
- __flush_s1_tlb_range_op(vale1, addr, CONT_PTES, PAGE_SIZE, asid, 3);
- mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, addr,
- addr + CONT_PTE_SIZE);
- dsb(nsh);
+ start = round_down(start, stride);
+ end = round_up(end, stride);
+ ___flush_tlb_range(vma, start, end, stride, tlb_level, flags);
}
static inline void flush_tlb_range(struct vm_area_struct *vma,
@@ -636,7 +656,10 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
struct mm_struct *mm, unsigned long start, unsigned long end)
{
- __flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, 3, TLBF_NOWALKCACHE);
+ struct vm_area_struct vma = { .vm_mm = mm, .vm_flags = 0 };
+
+ __flush_tlb_range(&vma, start, end, PAGE_SIZE, 3,
+ TLBF_NOWALKCACHE | TLBF_NOSYNC);
}
static inline bool __pte_flags_need_flush(ptdesc_t oldval, ptdesc_t newval)
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1a12bb728ee1..ec17a0e70415 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -527,8 +527,8 @@ int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
* eliding the trailing DSB applies here.
*/
addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
- __flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
- PAGE_SIZE, 3, TLBF_NOWALKCACHE);
+ __flush_tlb_range(vma, addr, addr + CONT_PTE_SIZE,
+ PAGE_SIZE, 3, TLBF_NOWALKCACHE | TLBF_NOSYNC);
}
return young;
@@ -623,7 +623,10 @@ int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
__ptep_set_access_flags(vma, addr, ptep, entry, 0);
if (dirty)
- local_flush_tlb_contpte(vma, start_addr);
+ __flush_tlb_range(vma, start_addr,
+ start_addr + CONT_PTE_SIZE,
+ PAGE_SIZE, 3,
+ TLBF_NOWALKCACHE | TLBF_NOBROADCAST);
} else {
__contpte_try_unfold(vma->vm_mm, addr, ptep, orig_pte);
__ptep_set_access_flags(vma, addr, ptep, entry, dirty);
--
2.43.0
* [PATCH v1 12/13] arm64: mm: Wrap flush_tlb_page() around ___flush_tlb_range()
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (10 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 11/13] arm64: mm: More flags for __flush_tlb_range() Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 13/13] arm64: mm: Provide level hint for flush_tlb_page() Ryan Roberts
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
Flushing a page from the tlb is just a special case of flushing a range.
So let's rework flush_tlb_page() so that it simply wraps
___flush_tlb_range(). While at it, let's also update the API to take the
same flags that we use when flushing a range. This allows us to delete
all the ugly "_nosync", "_local" and "_nonotify" variants.
Thanks to constant folding, all of the complex looping and tlbi-by-range
options get eliminated so that the generated code for flush_tlb_page()
looks very similar to the previous version.
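For reference, the deleted variants map onto the new flag-based calls at the
updated call sites roughly as follows (a sketch, not part of the diff):

  flush_tlb_page(vma, addr)                      /* was: flush_tlb_page() */
  __flush_tlb_page(vma, addr, TLBF_NOSYNC)       /* was: flush_tlb_page_nosync() */
  __flush_tlb_page(vma, addr, TLBF_NOBROADCAST)  /* was: local_flush_tlb_page() */
  __flush_tlb_page(vma, addr, TLBF_NOBROADCAST | TLBF_NONOTIFY)
                                                 /* was: local_flush_tlb_page_nonotify() */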
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/pgtable.h | 6 +--
arch/arm64/include/asm/tlbflush.h | 81 ++++++++++---------------------
arch/arm64/mm/fault.c | 2 +-
3 files changed, 29 insertions(+), 60 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 736747fbc843..b96a7ca465a1 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -136,10 +136,10 @@ static inline void arch_leave_lazy_mmu_mode(void)
* entries exist.
*/
#define flush_tlb_fix_spurious_fault(vma, address, ptep) \
- local_flush_tlb_page_nonotify(vma, address)
+ __flush_tlb_page(vma, address, TLBF_NOBROADCAST | TLBF_NONOTIFY)
#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp) \
- local_flush_tlb_page_nonotify(vma, address)
+ __flush_tlb_page(vma, address, TLBF_NOBROADCAST | TLBF_NONOTIFY)
/*
* ZERO_PAGE is a global shared page that is always zero: used
@@ -1351,7 +1351,7 @@ static inline int __ptep_clear_flush_young(struct vm_area_struct *vma,
* context-switch, which provides a DSB to complete the TLB
* invalidation.
*/
- flush_tlb_page_nosync(vma, address);
+ __flush_tlb_page(vma, address, TLBF_NOSYNC);
}
return young;
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index ee747e66bbef..fa5aee990742 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -256,10 +256,7 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
* unmapping pages from vmalloc/io space.
*
* flush_tlb_page(vma, addr)
- * Invalidate a single user mapping for address 'addr' in the
- * address space corresponding to 'vma->mm'. Note that this
- * operation only invalidates a single, last-level page-table
- * entry and therefore does not affect any walk-caches.
+ * Equivalent to __flush_tlb_page(..., flags=TLBF_NONE)
*
*
* Next, we have some undocumented invalidation routines that you probably
@@ -287,13 +284,14 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
* TLBF_NOSYNC (don't issue trailing dsb) and TLBF_NOBROADCAST
* (only perform the invalidation for the local cpu).
*
- * local_flush_tlb_page(vma, addr)
- * Local variant of flush_tlb_page(). Stale TLB entries may
- * remain in remote CPUs.
- *
- * local_flush_tlb_page_nonotify(vma, addr)
- * Same as local_flush_tlb_page() except MMU notifier will not be
- * called.
+ * __flush_tlb_page(vma, addr, flags)
+ * Invalidate a single user mapping for address 'addr' in the
+ * address space corresponding to 'vma->mm'. Note that this
+ * operation only invalidates a single, last-level page-table entry
+ * and therefore does not affect any walk-caches. flags may contain
+ * any combination of TLBF_NONOTIFY (don't call mmu notifiers),
+ * TLBF_NOSYNC (don't issue trailing dsb) and TLBF_NOBROADCAST
+ * (only perform the invalidation for the local cpu).
*
* Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
* on top of these routines, since that is our interface to the mmu_gather
@@ -327,51 +325,6 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
}
-static inline void __local_flush_tlb_page_nonotify_nosync(struct mm_struct *mm,
- unsigned long uaddr)
-{
- dsb(nshst);
- __tlbi_level_asid(vale1, uaddr, TLBI_TTL_UNKNOWN, ASID(mm));
-}
-
-static inline void local_flush_tlb_page_nonotify(struct vm_area_struct *vma,
- unsigned long uaddr)
-{
- __local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
- dsb(nsh);
-}
-
-static inline void local_flush_tlb_page(struct vm_area_struct *vma,
- unsigned long uaddr)
-{
- __local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
- mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, uaddr & PAGE_MASK,
- (uaddr & PAGE_MASK) + PAGE_SIZE);
- dsb(nsh);
-}
-
-static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
- unsigned long uaddr)
-{
- dsb(ishst);
- __tlbi_level_asid(vale1is, uaddr, TLBI_TTL_UNKNOWN, ASID(mm));
- mmu_notifier_arch_invalidate_secondary_tlbs(mm, uaddr & PAGE_MASK,
- (uaddr & PAGE_MASK) + PAGE_SIZE);
-}
-
-static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
- unsigned long uaddr)
-{
- return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
-}
-
-static inline void flush_tlb_page(struct vm_area_struct *vma,
- unsigned long uaddr)
-{
- flush_tlb_page_nosync(vma, uaddr);
- dsb(ish);
-}
-
static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
/*
@@ -618,6 +571,22 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
__flush_tlb_range(vma, start, end, PAGE_SIZE, TLBI_TTL_UNKNOWN, TLBF_NONE);
}
+static inline void __flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long uaddr, tlbf_t flags)
+{
+ unsigned long start = round_down(uaddr, PAGE_SIZE);
+ unsigned long end = start + PAGE_SIZE;
+
+ ___flush_tlb_range(vma, start, end, PAGE_SIZE, TLBI_TTL_UNKNOWN,
+ TLBF_NOWALKCACHE | flags);
+}
+
+static inline void flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long uaddr)
+{
+ __flush_tlb_page(vma, uaddr, TLBF_NONE);
+}
+
static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
const unsigned long stride = PAGE_SIZE;
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index be9dab2c7d6a..f91aa686f142 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -239,7 +239,7 @@ int __ptep_set_access_flags(struct vm_area_struct *vma,
* flush_tlb_fix_spurious_fault().
*/
if (dirty)
- local_flush_tlb_page(vma, address);
+ __flush_tlb_page(vma, address, TLBF_NOBROADCAST);
return 1;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v1 13/13] arm64: mm: Provide level hint for flush_tlb_page()
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
` (11 preceding siblings ...)
2025-12-16 14:45 ` [PATCH v1 12/13] arm64: mm: Wrap flush_tlb_page() around ___flush_tlb_range() Ryan Roberts
@ 2025-12-16 14:45 ` Ryan Roberts
12 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2025-12-16 14:45 UTC (permalink / raw)
To: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian
Cc: Ryan Roberts, linux-arm-kernel, linux-kernel
Previously tlb invalidations issued by __flush_tlb_page() did not
contain a level hint. But the function is clearly only ever targeting
level 3 tlb entries and its documentation agrees:
| this operation only invalidates a single, last-level page-table
| entry and therefore does not affect any walk-caches
However, it turns out that the function was actually being used to
invalidate a level 2 mapping via flush_tlb_fix_spurious_fault_pmd(). The
bug was benign: because the level hint was not set, the HW would still
invalidate the PMD mapping, and because the TLBF_NONOTIFY flag was set,
the bounds of the mapping were never used for anything else.
Now that we have the new and improved range-invalidation API, it is
trivial to fix flush_tlb_fix_spurious_fault_pmd() to explicitly flush the
whole range (locally, without notification and last level only). So
let's do that, and then update __flush_tlb_page() to hint level 3.
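For reference, a hand-expanded sketch (not compiler output) of what a plain
flush_tlb_page() call resolves to once the hint is in place:

  flush_tlb_page(vma, addr)
    -> __flush_tlb_page(vma, addr, TLBF_NONE)
    -> ___flush_tlb_range(vma, addr & PAGE_MASK,
                          (addr & PAGE_MASK) + PAGE_SIZE,
                          PAGE_SIZE, 3, TLBF_NOWALKCACHE)
    /* i.e. a single last-level, ASID-qualified TLBI (vale1is) with a level-3
     * TTL hint, followed by the mmu notifier call and a trailing dsb(ish). */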
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/arm64/include/asm/pgtable.h | 5 +++--
arch/arm64/include/asm/tlbflush.h | 2 +-
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b96a7ca465a1..61f57647361a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -138,8 +138,9 @@ static inline void arch_leave_lazy_mmu_mode(void)
#define flush_tlb_fix_spurious_fault(vma, address, ptep) \
__flush_tlb_page(vma, address, TLBF_NOBROADCAST | TLBF_NONOTIFY)
-#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp) \
- __flush_tlb_page(vma, address, TLBF_NOBROADCAST | TLBF_NONOTIFY)
+#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp) \
+ __flush_tlb_range(vma, address, address + PMD_SIZE, PMD_SIZE, 2, \
+ TLBF_NOBROADCAST | TLBF_NONOTIFY | TLBF_NOWALKCACHE)
/*
* ZERO_PAGE is a global shared page that is always zero: used
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index fa5aee990742..f24211b51df3 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -577,7 +577,7 @@ static inline void __flush_tlb_page(struct vm_area_struct *vma,
unsigned long start = round_down(uaddr, PAGE_SIZE);
unsigned long end = start + PAGE_SIZE;
- ___flush_tlb_range(vma, start, end, PAGE_SIZE, TLBI_TTL_UNKNOWN,
+ ___flush_tlb_range(vma, start, end, PAGE_SIZE, 3,
TLBF_NOWALKCACHE | flags);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function
2025-12-16 14:45 ` [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function Ryan Roberts
@ 2025-12-16 17:53 ` Jonathan Cameron
2026-01-02 14:18 ` Ryan Roberts
0 siblings, 1 reply; 24+ messages in thread
From: Jonathan Cameron @ 2025-12-16 17:53 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian, linux-arm-kernel, linux-kernel
On Tue, 16 Dec 2025 14:45:46 +0000
Ryan Roberts <ryan.roberts@arm.com> wrote:
> As part of efforts to reduce our reliance on complex preprocessor macros
> for TLB invalidation routines, convert the __tlbi_level macro to a C
> function for by-level TLB invalidation.
>
> Each specific tlbi level op is implemented as a C function and the
> appropriate function pointer is passed to __tlbi_level(). Since
> everything is declared inline and is statically resolvable, the compiler
> will convert the indirect function call to a direct inline execution.
>
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> +static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
> +{
> + u64 arg = addr;
> +
> + if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) && level <= 3) {
> + u64 ttl = level | (get_trans_granule() << 2);
> +
> + arg &= ~TLBI_TTL_MASK;
> + arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);
Probably don't care, but I think you could do
FIELD_MODIFY(TLBI_TTL_MASK, &arg, ttl);
instead of those two lines. Code generation hopefully similar?
So depends on which macros you find more readable.
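i.e. something like this (just a sketch, assuming FIELD_MODIFY() from
<linux/bitfield.h> is available in this tree):

	if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) && level <= 3) {
		u64 ttl = level | (get_trans_granule() << 2);

		/* read-modify-write of the TTL field in one go */
		FIELD_MODIFY(TLBI_TTL_MASK, &arg, ttl);
	}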
> + }
> +
> + op(arg);
> +}
>
> #define __tlbi_user_level(op, arg, level) do { \
> if (arm64_kernel_unmapped_at_el0()) \
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
2025-12-16 14:45 ` [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation Ryan Roberts
@ 2025-12-16 18:01 ` Jonathan Cameron
2026-01-02 14:20 ` Ryan Roberts
2025-12-18 6:30 ` Linu Cherian
1 sibling, 1 reply; 24+ messages in thread
From: Jonathan Cameron @ 2025-12-16 18:01 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian, linux-arm-kernel, linux-kernel
On Tue, 16 Dec 2025 14:45:48 +0000
Ryan Roberts <ryan.roberts@arm.com> wrote:
> When kpti is enabled, separate ASIDs are used for userspace and
> kernelspace, requiring ASID-qualified TLB invalidation by virtual
> address to invalidate both of them.
>
> Push the logic for invalidating the two ASIDs down into the low-level
> tlbi-op-specific functions and remove the burden from the caller to
> handle the kpti-specific behaviour.
>
> Co-developed-by: Will Deacon <will@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/include/asm/tlbflush.h | 27 ++++++++++-----------------
> 1 file changed, 10 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index c5111d2afc66..31f43d953ce2 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> +#undef __tlbi_user
> #endif
> -
Hi Ryan,
It's trivial Tuesday so... Unrelated white space change.
> #endif
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess()
2025-12-16 14:45 ` [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess() Ryan Roberts
@ 2025-12-17 8:12 ` Dev Jain
2026-01-02 15:23 ` Ryan Roberts
0 siblings, 1 reply; 24+ messages in thread
From: Dev Jain @ 2025-12-17 8:12 UTC (permalink / raw)
To: Ryan Roberts, Will Deacon, Ard Biesheuvel, Catalin Marinas,
Mark Rutland, Linus Torvalds, Oliver Upton, Marc Zyngier,
Linu Cherian
Cc: linux-arm-kernel, linux-kernel
On 16/12/25 8:15 pm, Ryan Roberts wrote:
> From: Will Deacon <will@kernel.org>
>
> __flush_tlb_range_limit_excess() is unnecessarily complicated:
>
> - It takes a 'start', 'end' and 'pages' argument, whereas it only
> needs 'pages' (which the caller has computed from the other two
> arguments!).
>
> - It erroneously compares 'pages' with MAX_TLBI_RANGE_PAGES when
> the system doesn't support range-based invalidation but the range to
> be invalidated would result in fewer than MAX_DVM_OPS invalidations.
>
> Simplify the function so that it no longer takes the 'start' and 'end'
> arguments and only considers the MAX_TLBI_RANGE_PAGES threshold on
> systems that implement range-based invalidation.
>
> Signed-off-by: Will Deacon <will@kernel.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/include/asm/tlbflush.h | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 0e1902f66e01..3b72a71feac0 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -527,21 +527,13 @@ static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
> #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
> __flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
>
> -static inline bool __flush_tlb_range_limit_excess(unsigned long start,
> - unsigned long end, unsigned long pages, unsigned long stride)
> +static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
> + unsigned long stride)
> {
> - /*
> - * When the system does not support TLB range based flush
> - * operation, (MAX_DVM_OPS - 1) pages can be handled. But
> - * with TLB range based operation, MAX_TLBI_RANGE_PAGES
> - * pages can be handled.
> - */
> - if ((!system_supports_tlb_range() &&
> - (end - start) >= (MAX_DVM_OPS * stride)) ||
> - pages > MAX_TLBI_RANGE_PAGES)
> + if (system_supports_tlb_range() && pages > MAX_TLBI_RANGE_PAGES)
> return true;
>
> - return false;
> + return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
The function will return true if tlb range is supported, but
((MAX_DVM_OPS * stride) >> PAGE_SHIFT) < pages <= MAX_TLBI_RANGE_PAGES.
So I think you need to do
https://lore.kernel.org/all/1b15b4f0-5490-4dac-8344-e716dd189751@arm.com/
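To spell that case out against the two versions quoted above (thresholds left
symbolic, just a walk-through):

	/*
	 * system_supports_tlb_range() == true and
	 * (MAX_DVM_OPS * stride) >> PAGE_SHIFT < pages <= MAX_TLBI_RANGE_PAGES:
	 *
	 * old: the !system_supports_tlb_range() clause is false and
	 *      pages > MAX_TLBI_RANGE_PAGES is false, so it returns false and
	 *      the range-based flush proceeds.
	 *
	 * new: the first check is false (pages <= MAX_TLBI_RANGE_PAGES), but
	 *      pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT is true, so it
	 *      returns true and the caller falls back to flush_tlb_mm() or
	 *      flush_tlb_all().
	 */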
> }
>
> static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
> @@ -555,7 +547,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
> end = round_up(end, stride);
> pages = (end - start) >> PAGE_SHIFT;
>
> - if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
> + if (__flush_tlb_range_limit_excess(pages, stride)) {
> flush_tlb_mm(mm);
> return;
> }
> @@ -619,7 +611,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
> end = round_up(end, stride);
> pages = (end - start) >> PAGE_SHIFT;
>
> - if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
> + if (__flush_tlb_range_limit_excess(pages, stride)) {
> flush_tlb_all();
> return;
> }
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
2025-12-16 14:45 ` [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation Ryan Roberts
2025-12-16 18:01 ` Jonathan Cameron
@ 2025-12-18 6:30 ` Linu Cherian
2025-12-18 7:05 ` Linu Cherian
1 sibling, 1 reply; 24+ messages in thread
From: Linu Cherian @ 2025-12-18 6:30 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
linux-arm-kernel, linux-kernel
Ryan,
On Tue, Dec 16, 2025 at 02:45:48PM +0000, Ryan Roberts wrote:
> When kpti is enabled, separate ASIDs are used for userspace and
> kernelspace, requiring ASID-qualified TLB invalidation by virtual
> address to invalidate both of them.
>
> Push the logic for invalidating the two ASIDs down into the low-level
> tlbi-op-specific functions and remove the burden from the caller to
> handle the kpti-specific behaviour.
>
> Co-developed-by: Will Deacon <will@kernel.org>
> Signed-off-by: Will Deacon <will@kernel.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> arch/arm64/include/asm/tlbflush.h | 27 ++++++++++-----------------
> 1 file changed, 10 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index c5111d2afc66..31f43d953ce2 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -110,6 +110,7 @@ typedef void (*tlbi_op)(u64 arg);
> static __always_inline void vae1is(u64 arg)
> {
> __tlbi(vae1is, arg);
> + __tlbi_user(vae1is, arg);
> }
>
> static __always_inline void vae2is(u64 arg)
> @@ -126,6 +127,7 @@ static __always_inline void vale1(u64 arg)
> static __always_inline void vale1is(u64 arg)
> {
> __tlbi(vale1is, arg);
> + __tlbi_user(vale1is, arg);
> }
>
> static __always_inline void vale2is(u64 arg)
> @@ -162,11 +164,6 @@ static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
> op(arg);
> }
>
> -#define __tlbi_user_level(op, arg, level) do { \
> - if (arm64_kernel_unmapped_at_el0()) \
> - __tlbi_level(op, (arg | USER_ASID_FLAG), level); \
> -} while (0)
> -
> /*
> * This macro creates a properly formatted VA operand for the TLB RANGE. The
> * value bit assignments are:
> @@ -435,8 +432,6 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> * @stride: Flush granularity
> * @asid: The ASID of the task (0 for IPA instructions)
> * @tlb_level: Translation Table level hint, if known
> - * @tlbi_user: If 'true', call an additional __tlbi_user()
> - * (typically for user ASIDs). 'flase' for IPA instructions
> * @lpa2: If 'true', the lpa2 scheme is used as set out below
> *
> * When the CPU does not support TLB range operations, flush the TLB
> @@ -462,6 +457,7 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> static __always_inline void rvae1is(u64 arg)
> {
> __tlbi(rvae1is, arg);
> + __tlbi_user(rvae1is, arg);
> }
>
> static __always_inline void rvale1(u64 arg)
> @@ -473,6 +469,7 @@ static __always_inline void rvale1(u64 arg)
> static __always_inline void rvale1is(u64 arg)
> {
> __tlbi(rvale1is, arg);
> + __tlbi_user(rvale1is, arg);
> }
>
> static __always_inline void rvaale1is(u64 arg)
> @@ -491,7 +488,7 @@ static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
> }
>
> #define __flush_tlb_range_op(op, start, pages, stride, \
> - asid, tlb_level, tlbi_user, lpa2) \
> + asid, tlb_level, lpa2) \
> do { \
> typeof(start) __flush_start = start; \
> typeof(pages) __flush_pages = pages; \
> @@ -506,8 +503,6 @@ do { \
> (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
> addr = __TLBI_VADDR(__flush_start, asid); \
> __tlbi_level(op, addr, tlb_level); \
> - if (tlbi_user) \
> - __tlbi_user_level(op, addr, tlb_level); \
> __flush_start += stride; \
> __flush_pages -= stride >> PAGE_SHIFT; \
> continue; \
> @@ -518,8 +513,6 @@ do { \
> addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
> scale, num, tlb_level); \
> __tlbi_range(r##op, addr); \
> - if (tlbi_user) \
> - __tlbi_user(r##op, addr); \
> __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
> __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
There are more __tlbi_user invocations in __flush_tlb_mm, __local_flush_tlb_page_nonotify_nosync
and __flush_tlb_page_nosync in this file. Should we not address them as well as
part of this ?
--
Linu Cherian.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
2025-12-18 6:30 ` Linu Cherian
@ 2025-12-18 7:05 ` Linu Cherian
2025-12-18 15:47 ` Linu Cherian
0 siblings, 1 reply; 24+ messages in thread
From: Linu Cherian @ 2025-12-18 7:05 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
linux-arm-kernel, linux-kernel
On Thu, Dec 18, 2025 at 12:00:57PM +0530, Linu Cherian wrote:
> Ryan,
>
> On Tue, Dec 16, 2025 at 02:45:48PM +0000, Ryan Roberts wrote:
> > When kpti is enabled, separate ASIDs are used for userspace and
> > kernelspace, requiring ASID-qualified TLB invalidation by virtual
> > address to invalidate both of them.
> >
> > Push the logic for invalidating the two ASIDs down into the low-level
> > tlbi-op-specific functions and remove the burden from the caller to
> > handle the kpti-specific behaviour.
> >
> > Co-developed-by: Will Deacon <will@kernel.org>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> > ---
> > arch/arm64/include/asm/tlbflush.h | 27 ++++++++++-----------------
> > 1 file changed, 10 insertions(+), 17 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> > index c5111d2afc66..31f43d953ce2 100644
> > --- a/arch/arm64/include/asm/tlbflush.h
> > +++ b/arch/arm64/include/asm/tlbflush.h
> > @@ -110,6 +110,7 @@ typedef void (*tlbi_op)(u64 arg);
> > static __always_inline void vae1is(u64 arg)
> > {
> > __tlbi(vae1is, arg);
> > + __tlbi_user(vae1is, arg);
> > }
> >
> > static __always_inline void vae2is(u64 arg)
> > @@ -126,6 +127,7 @@ static __always_inline void vale1(u64 arg)
> > static __always_inline void vale1is(u64 arg)
> > {
> > __tlbi(vale1is, arg);
> > + __tlbi_user(vale1is, arg);
> > }
> >
> > static __always_inline void vale2is(u64 arg)
> > @@ -162,11 +164,6 @@ static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
> > op(arg);
> > }
> >
> > -#define __tlbi_user_level(op, arg, level) do { \
> > - if (arm64_kernel_unmapped_at_el0()) \
> > - __tlbi_level(op, (arg | USER_ASID_FLAG), level); \
> > -} while (0)
> > -
> > /*
> > * This macro creates a properly formatted VA operand for the TLB RANGE. The
> > * value bit assignments are:
> > @@ -435,8 +432,6 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > * @stride: Flush granularity
> > * @asid: The ASID of the task (0 for IPA instructions)
> > * @tlb_level: Translation Table level hint, if known
> > - * @tlbi_user: If 'true', call an additional __tlbi_user()
> > - * (typically for user ASIDs). 'flase' for IPA instructions
> > * @lpa2: If 'true', the lpa2 scheme is used as set out below
> > *
> > * When the CPU does not support TLB range operations, flush the TLB
> > @@ -462,6 +457,7 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > static __always_inline void rvae1is(u64 arg)
> > {
> > __tlbi(rvae1is, arg);
> > + __tlbi_user(rvae1is, arg);
> > }
> >
> > static __always_inline void rvale1(u64 arg)
> > @@ -473,6 +469,7 @@ static __always_inline void rvale1(u64 arg)
> > static __always_inline void rvale1is(u64 arg)
> > {
> > __tlbi(rvale1is, arg);
> > + __tlbi_user(rvale1is, arg);
> > }
> >
> > static __always_inline void rvaale1is(u64 arg)
> > @@ -491,7 +488,7 @@ static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
> > }
> >
> > #define __flush_tlb_range_op(op, start, pages, stride, \
> > - asid, tlb_level, tlbi_user, lpa2) \
> > + asid, tlb_level, lpa2) \
> > do { \
> > typeof(start) __flush_start = start; \
> > typeof(pages) __flush_pages = pages; \
> > @@ -506,8 +503,6 @@ do { \
> > (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
> > addr = __TLBI_VADDR(__flush_start, asid); \
> > __tlbi_level(op, addr, tlb_level); \
> > - if (tlbi_user) \
> > - __tlbi_user_level(op, addr, tlb_level); \
> > __flush_start += stride; \
> > __flush_pages -= stride >> PAGE_SHIFT; \
> > continue; \
> > @@ -518,8 +513,6 @@ do { \
> > addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
> > scale, num, tlb_level); \
> > __tlbi_range(r##op, addr); \
> > - if (tlbi_user) \
> > - __tlbi_user(r##op, addr); \
> > __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
> > __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
>
>
> There are more __tlbi_user invocations in __flush_tlb_mm, __local_flush_tlb_page_nonotify_nosync
> and __flush_tlb_page_nosync in this file. Should we not address them as well as
> part of this ?
>
I see that except __flush_tlb_mm, the others got addressed in subsequent patches.
Should we hint this in the commit message ?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
2025-12-18 7:05 ` Linu Cherian
@ 2025-12-18 15:47 ` Linu Cherian
2026-01-02 14:30 ` Ryan Roberts
0 siblings, 1 reply; 24+ messages in thread
From: Linu Cherian @ 2025-12-18 15:47 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
linux-arm-kernel, linux-kernel
On Thu, Dec 18, 2025 at 12:35:41PM +0530, Linu Cherian wrote:
>
>
> On Thu, Dec 18, 2025 at 12:00:57PM +0530, Linu Cherian wrote:
> > Ryan,
> >
> > On Tue, Dec 16, 2025 at 02:45:48PM +0000, Ryan Roberts wrote:
> > > When kpti is enabled, separate ASIDs are used for userspace and
> > > kernelspace, requiring ASID-qualified TLB invalidation by virtual
> > > address to invalidate both of them.
> > >
> > > Push the logic for invalidating the two ASIDs down into the low-level
> > > tlbi-op-specific functions and remove the burden from the caller to
> > > handle the kpti-specific behaviour.
> > >
> > > Co-developed-by: Will Deacon <will@kernel.org>
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> > > ---
> > > arch/arm64/include/asm/tlbflush.h | 27 ++++++++++-----------------
> > > 1 file changed, 10 insertions(+), 17 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> > > index c5111d2afc66..31f43d953ce2 100644
> > > --- a/arch/arm64/include/asm/tlbflush.h
> > > +++ b/arch/arm64/include/asm/tlbflush.h
> > > @@ -110,6 +110,7 @@ typedef void (*tlbi_op)(u64 arg);
> > > static __always_inline void vae1is(u64 arg)
> > > {
> > > __tlbi(vae1is, arg);
> > > + __tlbi_user(vae1is, arg);
> > > }
> > >
> > > static __always_inline void vae2is(u64 arg)
> > > @@ -126,6 +127,7 @@ static __always_inline void vale1(u64 arg)
> > > static __always_inline void vale1is(u64 arg)
> > > {
> > > __tlbi(vale1is, arg);
> > > + __tlbi_user(vale1is, arg);
> > > }
> > >
> > > static __always_inline void vale2is(u64 arg)
> > > @@ -162,11 +164,6 @@ static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
> > > op(arg);
> > > }
> > >
> > > -#define __tlbi_user_level(op, arg, level) do { \
> > > - if (arm64_kernel_unmapped_at_el0()) \
> > > - __tlbi_level(op, (arg | USER_ASID_FLAG), level); \
> > > -} while (0)
> > > -
> > > /*
> > > * This macro creates a properly formatted VA operand for the TLB RANGE. The
> > > * value bit assignments are:
> > > @@ -435,8 +432,6 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > > * @stride: Flush granularity
> > > * @asid: The ASID of the task (0 for IPA instructions)
> > > * @tlb_level: Translation Table level hint, if known
> > > - * @tlbi_user: If 'true', call an additional __tlbi_user()
> > > - * (typically for user ASIDs). 'flase' for IPA instructions
> > > * @lpa2: If 'true', the lpa2 scheme is used as set out below
> > > *
> > > * When the CPU does not support TLB range operations, flush the TLB
> > > @@ -462,6 +457,7 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > > static __always_inline void rvae1is(u64 arg)
> > > {
> > > __tlbi(rvae1is, arg);
> > > + __tlbi_user(rvae1is, arg);
> > > }
> > >
> > > static __always_inline void rvale1(u64 arg)
> > > @@ -473,6 +469,7 @@ static __always_inline void rvale1(u64 arg)
> > > static __always_inline void rvale1is(u64 arg)
> > > {
> > > __tlbi(rvale1is, arg);
> > > + __tlbi_user(rvale1is, arg);
> > > }
> > >
> > > static __always_inline void rvaale1is(u64 arg)
> > > @@ -491,7 +488,7 @@ static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
> > > }
> > >
> > > #define __flush_tlb_range_op(op, start, pages, stride, \
> > > - asid, tlb_level, tlbi_user, lpa2) \
> > > + asid, tlb_level, lpa2) \
> > > do { \
> > > typeof(start) __flush_start = start; \
> > > typeof(pages) __flush_pages = pages; \
> > > @@ -506,8 +503,6 @@ do { \
> > > (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
> > > addr = __TLBI_VADDR(__flush_start, asid); \
> > > __tlbi_level(op, addr, tlb_level); \
> > > - if (tlbi_user) \
> > > - __tlbi_user_level(op, addr, tlb_level); \
> > > __flush_start += stride; \
> > > __flush_pages -= stride >> PAGE_SHIFT; \
> > > continue; \
> > > @@ -518,8 +513,6 @@ do { \
> > > addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
> > > scale, num, tlb_level); \
> > > __tlbi_range(r##op, addr); \
> > > - if (tlbi_user) \
> > > - __tlbi_user(r##op, addr); \
> > > __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
> > > __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
> >
> >
> > There are more __tlbi_user invocations in __flush_tlb_mm, __local_flush_tlb_page_nonotify_nosync
> > and __flush_tlb_page_nosync in this file. Should we not address them as well as
> > part of this ?
> >
>
> I see that except __flush_tlb_mm, the others got addressed in subsequent patches.
> Should we hint this in the commit message ?
Please ignore this comment; somehow the commit message gave me the impression that
all the invocations of __tlbi_user were going to get updated.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function
2025-12-16 17:53 ` Jonathan Cameron
@ 2026-01-02 14:18 ` Ryan Roberts
0 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2026-01-02 14:18 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian, linux-arm-kernel, linux-kernel
Happy new year!
On 16/12/2025 17:53, Jonathan Cameron wrote:
> On Tue, 16 Dec 2025 14:45:46 +0000
> Ryan Roberts <ryan.roberts@arm.com> wrote:
>
>> As part of efforts to reduce our reliance on complex preprocessor macros
>> for TLB invalidation routines, convert the __tlbi_level macro to a C
>> function for by-level TLB invalidation.
>>
>> Each specific tlbi level op is implemented as a C function and the
>> appropriate function pointer is passed to __tlbi_level(). Since
>> everything is declared inline and is statically resolvable, the compiler
>> will convert the indirect function call to a direct inline execution.
>>
>> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>
>> +static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
>> +{
>> + u64 arg = addr;
>> +
>> + if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) && level <= 3) {
>> + u64 ttl = level | (get_trans_granule() << 2);
>> +
>> + arg &= ~TLBI_TTL_MASK;
>> + arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);
>
> Probably don't care, but I think you could do
> FIELD_MODIFY(TLBI_TTL_MASK, &arg, ttl);
> instead of those two lines. Code generation hopefully similar?
> So depends on which macros you find more readable.
Yeah that's probably slightly neater - I'll switch to this for the next version.
Thanks,
Ryan
>
>> + }
>> +
>> + op(arg);
>> +}
>>
>> #define __tlbi_user_level(op, arg, level) do { \
>> if (arm64_kernel_unmapped_at_el0()) \
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
2025-12-16 18:01 ` Jonathan Cameron
@ 2026-01-02 14:20 ` Ryan Roberts
0 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2026-01-02 14:20 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
Linu Cherian, linux-arm-kernel, linux-kernel
On 16/12/2025 18:01, Jonathan Cameron wrote:
> On Tue, 16 Dec 2025 14:45:48 +0000
> Ryan Roberts <ryan.roberts@arm.com> wrote:
>
>> When kpti is enabled, separate ASIDs are used for userspace and
>> kernelspace, requiring ASID-qualified TLB invalidation by virtual
>> address to invalidate both of them.
>>
>> Push the logic for invalidating the two ASIDs down into the low-level
>> tlbi-op-specific functions and remove the burden from the caller to
>> handle the kpti-specific behaviour.
>>
>> Co-developed-by: Will Deacon <will@kernel.org>
>> Signed-off-by: Will Deacon <will@kernel.org>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>> arch/arm64/include/asm/tlbflush.h | 27 ++++++++++-----------------
>> 1 file changed, 10 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index c5111d2afc66..31f43d953ce2 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>
>> +#undef __tlbi_user
>> #endif
>> -
> Hi Ryan,
>
> It's trivial Tuesday so... Unrelated white space change.
Thanks, will fix!
>
>> #endif
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation
2025-12-18 15:47 ` Linu Cherian
@ 2026-01-02 14:30 ` Ryan Roberts
0 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2026-01-02 14:30 UTC (permalink / raw)
To: Linu Cherian
Cc: Will Deacon, Ard Biesheuvel, Catalin Marinas, Mark Rutland,
Linus Torvalds, Oliver Upton, Marc Zyngier, Dev Jain,
linux-arm-kernel, linux-kernel
On 18/12/2025 15:47, Linu Cherian wrote:
>
>
> On Thu, Dec 18, 2025 at 12:35:41PM +0530, Linu Cherian wrote:
>>
>>
>> On Thu, Dec 18, 2025 at 12:00:57PM +0530, Linu Cherian wrote:
>>> Ryan,
>>>
>>> On Tue, Dec 16, 2025 at 02:45:48PM +0000, Ryan Roberts wrote:
>>>> When kpti is enabled, separate ASIDs are used for userspace and
>>>> kernelspace, requiring ASID-qualified TLB invalidation by virtual
>>>> address to invalidate both of them.
>>>>
>>>> Push the logic for invalidating the two ASIDs down into the low-level
>>>> tlbi-op-specific functions and remove the burden from the caller to
>>>> handle the kpti-specific behaviour.
>>>>
>>>> Co-developed-by: Will Deacon <will@kernel.org>
>>>> Signed-off-by: Will Deacon <will@kernel.org>
>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>> ---
>>>> arch/arm64/include/asm/tlbflush.h | 27 ++++++++++-----------------
>>>> 1 file changed, 10 insertions(+), 17 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>>>> index c5111d2afc66..31f43d953ce2 100644
>>>> --- a/arch/arm64/include/asm/tlbflush.h
>>>> +++ b/arch/arm64/include/asm/tlbflush.h
>>>> @@ -110,6 +110,7 @@ typedef void (*tlbi_op)(u64 arg);
>>>> static __always_inline void vae1is(u64 arg)
>>>> {
>>>> __tlbi(vae1is, arg);
>>>> + __tlbi_user(vae1is, arg);
>>>> }
>>>>
>>>> static __always_inline void vae2is(u64 arg)
>>>> @@ -126,6 +127,7 @@ static __always_inline void vale1(u64 arg)
>>>> static __always_inline void vale1is(u64 arg)
>>>> {
>>>> __tlbi(vale1is, arg);
>>>> + __tlbi_user(vale1is, arg);
>>>> }
>>>>
>>>> static __always_inline void vale2is(u64 arg)
>>>> @@ -162,11 +164,6 @@ static __always_inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
>>>> op(arg);
>>>> }
>>>>
>>>> -#define __tlbi_user_level(op, arg, level) do { \
>>>> - if (arm64_kernel_unmapped_at_el0()) \
>>>> - __tlbi_level(op, (arg | USER_ASID_FLAG), level); \
>>>> -} while (0)
>>>> -
>>>> /*
>>>> * This macro creates a properly formatted VA operand for the TLB RANGE. The
>>>> * value bit assignments are:
>>>> @@ -435,8 +432,6 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>>>> * @stride: Flush granularity
>>>> * @asid: The ASID of the task (0 for IPA instructions)
>>>> * @tlb_level: Translation Table level hint, if known
>>>> - * @tlbi_user: If 'true', call an additional __tlbi_user()
>>>> - * (typically for user ASIDs). 'flase' for IPA instructions
>>>> * @lpa2: If 'true', the lpa2 scheme is used as set out below
>>>> *
>>>> * When the CPU does not support TLB range operations, flush the TLB
>>>> @@ -462,6 +457,7 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>>>> static __always_inline void rvae1is(u64 arg)
>>>> {
>>>> __tlbi(rvae1is, arg);
>>>> + __tlbi_user(rvae1is, arg);
>>>> }
>>>>
>>>> static __always_inline void rvale1(u64 arg)
>>>> @@ -473,6 +469,7 @@ static __always_inline void rvale1(u64 arg)
>>>> static __always_inline void rvale1is(u64 arg)
>>>> {
>>>> __tlbi(rvale1is, arg);
>>>> + __tlbi_user(rvale1is, arg);
>>>> }
>>>>
>>>> static __always_inline void rvaale1is(u64 arg)
>>>> @@ -491,7 +488,7 @@ static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
>>>> }
>>>>
>>>> #define __flush_tlb_range_op(op, start, pages, stride, \
>>>> - asid, tlb_level, tlbi_user, lpa2) \
>>>> + asid, tlb_level, lpa2) \
>>>> do { \
>>>> typeof(start) __flush_start = start; \
>>>> typeof(pages) __flush_pages = pages; \
>>>> @@ -506,8 +503,6 @@ do { \
>>>> (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
>>>> addr = __TLBI_VADDR(__flush_start, asid); \
>>>> __tlbi_level(op, addr, tlb_level); \
>>>> - if (tlbi_user) \
>>>> - __tlbi_user_level(op, addr, tlb_level); \
>>>> __flush_start += stride; \
>>>> __flush_pages -= stride >> PAGE_SHIFT; \
>>>> continue; \
>>>> @@ -518,8 +513,6 @@ do { \
>>>> addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
>>>> scale, num, tlb_level); \
>>>> __tlbi_range(r##op, addr); \
>>>> - if (tlbi_user) \
>>>> - __tlbi_user(r##op, addr); \
>>>> __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
>>>> __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
>>>
>>>
>>> There are more __tlbi_user invocations in __flush_tlb_mm, __local_flush_tlb_page_nonotify_nosync
>>> and __flush_tlb_page_nosync in this file. Should we not address them as well as
>>> part of this ?
>>>
>>
>> I see that except __flush_tlb_mm, the others got addressed in subsequent patches.
>> Should we hint this in the commit message ?
>
> Please ignore this comment; somehow the commit message gave me the impression that
> all the invocations of __tlbi_user were going to get updated.
>
I think you're telling me to ignore the whole thread here, so nothing to
address? Shout if I misunderstood.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess()
2025-12-17 8:12 ` Dev Jain
@ 2026-01-02 15:23 ` Ryan Roberts
0 siblings, 0 replies; 24+ messages in thread
From: Ryan Roberts @ 2026-01-02 15:23 UTC (permalink / raw)
To: Dev Jain, Will Deacon, Ard Biesheuvel, Catalin Marinas,
Mark Rutland, Linus Torvalds, Oliver Upton, Marc Zyngier,
Linu Cherian
Cc: linux-arm-kernel, linux-kernel
On 17/12/2025 08:12, Dev Jain wrote:
>
> On 16/12/25 8:15 pm, Ryan Roberts wrote:
>> From: Will Deacon <will@kernel.org>
>>
>> __flush_tlb_range_limit_excess() is unnecessarily complicated:
>>
>> - It takes a 'start', 'end' and 'pages' argument, whereas it only
>> needs 'pages' (which the caller has computed from the other two
>> arguments!).
>>
>> - It erroneously compares 'pages' with MAX_TLBI_RANGE_PAGES when
>> the system doesn't support range-based invalidation but the range to
>> be invalidated would result in fewer than MAX_DVM_OPS invalidations.
>>
>> Simplify the function so that it no longer takes the 'start' and 'end'
>> arguments and only considers the MAX_TLBI_RANGE_PAGES threshold on
>> systems that implement range-based invalidation.
>>
>> Signed-off-by: Will Deacon <will@kernel.org>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>> arch/arm64/include/asm/tlbflush.h | 20 ++++++--------------
>> 1 file changed, 6 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index 0e1902f66e01..3b72a71feac0 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -527,21 +527,13 @@ static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
>> #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
>> __flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
>>
>> -static inline bool __flush_tlb_range_limit_excess(unsigned long start,
>> - unsigned long end, unsigned long pages, unsigned long stride)
>> +static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
>> + unsigned long stride)
>> {
>> - /*
>> - * When the system does not support TLB range based flush
>> - * operation, (MAX_DVM_OPS - 1) pages can be handled. But
>> - * with TLB range based operation, MAX_TLBI_RANGE_PAGES
>> - * pages can be handled.
>> - */
>> - if ((!system_supports_tlb_range() &&
>> - (end - start) >= (MAX_DVM_OPS * stride)) ||
>> - pages > MAX_TLBI_RANGE_PAGES)
>> + if (system_supports_tlb_range() && pages > MAX_TLBI_RANGE_PAGES)
>> return true;
>>
>> - return false;
>> + return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
>
> The function will return true if tlb range is supported, but
> ((MAX_DVM_OPS * stride) >> PAGE_SHIFT) < pages <= MAX_TLBI_RANGE_PAGES.
> So I think you need to do
> https://lore.kernel.org/all/1b15b4f0-5490-4dac-8344-e716dd189751@arm.com/
I agree with your overall proposal, but I think a few of the details are not
quite correct.
I think the max number of DVM ops that could be issued by a single
__flush_tlb_range() call on a system with tlb-range is 20, not 4 as you suggest:
- 4: one for each of the scales
- 1 for the final single page
- 15 to align to a 64K boundary on systems with LPA2 (with 4K page size)
But that doesn't really change your argument.
So proposing to change it to this in next version:
static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
unsigned long stride)
{
/*
* Assume that the worst case number of DVM ops required to flush a
* given range on a system that supports tlb-range is 20 (4 scales, 1
* final page, 15 for alignment on LPA2 systems), which is much smaller
* than MAX_DVM_OPS.
*/
if (system_supports_tlb_range())
return pages > MAX_TLBI_RANGE_PAGES;
return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
}
Thanks,
Ryan
>
>> }
>>
>> static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
>> @@ -555,7 +547,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
>> end = round_up(end, stride);
>> pages = (end - start) >> PAGE_SHIFT;
>>
>> - if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
>> + if (__flush_tlb_range_limit_excess(pages, stride)) {
>> flush_tlb_mm(mm);
>> return;
>> }
>> @@ -619,7 +611,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
>> end = round_up(end, stride);
>> pages = (end - start) >> PAGE_SHIFT;
>>
>> - if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
>> + if (__flush_tlb_range_limit_excess(pages, stride)) {
>> flush_tlb_all();
>> return;
>> }
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread
Thread overview: 24+ messages
2025-12-16 14:45 [PATCH v1 00/13] arm64: Refactor TLB invalidation API and implementation Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 01/13] arm64: mm: Re-implement the __tlbi_level macro as a C function Ryan Roberts
2025-12-16 17:53 ` Jonathan Cameron
2026-01-02 14:18 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 02/13] arm64: mm: Introduce a C wrapper for by-range TLB invalidation Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 03/13] arm64: mm: Implicitly invalidate user ASID based on TLBI operation Ryan Roberts
2025-12-16 18:01 ` Jonathan Cameron
2026-01-02 14:20 ` Ryan Roberts
2025-12-18 6:30 ` Linu Cherian
2025-12-18 7:05 ` Linu Cherian
2025-12-18 15:47 ` Linu Cherian
2026-01-02 14:30 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 04/13] arm64: mm: Push __TLBI_VADDR() into __tlbi_level() Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 05/13] arm64: mm: Inline __TLBI_VADDR_RANGE() into __tlbi_range() Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 06/13] arm64: mm: Re-implement the __flush_tlb_range_op macro in C Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 07/13] arm64: mm: Simplify __TLBI_RANGE_NUM() macro Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 08/13] arm64: mm: Simplify __flush_tlb_range_limit_excess() Ryan Roberts
2025-12-17 8:12 ` Dev Jain
2026-01-02 15:23 ` Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 09/13] arm64: mm: Refactor flush_tlb_page() to use __tlbi_level_asid() Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 10/13] arm64: mm: Refactor __flush_tlb_range() to take flags Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 11/13] arm64: mm: More flags for __flush_tlb_range() Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 12/13] arm64: mm: Wrap flush_tlb_page() around ___flush_tlb_range() Ryan Roberts
2025-12-16 14:45 ` [PATCH v1 13/13] arm64: mm: Provide level hint for flush_tlb_page() Ryan Roberts