linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v2] arm64/mm: Elide TLB flush in certain pte protection transitions
@ 2025-09-20  5:10 Dev Jain
From: Dev Jain @ 2025-09-20  5:10 UTC (permalink / raw)
  To: catalin.marinas, will
  Cc: anshuman.khandual, wangkefeng.wang, ryan.roberts, baohua,
	pjaroszynski, linux-arm-kernel, linux-kernel, Dev Jain

Currently arm64 does an unconditional TLB flush in mprotect(). This is not
required for some cases, for example, when changing from PROT_NONE to
PROT_READ | PROT_WRITE (a real use case - glibc malloc does this to emulate
growing into the non-main heaps), and unsetting uffd-wp in a range.

Therefore, implement pte_needs_flush() for arm64, which is already
implemented by some other arches as well.
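
For context, generic mm consumes this hook roughly as follows - a
simplified, abridged sketch (not the exact kernel source) of the
asm-generic fallback and the mprotect() call site:

	/* Fallback used when an arch does not define pte_needs_flush() */
	#ifndef pte_needs_flush
	static inline bool pte_needs_flush(pte_t oldpte, pte_t newpte)
	{
		return true;	/* conservatively always flush */
	}
	#endif

	/* mm/mprotect.c:change_pte_range(), abridged */
	oldpte = ptep_modify_prot_start(vma, addr, pte);
	/* ... compute the new protections in ptent ... */
	ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
	if (pte_needs_flush(oldpte, ptent))
		tlb_flush_pte_range(tlb, addr, PAGE_SIZE);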

Running a userspace program changing permissions back and forth between
PROT_NONE and PROT_READ | PROT_WRITE, and measuring the average time taken
for the none->rw transition, I get a reduction from 3.2 microseconds to
2.85 microseconds, giving a 12.3% improvement.
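
For reference, a minimal sketch of such a microbenchmark (an assumed
shape, not the exact program used):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>

	#define SZ	(1UL << 20)	/* 1 MiB of mapped memory */
	#define ITERS	100000

	int main(void)
	{
		struct timespec t0, t1;
		double total_ns = 0;
		char *p = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		memset(p, 1, SZ);	/* populate the PTEs first */

		for (int i = 0; i < ITERS; i++) {
			mprotect(p, SZ, PROT_NONE);
			clock_gettime(CLOCK_MONOTONIC, &t0);
			mprotect(p, SZ, PROT_READ | PROT_WRITE); /* timed */
			clock_gettime(CLOCK_MONOTONIC, &t1);
			total_ns += (t1.tv_sec - t0.tv_sec) * 1e9 +
				    (t1.tv_nsec - t0.tv_nsec);
		}
		printf("avg none->rw: %.0f ns\n", total_ns / ITERS);
		return 0;
	}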

Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm-selftests pass.

v1->v2:
 - Drop PTE_PRESENT_INVALID and PTE_AF checks, use ptdesc_t instead of
   pteval_t, return !!diff (Ryan)

 arch/arm64/include/asm/tlbflush.h | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 18a5dc0c9a54..40df783ba09a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -524,6 +524,33 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 {
 	__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
 }
+
+static inline bool __pte_flags_need_flush(ptdesc_t oldval, ptdesc_t newval)
+{
+	ptdesc_t diff = oldval ^ newval;
+
+	/* An invalid entry cannot be cached in the TLB, so no flush is needed */
+	if (!(oldval & PTE_VALID))
+		return false;
+
+	/* Transition in the SW bits requires no flush */
+	diff &= ~PTE_SWBITS_MASK;
+
+	return !!diff;
+}
+
+static inline bool pte_needs_flush(pte_t oldpte, pte_t newpte)
+{
+	return __pte_flags_need_flush(pte_val(oldpte), pte_val(newpte));
+}
+#define pte_needs_flush pte_needs_flush
+
+static inline bool huge_pmd_needs_flush(pmd_t oldpmd, pmd_t newpmd)
+{
+	return __pte_flags_need_flush(pmd_val(oldpmd), pmd_val(newpmd));
+}
+#define huge_pmd_needs_flush huge_pmd_needs_flush
+
 #endif
 
 #endif
-- 
2.30.2




* Re: [PATCH v2] arm64/mm: Elide TLB flush in certain pte protection transitions
@ 2025-09-21 12:23 ` Anshuman Khandual
From: Anshuman Khandual @ 2025-09-21 12:23 UTC (permalink / raw)
  To: Dev Jain, catalin.marinas, will
  Cc: wangkefeng.wang, ryan.roberts, baohua, pjaroszynski,
	linux-arm-kernel, linux-kernel

On 20/09/25 10:40 AM, Dev Jain wrote:
> Currently arm64 does an unconditional TLB flush in mprotect(). This is not
> required for some cases, for example, when changing from PROT_NONE to
> PROT_READ | PROT_WRITE (a real use case - glibc malloc does this to emulate

Does the following transition not require a TLB flush on any
architecture? If so, should this check not be part of generic
mprotect() itself?

PROT_NONE ---> PROT_READ | PROT_WRITE
> growing into the non-main heaps), and unsetting uffd-wp in a range.
> 
> Therefore, implement pte_needs_flush() for arm64, which is already
> implemented by some other arches as well.

Agreed, defining pte_needs_flush() for the platform does make
sense if it brings some perf improvement without additional
cost.
> 
> Running a userspace program changing permissions back and forth between
> PROT_NONE and PROT_READ | PROT_WRITE, and measuring the average time taken
> for the none->rw transition, I get a reduction from 3.2 microseconds to
> 2.85 microseconds, giving a 12.3% improvement.

But that is a very specific workload intended to demonstrate
the use case for this change, so the improvement claimed here
is not representative of real-world workloads.
> 
> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> mm-selftests pass.
> 
> v1->v2:
>  - Drop PTE_PRESENT_INVALID and PTE_AF checks, use ptdesc_t instead of
>    pteval_t, return !!diff (Ryan)
> 
>  arch/arm64/include/asm/tlbflush.h | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 18a5dc0c9a54..40df783ba09a 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -524,6 +524,33 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>  {
>  	__flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3);
>  }
> +
> +static inline bool __pte_flags_need_flush(ptdesc_t oldval, ptdesc_t newval)
> +{
> +	ptdesc_t diff = oldval ^ newval;
> +
> +	/* An invalid entry cannot be cached in the TLB, so no flush is needed */
> +	if (!(oldval & PTE_VALID))
> +		return false;

Isn't this true for all platforms, and couldn't it be checked
via the pte_present() helper? Hence shouldn't this check be
moved into the caller itself in generic MM? The above-mentioned
transition should probably be moved as well.

PROT_NONE ---> PROT_READ | PROT_WRITE




* Re: [PATCH v2] arm64/mm: Elide TLB flush in certain pte protection transitions
@ 2025-09-22  4:07   ` Dev Jain
From: Dev Jain @ 2025-09-22  4:07 UTC (permalink / raw)
  To: Anshuman Khandual, catalin.marinas, will
  Cc: wangkefeng.wang, ryan.roberts, baohua, pjaroszynski,
	linux-arm-kernel, linux-kernel


On 21/09/25 5:53 pm, Anshuman Khandual wrote:
> On 20/09/25 10:40 AM, Dev Jain wrote:
>> Currently arm64 does an unconditional TLB flush in mprotect(). This is not
>> required for some cases, for example, when changing from PROT_NONE to
>> PROT_READ | PROT_WRITE (a real use case - glibc malloc does this to emulate
> The following transition does not require a TLB flush on all
> architectures ? In which case, should not this check be part
> of generic mprotect() itself.

You are probably correct - there should be some common transitions
like these which don't require a flush on any arch. But that should
be part of a wider cleanup :) And in the ppc version of this function,
I see that it returns true in the case of !radix_enabled(), so I am
not very sure.
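
For reference, a paraphrased sketch of what I am referring to (from
memory, not the exact source in
arch/powerpc/include/asm/book3s/64/tlbflush.h):

	static inline bool __pte_flags_need_flush(unsigned long oldval,
						  unsigned long newval)
	{
		/* Hash MMU: stay conservative and always flush */
		if (!radix_enabled())
			return true;

		/*
		 * Radix: flush only if bits that the TLB may cache differ
		 * (the real code also masks out bits that are safe to
		 * change without a flush).
		 */
		return (oldval ^ newval) != 0;
	}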
  

>
> PROT_NONE ---> PROT_READ | PROT_WRITE
>> growing into the non-main heaps), and unsetting uffd-wp in a range.
>>
>> Therefore, implement pte_needs_flush() for arm64, which is already
>> implemented by some other arches as well.
> Agreed, defining pte_needs_flush() on the platform does make
> sense, if it brings some perf improvement without additional
> cost.
>> Running a userspace program changing permissions back and forth between
>> PROT_NONE and PROT_READ | PROT_WRITE, and measuring the average time taken
>> for the none->rw transition, I get a reduction from 3.2 microseconds to
>> 2.85 microseconds, giving a 12.3% improvement.
> But that's a very specific workload intended to demonstrate
> the use case for this change. Hence the improvement claimed
> here is not representative of real world work loads.

True, but as I stated, glibc malloc commonly uses this protection
transition to extend the heap.



