* [PATCH v5 0/1] Risc-V Svinval support
@ 2023-06-23 12:38 Mayuresh Chitale
2023-06-23 12:38 ` [PATCH v5 1/1] riscv: mm: use svinval instructions instead of sfence.vma Mayuresh Chitale
2023-09-25 15:12 ` [PATCH v5 0/1] Risc-V Svinval support Palmer Dabbelt
0 siblings, 2 replies; 4+ messages in thread
From: Mayuresh Chitale @ 2023-06-23 12:38 UTC (permalink / raw)
To: Palmer Dabbelt, Paul Walmsley, Albert Ou
Cc: Mayuresh Chitale, Atish Patra, Anup Patel, linux-riscv
This patch adds support for the Svinval extension as defined in the
RISC-V Privileged specification.
Changes in v5:
- Reduce tlb flush threshold to 64
- Improve implementation of local_flush_tlb* functions
Changes in v4:
- Rebase and refactor as per latest changes on torvalds/master
- Drop patch 1 in the series
Changes in v3:
- Fix incorrect vma used for sinval instructions
- Use unified static key mechanism for svinval
- Rebased on torvalds/master
Changes in v2:
- Rebased on 5.18-rc3
- Update riscv_fill_hwcap to probe the Svinval extension
Mayuresh Chitale (1):
riscv: mm: use svinval instructions instead of sfence.vma
arch/riscv/include/asm/tlbflush.h | 1 +
arch/riscv/mm/tlbflush.c | 66 +++++++++++++++++++++++++++----
2 files changed, 59 insertions(+), 8 deletions(-)
--
2.34.1
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH v5 1/1] riscv: mm: use svinval instructions instead of sfence.vma
2023-06-23 12:38 [PATCH v5 0/1] Risc-V Svinval support Mayuresh Chitale
@ 2023-06-23 12:38 ` Mayuresh Chitale
2023-06-24 11:04 ` Andrew Jones
2023-09-25 15:12 ` [PATCH v5 0/1] Risc-V Svinval support Palmer Dabbelt
1 sibling, 1 reply; 4+ messages in thread
From: Mayuresh Chitale @ 2023-06-23 12:38 UTC (permalink / raw)
To: Palmer Dabbelt, Paul Walmsley, Albert Ou
Cc: Mayuresh Chitale, Atish Patra, Anup Patel, linux-riscv
When Svinval is supported, the local_flush_tlb_page* functions
use the following sequence, instead of a series of sfence.vma
instructions, to optimize the TLB flushes:

sfence.w.inval
sinval.vma
.
.
sinval.vma
sfence.inval.ir

The maximum number of consecutive sinval.vma instructions
executed by the local_flush_tlb_page* functions is limited to 64,
which is required to avoid soft lockups; the approach is similar
to the one used on arm64.
Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
---
arch/riscv/include/asm/tlbflush.h | 1 +
arch/riscv/mm/tlbflush.c | 66 +++++++++++++++++++++++++++----
2 files changed, 59 insertions(+), 8 deletions(-)
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index a09196f8de68..56490c04b0bd 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -30,6 +30,7 @@ static inline void local_flush_tlb_page(unsigned long addr)
#endif /* CONFIG_MMU */
#if defined(CONFIG_SMP) && defined(CONFIG_MMU)
+extern unsigned long tlb_flush_all_threshold;
void flush_tlb_all(void);
void flush_tlb_mm(struct mm_struct *mm);
void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 77be59aadc73..f63cdf8644f3 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -5,6 +5,17 @@
#include <linux/sched.h>
#include <asm/sbi.h>
#include <asm/mmu_context.h>
+#include <asm/hwcap.h>
+#include <asm/insn-def.h>
+
+#define has_svinval() riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
+
+/*
+ * Flush entire TLB if number of entries to be flushed is greater
+ * than the threshold below. Platforms may override the threshold
+ * value based on marchid, mvendorid, and mimpid.
+ */
+unsigned long tlb_flush_all_threshold __read_mostly = 64;
static inline void local_flush_tlb_all_asid(unsigned long asid)
{
@@ -24,21 +35,60 @@ static inline void local_flush_tlb_page_asid(unsigned long addr,
}
static inline void local_flush_tlb_range(unsigned long start,
- unsigned long size, unsigned long stride)
+ unsigned long size,
+ unsigned long stride)
{
- if (size <= stride)
- local_flush_tlb_page(start);
- else
+ unsigned long end = start + size;
+ unsigned long num_entries = DIV_ROUND_UP(size, stride);
+
+ if (!num_entries || num_entries > tlb_flush_all_threshold) {
local_flush_tlb_all();
+ return;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_W_INVAL() ::: "memory");
+
+ while (start < end) {
+ if (has_svinval())
+ asm volatile(SINVAL_VMA(%0, zero)
+ : : "r" (start) : "memory");
+ else
+ local_flush_tlb_page(start);
+ start += stride;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_INVAL_IR() ::: "memory");
}
static inline void local_flush_tlb_range_asid(unsigned long start,
- unsigned long size, unsigned long stride, unsigned long asid)
+ unsigned long size,
+ unsigned long stride,
+ unsigned long asid)
{
- if (size <= stride)
- local_flush_tlb_page_asid(start, asid);
- else
+ unsigned long end = start + size;
+ unsigned long num_entries = DIV_ROUND_UP(size, stride);
+
+ if (!num_entries || num_entries > tlb_flush_all_threshold) {
local_flush_tlb_all_asid(asid);
+ return;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_W_INVAL() ::: "memory");
+
+ while (start < end) {
+ if (has_svinval())
+ asm volatile(SINVAL_VMA(%0, %1) : : "r" (start),
+ "r" (asid) : "memory");
+ else
+ local_flush_tlb_page_asid(start, asid);
+ start += stride;
+ }
+
+ if (has_svinval())
+ asm volatile(SFENCE_INVAL_IR() ::: "memory");
}
static void __ipi_flush_tlb_all(void *info)
--
2.34.1
* Re: [PATCH v5 1/1] riscv: mm: use svinval instructions instead of sfence.vma
2023-06-23 12:38 ` [PATCH v5 1/1] riscv: mm: use svinval instructions instead of sfence.vma Mayuresh Chitale
@ 2023-06-24 11:04 ` Andrew Jones
0 siblings, 0 replies; 4+ messages in thread
From: Andrew Jones @ 2023-06-24 11:04 UTC (permalink / raw)
To: Mayuresh Chitale
Cc: Palmer Dabbelt, Paul Walmsley, Albert Ou, Atish Patra, Anup Patel,
linux-riscv
On Fri, Jun 23, 2023 at 06:08:49PM +0530, Mayuresh Chitale wrote:
> When Svinval is supported, the local_flush_tlb_page* functions
> use the following sequence, instead of a series of sfence.vma
> instructions, to optimize the TLB flushes:
>
> sfence.w.inval
> sinval.vma
> .
> .
> sinval.vma
> sfence.inval.ir
>
> The maximum number of consecutive sinval.vma instructions
> executed by the local_flush_tlb_page* functions is limited to 64,
> which is required to avoid soft lockups; the approach is similar
> to the one used on arm64.
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
* Re: [PATCH v5 0/1] Risc-V Svinval support
2023-06-23 12:38 [PATCH v5 0/1] Risc-V Svinval support Mayuresh Chitale
2023-06-23 12:38 ` [PATCH v5 1/1] riscv: mm: use svinval instructions instead of sfence.vma Mayuresh Chitale
@ 2023-09-25 15:12 ` Palmer Dabbelt
1 sibling, 0 replies; 4+ messages in thread
From: Palmer Dabbelt @ 2023-09-25 15:12 UTC (permalink / raw)
To: mchitale; +Cc: Paul Walmsley, aou, mchitale, atishp, anup, linux-riscv
On Fri, 23 Jun 2023 05:38:48 PDT (-0700), mchitale@ventanamicro.com wrote:
> This patch adds support for the Svinval extension as defined in the
> RISC-V Privileged specification.
Do you have benchmarks (like we asked for here
<https://lore.kernel.org/all/CAN37VV40msnohyJqkwW_YkUmXmEL1yztk+ZQhTeA6feS-W0S2g@mail.gmail.com/>)?